‘Google's Colossus Makes Search Real-Time by Dumping MapReduce’, High Scalability (blog), September 11, 2010.
Carr2 2006: ‘Despite having published details on technologies like the Google File System, Google has not released the software as open source and shows little interest in selling it. The only way it is available to another enterprise is in embedded form—if you buy a high-end version of the Google Search Appliance, one that is delivered as a rack of servers, you get Google's technology for managing that cluster as part of the package.’
Carr3 2006: ‘All this analysis requires a lot of storage. Even back at Stanford, the Web document repository alone was up to 148 gigabytes, reduced to 54 gigabytes through file compression, and the total storage required, including the indexes and link database, was about 109 gigabytes. That may not sound like much today, when you can buy a Dell laptop with a 120-gigabyte hard drive, but in the late 1990s commodity PC hard drives maxed out at about 10 gigabytes.’
Carr4 2006: ‘To cope with these demands, Page and Brin developed a virtual file system that treated the hard drives on multiple computers as one big pool of storage. They called it BigFiles. Rather than save a file to a particular computer, they would save it to BigFiles, which in turn would locate an available chunk of disk space on one of the computers in the server cluster and give the file to that computer to store, while keeping track of which files were stored on which computer. This was the start of what essentially became a distributed computing software infrastructure that runs on top of Linux.’