How biotech is driving computing
The most super of supercomputers are folding proteins, not crunching numbers. That's because the life sciences have overtaken physics as the source of the most challenging computing problems.
SAN FRANCISCO (Business 2.0 Magazine) -- Pop quiz: In what new technology are the United States and Japan engaged in the virtual equivalent of the Space Race? The surprising answer: biotech.
Some background, in case you haven't been keeping up with current computing news: Japanese researchers have cobbled together a $9 million supercomputer called MDGrape-3 that last month broke the petaflop barrier for the first time.
A petaflop is a previously unheard-of measure of computing power, indicating the capability to perform one quadrillion calculations every second. That makes MDGrape-3 nearly three times faster than its nearest competitor, Blue Gene/L, created by IBM and housed at the Lawrence Livermore National Laboratory in California.
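For a sense of scale, here's a back-of-the-envelope sketch; the per-compound operation count is an illustrative assumption, not a figure from the researchers:

```python
# A petaflop is 10**15 floating-point operations per second.
PETAFLOP = 10 ** 15

# Hypothetical cost of scoring one drug compound against one protein;
# the 10**9 figure is an assumption for illustration only.
ops_per_pairing = 10 ** 9

# At petaflop speed, that's a million compound-protein checks per second.
pairings_per_second = PETAFLOP // ops_per_pairing
print(pairings_per_second)
```

Even granting the machine far more work per pairing, the point stands: screenings that once took days collapse into seconds.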
Born to be bio
Both MDGrape-3 and Blue Gene/L were created for basically the same purpose. It's not for anything headline-grabbing like Homeland Security or simulating the progress of global warming. No, they're both designed for road-testing new drugs.
Pharmaceutical companies have tens of thousands of chemical compounds to try out, each one a potential disease-curing money-maker, and they need to know how each one will bind to the thousands of different proteins in our bodies.
Proteins, the building blocks of life, are incredibly complex strings of amino acids that need to be mapped in 3-D. For this purpose, supercomputers are the next best thing to shrinking researchers to molecular size.
MDGrape-3 isn't officially the world's fastest supercomputer - it can't run the software that the official rankings demand. But that doesn't matter to Big Pharma companies like Merck. One of that company's subsidiaries has already asked the Japanese researchers for some computing time on the machine (also known as "the Protein Explorer"). The hyperfast computer will be able to tell them in seconds whether a given compound and a given protein will make sweet medicinal music together.
Meanwhile, IBM is leasing out Blue Gene time to a company called QuantumBio, which outsources protein-testing for drug companies. And - in a move to challenge its rivals on their home turf - IBM is also building a copy of Blue Gene/L for drug researchers in Japan.
The golden age of biotech
So what does all this mean? First, the hype that followed the mapping of the human genome in 2000 is true: The 21st century really is turning out to be the golden age of biotech.
All that gene-sequencing and protein-mapping is going to take us into a brave new world of health where you can walk into your doctor's office, have your DNA sequenced, find out what diseases you're at risk for, and then ingest a single chemical compound mapped to your proteins that will help eliminate your risk.
You can see that happening already, albeit in relatively small (and relatively pricey) ways. DNA Direct, a San Francisco-based genetic disease testing service, offers tests ranging from $199 to $3,000 for various ailments. For $500 you can get a colon cancer test from a simple scraping of your skin, which is far preferable to a colonoscopy.
Second, all that demand from biotech is going to boost our supercomputing power the same way the space race helped spur the development of the mainframe computers that were revolutionary for their time.
New branches of science that most of us are unfamiliar with are about to become household words, or at least recognizable job prospects: genomics (the study of genes), proteomics (the study of proteins), biostatistics and bioinformatics (which create algorithms to help analyze biological data).
Biotech research centers are springing up everywhere from Brazil to Zimbabwe. The field, awash in pharmaceutical cash, is sucking in researchers, programmers, engineers, biologists, computer scientists - anyone whose career path comes close to meeting the needs of computing-driven biotech.
The more that scientists examine the genome and pinpoint the locations of certain ailments, the more numbers supercomputers will have to crunch to find a cure. This week, for instance, researchers at the University of Washington discovered causes of mental retardation on chromosome 17.
Computing with DNA
In the long run, genetic science could change the face of computing itself. Indeed, scientists have already shown that strands of DNA can be used as a computer in place of traditional silicon. The concept was first demonstrated in 1994 by University of Southern California computer scientist Leonard Adleman. Actual DNA logic gates - the building blocks of a real biocomputer - appeared just three years later.
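Adleman's experiment encoded a small instance of the Hamiltonian path problem - finding a route that visits every vertex exactly once - into DNA strands, letting chemistry try every candidate route at once. A conventional brute-force sketch of the same search, written in Python on a made-up four-vertex graph (not Adleman's original seven-city instance), looks like this:

```python
from itertools import permutations

# A small directed graph, chosen for illustration.
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
n = 4  # number of vertices

def hamiltonian_path(start, end):
    """Brute force: test every ordering of the vertices, much as the
    DNA experiment generated every candidate path at once in a test tube."""
    for order in permutations(range(n)):
        if order[0] == start and order[-1] == end and all(
            (a, b) in edges for a, b in zip(order, order[1:])
        ):
            return order
    return None

print(hamiltonian_path(0, 3))  # (0, 1, 2, 3)
```

On silicon this search is serial and explodes combinatorially as the graph grows; the appeal of DNA is that every candidate strand forms and reacts simultaneously.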
It makes a lot of sense: After all, the body uses all those As, Ts, Cs and Gs to store information the same way a hard drive uses 1s and 0s. Why shouldn't we - especially as the material is more widely available than silicon, and the process of making it is far more environmentally friendly?
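To make the storage analogy concrete, here is a minimal sketch, assuming the common two-bits-per-base convention (four symbols means each base can carry two bits):

```python
# Map each of the four DNA bases to a two-bit value.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def encode(bits):
    """Pack an even-length bit string into a DNA strand."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand):
    """Read a strand back out as bits."""
    return "".join(BASE_TO_BITS[base] for base in strand)

print(encode("01101100"))  # CGTA
print(decode("CGTA"))      # 01101100
```

Eight bits become a four-letter strand, and the strand reads back to the same bits - the same round trip a hard drive performs with magnetic 1s and 0s.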
Right now, the problem is speed: It takes days to work through a problem using DNA logic gates. But the more we understand about the stuff, the faster it will get.
By the time Moore's Law runs into its brick wall - that is, when transistors can no longer be shrunk to a smaller size on silicon, which should be around 2020 - DNA could be ready to step in.
In theory, a DNA computer the size of a cubic centimeter could perform ten trillion calculations in parallel - rather than serially, like that clunky room-sized MDGrape-3.