Michael is editor-at-large for DDJ. He can be contacted at [email protected].
Out here on the paradigmatic frontier, the termites of technology are hungrily chewing up the wood pulp of science fiction and spitting out new reality. Fabricated from spit and sand, strange structures reach aggressively toward the sky.
Whoosh. Pardon the purple prose. Personifying (insectifying?) technology is a colorful, metaphorical way of speaking, though, and metaphor is sometimes necessary to get at literal truth. ("Man's reach must exceed his grasp, or what's a meta for?")
And some segments of the boundary between metaphor and literal truth are antialiased. The Technological Imperative is no more fanciful a notion than the Gaia Myth or Richard Dawkins's Selfish Gene Theory. Before I get to the advances in quantum computing, reversible computing, and faster-than-light computing that I'll talk about this month, I'd like to think aloud for a few paragraphs on the Technological Imperative. If you'll indulge me? Thanks.
The Technological Imperative holds that when it's time to railroad, railroading happens. Obviously, neither railroading nor any other technological advance just "happens." Bright, motivated individuals have to see connections, draw conclusions, build artifacts, secure funding, develop markets, shepherd companies, run assembly lines, fill orders. None of it just "happens." But almost as obviously, when the pieces are on the table, somebody will put them together. It feels right, somehow, to say that technology will find the bright people and nudge them to put the pieces in place.
If so, maybe it nudges them through their human compulsion to answer questions. New technological advances provide as many questions as they do answers. The invention of the transistor asked some tantalizing questions, and the answer to one of them was the integrated circuit. The IC in turn asked compelling questions, and a cycle of questions and answers led to large-scale integration and to VLSI and to today's ongoing Moore's Law advances in semiconductor technology. Along the way, there were genuine breakthroughs that required hard work and that were anything but obvious. None of it "just happened." But while the answers were not obvious, the questions often were. I suggest that technology achieves its ends by dangling irresistible questions in front of engineers.
This idea of technology using people to achieve its ends sounds deterministic, as though technology knows where it's going and will inevitably get there. You might argue that the inevitability of technological advance is only apparent in retrospect, and I'd probably agree. I'm irrationally fond of the idea of free will.
But here's a different metaphor. (I seem to have lost control of the previous one, now that the termites have started dangling questions, a difficult image.)
Picture a mountain stream at its source. "Sources" is more like it: Random trickles here and there, their paths subject to every fallen leaf or dislodged pebble, but picking up momentum as they combine and trickle downhill. By the time the river is running through the valley below, that momentum has resolved all the upstream ambiguities and the river is flowing rapidly within precise banks that the river itself has cut into the earth. Factor in flooding and the metaphor gets more interesting, but I want to focus on the way the momentum of the river turns the unpredictable trickling of water high on the mountain into an inexorable, directed force downstream. Standing on top of the mountain, you'd have a hard time predicting the flow of the river. But standing downstream, you'd be a fool to deny the current.
We're riding in that current today, and we may be coming into white water.
Real Quantum Computing
You can already purchase working devices that do things I was describing in this column, just a year ago, as science-fictional computing scenarios. When I say "you can purchase," I am making some probably unwarranted assumptions about your budget, but at least the devices exist and can be purchased. Specifically, these are clever boxes that implement quantum information processing.
Quantum information processing, in theory, can blow past the theoretical limitations of conventional computing devices by packing computations into superpositions of states of quantum particles. Peter Shor wrote one of the first quantum algorithms, for factoring large numbers vastly faster than can be done by conventional methods, and if (or when) somebody builds a box that can run Shor's algorithm, it will be bad news for many existing encryption schemes. It is only sensible to assume that any information currently encrypted by classical encryption techniques is being stored by bad guys now and will be readable by quantum crypto devices in the future.
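To make the threat concrete, here's a toy sketch in Python. This is my illustration, not Shor's algorithm itself (which needs quantum hardware to run), and the numbers are comically small; the point is simply that once you can factor an RSA-style modulus, the private key falls right out.

# Toy illustration of why fast factoring breaks RSA-style encryption.
# These numbers are absurdly small; real moduli run to hundreds of digits,
# and factoring them classically is intractable -- that intractability is
# the entire security argument that Shor's algorithm undermines.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    # Modular inverse of a mod m (exists when gcd(a, m) == 1).
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q = 61, 53           # the secret prime factors -- what a factoring box recovers
n = p * q               # 3233: the published modulus
e = 17                  # the published encryption exponent
phi = (p - 1) * (q - 1)
d = modinv(e, phi)      # 2753: the private exponent, trivial once p and q are known

message = 42
cipher = pow(message, e, n)    # anyone can encrypt with the public key
plain = pow(cipher, d, n)      # the factorer decrypts at will
print(cipher, plain)           # -> 2557 42

Shor's algorithm attacks the one hard step, recovering p and q, in polynomial time on quantum hardware; everything after that is ordinary arithmetic.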
That only applies to classically encrypted information; quantum crypto is another story. Developing actual deployable quantum cryptographic systems puts the advantage back in the hands of those who want to keep things secret rather than those who want to break in. So far, none of this has happened, and quantum information processing has remained an area of research rather than a market with commercial products that you can buy and use.
Till now. Enter Magiq Technologies (http://www.magiqtech.com/).
This New York company is actually taking orders for two products based on quantum information processing. There's its Navajo Secure Gateway, priced between $50,000 and $100,000. Navajo implements a fiberoptic virtual private network on 1-Gb Ethernet using a mixture of quantum cryptography and traditional cryptography. It implements the triple-DES and AES encryption standards. Magiq claims it's completely secure against all eavesdroppers. The box weighs 40 pounds and fits in a standard 19-inch rack. For a mere $40,000 or $50,000, you can buy its Qbox, a development system that uses quantum information processing to distribute a key; you'll have to develop the encryption system yourself.
The quantum component of these systems is called the Keyminder; it generates encryption keys using a truly random quantum process rather than an algorithmic random-number generator. Using such an uncrackably secure key, the system then performs conventional encryption/decryption over a standard fiberoptic channel.
Magiq says that its boxes detect any attempt to breach security, because any attempt to read a transmission changes it detectably. And the company says that the systems can refresh keys at a rate of 100 per second, solving the stale-key problem.
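Magiq doesn't publish the Keyminder's internals, so what follows is emphatically not its protocol. It's a bare-bones Python simulation of BB84, the textbook quantum key-distribution scheme, but it shows the principle behind that tamper-evidence claim: an eavesdropper has to measure photons to read them, a measurement in the wrong basis scrambles the bit, and the scrambling shows up as a telltale error rate when sender and receiver compare notes.

# Minimal classical simulation of BB84-style quantum key distribution.
# Reading a qubit in the wrong basis randomizes it, so eavesdropping
# shows up as errors when the two ends compare a sample of their bits.

import random

N = 4000  # photons sent

def measure(bit, prep_basis, meas_basis):
    # Matching bases: the bit survives intact. Mismatched bases:
    # quantum mechanics gives a 50/50 coin flip.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(eavesdropper):
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]
    bob_bases   = [random.choice("+x") for _ in range(N)]

    bob_bits = []
    for bit, basis, bob_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdropper:
            # Eve measures in a guessed basis and re-sends; a wrong
            # guess re-prepares the photon in her basis, corrupting it.
            eve_basis = random.choice("+x")
            bit, basis = measure(bit, basis, eve_basis), eve_basis
        bob_bits.append(measure(bit, basis, bob_basis))

    # Keep only positions where the two ends' bases happened to match
    # (announced afterward over an ordinary public channel).
    kept = [i for i in range(N) if alice_bases[i] == bob_bases[i]]
    return sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)

print("quiet channel error rate:", error_rate(False))   # -> 0.0
print("with an eavesdropper:    ", error_rate(True))    # -> about 0.25

In the simulation, the quiet channel shows no errors at all, while the intercepted one converges on the theoretical 25 percent.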
My impression is that they aren't selling these boxes to corporate America. The technology is pretty raw, I think, and the requirement of a dedicated point-to-point optical link with a physical limit of 150 km is another obstacle. Price would keep the boxes out of some markets, but for certain applications, $100K is no barrier.
Amazingly, Magiq is not alone in bringing quantum information processing products to market. Three Swiss companies are partnering to deliver what they call the first-ever integrated Quantum Key Infrastructure. Magiq might contest that claim, but the WISeKey-id Quantique Quantum Key Infrastructure (QKI) system was demonstrated in October of last year, and the underlying technology was used in an impressive demo earlier that year. The company id Quantique (http://www.idquantique.com/) is following the same dedicated fiberoptic path as Magiq. But it isn't taking orders yet, or wasn't when I wrote this. Check the web site for the current situation.
Currently, a big limitation on quantum crypto is the distance over which the key can be sent. In theory, you could get past the distance limit by setting up repeaters, but quantum information is funny stuff, and apparently it's not so easy to boost a quantum signal's strength without in some manner reading it, which is a no-no. Researchers at Los Alamos are looking at using satellites rather than fiberoptic cable to send quantum info, possibly extending the distance limit that way. Moving beyond dedicated channels is another challenge that some are looking into. IBM, Mitsubishi, NEC, and Toyota are among the other companies doing research in quantum information processing, but none of them is close to releasing a product.
It looks like we'll have to do our quantum computer shopping with Magiq or maybe id Quantique for the immediate future.
Real Reversible Computing
Reversible computing is a notion that could be the dictionary example for the word counterintuitive. Reversible computations on reversible-computational hardware can reduce heat generation, the biggest problem facing semiconductor designers today. This could allow portable devices to run faster and supercomputers to be even more powerful than with conventional computing. Reversible computing does this by conserving information. Conventional microprocessors performing conventional computations discard information, and in doing so inevitably generate heat. Reversible computing, implemented at both the processor and algorithm levels, preserves information by using only computational steps that can be easily reversed.
It's not as simple as running the algorithm forward, copying the output, and then running it backward, but it's something like that. Now, if the idea that discarding information dissipates heat doesn't bother you, and if you are unfazed by the idea of snatching answers off the bottom of the arc of an algorithmic pendulum swing, maybe reversible computing doesn't seem counterintuitive to you. To me, it seems a little crazy. Maybe not quantum crazy, but crazy.
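If you want to hold the idea in your hands, here's a minimal Python sketch, mine and not Frank's, built on the Toffoli (controlled-controlled-NOT) gate, a standard universal reversible gate. Because every step is a bijection on the machine's state, nothing is ever erased, and the whole computation can be run backward to recover the inputs.

# Ordinary AND maps two bits to one, destroying information (you can't
# tell (0,0), (0,1), and (1,0) apart from the output). The Toffoli gate
# computes AND reversibly by carrying its inputs along -- and it is its
# own inverse, so applying it twice restores the original state.

def toffoli(a, b, c):
    # Flip c if and only if both control bits a and b are 1.
    return a, b, c ^ (a & b)

def reversible_and(a, b):
    # With the target bit started at 0, the third output is a AND b.
    return toffoli(a, b, 0)

for a in (0, 1):
    for b in (0, 1):
        state = reversible_and(a, b)
        undone = toffoli(*state)        # run the computation backward
        assert undone == (a, b, 0)      # the inputs come back intact
        print(a, "AND", b, "=", state[2])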
Reversible computing is in fact closely associated with quantum computing, but you can have reversible computing without getting into the quantum craziness. You can literally have it, or at any rate you can get the design and fab some chips, right now. Check the web site (http://www.cise.ufl.edu/research/revcomp/) of University of Florida professor Michael Frank, who has been involved in the design of four chips that implement reversible computing, including the Pendulum (cute name) processor.
Pendulum is the first fully adiabatic CPU, an architecturally reversible processor implemented in Split-level Charge Recovery Logic (SCRL). I'm not sure about the split-level business, but Charge Recovery Logic, according to Frank, is based on standard CMOS logic and reduces power consumption by following two rules: "[F]irst, devices must not be switched 'on' when a potential exists across them, and second, the energy transfer through the device occurs in a controlled and gradual manner."
In Pendulum, inductors are used to control the energy transfer, and the business of assuring that no potential exists across a device is accomplished by requiring that all computations be reversible. According to Frank, "[t]his logic family's power consumption drops with the square of frequency (to first order) rather than linearly as with conventional CMOS."
I'm not sure I understand that, but in a more chatty vein, Frank describes what Pendulum does as adding oscillators or mechanical springs to chip circuitry to bounce energy back and forth, recapturing some of the heat that is normally dissipated on every clock cycle.
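For what it's worth, here's my own back-of-the-envelope reading of where that square law comes from, sketched numerically in Python with made-up (but not unreasonable) component values. Conventional CMOS burns a fixed energy per switch, so power scales with frequency; adiabatic charging spread over a whole clock period dissipates less the slower you go, and power ends up scaling with frequency squared.

# Back-of-the-envelope scaling comparison; illustrative values only.
#   conventional CMOS:   E = C * V**2 / 2 per switch (independent of speed)
#   adiabatic charging:  E ~ (R*C / T) * C * V**2, T = time allowed to charge
# With T ~ 1/f, conventional power grows like f while adiabatic power
# grows like f**2 -- equivalently, it falls quadratically as you slow down.

C = 1e-15   # gate capacitance: 1 femtofarad
V = 1.0     # supply voltage: 1 volt
R = 1e3     # charging-path resistance: 1 kilohm (so R*C << T below)

def conventional_power(f):
    return 0.5 * C * V**2 * f

def adiabatic_power(f):
    T = 1.0 / f
    return (R * C / T) * C * V**2 * f

for f in (1e6, 1e7, 1e8, 1e9):
    print("%8.0e Hz   conventional %.1e W   adiabatic %.1e W"
          % (f, conventional_power(f), adiabatic_power(f)))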
With reversible computing, it could be possible to build chips up in the third dimension, producing little cubes of processing power and giving Moore's Law some more life. But the world of reversible computing will be different from the environment programmers work in today. Frank says that for reversible computing to work, it needs to be implemented at all levels. That means reversible language compilers and reversible algorithms as well as reversible chips and computers.
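At the algorithm level, there's a standard construction, a general trick rather than anything specific to Frank's work, that shows how ordinary code can be recast reversibly: the update (x, y) -> (x, y + f(x)) is invertible for any f whatsoever, because the inverse just subtracts f(x) again.

# Any function f, even a non-invertible one, can be embedded in a
# reversible update: compute into a separate register instead of
# overwriting, and the step can always be undone.

def forward(x, y, f):
    return x, y + f(x)

def backward(x, y, f):
    return x, y - f(x)

def f(x):
    return (x * x) % 1000   # arbitrary and decidedly non-invertible

x, y = 123, 0
x2, y2 = forward(x, y, f)              # y2 holds f(x); x rides along
assert backward(x2, y2, f) == (x, y)   # every step can be unwound
print(x2, y2)                          # -> 123 129

Roughly speaking, reversible languages and compilers are machinery for applying that discipline everywhere.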
Frank sees reversible computing as the only solution to the heat problem in semiconductor design. "In the long run," he says, "this is the only thing we can do."
Real Faster-than-Light Light
Penultimately, there's news that faster-than-light-speed technology can be used to speed up computers.
Okay, I know I lost a lot of you there. The idea is less radical than it sounds and doesn't actually break any laws of science, but it really is FTL, and it really could be used to speed up information transmission in semiconductors. Maybe.
At least two labs have recently demonstrated this FTL effect, which was predicted as long ago as 1993 by R.Y. Chiao of the University of California at Berkeley.
In one study (http://www.neci.nj.nec.com/homepages/lwan/faq.htm), researchers at the NEC Research Institute in Princeton demonstrated a light pulse that appeared to pass through a cell filled with cesium gas at a speed in excess of Einstein's speed limit, c. Because a pulse can be deconstructed into component sine waves that add and cancel out, shifting the phases of those components can make the pulse's peak appear farther forward in the wave. So far forward, in Dr. Lijun Wang's experiment, that the pulse exits the far side of the cell "sooner than if it had traveled through the same thickness in vacuum by a time difference that is 310 folds [sic] of the vacuum transit time."
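Here's a toy numerical version of that pulse-reshaping story in Python, with made-up numbers that bear no relation to the cesium-cell parameters. Give every component sine wave a phase shift proportional to its frequency (a group delay) and the summed envelope slides in time; make the delay negative and the peak emerges early, even though no individual component wave does anything unlawful.

# Toy pulse-advance demo: a pulse is a sum of sine waves, and a
# frequency-proportional phase shift (a group delay) slides its envelope
# in time. A negative delay makes the peak appear *early* -- yet every
# component wave is perfectly well behaved.

import numpy as np

t = np.linspace(-50, 50, 4001)        # time axis, arbitrary units
freqs = np.linspace(0.9, 1.1, 201)    # narrow band of component frequencies

def pulse(group_delay):
    out = np.zeros_like(t)
    for f in freqs:
        amp = np.exp(-((f - 1.0) / 0.03) ** 2)   # Gaussian spectrum
        out += amp * np.cos(2 * np.pi * f * (t - group_delay))
    return out

vacuum = pulse(0.0)
advanced = pulse(-10.0)   # negative group delay: the "superluminal" case

i0 = np.argmax(np.abs(vacuum))
i1 = np.argmax(np.abs(advanced))
print("peak with no delay at t       = %.2f" % t[i0])   # -> 0.00
print("peak with negative delay at t = %.2f" % t[i1])   # -> -10.00

The peak shows up ten time units early, but only because phase shifts reshaped an envelope whose leading edge was already there.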
At no time during the experiment do the experimenter's hands leave his wrists, nor does any mass travel faster than the speed of light in a vacuum. Nor is any information communicated faster than lightspeed. An amplitude pulse, which is neither mass nor information and thus has no content and no physical significance, can travel faster than light, and that's what happens here. But how could this be of any use?
A second experiment brings the effect a little closer to home (see http://www.newscientist.com/news/news.jsp?id=ns99992796). Researchers Jeremy Munday and Bill Robertson at Middle Tennessee State University have sent an electrical signal 120 meters down a wire at something like four times lightspeed using basement lab equipment. But I'm not sure it's correct to call it a signal, since what was sent (if "sent" is the word) can't carry information. It's the same deal: A pulse is squeezed forward in a composite wave. In this case, however, the pulse was an electrical one rather than a light pulse, it traveled a significant distance, and the experimenters used equipment that can be found in many college science labs. If you went out and bought the necessary equipment just to do this experiment in your basement, it would set you back about $500.
So how could the technique used in these experiments speed up computers if there is no (and if there can be no) faster-than-light transmission of information?
Well, signals in microprocessors travel significantly slower than c, and both experimenters hope that the technique can be used to give a purely slower-than-light speedup to the information pulsing through computer chips. How can it do this, specifically? Ah, neither experiment is very clear about that. But since no violation of physical law is involved, they could be onto something. Or not.
Green Goo
I'll leave you with some thoughts from reader Joseph Meeker on the Gray Goo menace, the idea that self-reproducing nanotech artifacts could multiply like biological organisms and wipe out life on earth, or just wipe out us, or just make life extremely interesting:
I read your column in DDJ (December) with interest. While I read of the gray-goo-mania, it occurred to me that the goo may be green, having been assembled from pseudochlorophyll molecules with just the right admixture of NAD to provide transport to the hydrogen reducing sites in the minicells that I am sure engineers are covetous of producing. Chloroplasts are crystalloid in their inner structure and I think it won't be long before some enterprising lab succeeds in a self-assembly routine. Use: Fighting carbon dioxide bursts leading to global warming.
This leads to dispersal, somehow. In 1831 (I think), Charles Darwin, aboard the Beagle, collected packets of dust [that] was so prevalent as to sting the eyes. Examination showed dozens of infusoria to be the contents. The Beagle was near the Cape Verde Islands, and you might think the infusoria were from Africa. No. Most were from the New World. Or you might think the infusoria were from the sea. No. Most were freshwater species.
So we are on the road to tiny machines, which some conceive as little pumps or pinchers or what-have-you. It is no wonder you find it hard to conceive of the little devils as self-replicating. The enormous energy and organization it takes to make the present generation argue against it. But once it is profitable (ecologically as well as economically), methodologies will arise. There may well be whole congeries of nanos acting as catalysts or nanophagists.
And what will they chew up, and what new reality will they spit out, I wonder?
DDJ