John Bell expressed this clearly, in a paper first published in 1981.[20] After commenting that “those of us who are inspired by Einstein” would be happy to discover that quantum mechanics might be wrong, and that “perhaps Nature is not as queer as quantum mechanics,” he went on:
But the experimental situation is not very encouraging from this point of view. It is true that practical experiments fall far short of the ideal, because of counter inefficiencies, or analyzer inefficiencies, [or other practical difficulties]. Although there is an escape route there, it is hard for me to believe that quantum mechanics works so nicely for inefficient practical set-ups and yet is going to fail badly when sufficient refinements are made. Of more importance, in my opinion, is the complete absence of the vital *time* factor in existing experiments. The analyzers are not rotated during the flight of the particles. Even if one is obliged to admit some long range influence, it need not travel faster than light, and so would be much less indigestible. For me, then, it is of capital importance that Aspect is engaged in an experiment in which the time factor is introduced.
That experiment bore fruit soon after Bell highlighted its significance. But it had been a long time in the making.
Alain Aspect was born in 1947, which makes him the first significant person in this book to be younger than me (by just a year). He was brought up in the south-west of France, near Bordeaux, and had a childhood interest in physics, astronomy and science fiction. After completing high school, he studied at the École Normale Supérieure de Cachan, near Paris, and went on to the University of Orsay, completing his first postgraduate degree, roughly equivalent to an MPhil in the English-speaking world and sometimes known in France as the “little doctorate,” in 1971. Aspect then spent three years doing national service, working as a teacher in the former French colony of Cameroon. This gave him plenty of time to read and think, and most of his reading and thinking concerned quantum physics. The courses he had taken as a student in France had covered quantum mechanics from a mathematical perspective, concentrating on the equations rather than the fundamental physics, and scarcely discussing the conceptual foundations at all. But it was the physics that fascinated Aspect, and it was while in Cameroon that he read the EPR paper and realized that it contained a fundamental insight into the nature of the world. This highlights Aspect's approach: he always went back to the sources wherever possible, studying Schrödinger's, or Einstein's, or Bohm's original publications, not second-hand interpretations of what they had said. But it was only when he returned to
France, late in 1974, that he read Bell's paper on the implications of the EPR idea; it was, he has said, “love at first sight.”[21] Eager to make a contribution, and disappointed to find that Clauser had already carried out a test of Bell's theorem, he resolved to tackle the locality loophole as the topic for his “big doctorate.”
Under the French system at the time, this could be a large, long-term project provided he could find a supervisor and a base from which to work. Christian Imbert and the Institute of Physics at the University of Paris-South, located at Orsay, agreed to take him on, and as a first step he visited Bell in Geneva early in 1975 to discuss the idea. Bell was enthusiastic, but warned Aspect that it would be a long job, and if things went wrong his career could be blighted. In fact, it took four years to obtain funding and build the experiment and two more years to start to get meaningful results, and Aspect did not receive his big doctorate (*doctorat d'état*) until 1983. But it was worth it.
Such an epic achievement could not be attained alone, and Aspect led a team that included Philippe Grangier, Gérard Roger and Jean Dalibard. The key improvement over earlier tests of Bell's theorem was to find, and apply, a technique for switching the polarizing filters while the photons were in flight, so that there was no way relevant information could be conveyed between A and B at less than light speed. To do this, they didn't actually rotate the filters while the photons were flying through the apparatus, but switched rapidly between two different polarizers oriented at different angles, using an ingenious acousto-optical liquid mirror.
In this apparatus, the photons set out on their way towards the polarizing filters in the usual way, but part of the way along their journey they encounter the liquid mirror. This is simply a tank of water, into which two beams of ultrasonic sound waves can be propagated. If the sound is turned off, the photons go straight through the water and arrive at a polarizing filter set at a certain angle. If the sound is turned on, the two acoustic beams interact to create a standing wave in the water, which deflects the photons towards a second polarizing filter set at a different angle. On the other side of the experiment, the second beam of photons is subjected to similar switching, and both beams are monitored; the polarization of large numbers of photons is automatically compared with the settings of the polarizers on the other side. It is relatively simple to envisage such an experiment, but immensely difficult to put it into practice, matching up the beams and polarizers, and recording all the data automatically, which is why the first results were not obtained until 1981, and more meaningful data not until 1982. But what matters is that the acoustic switching (carried out automatically, of course) occurred every 10 nanoseconds (1 ns is one billionth of a second), and it occurred *after* the photons had left their source. The time taken for light to get from one side of the experiment to the other (a distance of nearly 13 meters) was 40 ns. There is no way that a message could travel from A to B quickly enough to “tell” the photons on one side of the apparatus what was happening to their partners on the other side of the apparatus, unless that information traveled faster than light. Aspect and his colleagues discovered that even under these conditions Bell's inequality is violated. Local realism is not a good description of how the Universe works.
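The arithmetic behind that timing argument is simple enough to check for yourself; here is a minimal back-of-envelope sketch, using the 13-meter separation quoted above:

```python
c = 2.998e8        # speed of light, m/s
distance = 13.0    # separation of the two sides of the apparatus, m (as quoted above)

transit_ns = distance / c * 1e9
print(f"light-speed transit time: {transit_ns:.0f} ns")  # ~43 ns

# The polarizer settings were switched roughly every 10 ns, several times
# faster than any light-speed signal could cross the apparatus, so neither
# side can "learn" the other's setting by any subluminal influence.
switching_ns = 10.0
print(transit_ns > switching_ns)  # True
```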
Partly because there had been a groundswell of interest in Bell's work since the first pioneering Clauser experiment, and
partly because of the way it closed the “speed of light” loophole, Aspect's experiment made a much bigger splash than the first-generation experiments, and 1982 is regarded as the landmark year (almost “year zero” as far as modern quantum theory is concerned) in which everything changed for quantum mechanics. One aspect of this change was to stimulate research along lines that led towards quantum computers. Another result of Aspect's work was that many other researchers developed ever more sophisticated experiments to test Bell's theorem ever more stringently; so far, it has passed every test. But although experiments like the ones carried out by Aspect and his colleagues are couched in terms of “faster than light signaling,” it is best not to think in terms of a message passing from A to B at all. What Bell's theorem tells us, and these experiments imply, is that there is an *instantaneous* connection between two (or more) quantum entities once they have interacted. The connection is not affected by distance (unlike, say, the force of gravity or the apparent brightness of a star); and it is specific to the entities that have interacted (only the photons in Aspect's experiment are correlated with one another; the rest of the photons in the Universe are not affected). The correlated “particles” are in a real sense different aspects of a single entity, even if they appear from a human perspective to be far apart. That is what entanglement really means. It is what non-locality is all about.
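For readers who like to see the numbers, here is a sketch (not Aspect's actual data analysis) of how quantum mechanics violates Bell's inequality in its standard CHSH form; the correlation function is the textbook quantum prediction for polarization-entangled photon pairs:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation between detections at polarizer
    # angles a and b (radians), for polarization-entangled photon pairs.
    return math.cos(2 * (a - b))

deg = math.radians
a1, a2 = deg(0), deg(45)        # the two settings on one side
b1, b2 = deg(22.5), deg(67.5)   # the two settings on the other side

# Any local-realist theory obeys |S| <= 2; quantum mechanics predicts 2*sqrt(2).
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.3f}")  # 2.828 > 2: Bell's inequality is violated
```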
It's worth reiterating that this result, non-locality, is a feature of the Universe, irrespective of what kind of theory of physics is used to describe the Universe. Bell, remember, devised his theorem to test quantum mechanics, in the hope of proving that quantum mechanics was not a good description of reality. Clauser, Aspect and others have shown that quantum mechanics *is* a good description of reality; but, far more important than that, they have shown that this is true because the Universe does not conform to local reality. Quantum physics is a good description of the Universe partly because quantum mechanics also does not conform to local reality. But *no* theory of physics that is a good description of the Universe can conform to local reality.
This is clearly Nobel Prize-worthy stuff. Unfortunately, John Bell did not live long enough to receive the prize. On September 30, 1990, just a few days after receiving a clean bill of health at a regular checkup, he died unexpectedly of a cerebral hemorrhage. He was just sixty-two. Unbeknown to Bell, he had, in fact, been nominated for the Nobel Prize in physics that year, and although it usually takes several years of nominations for a name to rise to the top of the list, there is no doubt he would have got one sooner rather than later. The surprise is that Clauser and Aspect have not yet been jointly honored in this way.
As is so often the case in quantum physics, there are several different ways of understanding how we get from Bell's theorem and entanglement to quantum computation. One way of getting an insight into what is going on (the one I prefer) is to invoke what is called the “Many Worlds” (or “Many Universes”) Interpretation of quantum mechanics: in everyday language, the idea of parallel universes. Although he was not attracted by the idea, Bell admitted to Paul Davies that:
There is some merit in the many-universes interpretation, in tackling the problem of how something can apparently happen far away sooner than it could without faster-than-light signaling. If, in a sense, everything happens, all choices are realized (somewhere among all the parallel universes), and no selection is made between the possible results of the experiment until later (which is what one version of the many-universes hypothesis implies), then we get over this difficulty.[22]
Bizarre though it may seem, this is exactly the view espoused by the man who kick-started the modern study of quantum computation in the 1980s, David Deutsch. But that story belongs in Part Three of this book.
When someone such as Richard Feynman says that the Universe is digital, it is the same as saying that it is “quantized,” as in quantum physics. Binary digits (bits) are quantized. They can either be 0 or 1, but they cannot be anything in between. In the quantum world, everything is digitized. For example, entities such as electrons have a property known as spin. This name is an unfortunate historical accident, and is essentially just a label; you should not think of the electron as spinning like a top. An electron can have spin ½ or spin −½ but it cannot have any other value. Electrons are part of a family of what we used to think of as particles, all of which have half-integer spin: ½ and so on. These are known as fermions. The other family of particles that make up the everyday world, known as bosons, all have integer spin: 1, 2 and so on. But there are no in-between values. A photon, a “particle of light,” is a boson with spin 1. This kind of quantization, or digitization, applies to everything that has ever been measured in the quantum world. So it is a natural step for quantum physicists to suppose that at some tiny scale, beyond anything that can yet be measured, space and time themselves are quantized.
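A one-line calculation makes the all-or-nothing character of spin concrete. Here is a minimal sketch, using the standard spin operator for a spin-½ particle: a measurement can only ever return one of its eigenvalues, ½ or −½.

```python
import numpy as np

# Spin operator S_z for a spin-1/2 particle, in units of hbar.
# The only values a measurement can return are its eigenvalues.
S_z = 0.5 * np.array([[1.0, 0.0],
                      [0.0, -1.0]])

print(np.linalg.eigvalsh(S_z))  # [-0.5  0.5] -- and nothing in between
```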
The scale on which the quantization, or graininess, of space would become noticeable is known as the Planck length, in honor of Max Planck, the German physicist who, at the end of the nineteenth century, made the breakthrough which led to the realization that the behavior of light could be explained in terms of photons. The size of the Planck length is worked out from the relative sizes of the constant of gravity, the speed of light, and a number known as Planck's constant, which appears at the heart of quantum mechanics; for example, the energy of a photon corresponding to a certain frequency (or color) of light is equal to that frequency multiplied by Planck's constant. The Planck length is 0.000000000000000000000000000000001 cm, or 10⁻³³ cm in mathematical notation. A single proton is roughly 10²⁰ Planck lengths across,[1] and it is no surprise that the effects of this graininess do not show up even in our most subtle experiments.
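For the record, the standard combination of those three constants looks like this (here ħ is the reduced form of Planck's constant, G the constant of gravity and c the speed of light; the proton comparison assumes a proton diameter of roughly 10⁻¹³ cm):

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-33}\ \text{cm},
\qquad
\frac{d_{\mathrm{proton}}}{\ell_P} \approx \frac{10^{-13}\ \text{cm}}{10^{-33}\ \text{cm}} = 10^{20}.
```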
The smallest possible interval of time (the quantum of time) is simply the time it would take light to cross the Planck length, and is equal to 10⁻⁴³ seconds. One intriguing consequence of this is that as there could not be any shorter time, or smaller time interval, then within the framework of the laws of physics as understood today we have to say that the Universe came into existence (was “born,” if you like) with an age of 10⁻⁴³ seconds. This has profound implications for cosmology, but this is not the place to go into them.
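Explicitly, dividing the Planck length by the speed of light gives:

```latex
t_P = \frac{\ell_P}{c} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \text{s} \sim 10^{-43}\ \text{s}.
```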
It also has profound implications for universal quantum simulators. The important point, which Feynman emphasized in his 1981 MIT lecture, is that if space itself is a kind of lattice and time jumps discontinuously, then everything that is going on in a certain volume of space for a certain amount of time can be described in terms of a finite number of quantities: a finite number of digits. That number might be enormous, but it is not infinite, and that is all that matters. *Everything* that happens in a finite volume of space and time could be exactly simulated with a finite number of logical operations in a quantum computer. The situation would be very similar to the way physicists analyze the behavior of crystal lattices, and Feynman was able to show that the behavior of bosons is amenable to this kind of analysis, although in 1981 he was not able to prove that all quantum interactions could be imitated by a simulator. His work was, however, extended at MIT in the 1990s by Seth Lloyd, who proved that quantum computers can in principle simulate the behavior of more general quantum systems.
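To get a feel for “enormous but not infinite,” here is a purely illustrative count, using the Planck length and Planck time quoted above, of the number of Planck-scale cells in a single cubic centimeter followed for one second:

```python
l_p = 1.6e-33   # Planck length, cm
t_p = 5.4e-44   # Planck time, s

space_cells = (1.0 / l_p) ** 3   # Planck volumes in one cubic centimeter
time_steps = 1.0 / t_p           # Planck times in one second

print(f"{space_cells * time_steps:.1e} cells")  # ~4.5e141 -- vast, but finite
```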
There's another way of thinking about the digitization of the world. Many accounts of the quantum world imply, or state specifically, that the “wave” and “particle” versions are of equal status. I've said so myself. But are they? It is a fundamental feature of a wave that it is continuous; it is a fundamental feature of a particle that it is not continuous. A wave, like the ripples spreading out from a pebble dropped in a still pond, can spread out farther and farther, getting weaker and weaker all the time until, far away from the place where the pebble was dropped, the ripples can no longer be detected at all. But either a particle is there or it isn't.
Light is often regarded as a wave, the ripples in something called an electromagnetic field. But those ripples, if they exist, do not behave like ripples in a pond. The most distant objects we can detect in the Universe are more than 10 billion light years away, and light from them has been traveling for more than 10 billion years on its way to us. It is astonishing that we can detect it at all. But what is it that we actually detect? *Not* a very faint ripple of a wave. Astronomers actually detect individual photons arriving at their instruments, sometimes literally one at a time. As Feynman put it, “you put a counter out there and you find ‘clunk,’ and nothing happens for a while, ‘clunk,’ and nothing happens for a while.”[2]
Each “clunk” is a photon. Images of faint objects can be built up over many hours by combining these photons to make a picture; in one outstanding example, an image known as the Hubble Ultra-Deep Field was built up using photons gathered over nearly a million seconds (277 hours) of observation time on the Hubble Space Telescope. The fainter the object, the fewer photons come from it each second, or each million seconds; but each photon is the same as an equivalent photon from a bright object. Red photons always have a certain energy, blue photons a different energy, and so on. But you never see half, or any other fraction, of a photon; it's all or nothing. Which is why it is possible to simulate the Universe using a digital computer (provided it is a quantum computer). “You don't find a tiny field, you don't have to imitate such a tiny field, because the world that you're trying to imitate, the physical world, is not the classical world, and it behaves differently,” said Feynman. “All these things suggest that it's really true, somehow, that the physical world is representable in a discretized way.” This is the most important insight to carry forward into our discussion of quantum computers. Quantum computers actually provide a *better representation* of reality than is provided by our everyday experiences and “common sense.”
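As a final illustration of that all-or-nothing digitization, the energy of a single photon follows directly from Planck's relation quoted earlier (energy equals frequency multiplied by Planck's constant); a quick sketch, assuming representative wavelengths of 700 nm for red light and 400 nm for blue:

```python
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

for color, wavelength_m in [("red", 700e-9), ("blue", 400e-9)]:
    energy = h * c / wavelength_m   # E = h * frequency = h * c / wavelength
    print(f"every {color} photon: {energy / eV:.2f} eV")

# every red photon: 1.77 eV
# every blue photon: 3.10 eV
# -- always that much, never half a photon's worth.
```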