
One of the questions that computer scientists working at the more esoteric end of their specialty asked in the 1950s and 1960s was whether it was possible, in principle, to build a computer that could simulate precisely (not just approximately; no matter how good an approximation is, it is never perfect) the workings of “classical” physics—the physics of colliding billiard balls and orbital motion of planets described so beautifully by Newton's laws. This brought them up against one of the most intriguing features of the world. Newton's laws are reversible. If we ignore things like friction, a collision between two individual pool balls, for example, looks the same going forwards in time or backwards in time. If we made a movie of the collision, and cut out the player making the shot, it would make as much sense backwards as it does forwards. But although each individual collision is reversible, if we made a similar movie showing the break in a game of pool, it would be obvious which was the future and which the past. The future is the situation where there is more disorder. This is a simple example of one of the most fundamental laws in science, the second law of thermodynamics. This says that, left to its own devices, the amount of disorder in a system (measured by a quantity called entropy) always increases, in spite of the reversibility of Newton's laws. The relevance of these ideas to computation comes about because entropy can be measured in terms of information. Another very simple example helps. If we have a glass containing water and some ice cubes, it takes more information to describe than if we have a glass that contains just water, even if it is the same glass but the ice has now melted—and once again, the direction of the “arrow of time” is clear. The greater the entropy of a system, the more disordered it is, and the less information it contains.

The relationship between information (more specifically, information transmission) and thermodynamics was put on a mathematical basis by Claude Shannon, working at the Bell Telephone Laboratories in the 1940s. As this affiliation suggests, the idea was developed in the context of telephone communications, rather than computation, but it was soon taken up by the computer scientists. In this context, information is always a measure of the decrease of uncertainty provided by a message or a computation, and the direct motivation for Shannon's work was to find out how many calls could be sent simultaneously down a telephone cable without unacceptable loss of information. Incidentally, Shannon, who had been born in 1916, made his name developing telephone switching relay systems; during the Second World War he had worked at Bell Labs on cryptography and gunnery control systems, and met Alan Turing when he visited the labs in 1943. Shannon moved to MIT in 1956 and spent the rest of his career there.
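
As a loose illustration of Shannon's measure (my own sketch, not from the book): the uncertainty removed by learning an outcome can be counted in bits, and the less predictable the outcome, the more information its arrival carries.

```python
import math

def shannon_entropy(probabilities):
    """Average information, in bits, carried by one outcome drawn from this distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin flip carries exactly 1 bit; a heavily biased coin carries far less,
# because its outcome is almost certain before we look.
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # roughly 0.08
```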

Now, it is important to appreciate in what follows that we are talking about only those entropy changes involved in the act of computation itself, not the ones involved in the process of producing electricity for the computer to run on, or air conditioning for the room in which it operates. In doing so we are following the same convention physicists use in discussing the movements of frictionless pool balls to get insight into Newton's laws, and it leads to equally deep truths. But at first computer scientists were led in the wrong direction—ironically, by Johnny von Neumann, who gave a lecture in 1949 in which he calculated that there must be a minimum amount of energy required in the basic act of computation, that is, switching a 0 into a 1 or vice versa. And this would make it impossible to simulate the world of classical physics precisely, because not only would it take energy (and increase entropy) to run a calculation in a computer, energy would be required and entropy would increase still more if the computer were run backwards. The operation would not be reversible—at least, not in entropy/information terms.

The first hint that this was not the whole story came in 1961, when IBM researcher Rolf Landauer pointed out that at least some aspects of computation need not involve dissipating energy at all. Landauer was one of the first people to appreciate that, as he put it, “information is physical,” and that the unspoken assumption that there was some abstract, “pure” form of computation that exists regardless of the machinery used is wrong. As he later said:

Information is inevitably tied to a physical representation. It can be engraved on stone tablets, denoted by a spin up or spin down, a charge present or absent, a hole punched in a card, or many other physical phenomena. It is not just an abstract entity; it does not exist except through physical embodiment. It is, therefore, tied to the laws of physics.[13]

It is in that spirit that we can understand Landauer's insight of 1961. Imagine that the state of a single bit of information is represented by the presence of a ball in one of two holes, of equal depth, separated by a small hill. If the ball is in hole A, the bit is 1; if it is in hole B, the bit is 0. To change the state of the bit, the ball has to be rolled up the hill and down the other side. But the energy needed to raise the ball up one side of the hill is exactly equal to the amount of energy released when it is lowered down the other side. So, in principle, a computation can be carried out without using any energy at all! Another way of visualizing a change which does not use up energy is to think of a skateboarder at the top of a (frictionless) half-pipe. Starting out with zero speed, the skateboarder accelerates down to the bottom of the pipe then decelerates up the other side, arriving at the top with zero speed. He or she has changed position without using any energy.

This, though, is not the end of the story. Imagine that we started with a whole array of balls each in its equivalent of hole A, like a computer starting out from a state with every bit registering 1, then performed a computation (using zero energy) which left it with some balls in “A” holes and some in “B” holes. Landauer also showed that reversing the process of computation to erase the information (in computer jargon, resetting the register to its initial state) sometimes does require energy. This is a little more tricky to understand, but depends on the fact that the computer has to operate some process which will restore it to the original situation regardless of its present state. In the example I have used, the original starting situation involves all the balls being in their starting positions. Clearly, the ball in hole B can be put back into hole A by using energy which pushes it back over the hill—energy which can again be reclaimed as it falls down the other side. But the computer does not “know” whether the ball is in hole B, so even if the ball is not there, the energy needed to reset the bit has to be applied, just in case it was there. Sometimes, this energy is wasted. This is the key insight provided by Landauer: that the computation itself need not involve any dissipation of energy (and so is, in principle, reversible), but that, paradoxical though it may seem, energy is dissipated every time that information is discarded!
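
The book leaves the cost unquantified, but the modern statement of Landauer's principle puts a number on it: discarding one bit must dissipate at least kT ln 2 of energy as heat, where k is Boltzmann's constant and T is the temperature of the surroundings. A rough back-of-the-envelope sketch (my own, for illustration only):

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann's constant, in joules per kelvin

def landauer_limit(temperature_kelvin):
    """Minimum energy, in joules, dissipated when a single bit is erased: kT ln 2."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (about 300 K) the floor is roughly 3e-21 joules per erased bit,
# vastly smaller than what real chips dissipate, but never zero.
print(landauer_limit(300))
```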

Another way of looking at this, in terms of reversibility of information rather than dissipation of energy, is that if a state can be reached in more than one way, that is, by different routes, the computer does not “know” which is the right path to follow in order to reset the register—and this brings us back to the logic of gates. The AND gate is a good example. If we are trying to reset the registers and come across an AND gate whose output is 1, we know for sure that both of the original inputs to the gate were 1. But if we find the AND gate with output 0, we have no idea whether the inputs were 0 and 0, 1 and 0, or 0 and 1. A reversible computer, one which can simulate classical physics perfectly, has to be built entirely out of reversible gates. And such gates do exist.
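
A minimal sketch of the problem (my own code, not from the book): group every possible pair of inputs to an AND gate by the output it produces, and the output 0 turns out to have three different possible histories, so the gate cannot be run backwards.

```python
from itertools import product

def and_gate(a, b):
    return a & b

# Group every possible input pair by the output it produces.
preimages = {0: [], 1: []}
for a, b in product([0, 1], repeat=2):
    preimages[and_gate(a, b)].append((a, b))

print(preimages[1])  # [(1, 1)]                 -> an output of 1 pins the inputs down
print(preimages[0])  # [(0, 0), (0, 1), (1, 0)] -> an output of 0 does not
```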

FREDKIN, FEYNMAN AND FRIENDS

The next step in the theory of reversible computation came from Charles Bennett, another IBM researcher, in 1973. Inspired by reading Landauer's paper and hearing him talk a few years earlier, he wrote some very simple computer programs which were indeed reversible—that is, in carrying out the exercise he realized that in every case the computation consisted of two halves, the second half almost exactly undoing the work of the first half. As he later explained:

The first half would generate the desired answer… as well as, typically, some other information…. The second half would dispose of the extraneous information by reversing the process that generated it, but would keep the desired answer. This led me to realize that any computation could be rendered into this reversible format by accumulating a history of all information that would normally be thrown away, then disposing of this history by the reverse of the process that generated it. To prevent the reverse stage from destroying the desired output along with the undesired history, it suffices, before beginning the reverse stage, to copy the output on blank tape. [This] copying onto blank tape is already logically reversible.[14]

This is as if you had a tablet (in his 1973 paper Bennett called it a “logically reversible Turing machine”) on which you could write, with a special pen, all the working for a problem, including the final answer. Then, you copy the answer onto a sheet of paper, and retrace all the writing on the tablet, using the pen as an eraser, removing the working as you go. You are left with the answer, and a blank tablet ready to use again.
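
Here is a toy sketch of that recipe (my own illustration, with made-up arithmetic standing in for the machine's real operations): do the work with steps that each have an exact inverse, copy the answer out, then run the inverses in the opposite order so the machine ends up back in its starting state with nothing thrown away.

```python
# Each step is a pair (apply, undo) of mutually inverse operations, so every
# move of the forward half can be exactly retraced by the reverse half.
STEPS = [
    (lambda v: v + 3,      lambda v: v - 3),
    (lambda v: v * 5,      lambda v: v // 5),
    (lambda v: v ^ 0b1010, lambda v: v ^ 0b1010),
]

def run_reversibly(x):
    # Forward half: generate the desired answer (the intermediate "working").
    value = x
    for step, _ in STEPS:
        value = step(value)

    answer = value  # copy the answer onto a fresh "blank tape" first

    # Reverse half: undo every step in the opposite order, restoring the
    # original input instead of simply erasing the working.
    for _, undo in reversed(STEPS):
        value = undo(value)

    return answer, value

print(run_reversibly(7))  # (56, 7): the answer survives, and the machine is back where it started
```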

Incidentally, the same discovery was made at about the same time by Oliver Penrose, a British theorist associated with the Open University; but Penrose's main research interests did not lie in computation and he did not follow up the idea. Bennett, but not Penrose, also made a connection which surely would have delighted Alan Turing, by discussing the biosynthesis of messenger RNA in living cells as an example of reversible computation.

Bennett's discovery was purely theoretical. He had not worked out the practicalities of the kind of gates needed to make such a computer, but he had proven that there was nothing in the laws of physics to forbid it. The next practical step was taken by Ed Fredkin, a scientist with an unusual background, who didn't even know about Bennett's work at the time he made his contribution.

Fredkin was an American college dropout who trained as a fighter pilot in the mid-1950s but had to quit flying because of asthma, and so worked for the air force on a project which introduced him to computer programming. After he returned to civilian life, he became a computer consultant, then founded his own company and became rich, all the while developing the idea, which most people dismissed as crazy at the time, that the Universe might be a gigantic digital computer. One of the reasons why people dismissed the idea as crazy was, of course, that the laws of physics are reversible, and in the 1950s and 1960s it was thought that computers had to be irreversible. But Fredkin was stubborn. He decided that if that was the flaw in the argument, he had to find a way to show that it was possible to make reversible computers. Fredkin had contacts at MIT through his computer company, and managed to wangle an appointment there, in 1966, as a visiting professor. He was so successful that the next year he became a full professor, at the age of thirty-four (still without having finished college!), and then Director of the Laboratory for Computer Science. It was in this capacity that he met Richard Feynman, and arranged to spend 1974 at Caltech so that Feynman could teach him quantum physics and he could teach Feynman about computers.[15]
It was while he was at Caltech that Fredkin found the way to make a reversible computer.

Remember the NOT gate? Whichever input it is given, the output is the opposite. Put in 1, you get 0 out. Put in 0, you get 1 out. This is clearly reversible, but doesn't give you much scope to do interesting things. Fredkin's brilliant idea was to invent what is generally referred to as the Fredkin gate, which is complicated enough to do interesting things, but is still reversible.

A Fredkin gate has three channels—three inputs and three outputs. One of the channels, denoted by the letter c, is the control. This always passes its input through the gate unchanged. The other two channels, which we might as well call a and b, also pass their inputs through unchanged if the c input is 0, but swap them over if the c input is 1. So if the input is c = 1, a = 1 and b = 0, the output is c = 1, a = 0 and b = 1. It is pretty obvious that this is reversible; even better, if the output of one Fredkin gate is fed straight into another Fredkin gate (c to c, a to a, b to b), the output is the same as the input to the first gate. Better still, it is possible, though I won't go into details here, to build any logic circuit using only Fredkin gates, in combinations where, for example, output c from one gate becomes input a for the next gate and output a goes to input c, while output b from the first gate becomes input b for the next gate. The final proof of this was found by one of Fredkin's students, Guy Steele, back at MIT.
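
A minimal sketch of the gate just described (my own code): the control line passes straight through, the other two lines are exchanged only when the control is 1, and chaining two gates returns the original inputs. Feeding a constant 0 into one line also recovers an ordinary AND, a hint at why any logic circuit can be built this way.

```python
def fredkin(c, a, b):
    """Controlled swap: c passes through; a and b are exchanged only when c is 1."""
    if c == 1:
        a, b = b, a
    return c, a, b

print(fredkin(1, 1, 0))            # (1, 0, 1): the example from the text
print(fredkin(0, 1, 0))            # (0, 1, 0): control 0, nothing changes
print(fredkin(*fredkin(1, 1, 0)))  # (1, 1, 0): two gates in a row undo each other

def AND(x, y):
    # With the third line held at 0, the last output equals x AND y.
    return fredkin(x, y, 0)[2]

print([AND(x, y) for x in (0, 1) for y in (0, 1)])  # [0, 0, 0, 1]
```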

All this means that it is in principle possible to build a reversible classical computer, which could simulate precisely the classical laws of physics, and that Fredkin's idea of the Universe being a gigantic computer is not so crazy after all. But at a fundamental level, the Universe operates on the basis of quantum physics, not classical physics. So the next question was, could a computer precisely simulate quantum physics? This is where Feynman comes back into the story.

At the end of the 1970s, Paul Benioff, of the Argonne National Laboratory in Illinois, developed the idea of a Turing machine, operating on a “tape” in the way Turing described in his classic paper of 1936, which ran on quantum mechanical principles. Although his argument was rather complicated and difficult to follow for non-specialists, he was able to show that it is in principle possible to build a classical computer that operates on quantum principles. Since the real world seems to run on quantum principles, this was an important step forward; but it left open the question of whether a classical (or any) computer could be built that would simulate perfectly the quantum world.
