Quantum Man: Richard Feynman's Life in Science

Author: Lawrence M. Krauss


Almost thirty years before Huygens’s work, Fermat too reasoned that light should travel more slowly in dense media than in less dense media. Instead of thinking in terms of whether light was a wave or a particle, however, Fermat the mathematician showed that in this case one could explain the trajectory of light in terms of a general mathematical principle, which we now call Fermat’s principle of least time. As he demonstrated, light would follow precisely the same bending trajectory determined by Snell if “light travels between two given points along the path of shortest time.”

Heuristically this can be understood as follows. If light travels more quickly in the less dense medium, then to get from A to B (see figure) in the shortest time, it would make sense to travel a longer distance in this medium, and a shorter distance in the second medium in which it travels more slowly. Now, it cannot travel for too long in the first medium, otherwise the extra distance it travels would more than overcome the gain obtained by traveling at a faster speed. One path is just right, however, and this path turns out to involve a bending trajectory that exactly reproduces the trajectory Snell observed.

[Figure: Snell’s law]
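To make this heuristic concrete, here is a minimal numerical sketch (not from the book; the coordinates and speeds are invented for illustration). It scans crossing points along the interface, picks the one that minimizes the total travel time, and checks that the resulting angles obey Snell’s law.

import numpy as np

# Hypothetical setup: A sits in the fast medium above the interface (y = 0),
# B sits in the slow medium below it. All numbers are invented.
ax, ay = 0.0, 1.0          # point A, height 1 above the interface
bx, by = 2.0, -1.0         # point B, depth 1 below the interface
v1, v2 = 1.0, 0.7          # light travels more slowly in the denser, second medium

# Candidate crossing points (x, 0) along the interface.
xs = np.linspace(ax, bx, 200001)
times = np.hypot(xs - ax, ay) / v1 + np.hypot(bx - xs, by) / v2
x = xs[np.argmin(times)]   # the crossing point of least total travel time

# Angles of the two straight segments, measured from the normal to the interface.
theta1 = np.arctan2(x - ax, ay)
theta2 = np.arctan2(bx - x, -by)
print(f"sin(theta1)/v1 = {np.sin(theta1) / v1:.4f}")
print(f"sin(theta2)/v2 = {np.sin(theta2) / v2:.4f}")   # the two agree: Snell's law

The bend appears automatically: the least-time path spends more of its length in the fast medium and less in the slow one, exactly as described above.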

Fermat’s principle of least time is a mathematically elegant way of determining the path light takes without recourse to any mechanistic description in terms of waves or particles. The only problem is that when one thinks about the physical basis of this result, it seems to suggest intentionality, so that, like a commuter in Monday-morning rush hour listening to the traffic report, light somehow considers all possible paths before embarking on its voyage, and ultimately chooses the one that will get it to its destination fastest.

But the fascinating thing is that we don’t need to ascribe any intentionality to light’s wanderings. Fermat’s principle is a wonderful example of an even more remarkable property of physics, a property that is central to the amazing and a priori unexpected fact that nature is comprehensible via mathematics. If there is any one property that was a guiding light for Richard Feynman’s approach to physics, and essential to almost all of his discoveries, it was this one, which he thought was so important that he referred to it at least two different times during his Nobel Prize address. First, he wrote,

It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. . . . it was something I learned from experience. There is always another way to say the same thing that doesn’t look at all like the way you said it before. . . . I think it is somehow a representation of the simplicity of nature. I don’t know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

And later (and more important for what was to come), he added,

Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one’s attempt to understand what is not yet understood.

Fermat’s principle of least time clearly represents a striking example of this strange redundancy of physical law that so fascinated Feynman, and also of the differing “psychological utilities” of the different prescriptions. Thinking about the bending of light in terms of electric and magnetic forces at the interface between media reveals something about the properties of the media. Thinking about it in terms of the speed of light itself reveals something about light’s intrinsic wavelike character. And thinking about it in terms of Fermat’s principle may reveal nothing about specific forces or about the wave nature of light, but it illuminates something deep about the nature of motion. Happily, and importantly, all of these alternate descriptions result in identical predictions.

Thus we can rest easy. Light does not know it is taking the shortest path. It just acts like it does.

IT WASN’T THE principle of least time, however, but an even subtler idea that changed Feynman’s life that fateful day in high school. As Feynman later described it, “When I was in high school, my physics teacher—whose name was Mr. Bader—called me down one day after physics class and said, ‘You look bored; I want to tell you something interesting.’ Then he told me something that I found absolutely fascinating, and have, since then, always found fascinating . . . the principle of least action.”

Least action may sound like an expression that is more appropriate to describing the behavior of a customer service representative at the phone company than a field like physics, which is, after all, centered around describing actions. But the least action principle is very similar to Fermat’s principle of least time.

The principle of least time tells us that light always takes the path of shortest time. But what about baseballs and cannonballs, planets, and boomerangs? They don’t necessarily behave so simply. Is there something other than time that is minimized whenever these objects follow the paths prescribed by the forces acting on them?

Consider any object in motion, say, a falling weight. Such an object is said to possess two different kinds of energy. One is kinetic energy, and it is related to the motion of objects (the term derives from the Greek word for movement). The faster an object moves, the larger the kinetic energy. The other part of an object’s energy is much subtler to ascertain, as reflected in its name: potential energy. This kind of energy may be hidden, but it is responsible for the ability of an object to do work later on. For example, a heavy weight falling off the top of a tall building will do more damage (and hence more work) smashing the roof of a car than will a similar weight dropped from several inches above the car. Clearly the higher the object, the greater its potential to do work, and hence the greater its potential energy.

Now, what the least action principle states is that the difference between the kinetic energy of an object at any instant and its potential energy at the same instant, when calculated at each point along a path and then added up along the path, will be smaller for the actual path the object takes than for any other possible trajectory. An object somehow adjusts its motion so the kinetic energy and the potential energy are as closely matched, on average, as is possible.
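As a concrete illustration, here is a minimal numerical sketch (not from the book; the mass, gravity, and flight time are made up) for a ball thrown straight up and caught one second later. It adds up the kinetic-minus-potential difference along the true trajectory and along “wiggled” alternatives that share the same start and end points; the true trajectory gives the smallest total.

import numpy as np

m, g, T = 1.0, 9.8, 1.0              # mass (kg), gravity (m/s^2), flight time (s)
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

def action(x):
    """Discretized action: sum of (kinetic - potential) * dt along the path x(t)."""
    v = np.gradient(x, dt)           # velocity along the path
    return np.sum((0.5 * m * v**2 - m * g * x) * dt)

# True trajectory: thrown up at t = 0, back on the ground at t = T.
x_true = 0.5 * g * t * (T - t)

for eps in [0.0, 0.05, 0.2]:
    # The wiggle vanishes at both ends, so every path shares the same endpoints.
    x = x_true + eps * np.sin(np.pi * t / T)
    print(f"wiggle = {eps:4.2f}   action = {action(x):+.4f}")
# The unwiggled path (wiggle = 0.00) comes out with the smallest action.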

If this seems mysterious and unintuitive, that is because it is mysterious and unintuitive. How on earth would anyone ever come up with this combination in the first place, much less apply it to the motion of everyday objects?

For this we thank the Italian-born mathematician and physicist Joseph-Louis Lagrange, who is best known for his work on celestial mechanics. For example, he determined the points in the solar system where the gravitational pulls of two large bodies, such as the sun and the earth, combine with the centrifugal force in the rotating frame of reference of a small orbiting body in such a way that they precisely cancel. They are called Lagrange points. NASA now sends numerous satellites out to these points so that they can remain in stable orbits and study the universe.

Lagrange’s greatest contribution to physics, however, may have involved his reformulation of the laws of motion. Newton’s laws relate the motion of objects to the net forces acting on them. Lagrange, by contrast, managed to show that Newton’s laws of motion were precisely reproduced if one used the “action,” the sum over a path of the difference between kinetic and potential energy (a difference now appropriately called the Lagrangian), and then determined precisely what sorts of motion would produce the paths that minimized this quantity. The process of minimization, which required the use of calculus (also invented by Newton), gave a mathematical description of motion that looked very different from Newton’s laws but that, in the spirit of Feynman, was mathematically identical, even if “psychologically” very different.
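In standard modern notation (a textbook form, not the author’s wording), Lagrange’s result reads: with the Lagrangian defined as kinetic minus potential energy, the path of least action satisfies the Euler-Lagrange equation, which for a single particle is just Newton’s second law in disguise:

\[
L = \tfrac{1}{2} m \dot{x}^{2} - V(x), \qquad
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{x}}\right) - \frac{\partial L}{\partial x} = 0
\quad\Longrightarrow\quad
m\ddot{x} = -\frac{dV}{dx} = F.
\]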

IT WAS THIS strange principle of least action, often called Lagrange’s principle, that Mr. Bader introduced the teenaged Feynman to. Most teens would not have found it fascinating or even comprehensible, but Feynman did, or so he remembered when he was older.

However, if the young Feynman had any inkling at the time that this principle would return to completely color his own life story, he certainly didn’t behave that way as he began to learn more about physics once he entered MIT. Quite the contrary. His best friend as an undergraduate at MIT, Ted Welton, with whom he worked through much of undergraduate and even graduate physics, later described Feynman’s “maddening refusal to concede that Lagrange might have something useful to say about physics. The rest of us were appropriately impressed with the compactness, elegance, and utility of Lagrange’s formulation, but Dick stubbornly insisted that real physics lay in identifying all the forces and properly resolving them into components.”

Nature, like life, takes all sorts of strange twists and turns, and most important, it is largely insensitive to one’s likes and dislikes. As much as Feynman tried early on to focus on understanding motion in a way that meshed with his naive intuition, his own trajectory to greatness involved a very different path. There was no unseen hand guiding him. Instead, he forced his intuition to bend to the demands of the problems of the time, rather than vice versa. The challenge required endless hours and days and months of hard work training his mind to wrap around a problem that the greatest minds in twentieth-century physics had, up to that point, not been able to solve.

When he really needed it, Feynman would find himself returning once again to the very principle that had turned him on to physics in the first place.

CHAPTER 2

The Quantum Universe

I was always worried about the physics. If the idea looked lousy, I said it looked lousy. If it looked good, I said it looked good.

—Richard Feynman

Feynman was fortunate to have stumbled upon Ted Welton in his sophomore year at MIT, while both were attending, as the only two sophomores, an advanced graduate course in theoretical physics. Kindred spirits, each had been checking advanced mathematics texts out of the library, and after a brief period of trying to outdo each other, they decided to collaborate “in the struggle against a crew of aggressive-looking seniors and graduate students” in the class.

Together they pushed each other to new heights, passing back and forth a notebook in which each would contribute solutions and questions on topics ranging from general relativity to quantum mechanics, each of which they apparently had taught themselves. Not only did this encourage Feynman’s seemingly relentless quest to derive all of physics on his own terms, but it also provided some object lessons that would stay with him for the rest of his life. One in particular is worth noting. Feynman and Welton tried to determine the energy levels of electrons in a hydrogen atom by generalizing the standard equation of quantum mechanics, called the Schrödinger equation, to incorporate the results of Einstein’s special relativity. In so doing they rediscovered what was actually a well-known equation, the Klein-Gordon equation. Unfortunately, when Welton urged Feynman to apply this equation to the hydrogen atom, the attempt produced results that disagreed completely with experiment. This is not surprising, because the Klein-Gordon equation was known to be the wrong equation for describing relativistic electrons, as the brilliant theoretical physicist Paul Dirac had demonstrated only a decade earlier, in the process earning the Nobel Prize for deriving the right one.

Feynman described his experience as a “terrible” but very important lesson that he never forgot. He learned not to rely on the beauty of a mathematical theory or its “marvelous formality,” but rather to recognize that the test of a good theory was whether one could “bring it down against the real thing”—namely, experimental data.

Feynman and Welton were not learning all of physics completely on their own. They also attended classes. During the second semester of their sophomore year they had sufficiently impressed the professor of their theoretical physics course, Philip Morse, that he invited the two of them, along with another student, to study quantum mechanics with him in a private tutorial one afternoon a week during their junior year. Later he invited them to start a “real research” program in which they calculated properties of atoms more complicated than hydrogen, and in the process they also learned how to work the first generation of so-called calculating machines, another skill that would later serve Feynman well.

By the time of his final year as an undergraduate, Feynman had essentially mastered most of the undergraduate and graduate physics curricula, and he had already become excited enough by the prospect of a research career that he made the decision to proceed on to graduate school. In fact, his progress had been so impressive that during his junior year the physics department recommended that he be granted a bachelor’s degree after three years instead of four. The university denied the recommendation, so instead, during his senior year, he continued his research and wrote a paper on the quantum mechanics of molecules that was published in the prestigious Physical Review, as was a paper on cosmic rays. He also took some time to reinforce his fundamental interest in the applications of physics, and enrolled in metallurgy and laboratory courses—courses that would later serve him well in Los Alamos—and even built an ingenious mechanism to measure the speeds of different rotating shafts.

Not everyone was convinced that Feynman should take the next major step in his education. Neither of his parents had completed a college education, and the rationale for their son completing yet another three or four years of study beyond an undergraduate degree was unclear. Richard’s father, Melville Feynman, visited MIT in the fall of 1938 to speak to Professor Morse and ask if it was worth it, if his son was good enough. Morse answered that Feynman was the brightest undergraduate student he had ever encountered, and yes, graduate school not only was worth it, but was required if Feynman wanted to continue a career in science. The die was cast.

Feynman’s preference was to stay on at MIT. However, wise physics professors generally encourage their students, even their best ones, to pursue their graduate studies at a new institution. It is important for students to get a broad exposure early in their career to the different styles of doing science, and to different focuses of interest, as spending an entire academic career at one institution can be limiting for many people. And so it was that Richard Feynman’s senior dissertation advisor, John Slater, insisted that he go to graduate school elsewhere, telling him, “You should find out what the rest of the world is.”

Feynman was offered a scholarship to Harvard for graduate school without even applying because he had won the William Lowell Putnam Mathematical Competition in 1939. This is the most prestigious and demanding national mathematics contest open to undergraduates, and was then in its second year. I remember when I was an undergraduate the very best mathematics students would join their university’s team and solve practice problems for months ahead of the examination. No one solves all the problems on the exam, and in many years a significant fraction of the entrants fail to solve a single problem. The mathematics department at MIT had asked Feynman to join MIT’s team for the competition in his senior year, and the gap between Feynman’s score and the scores for all of the other entrants from across the country apparently astounded those grading the exam, so he was offered the Harvard prize scholarship. Feynman would later sometimes feign ignorance of formal mathematics when speaking about physics, but his Putnam score demonstrated that as a mathematician, he could compete with the very best in the world.

But Feynman turned down Harvard. He had decided he wanted to go to Princeton, I expect for the same reason that so many young physicists wanted to go there: that was where Einstein was. Princeton had accepted him and offered him a job as future Nobel laureate Eugene Wigner’s research assistant. Fortunately for Feynman, he was assigned instead to a young assistant professor, John Archibald Wheeler, a man whose imagination matched Feynman’s mathematical virtuosity.

In a remembrance of Feynman after his death, Wheeler recalled a discussion among the graduate admissions committee in the spring of 1939, during which one person raved about the fact that no one else applying to the university had math and physics aptitude scores anywhere near as high as Feynman’s (he scored 100 percent in physics), while another member of the committee complained at the same time that they had never let anyone in with scores so low in history and English. Happily for the future of science, physics and math prevailed.

Interestingly, Wheeler did not describe another key issue, of which he may not have been aware: the so-called Jewish question. The head of the physics department at Princeton had written to Philip Morse about Feynman, asking about his religious affiliation, adding, “We have no definite rule against Jews but have to keep their proportion in our department reasonably small because of the difficulty of placing them.” Ultimately it was decided that Feynman was not sufficiently Jewish “in manner” to get in the way. The fact that Feynman, like many scientists, was essentially uninterested in religion never arose as part of the discussion.

MORE IMPORTANT THAN all of these external developments, however, was the fact that Feynman had now proceeded to the stage in his education where he could begin to think about the really exciting stuff—namely, the physics that didn’t make sense. Science at the forefront is always on the verge of paradox and inconsistency, and, like bloodhounds, great physicists focus precisely on these elements, because that is where the true quarry lies.

The problem that Feynman later said he “fell in love with” as an undergraduate had been a familiar centerpiece of theoretical physics for almost a century: the classical theory of electromagnetism. Like many deep problems, it can be simply stated. The force between two like charges is repulsive, and therefore it takes work to bring them closer together. The closer they get, the more work it takes. Now imagine a single electron. Think of it as a “ball” of charge with a certain radius. To bring all the charge together at this radius to make up the electron would thus take work. The energy built up by the work of bringing the charge together is commonly called the self-energy of the electron.

The problem is that if we were to shrink the size of the electron down to a single point, the self-energy associated with the electron would go to infinity, because it takes an infinite amount of energy to bring all the charge together at a single point. This problem had been known for some time and various schemes had been put together to solve it, but the simplest was to assume that the electron really wasn’t confined to a single point, but had a finite size.
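In standard notation (a textbook estimate, not a formula from the book), the electrostatic self-energy of a ball of charge e and radius r is, up to a numerical factor that depends on how the charge is spread through the ball,

\[
E_{\text{self}} \sim \frac{e^{2}}{4\pi\varepsilon_{0}\, r}\,,
\]

which grows without bound as the radius r is taken to zero.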

By early in the twentieth century this issue had taken on a different aspect, however. With the development of quantum mechanics, the picture of electrons, and of electric and magnetic fields, had completely changed. So-called wave-particle duality, for example, a part of quantum theory, said that both light and matter, in this case electrons, sometimes behaved as if they were particles and sometimes as if they were waves. As our understanding of the quantum universe grew, and the universe itself got stranger and stranger, some of the key puzzles of classical physics nevertheless disappeared. But others remained, and the self-energy of the electron was one of them. In order to put this in context, we need to explore the quantum world a little bit.

Quantum mechanics has two central characteristics, both of which completely defy all of our standard intuition about the world. First, objects that are behaving quantum mechanically are the ultimate multitaskers. They are capable of being in many different configurations at the same time. This includes being in different places and doing different things simultaneously. For example, while an electron behaves almost like a spinning top, it can also act as if it is spinning around in many different directions at the same time.

If an electron acts as if it is spinning counterclockwise around an axis pointing up from the floor, we say it has spin up. If it is spinning clockwise, we say it has spin down. At any instant the probability that an electron has spin up may be 50 percent, and the probability that it has spin down may be 50 percent. If electrons behaved as our classical intuition would suggest, the implication would be that each electron we measure has either spin up or spin down, and that 50 percent of the electrons will be found to be in one configuration and 50 percent in the other.

In one sense this is true. If we measure electrons in this way, we will find that 50 percent are spin up and 50 percent are spin down. But, and this is a very important but, it is incorrect to assume that each electron is in one configuration or another before we make the measurement. In the language of quantum mechanics, each electron is in a “superposition of states of spin up and spin down” before the measurement. Put more succinctly, it is spinning both ways.
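In the standard notation of quantum mechanics (not used in the book), such a fifty-fifty superposition is a single state, not a statistical mixture:

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(\,|{\uparrow}\rangle + |{\downarrow}\rangle\,\bigr),
\qquad
P(\uparrow) = P(\downarrow) = \Bigl|\tfrac{1}{\sqrt{2}}\Bigr|^{2} = 50\%.
\]

The fact that the two terms carry a definite relative phase is what makes a superposition physically different from a simple coin flip, and it is exactly this phase that produces the interference described below.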

How do we know that the assumption that electrons are in one or another configuration is “incorrect”? It turns out that we can perform experiments whose results depend on what the electron is doing when we are not measuring it, and the results would come out differently if the electron had been behaving sensibly, that is, in one or another specific configuration between measurements.

The most famous example of this involves shooting electrons at a wall with two slits cut into it. Behind the wall is a scintillating screen, much like the screen on old-fashioned vacuum-tube televisions, that lights up wherever an electron hits it. If we don’t measure the electrons between the time they leave the source and when they hit the screen, so that we cannot tell which slit each electron goes through, we would see a pattern of bright and dark patches emerge on the rear screen—precisely the kind of “interference pattern” that we would see for light or sound waves that traverse a two-slit device, or perhaps more familiarly, the pattern of alternating ripples and calm that often results when two streams of water converge together. Amazingly, this pattern emerges even if we send only a single electron toward the two slits at any time. The pattern thus suggests that somehow the electron “interferes” with itself after going through both slits at the same time.

At first glance this notion seems like nonsense, so we alter the experiment slightly. We put a nondestructive electron detector by each slit and then send the electrons through. Now we find that for each electron, one and only one detector will signal that an electron has gone through at any time, allowing us to determine that indeed each electron goes through one and only one slit, and moreover we can determine which slit each electron has gone through.

So far so good, but now comes the quantum kicker. If we examine the pattern on the screen after this seemingly innocent intervention, the new pattern is completely different from the old pattern. It now resembles the pattern we would get if we were shooting bullets at such a screen through the two-slit barrier—namely, there will be a bright spot behind each slit, and the rest will be dark.
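A toy calculation makes the contrast concrete (a minimal sketch, not from the book; the slit separation, wavelength, and screen geometry are invented). With no which-slit measurement, the complex amplitudes for the two paths are added before squaring, and fringes appear; with detectors at the slits, the probabilities are added instead, and the fringes vanish.

import numpy as np

d, lam, L = 1e-5, 5e-10, 1.0           # slit separation, wavelength, screen distance (m)
k = 2 * np.pi / lam                    # wave number
y = np.linspace(-1e-4, 1e-4, 9)        # sample positions on the screen (m)

# Path lengths from each slit to the point y, and the corresponding amplitudes.
r1 = np.sqrt(L**2 + (y - d / 2)**2)
r2 = np.sqrt(L**2 + (y + d / 2)**2)
a1, a2 = np.exp(1j * k * r1), np.exp(1j * k * r2)

fringes = np.abs(a1 + a2)**2                 # amplitudes add: bright and dark patches
no_fringes = np.abs(a1)**2 + np.abs(a2)**2   # probabilities add: flat, bullet-like pattern

for yi, f, n in zip(y, fringes, no_fringes):
    print(f"y = {yi:+.1e} m   unobserved = {f:.2f}   which-slit = {n:.2f}")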

So, like it or not, electrons and other quantum objects can perform what is, by classical standards, magic: doing several different things at the same time, at least as long as we do not observe them in the process.

The other fundamental property at the heart of quantum mechanics involves the so-called Heisenberg uncertainty principle. What this principle says is that there are certain combinations of physical quantities, such as the position of a particle and its momentum (or speed), that we cannot measure at the same instant with absolute accuracy. No matter how good our microscope or measuring device is, multiplying the uncertainty in position by the uncertainty in momentum never results in zero; the product is always bigger than a certain minimum value, set by a number called Planck’s constant. It is this number that also determines the scale of the spacing between energy levels in atoms. In other words, if we measure the position very accurately, so that the uncertainty in position is small, then our knowledge of the momentum or speed of the particle must be very inaccurate, so that the product of the two uncertainties still exceeds that minimum.
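In the standard textbook form (slightly sharper than the rough statement above), the bound is usually written in terms of the reduced Planck constant, \(\hbar = h/2\pi\):

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}.
\]

Squeezing \(\Delta x\) toward zero therefore forces \(\Delta p\) to blow up, and vice versa, which is exactly the trade-off described above.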
