Author: James Gleick
He does not ordinarily argue about philosophical implications… . Questions about a theory which do not affect its ability to predict experimental results correctly seem to me quibbles about words, … and I am quite content to leave such questions to those who derive some satisfaction from them.
When Slater spoke for common sense, for practicality, for a theory that would be experiment’s handmaid, he spoke for most of his American colleagues. The spirit of Edison, not Einstein, still governed their image of the scientist. Perspiration, not inspiration. Mathematics was unfathomable and unreliable. Another physicist, Edward Condon, said everyone knew what mathematical physicists did: “they study carefully the results obtained by experimentalists and rewrite that work in papers which are so mathematical that they find them hard to read themselves.” Physics could really only justify itself, he said, when its theories offered people a means of predicting the outcome of experiments—and at that, only if the predicting took less time than actually carrying out the experiments.
Unlike their European counterparts, American theorists did not have their own academic departments. They shared quarters with the experimenters, heard their problems, and tried to answer their questions pragmatically. Still, the days of Edisonian science were over and Slater knew it. With a mandate from MIT’s president, Karl Compton, he was assembling a physics department meant to bring the school into the forefront of American science and meanwhile to help American science toward a less humble world standing. He and his colleagues knew how unprepared the United States had been to train physicists in his own generation. Leaders of the nation’s rapidly growing technical industries knew it, too. When Slater arrived, the MIT department sustained barely a dozen graduate students. Six years later, the number had increased to sixty. Despite the Depression the institute had completed a new physics and chemistry laboratory with money from the industrialist George Eastman. Major research programs had begun in the laboratory fields devoted to using electromagnetic radiation as a probe into the structure of matter: especially spectroscopy, analyzing the signature frequencies of light shining from different substances, but also X-ray crystallography. (Each time physicists found a new kind of “ray” or particle, they put it to work illuminating the interstices of molecules.) New vacuum equipment and finely etched mirrors gave a high precision to the spectroscopic work. And a monstrous new electromagnet created fields more powerful than any on the planet.
Julius Stratton and Philip Morse taught the essential advanced theory course for seniors and graduate students, Introduction to Theoretical Physics, using Slater’s own text of the same name. Slater and his colleagues had created the course just a few years before. It was the capstone of their new thinking about the teaching of physics at MIT. They meant to bring back together, as a unified subject, the discipline that had been subdivided for undergraduates into mechanics, electromagnetism, thermodynamics, hydrodynamics, and optics. Undergraduates had been acquiring their theory piecemeal, in ad hoc codas to laboratory courses mainly devoted to experiment. Slater now brought the pieces back together and led students toward a new topic, the “modern atomic theory.” No course yet existed in quantum mechanics, but Slater’s students headed inward toward the atom with a grounding not just in classical mechanics, treating the motion of solid objects, but also in wave mechanics—vibrating strings, sound waves bouncing around in hollow boxes. The instructors told the students at the outset that the essence of theoretical physics lay not in learning to work out the mathematics, but in learning how to apply the mathematics to the real phenomena that could take so many chameleon forms: moving bodies, fluids, magnetic fields and forces, currents of electricity and water, and waves of water and light. Feynman, as a freshman, roomed with two seniors who took the course. As the year went on he attuned himself to their chatter and surprised them sometimes by joining in on the problem solving. “Why don’t you try Bernoulli’s equation?” he would say—mispronouncing Bernoulli because, like so much of his knowledge, this came from reading the encyclopedia or the odd textbooks he had found in Far Rockaway. By sophomore year he decided he was ready to take the course himself.
The first day everyone had to fill out enrollment cards: green for seniors and brown for graduate students. Feynman was proudly aware of the sophomore-pink card in his own pocket. Furthermore he was wearing an ROTC uniform; officer’s training was compulsory for first- and second-year students. But just as he was feeling most conspicuous, another uniformed, pink-card-carrying sophomore sat down beside him. It was T. A. Welton. Welton had instantly recognized the mathematics whiz from the previous spring’s open house.
Feynman looked at the books Welton was stacking on his desk. He saw Tullio Levi-Civita’s Absolute Differential Calculus, a book he had tried to get from the library. Welton, meanwhile, looked at Feynman’s desk and realized why he had not been able to find A. P. Wills’s Vector and Tensor Analysis. Nervous boasting ensued. The Saratoga Springs sophomore claimed to know all about general relativity. The Far Rockaway sophomore announced that he had already learned quantum mechanics from a book by someone called Dirac. They traded several hours’ worth of sketchy knowledge about Einstein’s work on gravitation. Both boys realized that, as Welton put it, “cooperation in the struggle against a crew of aggressive-looking seniors and graduate students might be mutually beneficial.”
Nor were they alone in recognizing that Introduction to Theoretical Physics now harbored a pair of exceptional young students. Stratton, handling the teaching chores for the first semester, would sometimes lose the thread of a string of equations at the blackboard, the color of his face shifting perceptibly toward red. He would then pass the chalk, saying, “Mr. Feynman, how did you handle this problem,” and Feynman would stride to the blackboard.
A law of nature expressed in a strange form came up again and again that term: the principle of least action. It arose in a simple sort of problem. A lifeguard, some feet up the beach, sees a drowning swimmer diagonally ahead, some distance offshore and some distance to one side. The lifeguard can run at a certain speed and swim at a certain lesser speed. How does one find the fastest path to the swimmer?
The path of least time. The lifeguard travels faster on land than in water; the best path is a compromise. Light, which also travels faster through air than through water, seems somehow to choose precisely this path on its way from an underwater fish to the eye of an observer.
A straight line, the shortest path, is not the fastest. The lifeguard will spend too much time in the water. If instead he angles far up the beach and dives in directly opposite the swimmer—the path of least water—he still wastes time. The best compromise is the path of least time, angling up the beach and then turning for a sharper angle through the water. Any calculus student can find the best path. A lifeguard has to trust his instincts. The mathematician Pierre de Fermat guessed in 1661 that the bending of a ray of light as it passes from air into water or glass—the refraction that makes possible lenses and mirages—occurs because light behaves like a lifeguard with perfect instincts. It follows the path of least time. (Fermat, reasoning backward, surmised that light must travel more slowly in denser media. Later Newton and his followers thought they had proved the opposite: that light, like sound, travels faster through water than through air. Fermat, with his faith in a principle of simplicity, was right.)
Theology, philosophy, and physics had not yet become so distinct from one another, and scientists found it natural to ask what sort of universe God would make. Even in the quantum era the question had not fully disappeared from the scientific consciousness. Einstein did not hesitate to invoke His name. Yet when Einstein doubted that God played dice with the world, or when he uttered phrases like the one later inscribed in the stone of Fine Hall at Princeton, “The Lord God is subtle, but malicious he is not,” the great man was playing a delicate game with language. He had found a formulation easily understood and imitated by physicists, religious or not. He could express convictions about how the universe ought to be designed without giving offense either to the most literal believers in God or to his most disbelieving professional colleagues, who were happy to read God as a poetic shorthand for whatever laws or principles rule this flux of matter and energy we happen to inhabit. Einstein’s piety was sincere but neutral, acceptable even to the vehemently antireligious Dirac, of whom Wolfgang Pauli once complained, “Our friend Dirac, too, has a religion, and its guiding principle is ‘There is no God and Dirac is His prophet.’”
Scientists of the seventeenth and eighteenth centuries also had to play a double game, and the stakes were higher. Denying God was still a capital offense, and not just in theory: offenders could be hanged or burned. Scientists made an assault against faith merely by insisting that knowledge—some knowledge—must wait on observation and experiment. It was not so obvious that one category of philosopher should investigate the motion of falling bodies and another the provenance of miracles. On the contrary, Newton and his contemporaries happily constructed scientific proofs of God’s existence or employed God as a premise in a chain of reasoning. Elementary particles must be indivisible, Newton wrote in his Opticks, “so very hard as never to wear or break in pieces; no ordinary power being able to divide what God himself made one in the first creation.” Elementary particles cannot be indivisible, René Descartes wrote in his Principles of Philosophy:
There cannot be any atoms or parts of matter which are indivisible of their own nature (as certain philosophers have imagined)… . For though God had rendered the particle so small that it was beyond the power of any creature to divide it, He could not deprive Himself of the power of division, because it was absolutely impossible that He should lessen His own omnipotence… .
Could God make atoms so flawed that they could break? Could God make atoms so perfect that they would defy His power to break them? It was only one of the difficulties thrown up by God’s omnipotence, even before relativity placed a precise upper limit on velocity and before quantum mechanics placed a precise upper limit on certainty. The natural philosophers wished to affirm the presence and power of God in every corner of the universe. Yet even more fervently they wished to expose the mechanisms by which planets swerved, bodies fell, and projectiles recoiled in the absence of any divine intervention. No wonder Descartes appended a blanket disclaimer: “At the same time, recalling my insignificance, I affirm nothing, but submit all these opinions to the authority of the Catholic Church, and to the judgment of the more sage; and I wish no one to believe anything I have written, unless he is personally persuaded by the evidence of reason.”
The more competently science performed, the less it needed God. There was no special providence in the fall of a sparrow; just Newton’s second law, f = ma. Forces, masses, and acceleration were the same everywhere. The Newtonian apple fell from its tree as mechanistically and predictably as the moon fell around the Newtonian earth. Why does the moon follow its curved path? Because its path is the sum of all the tiny paths it takes in successive instants of time; and because at each instant its forward motion is deflected, like the apple, toward the earth. God need not choose the path. Or, having chosen once, in creating a universe with such laws, He need not choose again. A God that does not intervene is a God receding into a distant, harmless background.
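The instant-by-instant picture lends itself to a numerical sketch — not Newton's own procedure, only its spirit, with toy units invented for illustration (GM = 1, a circular orbit of radius 1). At each tiny step the inverse-square pull deflects the velocity toward the center, and the velocity then carries the body a little way forward; the accumulated straight segments close into an orbit.

```python
import math

# Toy units, invented for illustration: GM = 1, so a circular orbit
# of radius 1 has speed 1 and period 2*pi.
GM = 1.0
x, y = 1.0, 0.0        # position
vx, vy = 0.0, 1.0      # speed chosen for a circular orbit
dt = 0.001

for _ in range(10000):                         # about 1.6 full orbits
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3    # pull toward the center
    vx += ax * dt                              # deflect the forward motion...
    vy += ay * dt
    x += vx * dt                               # ...then take the next tiny step
    y += vy * dt

print(math.hypot(x, y))  # stays close to 1: the deflections close into a circle
```

Updating the velocity before the position (semi-implicit Euler) keeps the numerical orbit from spiraling outward, so the radius stays near 1 even over many steps.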
Yet even as the eighteenth-century philosopher scientists learned to compute the paths of planets and projectiles by Newton’s methods, a French geometer and philosophe, Pierre-Louis Moreau de Maupertuis, discovered a strangely magical new way of seeing such paths. In Maupertuis’s scheme a planet’s path has a logic that cannot be seen from the vantage point of someone merely adding and subtracting the forces at work instant by instant. He and his successors, and especially Joseph Louis Lagrange, showed that the paths of moving objects are always, in a special sense, the most economical. They are the paths that minimize a quantity called action—a quantity based on the object’s velocity, its mass, and the space it traverses. No matter what forces are at work, a planet somehow chooses the cheapest, the simplest, the best of all possible paths. It is as if God—a parsimonious God—were after all leaving his stamp.
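The principle can be checked in miniature for the simplest case: a ball tossed straight up. In this sketch (mass, gravity, and time step all invented for illustration) the action is discretized as the sum over small steps of kinetic minus potential energy times the step length. The true parabolic trajectory gives a smaller action than any nearby path with the same start and end points.

```python
import math

# Discretized action: S = sum over steps of (kinetic - potential) * dt
# for a ball of mass m tossed straight up in gravity g (values invented).
m, g, dt, n = 1.0, 9.8, 0.01, 100
T = n * dt

def action(path):
    """Action of a path given as n+1 heights at times 0, dt, ..., T."""
    S = 0.0
    for i in range(n):
        v = (path[i + 1] - path[i]) / dt       # velocity on this step
        x_mid = (path[i] + path[i + 1]) / 2    # height at midstep
        S += (0.5 * m * v * v - m * g * x_mid) * dt
    return S

# The true trajectory: leaves height 0 and returns to height 0 at time T.
v0 = g * T / 2
true_path = [v0 * (i * dt) - 0.5 * g * (i * dt) ** 2 for i in range(n + 1)]

# Any wiggle with the endpoints pinned in place only increases the action.
wiggled = [h + 0.1 * math.sin(math.pi * i / n) for i, h in enumerate(true_path)]
print(action(true_path) < action(wiggled))  # True
```

Because the potential here is linear in height, the discrete action is a convex quadratic in the interior heights, and the sampled parabola is its exact minimizer — so every pinned-endpoint wiggle costs extra action, which is the content of the principle for this case.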
None of which mattered to Feynman when he encountered Lagrange’s method in the form of a computational shortcut in Introduction to Theoretical Physics. All he knew was that he did not like it. To his friend Welton and to the rest of the class the Lagrange formulation seemed elegant and useful. It let them disregard many of the forces acting in a problem and cut straight through to an answer. It served especially well in freeing them from the right-angle coordinate geometry of the classical reference frame required by Newton’s equations. Any reference frame would do for the Lagrangian technique. Feynman refused to employ it. He said he would not feel he understood the real physics of a system until he had painstakingly isolated and calculated all the forces. The problems got harder and harder as the class advanced through classical mechanics. Balls rolled down inclines, spun in paraboloids—Feynman would resort to ingenious computational tricks like the ones he learned in his mathematics-team days, instead of the seemingly blind, surefire Lagrangian method.