100
I once introduced Bob Geroch for a talk he was giving. It’s useful in these situations to find an interesting anecdote to relate about the speaker, so I Googled around and stumbled on something perfect: a Star Trek site featuring a map of our galaxy, prominently displaying something called the “Geroch Wormhole.” (Apparently it connects the Beta Quadrant to the Delta Quadrant, and was the source of a nasty spat with the Romulans.) So I printed a copy of the map on a transparency and showed it during my introduction, to great amusement all around. Later Bob told me he assumed I had made it up myself, and was pleased to hear that his work on wormholes had produced a beneficial practical effect on the outside world. The paper that showed you would have to make a closed timelike curve in order to build a wormhole is Geroch (1967).
101
Hawking (1991). In his conclusion, Hawking also claimed that there was observational evidence that travel backward in time was impossible, based on the fact that we had not been invaded by historians from the future. He was joking (I’m pretty sure). Even if it were possible to construct closed timelike curves from scratch, they could never be used to travel backward to a time before the closed curves had been constructed. So there is no observational evidence against the possibility of building a time machine, just evidence that no one has built one yet.
7. RUNNING TIME BACKWARD
102
See O’Connor and Robertson (1999), Rouse Ball (1908). You’ll remember Laplace as one of the people who were speculating about black holes long before general relativity came along.
103
Apparently Napoleon found this quite amusing. He related Laplace’s quip to Joseph Lagrange, another distinguished physicist and mathematician of the time. Lagrange responded, “Ah, but it is a fine hypothesis; it explains so many things.” Rouse Ball (1908), 427.
104
Laplace (2007).
105
There is no worry that Laplace’s Demon exists out there in the universe, smugly predicting our every move. For one thing, it would have to be as big as the universe, and have a computational power equal to that of the universe itself.
106
Stoppard (1999), 103-4. Valentine, one presumes, is referring to the idea that the phenomenon of chaos undermines determinism. Chaotic dynamics, which is very real, happens when small changes in initial conditions lead to large differences in later evolution. As a practical matter, this makes the future extremely difficult to predict for systems that are chaotic (not everything is), since there will always be some tiny error in our understanding of the present state of a system. I’m not sure that this argument carries much force with respect to Laplace’s Demon. As a practical matter, there was no danger that we were ever going to know the entire state of the universe, much less use it to predict the future; this conception was always a matter of principle. And the prospect of chaos doesn’t change that at all.
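To make the sensitivity to initial conditions concrete, here is a minimal sketch using the logistic map, a standard toy model of chaos (my own illustration, not an example from the text): two starting points differing by one part in a billion end up in completely different places.

```python
# The logistic map x -> r x (1 - x) with r = 4.0, a standard chaotic choice.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x1, x2 = 0.400000000, 0.400000001   # nearly identical initial conditions
for _ in range(60):
    x1, x2 = logistic(x1), logistic(x2)

print(abs(x1 - x2))   # typically of order 1: the initial closeness is forgotten
```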
107
Granted, physicists couldn’t actually live on any of our checkerboards, for essentially anthropic reasons: The setups are too simplistic to allow for the formation and evolution of complex structures that we might identify with intelligent observers. This stifling simplicity can be traced to an absence of interesting “interactions” between the different elements. In the checkerboard worlds we will look at, the entire description consists of just a single kind of thing (such as a vertical or diagonal line) stretching on without alteration. An interesting world is one in which things can persist more or less for an extended period of time, but gradually change via the influence of interactions with other things in the world.
108
This “one moment at a time” business isn’t perfectly precise, as the real world is not (as far as we know) divided up into discrete steps of time. Time is continuous, flowing smoothly from one moment to another while passing through every possible moment in between. But that’s okay; calculus provides exactly the right set of mathematical tools to make sense of “chugging forward one moment at a time” when time itself is continuous.
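As a rough sketch of what calculus licenses here, one can approximate continuous evolution by chugging forward in small discrete steps; the numbers below (a body falling under gravity) are purely illustrative, and the exact answer is recovered in the limit of ever-smaller steps.

```python
dt = 0.001        # a small time step, in seconds
g = 9.8           # gravitational acceleration, in m/s^2
x, v = 0.0, 0.0   # initial position and velocity

for _ in range(1000):   # evolve for one second, one small step at a time
    x += v * dt         # position changes according to the velocity
    v -= g * dt         # velocity changes according to the acceleration

print(x)   # about -4.9 m, approaching the exact calculus answer -g/2 as dt -> 0
```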
109
Note that translations in space and spatial inversions (reflections between left and right) are also perfectly good symmetries. That doesn’t seem as obvious, just from looking at the picture, but that’s only because the states themselves (the patterns of 0’s and 1’s) are not invariant under spatial shifts or reflections.
Lest you think these statements are completely vacuous, there are some symmetries that might have existed, but don’t. We cannot, for example, exchange the roles of time and space. As a general rule, the more symmetries you have, the simpler things become.
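For concreteness, here is a small sketch of the distinction between symmetric laws and non-symmetric states, using a made-up local rule (the XOR of a cell’s two neighbors, not one of the rules from the checkerboard figures): the rule commutes with spatial reflection even though a typical state is not reflection-symmetric.

```python
# Hypothetical rule: each cell becomes the XOR of its two neighbors,
# on a circular row of cells.
def update(row):
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

state = [0, 1, 1, 0, 1, 0, 0, 0]        # not symmetric under reflection
mirrored = list(reversed(state))

# Reflect-then-update agrees with update-then-reflect: the law has the symmetry.
print(update(mirrored) == list(reversed(update(state))))   # True
```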
110
This whole checkerboard-worlds idea sometimes goes by the name of cellular automata. A cellular automaton is just some discrete grid that follows a rule for determining the next row from the state of the previous row. They were first investigated in the 1940s and ’50s by John von Neumann, who is also the guy who figured out how entropy works in quantum mechanics. Cellular automata are fascinating for many reasons having little to do with the arrow of time; they can exhibit great complexity and can function as universal computers. See Poundstone (1984) or Shalizi (2009).
Not only are we disrespecting cellular automata by pulling them out only to illustrate a few simple features of time reversal and information conservation, but we are also not speaking the usual language of cellular-automaton cognoscenti. For one thing, computer scientists typically imagine that time runs from top to bottom. That’s crazy; everyone knows that time runs from bottom to top on a diagram. More notably, even though we are speaking as if each square is either in the state “white” or the state “gray,” we just admitted that you have to keep track of more information than that to reliably evolve into the future in what we are calling example B. That’s no problem; it just means that we’re dealing with an automaton where the “cells” can take on more than two different states. One could imagine going beyond white and gray to allow squares to have any of four different colors. But for our current purposes that’s a level of complexity we needn’t explicitly introduce.
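For readers who want to play with one, here is a minimal cellular automaton in Python. This is the well-known “Rule 90” (the same XOR-of-neighbors rule used in the symmetry sketch above), not one of the book’s specific examples, with each cell limited to two states.

```python
def next_row(row):
    """Each cell in the next row is the XOR of its two neighbors (Rule 90)."""
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

rows = [[0] * 7 + [1] + [0] * 7]     # start with a single "gray" square
for _ in range(7):
    rows.append(next_row(rows[-1]))

# Print with time running from bottom to top, as insisted upon above.
for row in reversed(rows):
    print("".join(".X"[cell] for cell in row))
```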
111
If the laws of physics are not completely deterministic—if they involve some random, stochastic element—then the “specification” of the future evolution will involve probabilities, rather than certainties. The point is that the state includes all of the information that is required to do as well as we can possibly do, given the laws of physics that we are working with.
112
Sometimes people count relativity as a distinct theory, distinguishing between “classical mechanics” and “relativistic mechanics.” But more often they don’t. It makes sense, for most purposes, to think of relativity as introducing a particular kind of classical mechanics, rather than a completely new way of thinking. The way we specify the state of a system, for example, is pretty much the same in relativity as it would be in Newtonian mechanics. Quantum mechanics, on the other hand, really is quite different. So when we deploy the adjective classical, it will usually denote a contrast with quantum, unless otherwise specified.
113
It is not known, at least to me, whether Newton himself actually played billiards, although the game certainly existed in Britain at the time. Immanuel Kant, on the other hand, is known to have made pocket money as a student playing billiards (as well as cards).
114
So the momentum is not just a number; it’s a vector, typically denoted by a little arrow. A vector can be defined as a magnitude (length) and a direction, or as a combination of sub-vectors (components) pointing along each direction of space. You will hear people speak, for example, of “the momentum along the x-direction.”
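As a small illustrative sketch (the numbers are invented), here are the two ways of packaging the same vector, components versus magnitude:

```python
import math

mass = 2.0                    # kilograms
velocity = (3.0, 4.0)         # meters per second, along x and y

momentum = tuple(mass * v for v in velocity)   # p = m v, component by component
magnitude = math.hypot(*momentum)              # the length of the vector

print(momentum)    # (6.0, 8.0): the momentum "along the x-direction" is 6.0
print(magnitude)   # 10.0
```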
115
This is a really good question, one that bugged me for years. At various points in the study of classical mechanics, one hears one’s teachers talk blithely about momenta that are completely inconsistent with the actual trajectory of the system. What is going on?
The problem is that, when we are first introduced to the concept of “momentum,” it is typically defined as the mass times the velocity. But somewhere along the line, as you move into more esoteric realms of classical mechanics, that idea ceases to be a definition and becomes something that you can derive from the underlying theory. In other words, we start conceiving of the essence of momentum as “some vector (magnitude and direction) defined at each point along the path of the particle,” and then derive equations of motion that insist the momentum will be equal to the mass times the velocity. (This is known as the Hamiltonian approach to dynamics.) That’s the way we are thinking in our discussion of time reversal. The momentum is an independent quantity, part of the state of the system; it is equal to the mass times the velocity only when the laws of physics are being obeyed.
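In standard notation (this is textbook Hamiltonian mechanics, not anything specific to this book), for a particle of mass m moving in a potential V(x) the Hamiltonian and its equations of motion read:

```latex
H(x, p) = \frac{p^2}{2m} + V(x), \qquad
\frac{dx}{dt} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\frac{dp}{dt} = -\frac{\partial H}{\partial x} = -\frac{dV}{dx}.
```

The first equation of motion is what derives p = m(dx/dt): momentum equals mass times velocity only along trajectories that actually obey these equations.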
116
David Albert (2000) has put forward a radically different take on all this. He suggests that we should define a “state” to be just the positions of particles, not the positions and momenta (which he would call the “dynamical condition”). He justifies this by arguing that states should be logically independent at each moment of time; the states in the future should not depend on the present state, which they clearly do in the way we defined them, as that was the entire point. But by redefining things in this way, Albert is able to live with the most straightforward definition of time-reversal invariance: “A sequence of states played backward in time still obeys the same laws of physics,” without resorting to any arbitrary-sounding transformations along the way. The price he pays is that, although Newtonian mechanics is time-reversal invariant under this definition, almost no other theory is, including classical electromagnetism. Which Albert admits; he claims that the conventional understanding that electromagnetism is invariant under time reversal, handed down from Maxwell to modern textbooks, is simply wrong. As one might expect, this stance invited a fusillade of denunciations; see, for example, Earman (2002), Arntzenius (2004), or Malament (2004).
Most physicists would say that it just doesn’t matter. There’s no such thing as the one true meaning of time-reversal invariance, which is out there in the world waiting for us to capture its essence. There are only various concepts, which we may or may not find useful in thinking about how the world works. Nobody disagrees on how electrons move in the presence of a magnetic field; they just disagree on the words to use when describing that situation. Physicists tend to express bafflement that philosophers care so much about the words. Philosophers, for their part, tend to express exasperation that physicists can use words all the time without knowing what they actually mean.
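For what the conventional definition amounts to in practice, here is a minimal numerical sketch (my own toy example, a particle under constant gravity): evolve forward, reverse the momentum, evolve forward again under the same law, and the system retraces its steps.

```python
dt, g = 0.01, 9.8
x, v = 0.0, 5.0    # initial position and (upward) velocity

def step(x, v):
    """Exact evolution through one interval dt under constant acceleration -g."""
    return x + v * dt - 0.5 * g * dt**2, v - g * dt

for _ in range(100):    # forward for one second
    x, v = step(x, v)

v = -v                  # the time-reversal transformation: flip the momentum

for _ in range(100):    # evolve "forward" again with the reversed momentum
    x, v = step(x, v)

print(x, -v)            # back to 0.0 and 5.0 (up to floating-point rounding)
```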
117
Elementary particles come in the form of “matter particles,” called “fermions,” and “force particles,” called “bosons.” The known bosons include the photon carrying electromagnetism, the gluons carrying the strong nuclear force, and the W and Z bosons carrying the weak nuclear force. The known fermions fall neatly into two types: six different kinds of “quarks,” which feel the strong force and get bound into composite particles like protons and neutrons, and six different kinds of “leptons,” which do not feel the strong force and fly around freely. These two groups of six are further divided into collections of three particles each; there are three quarks with electric charge +2/3 (the up, charm, and top quarks), three quarks with electric charge -1/3 (the down, strange, and bottom quarks), three leptons with electric charge -1 (the electron, the muon, and the tau), and three leptons with zero charge (the electron neutrino, the muon neutrino, and the tau neutrino). To add to the confusion, every type of quark and lepton has a corresponding antiparticle with the opposite electric charge; there is an anti-up-quark with charge -2/3, and so on.
All of which allows us to be a little more specific about the decay of the neutron (two down quarks and one up): it actually creates a proton (two up quarks and one down), an electron, and an electron antineutrino. It’s important that it’s an antineutrino, because that way the net number of leptons doesn’t change; the electron counts as one lepton, but the antineutrino counts as minus one lepton, so they cancel each other out. Physicists have never observed a process in which the net number of leptons or the net number of quarks changes, although they suspect that such processes must exist. After all, there seem to be a lot more quarks than antiquarks in the real world. (We don’t know the net number of leptons very well, since it’s very hard to detect most neutrinos in the universe, and there could be a lot of antineutrinos out there.)
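As a sketch of the bookkeeping (the quark charges and lepton numbers are the standard ones; the code itself is just an illustration), one can check that neutron decay conserves both electric charge and net lepton number:

```python
from fractions import Fraction

# Electric charge (in units of the proton charge) and lepton number.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "e-": -1, "anti-nu": 0}
lepton = {"u": 0, "d": 0, "e-": 1, "anti-nu": -1}

before = ["d", "d", "u"]                     # the neutron, at the quark level
after = ["u", "u", "d", "e-", "anti-nu"]     # proton + electron + antineutrino

for name, q in [("charge", charge), ("lepton number", lepton)]:
    print(name, sum(q[p] for p in before), "->", sum(q[p] for p in after))
# charge 0 -> 0
# lepton number 0 -> 0
```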
118
“Easiest” means “lowest in mass,” because it takes more energy to make higher-mass particles, and when you do make them they tend to decay more quickly. The lightest two kinds of quarks are the up (charge +2/3) and the down (charge -1/3), but combining an up with an anti-down does not give a neutral particle, so we have to look at higher-mass quarks. The next heaviest is the strange quark, also with charge -1/3; its antiparticle, the anti-strange, has charge +1/3, and can be combined with a down to make a neutral kaon.
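Spelling out the arithmetic (these are just the standard quark charges):

```latex
q_u + q_{\bar{d}} = +\tfrac{2}{3} + \tfrac{1}{3} = +1 \quad \text{(charged)},
\qquad
q_d + q_{\bar{s}} = -\tfrac{1}{3} + \tfrac{1}{3} = 0 \quad \text{(the neutral kaon)}.
```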
119
Angelopoulos et al. (1998). A related experiment, measuring time-reversal violation by neutral kaons in a slightly different way, was carried out by the KTeV collaboration at Fermilab, outside Chicago (Alavi-Harati et al. 2000).