150
Davis (1985, 11) writes: “I will lay out four rules, but each is really only a special application of the great principle of causal order: after cannot cause before . . . there is no way to change the past . . . one-way arrows flow with time.”
151
There are a number of references that go into the story of Maxwell’s Demon in greater detail than we will here. Leff and Rex (2003) collect a number of the original papers. Von Baeyer (1998) uses the Demon as a theme to trace the history of thermodynamics; Seife (2006) gives an excellent introduction to information theory and its role in unraveling this puzzle. Bennett and Landauer themselves wrote about their work in Scientific American (Bennett and Landauer, 1985; Bennett, 1987).
152
This scenario can be elaborated on further. Imagine that the box was embedded in a bath of thermal gas at some temperature T, and that the walls of the box conducted heat, so that the molecule inside was kept in thermal equilibrium with the gas outside. If we could continually renew our information about which side of the box the molecule was on, we could keep extracting energy from it, by cleverly inserting the piston on the appropriate side; after the molecule lost energy to the piston, it would gain the energy back from the thermal bath. What we’ve done is to construct a perpetual motion machine, powered only by our hypothetical limitless supply of information. (Which drives home the fact that information never just comes for free.) Szilárd could even quantify precisely how much energy could be extracted from a single bit of information: kT log 2, where k is Boltzmann’s constant.
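For a sense of scale, here is that formula with numbers plugged in; the logarithm is the natural one, and room temperature (T = 300 K) is my illustrative assumption rather than anything in Szilárd’s paper:

```latex
% Energy extractable from one bit of information, assuming T = 300 K:
W = kT \ln 2
  \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
  \approx 2.9 \times 10^{-21}\,\mathrm{J}
```

A minuscule amount of energy per bit; the point is the principle, not the wattage.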
153
It’s interesting how, just as much of the pioneering work on thermodynamics in the early nineteenth century was carried out by practical-minded folks who were interested in building better steam engines, much of the pioneering work on information theory in the twentieth century has been carried out by practical-minded folks who were interested in building better communications systems and computers.
154
We can go further than this. Just as Gibbs came up with a definition of entropy that referred to the probability that a system was in various different states, we can define the “information entropy” of a space of possible messages in terms of the probability that the message takes various forms. The formulas for the Gibbs entropy and the information entropy turn out to be identical, although the symbols in them have slightly different meanings.
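Side by side, in their standard textbook forms (not specific to any one source), with p_i the probability of microstate or message i:

```latex
% Gibbs entropy (sum over microstates of a physical system):
S = -k \sum_i p_i \ln p_i

% Shannon information entropy (sum over possible messages):
H = -\sum_i p_i \log_2 p_i
```

Apart from Boltzmann’s constant k and the base of the logarithm, which merely set the units (joules per kelvin versus bits), the two expressions are the same.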
155
For recent overviews, see Morange (2008) or Regis (2009).
156
The argument that follows comes from Bunn (2009), which was inspired by Styer (2008). See also Lineweaver and Egan (2008) for details and additional arguments.
157
Crick (1990).
158
Schrödinger (1944), 69.
159
From Being to Becoming is the title of a popular book (1980) by Belgian Nobel Laureate Ilya Prigogine, who helped pioneer the study of “dissipative structures” and self-organizing systems in statistical mechanics. See also Prigogine (1955), Kauffman (1993), and Avery (2003).
160
A good recent book is Nelson (2007).
161
He would have been even more wary in modern times; a Google search on “free energy” returns a lot of links to perpetual-motion schemes, along with some resources on clean energy.
162
Informally speaking, the concepts of “useful” and “useless” energy certainly predate Gibbs; his contribution was to attach specific formulas to the ideas, which were later elaborated on by German physicist Hermann von Helmholtz. In particular, what we are calling the “useless” energy is (in Helmholtz’s formulation) simply the temperature of the body times its entropy. The free energy is then the total internal energy of the body minus that quantity.
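In symbols, using the standard modern notation rather than Helmholtz’s original (U for internal energy, T for temperature, S for entropy):

```latex
% Helmholtz free energy: total energy minus the "useless" part.
F = U - TS
% TS is the useless energy (temperature times entropy);
% F is what remains available to do useful work.
```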
163
In the 1950s, Claude Shannon built “The Ultimate Machine,” based on an idea by Marvin Minsky. In its resting state, the machine looked like a box with a single switch on one face. If you were to flip the switch, the box would buzz loudly. Then the lid would open and a hand would reach out, flipping the switch back to its original position, and retreat back into the box, which became quiet once more. One possible moral: persistence can be a good in its own right.
164
Specifically, more massive organisms—which typically have more moving parts and are correspondingly more complex—consume free energy at a higher rate per unit mass than less massive organisms. See, for example, Chaisson (2001).
165
This and other quantitative measures of complexity are associated with the work of Andrey Kolmogorov, Ray Solomonoff, and Gregory Chaitin. For a discussion, see, for example, Gell-Mann (1994).
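Kolmogorov complexity itself is uncomputable, but the length of a compressed file is a crude, computable proxy for the same idea: how short a description suffices to reproduce the data. Here is a minimal sketch in Python using the standard zlib module; the example strings are my own illustration, not anything from the literature:

```python
import random
import zlib

def compressed_length(s: str) -> int:
    """Bytes needed to store s after zlib compression: a rough,
    computable stand-in for (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8")))

# A highly regular string has a short description ("'ab' repeated 500 times")...
regular = "ab" * 500

# ...while a random string of the same length admits no comparable shortcut.
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))

print(compressed_length(regular))  # small: the pattern compresses away
print(compressed_length(noisy))    # several times larger
```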
166
For some thoughts on this particular question, see Dyson (1979) or Adams and Laughlin (1999).
10. RECURRENT NIGHTMARES
167
Nietzsche (2001), 194. What is it with all the demons, anyway? Between Pascal’s Demon, Maxwell’s Demon, and Nietzsche’s Demon, it’s beginning to look more like Dante’s Inferno than a science book around here. Earlier in The Gay Science (189), Nietzsche touches on physics explicitly, although in a somewhat different context: “We, however, want to become who we are—human beings who are new, unique, incomparable, who give themselves laws, who create themselves! To that end we must become the best students and discoverers of everything lawful and necessary in the world: we must become physicists in order to be creators in this sense—while hitherto all valuations and ideals have been built on ignorance of physics or in contradiction to it. So, long live physics! And even more, long live what compels us to it—our honesty!”
168
Note that, if each cycle were truly a perfect copy of the previous cycles, you would have no memory of having experienced any of the earlier versions (since you didn’t have such a memory before, and it’s a perfect copy). It’s not clear how different such a scenario would be from one in which the cycle occurred only once.
169
For more of the story, see Galison (2003). Poincaré’s paper is (1890).
170
Another subtlety is that, while the system is guaranteed to return to its starting configuration, it is not guaranteed to attain every possible configuration. The idea that a sufficiently complicated system does visit every possible state is equivalent to the idea that the system is ergodic, which we discussed in Chapter Eight in the context of justifying Boltzmann’s approach to statistical mechanics. It’s true for some systems, but not for all systems, and not even for all interesting ones.
171
It’s my book, so Pluto still counts.
172
Roughly speaking, the recurrence time is given by the exponential of the maximum entropy of the system, in units of the typical time it takes for the system to evolve from one state to the next. (We are assuming some fixed definition of when two states are sufficiently different as to count as distinguishable.) Remember that the entropy is the logarithm of the number of states, and an exponential undoes a logarithm; in other words, the recurrence time is simply proportional to the total number of possible states the system can be in, which makes perfect sense if the system spends roughly equal amounts of time in each allowed state.
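Schematically, with τ the typical time to evolve from one state to the next and S_max the maximum entropy (in units where Boltzmann’s constant equals 1), the statement above reads:

```latex
% Recurrence time ~ exponential of the maximum entropy,
% i.e., proportional to the total number of distinguishable states:
t_{\mathrm{rec}} \sim \tau \, e^{S_{\mathrm{max}}}
                 = \tau \times (\text{number of possible states})
```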
173
Poincaré (1893).
174
Zermelo (1896a).
175
Boltzmann (1896).
176
Zermelo (1896b); Boltzmann (1897).
177
Boltzmann (1897).
178
“At least” three ways, because the human imagination is pretty clever. But there aren’t that many choices. Another one would be that the underlying laws of physics are intrinsically irreversible.
179
Boltzmann (1896).
180
We’re imagining that the spirit of the recurrence theorem is valid, not the letter of it. The proof of the recurrence theorem requires that the motions of particles be bounded—perhaps because they are planets moving in closed orbits around the Sun, or because they are molecules confined to a box of gas. Neither case really applies to the universe, and no one is suggesting that either one does. If the universe consisted of a finite number of particles moving in an infinite space, we would expect some of them to simply move away forever, and recurrences would not happen. However, if there are an infinite number of particles in an infinite space, we can have a fixed finite average density—the number of particles per (for example) cubic light-year. In that case, fluctuations of the form illustrated here are sure to occur, which look for all the world like Poincaré’s recurrences.
181
Boltzmann (1897). He made a very similar suggestion in a slightly earlier paper (1895), where he attributed it to his “old assistant, Dr. Schuetz.” It is unclear whether this attribution should be interpreted as a generous sharing of credit, or a precautionary laying of blame.
182
Note that Boltzmann’s reasoning actually goes past the straightforward implications of the recurrence theorem. The crucial point now is not that any particular low-entropy starting state will be repeated infinitely often in the future—although that’s true—but that anomalously low-entropy states of all sorts will eventually appear as random fluctuations.
183
Epicurus is associated with Epicureanism, a philosophical precursor to utilitarianism. In the popular imagination, “epicurean” conjures up visions of hedonism and sensual pleasure, especially where food and drink are concerned; while Epicurus himself took pleasure as the ultimate good, his notion of “pleasure” was closer to “curling up with a good book” than “partying late into the night” or “gorging yourself to excess.”
Much of the original writing by the Atomists has been lost; Epicurus, in particular, wrote a thirty-seven-volume treatise on nature, but his only surviving writings are three letters reproduced in Diogenes Laertius’s Lives of the Philosophers. The atheistic implications of their materialist approach to philosophy were not always popular with later generations.
184
Lucretius (1995), 53.
185
A careful quantitative understanding of the likelihood of different kinds of fluctuations was achieved only relatively recently, in the form of something called the “fluctuation theorem” (Evans and Searles, 2002). But the basic idea has been understood for a long time. The probability that the entropy of a system will take a random jump downward is proportional to the exponential of minus the change in entropy. That’s a fancy way of saying that small fluctuations are common, and large fluctuations are extremely rare.
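In equation form, with ΔS the size of the downward jump (in units where Boltzmann’s constant equals 1), the scaling just described is:

```latex
% Probability of a downward entropy fluctuation of size \Delta S:
P(\text{entropy drops by } \Delta S) \propto e^{-\Delta S}
% Small fluctuations are common; large ones are exponentially rare.
```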
186
It’s tempting to think, But it’s incredibly unlikely for a featureless collection of gas molecules in equilibrium to fluctuate into a pumpkin pie, while it’s not that hard to imagine a pie being created in a world with a baker and so forth. True enough. But as hard as it is to fluctuate a pie all by itself, it’s much more difficult to fluctuate a baker and a pumpkin patch. Most pies that come into being under these assumptions—an eternal universe, fluctuating around equilibrium—will be all by themselves in the universe. The fact that the world with which we are familiar doesn’t seem to work that way is evidence that something about these assumptions is not right.
187
Eddington (1931). Note that what really matters here is not so much the likelihood of significant dips in the entropy of the entire universe, but the conditional question: “Given that one subset of the universe has experienced a dip in entropy, what should we expect of the rest of the universe?” As long as the subset in question is coupled weakly to everything else, the answer is what you would expect, and what Eddington indicated: The entropy of the rest of the universe is likely to be as high as ever. For discussions (at a highly mathematical level) in the context of classical statistical mechanics, see the books by Dembo and Zeitouni (1998) or Ellis (2005). For related issues in the context of quantum mechanics, see Linden et al. (2008).
188
Albrecht and Sorbo (2004).
189
Feynman, Leighton, and Sands (1970).
190
This discussion draws from Hartle and Srednicki (2007). See also Olum (2002), Neal (2006), Page (2008), Garriga and Vilenkin (2008), and Bousso, Freivogel, and Yang (2008).
191
There are a couple of closely related questions that arise when we start comparing different kinds of observers in a very large universe. One is the “simulation argument” (Bostrom, 2003), which says that it should be very easy for an advanced civilization to make a powerful computer that simulates a huge number of intelligent beings, and therefore we are most likely to be living in a computer simulation. Another is the “doomsday argument” (Leslie, 1990; Gott, 1993), which says that the human race is unlikely to last for a very long time, because if it did, those of us (now) who live in the early days of human civilization would be very atypical observers. These are very provocative arguments; their persuasive value is left up to the judgment of the reader.