So Carnot appreciated that engines did something that could not be undone. It was Clausius, in 1850, who understood that this reflected a law of nature. He formulated his law as “heat does not spontaneously flow from cold bodies to warm ones.” Fill a balloon with hot water and immerse it in cold water. Everyone knows that the temperatures will tend to average out: The water in the balloon will cool down as the surrounding liquid warms up. The opposite never happens. Physical systems evolve toward a state of equilibrium—a quiescent configuration that is as uniform as possible, with equal temperatures in all components. From this insight, Clausius was able to re-derive Carnot’s results concerning steam engines.
So what does Clausius’ law (heat never flows spontaneously from colder bodies to hotter ones) have to do with the Second Law (entropy never spontaneously decreases)? The answer is, they are the same law. In 1865 Clausius managed to reformulate his original maxim in terms of a new quantity, which he called the “entropy.” Take an object that is gradually cooling down—emitting heat into its surroundings. As this process happens, consider at every moment the amount of heat being lost, and divide it by the temperature of the object. The entropy is then the accumulated amount of this quantity (the heat lost divided by the temperature) over the course of the entire process. Clausius showed that the tendency of heat to flow from hot objects to cold ones was precisely equivalent to the claim that the entropy of a closed system would only ever go up, never go down. An equilibrium configuration is simply one in which the entropy has reached its maximum value, and has nowhere else to go; all the objects in contact are at the same temperature.
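In modern notation (the symbols below are the standard textbook ones, not anything Clausius wrote himself, and I am glossing over sign conventions), his prescription amounts to

\[
\Delta S \;=\; \int \frac{\delta Q}{T}\,, \qquad \Delta S \;\geq\; 0 \ \text{ for any closed system,}
\]

where δQ is the little bit of heat exchanged at each moment and T is the temperature at which the exchange happens. A given amount of heat leaving a hot body lowers its entropy by less than the same heat raises the entropy of the cooler surroundings (dividing by a smaller temperature gives a bigger number), which is exactly why heat flowing from hot to cold pushes the total entropy up.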
If that seems a bit abstract, there is a simple way of summing up this view of entropy: It measures the uselessness of a certain amount of energy. There is energy in a gallon of gasoline, and it’s useful—we can put it to work. The process of burning that gasoline to run an engine doesn’t change the total amount of energy; as long as we keep careful track of what happens, energy is always conserved. But along the way, that energy becomes increasingly useless. It turns into heat and noise, as well as the motion of the vehicle powered by that engine, but even that motion eventually slows down due to friction. And as energy transforms from useful to useless, its entropy increases all the while.
The Second Law doesn’t imply that the entropy of a system can never decrease. We could invent a machine that separated out the milk from a cup of coffee, for example. The trick, though, is that we can only decrease the entropy of one thing by creating more entropy elsewhere. We human beings, and the machines that we might use to rearrange the milk and coffee, and the food and fuel each consume—all of these also have entropy, which will inevitably increase along the way. Physicists draw a distinction between open systems—objects that interact significantly with the outside world, exchanging entropy and energy—and closed systems—objects that are essentially isolated from external influences. In an open system, like the coffee and milk we put into our machine, entropy can certainly decrease. But in a closed system—say, the total system of coffee plus milk plus machine plus human operators plus fuel and so on—the entropy will always increase, or at best stay constant.
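As a rough piece of bookkeeping (the labels below are mine, just to tag the pieces of the total system), the Second Law only constrains the sum:

\[
\Delta S_{\text{coffee+milk}} \;<\; 0 \ \text{ is perfectly allowed, so long as }\
\Delta S_{\text{coffee+milk}} + \Delta S_{\text{machine}} + \Delta S_{\text{operators}} + \Delta S_{\text{fuel, exhaust, etc.}} \;\geq\; 0 .
\]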
THE RISE OF ATOMS
The great insights into thermodynamics of Carnot, Clausius, and their colleagues all took place within a “phenomenological” framework. They knew the big picture but not the underlying mechanisms. In particular, they didn’t know about atoms, so they didn’t think of temperature and energy and entropy as properties of some microscopic substrate; they thought of each of them as a real thing, in and of itself. It was common in those days to think of heat in particular as a form of fluid, which could flow from one body to another. The heat-fluid even had a name: “caloric.” And this level of understanding was perfectly adequate for formulating the laws of thermodynamics.
But over the course of the nineteenth century, physicists gradually became convinced that the many substances we find in the world can all be understood as different arrangements of a fixed number of elementary constituents, known as “atoms.” (The physicists actually lagged behind the chemists in their acceptance of atomic theory.) It’s an old idea, dating back to Democritus and other ancient Greeks, but it began to catch on in the nineteenth century for a simple reason: The existence of atoms could explain many observed properties of chemical reactions, which otherwise were simply asserted. Scientists like it when a single simple idea can explain a wide variety of observed phenomena.
These days it is elementary particles such as quarks and leptons that play the role of Democritus’s atoms, but the idea is the same. What a modern scientist calls an “atom” is the smallest possible unit of matter that still counts as a distinct chemical element, such as carbon or nitrogen. But we now understand that such atoms are not indivisible; they consist of electrons orbiting the atomic nucleus, and the nucleus is made of protons and neutrons, which in turn are made of different combinations of quarks. The search for rules obeyed by these elementary building blocks of matter is often called “fundamental” physics, although “elementary” physics would be more accurate (and arguably less self-aggrandizing). Henceforth, I’ll use atoms in the established nineteenth-century sense of chemical elements, not the ancient Greek sense of elementary particles.
The fundamental laws of physics have a fascinating feature: Despite the fact that they govern the behavior of all the matter in the universe, you don’t need to know them to get through your everyday life. Indeed, you would be hard-pressed to discover them, merely on the basis of your immediate experiences. That’s because very large collections of particles obey distinct, autonomous rules of behavior, which don’t really depend on the smaller structures underneath. The underlying rules are referred to as “microscopic” or simply “fundamental,” while the separate rules that apply only to large systems are referred to as “macroscopic” or “emergent.” The behavior of temperature and heat and so forth can certainly be understood in terms of atoms: That’s the subject known as “statistical mechanics.” But it can equally well be understood without knowing anything whatsoever about atoms: That’s the phenomenological approach we’ve been discussing, known as “thermodynamics.” It is a common occurrence in physics that in complex, macroscopic systems, regular patterns emerge dynamically from underlying microscopic rules. Despite the way it is sometimes portrayed, there is no competition between fundamental physics and the study of emergent phenomena; both are fascinating and crucially important to our understanding of nature.
One of the first physicists to advocate atomic theory was a Scotsman, James Clerk Maxwell, who was also responsible for the final formulation of the modern theory of electricity and magnetism. Maxwell, along with Boltzmann in Austria (and following in the footsteps of numerous others), used the idea of atoms to explain the behavior of gases, according to what was known as “kinetic theory.” Maxwell and Boltzmann were able to figure out that the atoms in a gas in a container, fixed at some temperature, should have a certain distribution of velocities—this many would be moving fast, that many would be moving slowly, and so on. These atoms would naturally keep banging against the walls of the container, exerting a tiny force each time they did so. And the accumulated impact of those tiny forces has a name: It is simply the pressure of the gas. In this way, kinetic theory explained features of gases in terms of simpler rules.
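For the record, here is what that kinetic-theory picture looks like in conventional modern form (these formulas are the standard textbook statements, not quotations from Maxwell or Boltzmann). For a dilute gas of N atoms of mass m in a volume V at temperature T, the number of atoms with speeds near v follows

\[
f(v) \;=\; 4\pi N \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^{2}\, e^{-m v^{2}/2 k_B T},
\]

and adding up the momentum those atoms deliver to the walls gives the pressure,

\[
P \;=\; \frac{1}{3}\,\frac{N}{V}\, m\,\langle v^{2}\rangle \;=\; \frac{N k_B T}{V},
\]

which is just the familiar ideal gas law.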
ENTROPY AND DISORDER
But the great triumph of kinetic theory was its use by Boltzmann in formulating a microscopic understanding of entropy. Boltzmann realized that when we look at some macroscopic system, we certainly don’t keep track of the exact properties of every single atom. If we have a glass of water in front of us, and someone sneaks in and (say) switches some of the water molecules around without changing the overall temperature and density and so on, we would never notice. There are many different arrangements of particular atoms that are indistinguishable from our macroscopic perspective. And then he noticed that low-entropy objects are more delicate with respect to such rearrangements. If you have an egg, and start exchanging bits of the yolk with bits of the egg white, pretty soon you will notice. The situations that we characterize as “low-entropy” seem to be easily disturbed by rearranging the atoms within them, while “high-entropy” ones are more robust.
Figure 6: Ludwig Boltzmann’s grave in the Zentralfriedhof, Vienna. The inscribed equation, S = k log W, is his formula for entropy in terms of the number of ways you can rearrange microscopic components of a system without changing its macroscopic appearance. (See Chapter Eight for details.)
So Boltzmann took the concept of entropy, which had been defined by Clausius and others as a measure of the uselessness of energy, and redefined it in terms of atoms:
Entropy is a measure of the number of particular microscopic arrangements of atoms that appear indistinguishable from a macroscopic perspective.
It would be difficult to overemphasize the importance of this insight. Before Boltzmann, entropy was a phenomenological thermodynamic concept, which followed its own rules (such as the Second Law). After Boltzmann, the behavior of entropy could be derived from deeper underlying principles. In particular, it suddenly makes perfect sense why entropy tends to increase:
In an isolated system entropy tends to increase, because there are more ways to be high entropy than to be low entropy.
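A toy example makes the counting concrete (the numbers here are illustrative, not taken from any real gas): put 100 gas molecules in a box and keep track only of how many sit in the left half. Then

\[
W(\text{all 100 on the left}) \;=\; \binom{100}{100} \;=\; 1,
\qquad
W(\text{50 on each side}) \;=\; \binom{100}{50} \;\approx\; 1.0\times 10^{29},
\]

so the evenly spread, high-entropy macrostate corresponds to vastly more microscopic arrangements than the lopsided, low-entropy one, and a random reshuffling of the molecules is overwhelmingly likely to leave you looking at the high-entropy option.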
At least, that formulation sounds like it makes perfect sense. In fact, it sneaks in a crucial assumption: that we start with a system that has a low entropy. If we start with a system that has a high entropy, we’ll be in equilibrium—nothing will happen at all. That word start sneaks in an asymmetry in time, by privileging earlier times over later ones. And this line of reasoning takes us all the way back to the low entropy of the Big Bang. For whatever reason, of the many ways we could arrange the constituents of the universe, at early times they were in a very special, low-entropy configuration.
This caveat aside, there is no question that Boltzmann’s formulation of the concept of entropy represented a great leap forward in our understanding of the arrow of time. This increase in understanding, however, came at a cost. Before Boltzmann, the Second Law was absolute—an ironclad law of nature. But the definition of entropy in terms of atoms comes with a stark implication: entropy doesn’t necessarily increase, even in a closed system; it is simply likely to increase. (Overwhelmingly likely, as we shall see, but still.) Given a box of gas evenly distributed in a high-entropy state, if we wait long enough, the random motion of the atoms will eventually lead them all to be on one side of the box, just for a moment—a “statistical fluctuation.” When you run the numbers, it turns out that the time you would have to wait before expecting to see such a fluctuation is much larger than the age of the universe. It’s not something we have to worry about, as a practical matter. But it’s there.
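To get a feel for the numbers (a back-of-the-envelope estimate, not a calculation from the text): each atom is about equally likely to be found in either half of the box, so the chance of catching all N of them on one side at any given glance is roughly

\[
P \;\approx\; \left(\tfrac{1}{2}\right)^{N},
\qquad\text{and already}\qquad
\left(\tfrac{1}{2}\right)^{1000} \;\approx\; 10^{-301},
\]

while a real box of gas holds something like 10^23 atoms, not a mere thousand. Waiting for odds like that to pay off takes far longer than the universe has existed.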
Some people didn’t like that. They wanted the Second Law of Thermodynamics, of all things, to be utterly inviolate, not just something that holds true most of the time. Boltzmann’s suggestion met with a great deal of controversy, but these days it is universally accepted.
ENTROPY AND LIFE
This is all fascinating stuff, at least to physicists. But the ramifications of these ideas go far beyond steam engines and cups of coffee. The arrow of time manifests itself in many different ways—our bodies change as we get older, we remember the past but not the future, effects always follow causes. It turns out that all of these phenomena can be traced back to the Second Law. Entropy, quite literally, makes life possible.
The major source of energy for life on Earth is light from the Sun. As Clausius taught us, heat naturally flows from a hot object (the Sun) to a cooler object (the Earth). But if that were the end of the story, before too long the two objects would come into equilibrium with each other—they would attain the same temperature. In fact, that is just what would happen if the Sun filled our entire sky, rather than describing a disk about half a degree across. The result would be an unhappy world indeed. It would be completely inhospitable to the existence of life—not simply because the temperature was high, but because it would be static. Nothing would ever change in such an equilibrium world.