Einstein and the Quantum
A. Douglas Stone
FIGURE 7.1.
The epitaph on the tomb of Ludwig Boltzmann, S = k log W, expressing the fundamental equation of entropy, which he had discovered. Image courtesy Daderot.
Imagine shooting gas molecules one by one through a small hole into a box where they bounce around. Mentally divide the box into two equal parts with an imaginary partition. Any part of the box is equally accessible to the molecules (whatever can happen, will), and there is no reason for any molecule to be at one place at a given time versus another (no molecule is special). Suppose for the moment you can actually see the molecules directly. With one molecule in the box you look in periodically, and you find it roughly half the time on the left side of the box and half the time on the right side. Now add a second molecule, wait a bit, and start looking again. Roughly one-fourth of the time both molecules are on the left, one-fourth of the time both are on the right, and half the time one is on the left and one is on the right. Why is the last case more likely than the first two? Because there are two ways that you can get the last case (molecule 1 on right, molecule 2 on left; molecule 2 on right, molecule 1 on left) but only one way you can get the first two cases. This is just like tossing two coins and finding that one heads and one tails happens roughly twice as often as heads-heads and tails-tails. We can now get fancy and define three “states” for the two-molecule gas: in state one, both are on the left; state two, both on the right; and state three, one on the left and one on the right. For the first two states Boltzmann's W = 1 (there is only one way to get these states), but for the third state W = 2. The entropy of this third state is then larger than that of the other two states, according to Boltzmann's formula (we need not delve into the mysterious properties of the logarithm function to reach this conclusion). In actuality a physicist would specify the states in more detail than just which half of the box the molecules are in, but the underlying concept and method is exactly the same.
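The counting above is easy to make concrete. In this sketch (my illustration, not the book's), the number of ways W to have k of n distinguishable molecules on the left is the binomial coefficient, and Boltzmann's formula turns W into an entropy:

```python
import math

# Number of microstates W with k of n (distinguishable) molecules on the left
def W(n, k):
    return math.comb(n, k)

# Boltzmann's S = k_B log W, with k_B in joules per kelvin
K_B = 1.380649e-23
def entropy(w):
    return K_B * math.log(w)

# Two molecules: one way each for "both left" and "both right", two for the split
print(W(2, 0), W(2, 2), W(2, 1))  # 1 1 2

# So the split state has the larger entropy, exactly as argued in the text
print(entropy(W(2, 1)) > entropy(W(2, 0)))  # True
```

The same two functions work unchanged for any number of molecules, which is what makes the argument scale up to a real gas.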
Now imagine we have a few trillion trillion molecules in the box (as indeed we usually do). There is still only one way to have all of them on one side; however, W, the number of ways of having about half on the right side and half on the left side, is unspeakably large. We literally have no words, no analogies, for numbers of this magnitude. As a feeble attempt, imagine the following: take all the atoms in the universe and, in one second, clone each of them, so as to create a second “universe.” Now repeat this every second, creating 4, 8, 16, and so on “universes.” Do this every second for the entire age of our current universe. Add up all of the atoms in all these universes and you will arrive at a number that is incredibly big, all right, but still this number is negligibly small compared with the number of states of high entropy of one liter of gas. These high entropy states, in which the gas molecules are roughly equally distributed in each half, have (not surprisingly) enormously higher entropy than the states in which the molecules are all or mostly on one side of the box.
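To get a feel for how fast this counting explodes (again my own sketch, with a deliberately tiny gas rather than a trillion trillion molecules): even with only 80 molecules, the even split already has roughly as many arrangements as there are atoms in a sizable chunk of matter, while "all on one side" still has exactly one.

```python
import math

N = 80  # even 80 molecules make the point; a real gas has ~10**24
all_on_left = math.comb(N, 0)       # exactly one arrangement
even_split = math.comb(N, N // 2)   # about 10**23 arrangements

print(all_on_left)
print(even_split)
```

Python's exact integer arithmetic means `math.comb` gives the precise count, not an approximation, no matter how large N gets.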
Suppose that we go to a lot of trouble and evacuate the gas from the box, put an airtight partition in the middle, and put the gas back in on the left side, so that we set the system up in this very improbable (low entropy) state. From Maxwell we know that the gas molecules are flying around at 1,000 mph, colliding with one another and the walls and thus creating pressure on the walls. If we then remove the partition, the gas will rapidly fill the entire box again, approximately equally in each half. The entropy of the system will have increased. After the molecules spread themselves out roughly equally in the box, the molecules will still be colliding and moving around, but on average there will be roughly equal numbers on each side of the box. Intuitively this situation is the most disordered state (you don't need to make special efforts to achieve it), and according to Boltzmann's principle, this is the state of maximum entropy. This is the atomic explanation for the Second Law of thermodynamics, that entropy always increases or stays the same. Whenever we try to generate useful work from heat, we are essentially trying to create order out of this molecular chaos, and we are fighting against the laws of probability.
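The expansion just described can be mimicked with a toy model (an "Ehrenfest urn" style sketch of my own, not the book's): start with every molecule on the left, and at each step move one randomly chosen molecule to the other side. The count on the left drifts to, and then hovers around, half.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
N = 1000
left = N  # all molecules on the left: the improbable, low-entropy start

# Each step, one molecule (chosen uniformly) hops to the other side:
# a molecule on the left is picked with probability left/N
for step in range(100_000):
    if random.random() < left / N:
        left -= 1  # it hopped left -> right
    else:
        left += 1  # it hopped right -> left

print(left)  # hovers near N/2 = 500, the maximum-entropy split
```

The drift toward 500 is pure probability, with no force pushing the molecules rightward, which is exactly the point of Boltzmann's explanation of the Second Law.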
Consider further the previous example, where we have opened a partition and let the gas fill the entire box instead of just half the box. Now it is possible that if we wait long enough, all the collisions could work out just right, and all the gas molecules could reconvene on the left side. Is it worth waiting for this to occur? Not really. One can easily calculate that if all the states are equally likely and if we have only forty gas molecules in the box, it would take about the age of the universe for this to happen. With a trillion trillion molecules in the box? As they say in New York: fuggedaboutit.
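The arithmetic behind "not worth waiting" is one line: each molecule is independently on the left with probability 1/2, so all n of them are there together with probability (1/2)^n. A quick sketch:

```python
# Probability that all n molecules are simultaneously on the left side
def p_all_left(n):
    return 0.5 ** n

print(p_all_left(2))   # 0.25, matching the two-molecule discussion above
print(p_all_left(40))  # about 9e-13: under one chance in a trillion per look
```

For a trillion trillion molecules the exponent is 10^24, a probability so small that no amount of waiting helps.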
This was the subtle point that Einstein missed in trying to prove that the Second Law of thermodynamics, the increase of entropy, is an absolute law. It isn't absolute; entropy is allowed to decrease. Just don't bet on it.
In fact, this overwhelmingly probable increase of entropy is how we determine the direction of time. Imagine that the gas in our box is colored and hence visible as it expands; if we saw a movie of the gas contracting back into half of the box, we would immediately assume that the movie was being run backward. Because the arrow of time is so fundamental, it was natural for physicists to assume that the increase of entropy was an absolute law of nature and not just a very, very, very, very, very … likely occurrence. When the young Einstein made this mistake, he was in good company; Boltzmann also got this wrong until his critics pointed it out to him. However, the canny Scot, Maxwell, was not fooled, and described the situation with colorful imagery: “the Second Law of Thermodynamics has the same degree of truth as the statement that if you throw a tumblerful of water into the sea, you cannot get the same tumblerful of water out again.”
Maxwell invented an imaginary creature, dubbed “Maxwell's demon,” to illustrate this point further. His demons were “lively beings incapable of doing work” (i.e., of adding energy to the gas). He imagined these miniature sprites hovering in the gas, which is uniformly distributed in the box; now, however, a partition is added to the box, in which the demon has cleverly fashioned a frictionless trapdoor. Whenever the demon sees a gas molecule coming at him from right to left with high velocity, he lets it through, then closes the trapdoor before any molecules can escape from left to right. In this way, over time, he groups the faster molecules on the left and the slower ones on the right. But for a gas the temperature is proportional to the average energy, so by doing this the demon has heated the left side and cooled the right side without putting any energy in. In other words the demon has created a refrigerator on the right (and a heater on the left), neither of which requires any fuel (Einstein definitely would have rejected this patent). And why did Maxwell create his demons? Not as a serious proposal for an invention. Instead his intention was “to show that the Second Law of Thermodynamics has only a statistical certainty.” Maxwell's demons, spawned around 1870, did not fancy the trip across the Channel, and so the true meaning of the Second Law was not understood in Europe for several more decades.
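The demon's sorting rule is simple enough to simulate. In this sketch (mine, with an exponential distribution as a crude stand-in for Maxwell's actual speed distribution), the demon admits only faster-than-average molecules through the trapdoor, and the left side ends up "hotter" with no work done:

```python
import random

random.seed(2)  # fixed seed for reproducibility

# Speeds of 10,000 molecules, initially mixed together on the right
speeds = [random.expovariate(1.0) for _ in range(10_000)]
mean_speed = sum(speeds) / len(speeds)

# The demon's rule: the frictionless trapdoor opens only for fast
# molecules headed right -> left, so fast ones collect on the left
left = [v for v in speeds if v > mean_speed]
right = [v for v in speeds if v <= mean_speed]

avg = lambda xs: sum(xs) / len(xs)
# Average energy (hence temperature) is now higher on the left,
# yet the demon added no energy at all
print(avg(left) > avg(right))  # True
```

Of course, later physics (which the book has not reached here) shows the demon must pay for this sorting in other ways; the point of the sketch is only that the rule itself costs no energy.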
While Einstein did not recognize the “demonic” exception to the Second Law when he was reinventing Gibbs's statistical mechanics in 1903–4, he did very much focus on what he called “Boltzmann's principle,” the mathematical epitaph, S = k log W, mentioned above. In his third paper on statistical mechanics in 1904, he states, “I derive an expression for the entropy of a system, which is completely analogous to the expression found by Boltzmann for ideal gases [S = k log W] and assumed by Planck in his theory of radiation” (italics added). Later in that same paper he explicitly applies his results to Planck's thermal radiation law, although in a manner that doesn't yet refer to the quantum concept. This made Einstein the first physicist to extend the use of Planck's law and to accept that statistical mechanics, which had previously been used only to describe gases, could also explain the properties of electromagnetic radiation. Radiation at this point was conceived to be a purely wave phenomenon, having nothing in common with the aggregate of particles (molecules) that make up a gas. Einstein now analyzed thermal radiation using his statistical methods, and he was beginning to see the problems with Planck's “desperate” solution.
So what had Planck actually assumed about radiation, and how had he used Boltzmann's principle to justify the formula that he had initially guessed by fitting the data? Planck had not been as bold as Einstein; he did not apply statistical mechanics to radiation but rather to the matter that exchanged energy with radiation. The Planck radiation law is, strictly speaking, only completely correct for what physicists call a “blackbody.” We all learn in school that the color white is a mixture of all colors and that black is the absence of color. A perfectly black body absorbs all light that falls upon it; hence no light of any color is directly reflected from it, and it appears black. In contrast, a surface that looks blue to us absorbs most of the red and yellow light incident on it and reflects the blue to our eye. But does the black object actually emit no light? Well, yes and no. It doesn't emit any visible light, but it does send out a lot of electromagnetic radiation; however, as we learned earlier, if the object is at room temperature, the radiation is mainly at infrared wavelengths, which we can't see. As already mentioned, the radiation law is precisely the rule for how much EM radiation of a given wavelength a blackbody emits at a given temperature.
To test this ideal behavior, physicists had to find a perfectly black body, not just for visible radiation but for all possible wavelengths. Unfortunately all real materials reflect EM radiation at some wavelengths, so soot, oil, burnt toast, and the other obvious candidates don't actually do the job. So the experimenters came up with a clever idea: instead of using the surface of a material, they would use the inside of a kind of furnace with a small hole. Any radiation that went in through the hole would bounce around, being reflected many times, but eventually it would be absorbed before escaping. Thus any light coming out of the hole must have been emitted from the walls and would be representative of a perfect blackbody.
It was this kind of ideal black box or “radiation cavity” that Planck analyzed between 1895 and 1900. And one of his first ideas was to transfer his ignorance of the blackbody law from radiation to matter. He assumed that the walls of the cavity were made of molecules that would vibrate at a certain frequency in response to the EM radiation that fell upon them. Then by a clever argument he related the density of the energy of EM radiation at a given frequency (his goal) to the average energy of the vibrating molecules at the same frequency. He thus no longer had to deal with Maxwell's equations describing the electromagnetic waves; he could assume that Newton's laws held for the molecular vibrations, and he could use statistical mechanics. However, instead of doing the obvious thing and calculating the average energy of a molecule from statistical mechanics à la Boltzmann, he chose to find the entropy of the molecules.
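For reference, the “clever argument” yields a relation that in modern notation (the symbols here are the standard ones, not quoted from the book) connects the two averages directly:

```latex
u(\nu, T) \;=\; \frac{8\pi \nu^{2}}{c^{3}}\,\langle E \rangle
```

where u(ν, T) is the energy density of the radiation at frequency ν and temperature T (Planck's goal), c is the speed of light, and ⟨E⟩ is the average energy of a wall vibrator at that same frequency. Finding ⟨E⟩ for the vibrating molecules thus fixes the radiation law.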
He did this for a strange historical reason. When he began studying blackbody radiation in 1895, he hoped to find the missing principle that would restore the perfection of the Second Law and make the increase of entropy and hence the arrow of time an absolute principle of nature. While he had been convinced that the equations of matter
allowed for entropy to decrease (although very rarely!), he hoped that those of Maxwell would prevent this from ever happening. This turned out to be a vain hope, as Boltzmann himself was able to demonstrate to Planck. However, having committed himself to the study of the entropy of radiation, and since the actual radiation law was still not definitively known, Planck continued his investigations. He knew that if he could find the average entropy of thermal radiation, it was related by straightforward mathematical steps to the average energy density, and hence to the correct radiation law.
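The “straightforward mathematical steps” referred to here are the basic thermodynamic identity (a standard result, stated here for concreteness rather than quoted from the book):

```latex
\frac{\partial S}{\partial E} \;=\; \frac{1}{T}
```

Knowing the entropy S as a function of the energy E therefore fixes the temperature at each energy; inverting that relation gives the average energy at each temperature, and hence the radiation law.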
At first Planck made an incorrect argument, which did not rely on Boltzmann's principle. This led him to what was then called the Planck-Wien radiation law, and the embarrassing retraction in October 1900 after that law was ruled out by the experiments of his friends Rubens and Kurlbaum. At that point, guided by their experimental results and his mathematical intuition, as we saw earlier, he guessed the right form of the radiation law. Now, working backward from his apparently correct guess for the energy density of the radiation, he could figure out what the corresponding mathematical expression for the entropy of the radiation had to be. So this distinguished physicist was in a position oddly familiar to novice physics students, who might find the correct answer to a problem listed in the solutions at the back of their textbook but can't quite figure out how to get that answer based on the principles they are supposed to have learned.
Faced with this quandary, for the first time in his career Planck resorted to Boltzmann's principle. By accepting and using that principle (the formula S = k log W), he now had an approach to justify his empirical guess from the fundamental laws of statistical physics. What he needed to do was to count the possible states of molecular vibration, W, and show that when plugged into Boltzmann's formula, it gave the answer that he “knew” was correct.
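As a preview of where this counting leads (the standard result, stated here in modern form rather than taken from the text): the W Planck needed is the number of ways to distribute P indivisible energy elements among N vibrators, a classic “stars and bars” count.

```python
import math

# Number of ways to distribute P indistinguishable energy elements
# among N oscillators: W = C(N + P - 1, P)  (stars-and-bars counting)
def planck_W(N, P):
    return math.comb(N + P - 1, P)

# 3 elements among 2 oscillators: (0,3) (1,2) (2,1) (3,0)
print(planck_W(2, 3))  # 4
print(planck_W(3, 2))  # 6
```

Plugged into S = k log W, this count is what reproduces the entropy Planck had worked out backward from his guessed radiation law.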