As I already mentioned in earlier chapters, it came as a complete shock to most physicists when evidence for a nonzero cosmological constant was first announced. The evidence was based on the study of distant supernova explosions of a special kind—type Ia supernovae.
These gigantic explosions are believed to occur in binary stellar systems, consisting of an active star and a white dwarf—a compact remnant of a star that ran out of its nuclear fuel. A solitary white dwarf will slowly fade away, but if it has a companion, it may end its life with fireworks. Some of the gas ejected from the companion star could be captured by the white dwarf, so the mass of the dwarf would steadily grow. There is, however, a maximum mass that a white dwarf can have—the Chandrasekhar limit—beyond which gravity causes it to collapse, igniting a tremendous thermonuclear explosion. This is what we see as a type Ia supernova.
A supernova appears as a brilliant spot in the sky and, at the peak of its brightness, can be as luminous as 4 billion suns. In a galaxy like ours, a type Ia supernova explodes about once in 300 years. So, in order to find a few dozen such explosions, astronomers had to monitor thousands of galaxies over a period of several years. But the effort was worth it. Type Ia supernovae come very close to realizing astronomers’ long-standing dream of finding a standard candle—a class of astronomical objects that have exactly the same power. Distances to standard candles can be determined from their apparent brightness—in the same way as we could determine the distance to a 100-watt lightbulb from how bright it appears. Without such magic objects, distance determination is notoriously difficult in astronomy.
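To make the lightbulb analogy concrete, here is the standard inverse-square relation (a minimal sketch; $L$ denotes the candle's intrinsic power, $F$ the apparent brightness, or flux, that we measure, and $d$ the distance):

\[
F = \frac{L}{4\pi d^2}
\quad\Longrightarrow\quad
d = \sqrt{\frac{L}{4\pi F}}
\]

Since all type Ia supernovae have essentially the same $L$, a measurement of $F$ alone fixes $d$.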
Type Ia supernovae have nearly the same power because the exploding white dwarfs have the same mass, equal to the Chandrasekhar limit.⁶
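For reference, the Chandrasekhar limit is about 1.4 solar masses:

\[
M_{\mathrm{Ch}} \approx 1.4\,M_\odot \approx 2.8 \times 10^{30}\ \mathrm{kg}
\]

Every type Ia explosion is therefore ignited from very nearly the same amount of mass, which is what makes the uniform power plausible.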
Knowing the power, we can find the distance to the supernova, and once we know the distance, it is easy to find the time of the explosion—by just counting back the time it took light to traverse that distance. In addition, the reddening, or Doppler shift, of the light can be used to find how fast the universe was expanding at that time.⁷ Thus, by analyzing the light from distant supernovae, we can uncover the history of cosmic expansion.
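The redshift part of the measurement can be stated precisely. In an expanding universe, light waves are stretched by the same factor as the universe itself, so (in standard notation, with $a(t)$ the scale factor describing the size of the universe at time $t$):

\[
1 + z = \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}} = \frac{a(t_{\mathrm{now}})}{a(t_{\mathrm{emission}})}
\]

Each supernova thus supplies one point on the curve $a(t)$, and many supernovae together trace out the expansion history.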
This technique was perfected by two competing groups of astronomers, one called the Supernova Cosmology Project and the other the High-Redshift Supernova Search Team. The two groups raced to determine the rate at which cosmic expansion was slowed down by gravity. But this was not what they found. In the winter of 1998, the High-Redshift team announced they had convincing evidence that instead of slowing down, the expansion of the universe had been speeding up for the last 5 billion years or so. It took some courage to come out with this claim, since an accelerated expansion was a telltale sign of a cosmological constant. When asked how he felt about this development, one of the leaders of the team, Brian Schmidt, said that his reaction was “somewhere between amazement and horror.”⁸ A few months later, the Supernova Cosmology Project team announced very similar conclusions. As the leader of the team, Saul Perlmutter, put it, the results of the two groups were “in violent agreement.”
The discovery sent shock waves through the physics community. Some people simply refused to believe the result. Slava Mukhanovᵃʸ offered me a bet that the evidence for a cosmological constant would soon evaporate. The bet was for a bottle of Bordeaux. When Mukhanov eventually produced the wine, we enjoyed it together; apparently, the presence of the cosmological constant did not affect the bouquet.
There were also suggestions that the brightness of a supernova could be affected by factors other than the distance. For example, if light from a supernova were scattered by dust particles in intergalactic space, the supernova would look dimmer, and we would be fooled into thinking that it was farther away. These doubts were dispelled a few years later, when Adam Riess of the Space Telescope Science Institute in Baltimore analyzed the most distant supernova known at that time, SN 1997ff. If the dimming were due to obscuration by dust, the effect would only increase with the distance. But this supernova was brighter, not dimmer, than it would be in a “coasting” universe that neither accelerates nor decelerates. The explanation was that it exploded at 3 billion years A.B., during the epoch when the vacuum energy was still subdominant and the accelerated expansion had not yet begun.
As the evidence for cosmic acceleration was getting stronger, cosmologists were quick to realize that from certain points of view, the return of the cosmological constant was not such a bad thing. First, as we discussed in Chapter 9, it provided the missing mass density to make the total density of the universe equal to the critical density. And second, it resolved the nagging cosmic age discrepancy. The age of the universe calculated without a cosmological constant turns out to be smaller than the age of the oldest stars. Now, if the cosmic expansion accelerates, then it was slower in the past, so it took the universe longer to expand to its present size.ᵃᶻ The cosmological constant, therefore, makes the universe older, and the age discrepancy is removed.⁹
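The numbers behind the age argument are easy to check. In a flat universe containing only matter, the age works out to two-thirds of the Hubble time, which is uncomfortably short; a cosmological constant stretches it. Roughly, taking the present expansion rate $H_0 \approx 70$ km/s/Mpc and a vacuum fraction $\Omega_\Lambda \approx 0.7$ with matter fraction $\Omega_m \approx 0.3$:

\[
t_0^{\mathrm{matter\ only}} = \frac{2}{3H_0} \approx 9\ \mathrm{billion\ years},
\qquad
t_0^{\Lambda} = \frac{2}{3H_0\sqrt{\Omega_\Lambda}}\,
\sinh^{-1}\!\sqrt{\frac{\Omega_\Lambda}{\Omega_m}} \approx 13.5\ \mathrm{billion\ years}
\]

The second figure is comfortably larger than the ages of the oldest stars.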
Thus, only a few years after cosmic acceleration was discovered, it was hard to see how we could ever live without it. The debate now shifted to understanding what it actually meant.
The observed value of the vacuum energy density, about three times the average matter density, was in the ballpark of values predicted three years earlier from the principle of mediocrity. Normally, physicists regard a successful prediction as strong evidence for a theory. But in this case they were not in a hurry to give anthropic arguments any credit. In the years following the discovery, there was a tremendous effort by many physicists to explain the accelerated expansion without relying on the anthropics. The most popular of these attempts was the quintessence model, developed by Paul Steinhardt and his collaborators.¹⁰
The idea of quintessence is that the vacuum energy is not a constant, but is gradually decreasing with the expansion of the universe. It is so small now because the universe is so old. More specifically, quintessence is a scalar field whose energy landscape looks as if it were designed for downhill skiing (Figure 14.3). The field is assumed to start high up the hill in the early universe, but by now it has rolled down to low elevations—which means low energy densities of the vacuum.
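To give the downhill picture a concrete form: quintessence models use a potential energy curve that decreases smoothly and never bottoms out, so the field keeps rolling and its energy density keeps falling. Two commonly studied examples (illustrations only, not necessarily the particular model in the figure) are

\[
V(\phi) = M^{4+\alpha}\,\phi^{-\alpha}
\qquad \mathrm{and} \qquad
V(\phi) = M^4\, e^{-\lambda\phi/M_{\mathrm{Pl}}}
\]

where $M$, $\alpha$, and $\lambda$ are adjustable parameters, which is precisely the kind of tuning discussed in the next paragraph.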
The problem with this model is that it does not resolve the coincidence puzzle: why the present energy density of the vacuum happens to be comparable to the matter density (see Chapter 12). The shape of the energy hill can be adjusted for this to happen, but that would amount to simply fitting the data, instead of explaining it.¹¹
Figure 14.3. Quintessence energy landscape.
On the other hand, the anthropic approach naturally resolves the puzzle. According to the principle of mediocrity, most observers live in regions where the cosmological constant caught up with the density of matter at about the epoch of galaxy formation. The assembly of giant spiral galaxies like ours was completed in the relatively recent cosmological past, at several billion years A.B.¹² Since then, the density of matter has fallen below that of the vacuum, but not by much (by a factor of 3 or so in our region).¹³
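In today's standard numbers, the vacuum contributes about 70 percent of the critical density and matter about 30 percent, so

\[
\frac{\rho_{\mathrm{vacuum}}}{\rho_{\mathrm{matter}}} \approx \frac{0.7}{0.3} \approx 2.3,
\]

the “factor of 3 or so” quoted above.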
Despite numerous attempts, no other plausible explanation for the coincidence has been suggested. Gradually, the collective psyche of physicists was getting used to the thought that the anthropic picture might be here to stay.
The reluctance of many physicists to embrace the anthropic explanation is easy to understand. The standard of accuracy in physics is very high; you might say unlimited. A striking example is the calculation of the magnetic moment of the electron. An electron can be pictured as a tiny magnet. Its magnetic moment, characterizing the strength of the magnet, was first calculated by Paul Dirac in the 1930s. The result agreed very well with experiments, but physicists soon realized that there was a small correction to Dirac’s value, due to quantum fluctuations of the vacuum. What followed was a race between particle theorists doing more and more accurate calculations and experimentalists measuring the magnetic moment with higher and higher precision. The most recent measurement of the correction factor gives 1.001159652188, with some uncertainty in the last digit. The theoretical calculation is even more accurate. Remarkably, the agreement between the two extends to the eleventh decimal place. In fact, failure to agree at this level would be a cause for alarm, since any disagreement, even in the eleventh decimal place, would indicate some gap in our understanding of the electron.
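The structure of the calculation can be indicated in a single line. The correction factor (half of the electron's so-called $g$-factor) is computed as a series of ever smaller terms in the fine-structure constant $\alpha \approx 1/137$; the first correction, found by Julian Schwinger in 1948, already accounts for the leading digits:

\[
\frac{g}{2} = 1 + \frac{\alpha}{2\pi} + \cdots \approx 1 + 0.00116 + \cdots
\]

The higher-order terms, which involve increasingly elaborate vacuum-fluctuation effects, supply the remaining decimal places.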
Anthropic predictions are not like that. The best we can hope for is to calculate the statistical bell curve. Even if we calculate it very precisely, we will only be able to predict some range of values at a specified confidence level. Further improvements in the calculation will not lead to a dramatic increase in the accuracy of the prediction. If the observed value falls within the predicted range, there will still be a lingering doubt that this happened by sheer dumb luck. If it doesn’t, there will remain the possibility that the theory is still correct and that we just happen to be among the small percentage of observers at the tails of the bell curve.
It’s little wonder that, given a choice, physicists would not give up their old paradigm in favor of anthropic selection. But nature has already made her choice. We only have to find out what it is. If the constants of nature vary from one part of the universe to another, then, whether we like it or not, the best we can do is to make statistical predictions based on the principle of mediocrity.
The observed value of the cosmological constant gives a strong indication that there is indeed a huge multiverse out there. It is within the range of values predicted from anthropic considerations, and there seem to be no credible alternatives. The evidence for the multiverse is, of course, indirect, as it will always be. This is a circumstantial case, where we are not going to hear eyewitness accounts or see the murder weapon. But if, with some luck, we make a few more successful predictions, we may still be able to prove the case beyond a reasonable doubt.
A Theory of Everything
What I am really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all.
—ALBERT EINSTEIN
The anthropic picture of the world hinges on the assumption that the constants of nature can vary from one place to another. But can they really? This is a question about the fundamental theory of nature: Will it yield a unique set of constants, or will it allow a wide range of possibilities?
We don’t know what the fundamental theory is, and there is no guarantee that it really exists, but the quest for the final, unified theory inspires much of the current research in particle physics. The hope is that beneath the plurality of particles and the differences between the four basic interactions, there is a single mathematical law that governs all elementary phenomena. All particle properties and the laws of gravitation, electromagnetism, and strong and weak interactions would follow from this law, just as all theorems of geometry follow from Euclid’s five axioms.
The kind of explanation for the particle properties that physicists hope to find in the final theory is well illustrated by how the chemical properties of the elements were explained in quantum mechanics. Early in the last century, atoms were thought to be the fundamental building blocks of matter. Each type of atom represents a different chemical element, and chemists had accumulated a colossal amount of data on the properties of the elements and their interactions with one another. Ninety-two different elements were known at the time—a bit too many, you might say, for the fundamental building blocks. Thankfully, the work of the Russian chemist Dmitry Mendeleyev in the late nineteenth century revealed some regularity in this mountain of data. Mendeleyev arranged elements in a table in order of increasing atomic weight and noticed that elements with similar chemical properties appeared at regular intervals throughout the table.ᵇᵃ Nobody could explain, however, why the elements followed this periodic pattern.
By 1911 it became clear that atoms were not fundamental after all. Ernest Rutherford demonstrated that an atom consists of a swarm of electrons orbiting a small, heavy nucleus. A quantitative understanding of atomic structure was achieved in the 1920s, with the development of quantum mechanics. It turned out, roughly, that electron orbits form a series of concentric shells around the nucleus. Each shell can hold only up to a certain number of electrons. So, as we add more electrons, the shells gradually fill up. The chemical properties of an atom are determined mainly by the number of electrons in its outermost shell. As a new shell is filling up, the properties of the elements closely follow those of the preceding shell.ᵇᵇ This explains the periodicity of Mendeleyev’s table.
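The shell capacities follow a simple rule of quantum mechanics: the $n$-th shell holds at most $2n^2$ electrons, the factor of 2 coming from the electron's two spin states:

\[
N_{\max}(n) = 2n^2 = 2,\ 8,\ 18,\ 32,\ \ldots \qquad (n = 1, 2, 3, 4, \ldots)
\]

It is these recurring capacities that underlie the lengths of the rows of the periodic table.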
For a few brief years it seemed that the fundamental structure of matter was finally understood. Paul Dirac, one of the founders of quantum mechanics, declared in his 1929 paper that “the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known.” But then, one by one, new “elementary” particles began to pop up.
To start with, the atomic nuclei turned out to be composite, consisting of protons and neutrons held together by the strong nuclear force. Then the positron was discovered, and following that the muon.ᵇᶜ When protons were smashed into one another in particle accelerators, new types of short-lived particles showed up. This did not necessarily mean that protons were made of those particles. If you smash two TV sets together, you can be sure that the things flying out in the debris are the parts the TV sets were originally made of. But in the case of colliding protons, some of the resulting particles were heavier than the protons themselves, with the extra mass coming from the kinetic energy of the colliding protons. So, these collision experiments did not reveal much about the internal structure of the proton, but simply extended the particle zoo. By the end of the 1950s, the number of particles exceeded that of the known chemical elements.ᵇᵈ One of the pioneers of particle physics, Enrico Fermi, said that if he could remember the names of all the particles, he would become a botanist.¹
The breakthrough that introduced order into this unruly crowd of particles was made in the early 1960s, independently, by Murray Gell-Mann of Caltech and Yuval Ne’eman, an Israeli military officer who took leave to complete his Ph.D. in physics. They noticed that all strongly interacting particles fell into a certain symmetric pattern. Gell-Mann, and independently George Zweig of CERN (the European Organization for Nuclear Research), later showed that the pattern could be neatly accounted for if all these particles were composed of more fundamental building blocks, which Gell-Mann called quarks. This reduced the number of elementary particles, but not by much: quarks come in three “colors” and six “flavors,” so there are eighteen different quarks and as many antiquarks. Gell-Mann was awarded the 1969 Nobel Prize for uncovering the symmetry of strongly interacting particles.
In a parallel development, a somewhat similar symmetry was discovered for the particles interacting through weak and electromagnetic forces. The key role in the formulation of this electroweak theory was played by Harvard physicists Sheldon Glashow and Steven Weinberg and the Pakistani physicist Abdus Salam. They shared the 1979 Nobel Prize for this work. Classification of particles according to symmetries played a role analogous to that of the periodic table in chemistry. In addition, three types of “messenger” particles, which mediate the three basic interactions, were identified: photons for the electromagnetic force, W and Z particles for the weak force, and eight gluons for the strong force. All these ingredients provided a basis for the Standard Model of particle physics.
The development of the Standard Model was completed in the 1970s. The resulting theory gave a precise mathematical scheme that could be used to determine the outcomes of encounters between any known particles. This theory has been tested in countless accelerator experiments, and as of now it is supported by all the data. The Standard Model also predicted the existence and properties of W and Z particles and of an additional quark—all later discovered. By all these accounts, it is a phenomenally successful theory.
And yet, the Standard Model is obviously too baroque to qualify as the ultimate theory of nature. The model contains more than sixty elementary particles—not a great improvement over the number of elements in Mendeleyev’s table. It includes nineteen adjustable parameters, which had to be determined from experiments but are completely arbitrary as far as the theory is concerned. Furthermore, one important interaction—gravity—is left out of the model.² The success of the Standard Model tells us that we are on the right track, but its shortcomings indicate that the quest should continue.³
The omission of gravity in the Standard Model is not just an oversight. On the face of it, gravity appears to be similar to electromagnetism. Newton’s gravitational force, for example, has the same inverse square dependence on the distance as Coulomb’s electric force. However, all attempts to develop a quantum theory of gravity along the same lines as the theory of electromagnetism, or other interactions in the Standard Model, encountered formidable problems.
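The parallel is evident when the two force laws are written side by side:

\[
F_{\mathrm{gravity}} = G\,\frac{m_1 m_2}{r^2},
\qquad
F_{\mathrm{electric}} = \frac{1}{4\pi\epsilon_0}\,\frac{q_1 q_2}{r^2}
\]

Both fall off as the square of the separation $r$; the trouble begins only when one tries to quantize the first in the way that worked for the second.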
The electric force between two charged particles is due to a constant exchange of photons. The particles are like two basketball players who pass the ball back and forth to one another as they run along the court. Similarly, the gravitational interaction can be pictured as an exchange of gravitational field quanta, called gravitons. And indeed, this description works rather well, as long as the interacting particles are far apart. In this case, the gravitational force is weak and the spacetime is nearly flat. (Remember, gravity is related to the curvature of spacetime.) The gravitons can be pictured as little humps bouncing between the particles in this flat background.
At very small distances, however, the situation is completely different. As we discussed in Chapter 12, quantum fluctuations at short distance scales give the spacetime geometry a foamlike structure (see Figure 12.1). We have no idea how to describe the motion and interaction of particles in such a chaotic environment. The picture of particles moving through a smooth spacetime and shooting gravitons at one another clearly does not apply in this regime.
Effects of quantum gravity become important only at distances below the Planck length—an unimaginably tiny length, 10²⁵ times smaller than the size of an atom. To probe such distances, particles have to be smashed at tremendous energies, far beyond the capabilities of the most powerful accelerators. On much larger distance scales, which are accessible to observation, quantum fluctuations of spacetime geometry average out and quantum gravity can be safely ignored. But the conflict between Einstein’s general relativity and quantum mechanics cannot be ignored in our search for the ultimate laws of nature. Both gravity and quantum phenomena have to be accounted for in the final theory. Thus, leaving gravity out is not an option.
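The Planck length is built out of the three constants that govern quantum gravity: Planck's constant $\hbar$, Newton's gravitational constant $G$, and the speed of light $c$:

\[
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}
\]

An atom is about $10^{-10}$ m across, hence the factor of $10^{25}$ quoted above.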
Most physicists now place their hopes on a radically new approach to quantum gravity—the theory of strings. This theory provides a unified description for all particles and all their interactions. It is the most promising candidate we have ever had for the fundamental theory of nature.
According to string theory, particles like electrons or quarks, which seem to be pointlike and were thought to be elementary, are in fact tiny vibrating loops of string. The string is infinitely thin, and the length of the little loops is comparable to the Planck length. The particles appear to be structureless points because the Planck length is so tiny.
The string in little loops is highly taut, and this tension causes the loops to vibrate, in a way similar to the vibrating strings in a violin or piano. Different vibration patterns on a straight string are illustrated in Figure 15.1. In these patterns, which correspond to different musical notes, the string has a wavy shape, with several complete half-waves fitting along its length. The larger the number of half-waves, the higher the note. Vibration patterns of loops in string theory are similar (see Figure 15.2), but now different patterns correspond to different types of particles, rather than different notes. The properties of a particle—for example, its mass, electric charge, and charges with respect to weak and strong interactions—are all determined by the exact vibrational state of the string loop. Instead of introducing an independent new entity for each type of particle, we now have a single entity—the string—of which all particles are made.
Figure 15.1. Vibration patterns of a straight string.
Figure 15.2. A schematic representation of vibration patterns of a string loop.
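The musical analogy of Figure 15.1 is quantitative. A string of length $L$ fixed at both ends supports standing waves containing a whole number of half-waves, and the more half-waves, the higher the frequency:

\[
\lambda_n = \frac{2L}{n},
\qquad
f_n = \frac{n v}{2L},
\qquad
n = 1, 2, 3, \ldots
\]

where $v$ is the speed of waves along the string. For a closed loop, the condition becomes a whole number of full wavelengths around the circumference, but the principle is the same: a discrete set of vibration patterns, and hence, in string theory, a discrete menu of particle types.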
The messenger particles—photons, gluons, W, and Z—are also vibrating little loops, and particle interactions can be pictured as string loops splitting and joining. What is most remarkable is that the spectrum of string states necessarily includes the graviton—the particle mediating gravitational interactions. The problem of unifying gravity with other interactions does not exist in string theory: on the contrary, the theory cannot be constructed without gravity.
The conflict between gravity and quantum mechanics has also disappeared. As we have just discussed, the problem was due to quantum fluctuations of the spacetime geometry. If particles are mathematical points, the fluctuations go wild in the vicinity of the particles and the smooth spacetime continuum turns into a violent spacetime foam. In string theory, the little loops of string have a finite size, which is set by the Planck length. This is precisely the distance scale below which quantum fluctuations get out of control. The loops are immune to such sub-Planckian fluctuations: the spacetime foam is tamed just when it is about to start causing trouble. Thus, for the first time we have a consistent quantum theory of gravity.
The idea that particles may secretly be strings was suggested in 1970 by Yoichiro Nambu of the University of Chicago, Holger Nielsen of the Niels Bohr Institute, and Leonard Susskind, then at Yeshiva University. String theory was first meant to be a theory of strong interactions, but it was soon found that it predicted the existence of a massless boson, which had no counterpart among the strongly interacting particles. The key realization that the massless boson had all the properties of the graviton was made in 1974 by John Schwarz of Caltech and Joel Scherk of the École Normale Supérieure. It took another ten years for Schwarz, working in collaboration with Michael Green of Queen Mary College in London, to resolve some subtle mathematical issues and show that the theory was indeed consistent.
String theory has no arbitrary constants, so it does not allow any tinkering or adjustments. All we can do is uncover its mathematical framework and see whether or not it corresponds to the real world. Unfortunately, the mathematics of the theory is incredibly complex. Now, after twenty years of assault by hundreds of talented physicists and mathematicians, it is still far from being fully understood. At the same time, this research has revealed a mathematical structure of amazing richness and beauty. This, more than anything else, suggests to physicists that they are probably on the right track.⁴