Everything Is Obvious
Duncan J. Watts

A century later, the French mathematician and astronomer Pierre-Simon Laplace pushed Newton’s vision to its logical extreme, claiming in effect that Newtonian mechanics had reduced the prediction of the future—even the future of the universe—to a matter of mere computation. Laplace envisioned an “intellect” that knew all the forces that “set nature in motion, and all positions of all items of which nature is composed.” Laplace went on, “for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”⁷

The “intellect” of Laplace’s imagination eventually received a name—“Laplace’s demon”—and it has been lurking around the edges of mankind’s view of the future ever since. For philosophers, the demon was controversial because in reducing the prediction of the future to a mechanical exercise, it seemed to rob humanity of free will. As it turned out, though, they needn’t have worried too much. Starting with the second law of thermodynamics, and continuing through quantum mechanics and finally chaos theory, Laplace’s idea of a clockwork universe—and with it the concerns about free will—has been receding for more than a century now. But that doesn’t mean the demon has gone away. In spite of the controversy over free will, there was something incredibly appealing about the notion that the laws of nature, applied to the appropriate data, could be used to predict the future. People of course had been making predictions about the future since the beginnings of civilization, but what was different about Laplace’s boast was that it wasn’t based on any claim to magical powers, or even special insight, that he possessed himself. Rather it depended only on the existence of scientific laws that in principle anyone could master. Thus prediction, once the realm of oracles and mystics, was brought within the objective, rational sphere of modern science.

In doing so, however, the demon obscured a critical difference between two different sorts of processes, which for the sake of argument I’ll call simple and complex.⁸ Simple systems are those for which a model can capture all or most of the variation in what we observe. The oscillations of pendulums and the orbits of satellites are therefore “simple” in this sense, even though it’s not necessarily a simple matter to model and predict them. Somewhat paradoxically, in fact, the most complicated models in science—models that predict the trajectories of interplanetary space probes, or pinpoint the location of GPS devices—often describe relatively simple processes. The basic equations of motion governing the orbit of a communications satellite or the lift on an aircraft wing can be taught to a high-school physics student. But because the difference in performance between a good model and a slightly better one can be critical, the actual models used by engineers to build satellite GPS systems and 747s need to account for all sorts of tiny corrections, and so end up being far more complicated. When the NASA Mars Climate Orbiter burned up and disintegrated in the Martian atmosphere in 1999, for example, the mishap was traced to a simple programming error (imperial units were used instead of metric) that put the probe into an orbit of about 60 km instead of 140 km from Mars’s surface. When you consider that in order to get to Mars, the orbiter first had to traverse more than 50 million kilometers, the magnitude of the error seems trivial. Yet it was the difference between a triumphant success for NASA and an embarrassing failure.
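To see how easily a units mix-up like this can slip through, here is a minimal sketch in Python, my illustration rather than anything from the book. The specific assumption that the error involved impulse figures in pound-force seconds being read as newton-seconds reflects how the mishap is commonly reported, not anything stated here, and the thruster numbers are hypothetical.

```python
# A purely illustrative sketch (not from the book) of an imperial/metric
# mix-up of the kind described above. The assumption that the error
# involved impulse in pound-force seconds being read as newton-seconds
# is how the mishap is commonly reported, not something stated in the
# text; the thruster numbers below are hypothetical.

LBF_TO_NEWTON = 4.44822  # 1 pound-force expressed in newtons

def impulse_from_ground_software(thrust_lbf: float, burn_seconds: float) -> float:
    """Ground software reports impulse in pound-force seconds (imperial)."""
    return thrust_lbf * burn_seconds

def navigation_model(impulse_newton_seconds: float) -> float:
    """The navigation model silently expects newton-seconds (metric)."""
    return impulse_newton_seconds

# A hypothetical 20 lbf thruster firing for 10 seconds:
reported = impulse_from_ground_software(thrust_lbf=20.0, burn_seconds=10.0)

as_modeled = navigation_model(reported)                 # units never converted
as_flown = navigation_model(reported * LBF_TO_NEWTON)   # what actually happened

print(f"impulse as modeled: {as_modeled:.0f} N*s")
print(f"impulse as flown:   {as_flown:.0f} N*s")
print(f"off by a factor of  {as_flown / as_modeled:.2f}")
```

The usual defense is to carry units explicitly, in the data or in the type system, so that the conversion cannot be skipped silently.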

Complex systems are another animal entirely. Nobody really agrees on what makes a complex system “complex,” but it’s generally accepted that complexity arises out of many interdependent components interacting in nonlinear ways. The U.S. economy, for example, is the product of the individual actions of millions of people, as well as hundreds of thousands of firms, thousands of government agencies, and countless other external and internal factors, ranging from the weather in Texas to interest rates in China. Modeling the trajectory of the economy is therefore not like modeling the trajectory of a rocket. In complex systems, tiny disturbances in one part of the system can get amplified to produce large effects somewhere else—the “butterfly effect” from chaos theory that came up in the earlier discussion of cumulative advantage and unpredictability. When every tiny factor in a complex system can potentially get amplified in unpredictable ways, there is only so much that a model can predict. As a result, models of complex systems tend to be rather simple—not because simple models perform well, but because incremental improvements make little difference in the face of the massive errors that remain. Economists, for example, can only dream of modeling the economy with the same kind of accuracy that led to the destruction of the Mars Climate Orbiter. The problem, however, is not so much that their models are bad as that all models of complex systems are bad.⁹
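The book doesn’t tie the butterfly effect to any particular equation, but the basic phenomenon is easy to demonstrate. The short sketch below uses the logistic map, a standard textbook example of a nonlinear system, to show how two trajectories that start a hair’s breadth apart quickly stop resembling each other.

```python
# A minimal demonstration of the butterfly effect using the logistic map,
# a standard textbook example of a nonlinear system. This illustrates the
# general phenomenon; it is not a model taken from the book.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map x -> r * x * (1 - x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)           # one starting point
b = logistic_trajectory(0.2 + 1e-10)   # a "tiny disturbance" away

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.3e}")

# The gap starts at 0.0000000001 and within a few dozen iterations grows
# as large as the values themselves: the two trajectories end up
# effectively unrelated, which is why tiny errors swamp small
# improvements in a model of such a system.
```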

The fatal flaw in Laplace’s vision, therefore, is that his demon works only for simple systems. Yet pretty much everything in the social world—from the effect of a marketing campaign to the consequences of some economic policy or the outcome of a corporate plan—falls into the category of complex systems. Whenever people get together—in social gatherings, sports crowds, business firms, volunteer organizations, markets, political parties, or even entire societies—they affect one another’s thinking and behavior. As I discussed in Chapter 3, it is these interactions that make social systems “social” in the first place—because they cause a collection of people to be something other than just a collection of people. But in the process they also produce tremendous complexity.

THE FUTURE IS NOT LIKE THE PAST

The ubiquity of complex systems in the social world is important because it severely restricts the kinds of predictions we can make. In simple systems, that is, it is possible to predict with high probability what will actually happen—for example when Halley’s Comet will next return or what orbit a particular satellite will enter. For complex systems, by contrast, the best that we can hope for is to correctly predict the probability that something will happen.¹⁰ At first glance, these two exercises sound similar, but they’re fundamentally different. To see how, imagine that you’re calling the toss of a coin. Because it’s a random event, the best you can do is predict that it will come up heads, on average, half the time. A rule that says “over the long run, 50 percent of coin tosses will be heads, and 50 percent will be tails” is, in fact, perfectly accurate in the sense that heads and tails do, on average, show up exactly half the time. But even knowing this rule, we still can’t correctly predict the outcome of a single coin toss any more than 50 percent of the time, no matter what strategy we adopt.¹¹
Complex systems are not really random in the same way that a coin toss is random, but in practice it’s extremely difficult to tell the difference. As the Music Lab experiment demonstrated earlier, you could know everything about every person in the market—you could ask them a thousand survey questions, follow them around to see what they do, and put them in brain scanners while they’re doing it—and still the best you could do would be to predict the probability that a particular song will be the winner in any particular virtual world. Some songs were more likely to win on average than others, but in any given world the interactions between individuals magnified tiny random fluctuations to produce unpredictable outcomes.
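The coin-toss claim can be checked with a short simulation, mine rather than the book’s: the 50/50 rule is accurate in aggregate, yet no guessing strategy does meaningfully better than 50 percent on individual tosses.

```python
# A short simulation (mine, not the book's) of the coin-toss point:
# the 50/50 rule is accurate in aggregate, yet no guessing strategy
# predicts individual tosses of a fair coin much better than half the time.

import random

random.seed(0)
N = 100_000
tosses = [random.choice(["H", "T"]) for _ in range(N)]

# The aggregate rule is accurate: heads comes up about half the time.
print("fraction heads:", tosses.count("H") / N)

# Three guessing strategies, none of which can exploit a fair coin.
strategies = {
    "always heads":  lambda i, prev: "H",
    "alternate":     lambda i, prev: "H" if i % 2 == 0 else "T",
    "copy previous": lambda i, prev: prev if prev is not None else "H",
}

for name, guess in strategies.items():
    correct, prev = 0, None
    for i, toss in enumerate(tosses):
        correct += guess(i, prev) == toss
        prev = toss
    print(f"{name:13s} accuracy: {correct / N:.3f}")  # all hover around 0.5
```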

To understand why this kind of unpredictability is problematic, consider another example of a complex system about which we like to make predictions—namely, the weather. At least in the very near future—which generally means the next forty-eight hours—weather predictions are actually pretty accurate, or as forecasters call it, “reliable.” That is, of the days when the weather service says there is a 60 percent chance of rain, it does, in fact, rain on about 60 percent of them.¹²
So why is it that people complain about the accuracy of weather forecasts? The reason is not that they aren’t reliable—although possibly they could be more reliable than they are—but rather that reliability isn’t the kind of accuracy that we want. We don’t want to know what is going to happen 60 percent of the time on days like tomorrow. Rather, we want to know what is actually going to happen tomorrow—and tomorrow, it will either rain or it will not. So when we hear “60 percent chance of rain tomorrow,” it’s natural to interpret the information as the weather service telling us that it’s probably going to rain tomorrow. And when it fails to rain almost half the times we listen to them and take an umbrella to work, we conclude that they don’t know what they’re talking about.
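For readers who want the forecasting sense of “reliable” spelled out, here is a minimal sketch. The forecasts and outcomes are simulated rather than drawn from any real weather service; the point is only the bookkeeping, which groups days by their stated probability and checks how often it actually rained within each group.

```python
# A minimal sketch of forecast "reliability" (calibration): among all days
# given a 60 percent chance of rain, it should rain on roughly 60 percent
# of them. The forecasts and outcomes below are simulated for illustration,
# not real weather-service data.

import random
from collections import defaultdict

random.seed(1)

# Simulate a perfectly reliable forecaster: each day it states some
# probability, and rain then occurs with exactly that probability.
days = []
for _ in range(50_000):
    stated = random.choice([0.1, 0.3, 0.6, 0.9])  # forecast probability
    rained = random.random() < stated             # actual outcome
    days.append((stated, rained))

# Reliability check: observed rain frequency within each forecast bucket.
buckets = defaultdict(list)
for stated, rained in days:
    buckets[stated].append(rained)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    frequency = sum(outcomes) / len(outcomes)
    print(f"forecast {stated:.0%}: rained on {frequency:.1%} of {len(outcomes)} days")
```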

Thinking of future events in terms of probabilities is difficult enough for even coin tossing or weather forecasting, where more or less the same kind of thing is happening over and over again. But for events that happen only once in a lifetime, like the outbreak of a war, the election of a president, or even which college you get accepted to, the distinction becomes almost impossible to grasp. What does it mean, for example, to have said the day before Barack Obama’s victory in the 2008 presidential election that he had a 90 percent chance of winning? That he would have won nine out of ten attempts? Clearly not, as there will only ever be one election, and any attempt to repeat it—say in the next election—will not be comparable in the way that consecutive coin tosses are. So does it instead translate to the odds one ought to take in a gamble? That is, to win $10 if he is elected, you would have to bet $9, whereas to win $10 if he loses, you would have to bet only $1? But how are we to determine what the “correct” odds are, seeing as this gamble will only ever be resolved once? If the answer isn’t clear to you, you’re not alone—even mathematicians argue about what it means to assign a probability to a single event.¹³ So if even they have trouble wrapping their heads around the meaning of the statement that “the probability of rain tomorrow is 60 percent,” then it’s no surprise that the rest of us do as well.
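The odds interpretation itself is only arithmetic, as the brief sketch below shows; the difficulty the text is pointing at is what the number means for an event that happens exactly once. The $10 payout and the 90 percent figure simply mirror the example above.

```python
# The fair-odds reading of a probability is simple arithmetic: a bet whose
# stake is p times its total payout is fair when the event has probability p,
# and the implied probability of any quoted bet is stake / payout.

def fair_stake(probability: float, payout: float = 10.0) -> float:
    """Stake worth risking for a given total payout, if the bet is fair."""
    return probability * payout

def implied_probability(stake: float, payout: float = 10.0) -> float:
    """Probability implied by risking `stake` to receive `payout`."""
    return stake / payout

p_win = 0.90
print(f"${fair_stake(p_win):.2f} to win $10 if he wins")       # $9.00
print(f"${fair_stake(1 - p_win):.2f} to win $10 if he loses")  # $1.00
print(f"implied probability of the $9 bet: {implied_probability(9.0):.0%}")  # 90%
```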

The difficulty that we experience in trying to think about the future in terms of probabilities is the mirror image of our preference for explanations that account for known outcomes at the expense of alternative possibilities. As discussed in the previous chapter, when we look back in time, all we see is a sequence of events that happened. Yesterday it rained, two years ago Barack Obama was elected president of the United States, and so on. At some level, we understand that these events could have played out differently. But no matter how much we might remind ourselves that things might be other than they are, it remains the case that what actually happened, happened. Not 40 percent of the time or 60 percent of the time, but 100 percent of the time. It follows naturally, therefore, that when we think about the future, we care mostly about what will actually happen. To arrive at our prediction, we might contemplate a range of possible alternative futures, and maybe we even go as far as to determine that some of them are more likely than others. But at the end of the day, we know that only one such possible future will actually come to be, and we want to know which one that is.

The relationship between our view of the past and our view of the future is illustrated in the figure on the facing page, which shows the stock price of a fictitious company over time. Looking back in time from the present, one sees the history of the stock (the solid line), which naturally traces out a unique path. Looking forward, however, all we can say about the stock price is its probability of falling within a particular range. My Yahoo! colleagues David Pennock and Dan Reeves have actually built an application that generates pictures like this one by mining data on the prices of stock options. Because the value of an option depends on the price of the underlying stock, the prices at which various options are being traded now can be interpreted as predictions about the price of the stock on the date when the option is scheduled to mature. More precisely, one can use the option prices to infer various “probability envelopes” like those shown in the figure. For example, the inner envelope shows the range of prices within which the stock is likely to fall with a 20 percent probability, while the outer envelope shows the 60 percent probability range.
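Their application works from real market data, but the same kind of picture can be generated from a toy model. The sketch below draws many possible future price paths from a simple random-walk assumption, which is my stand-in rather than Pennock and Reeves’s method, and reads off the central 20 percent and 60 percent ranges at a few future dates.

```python
# A sketch that produces the same kind of picture from simulated data.
# Pennock and Reeves infer their envelopes from real option prices; here,
# as a stand-in, many possible future price paths are drawn from a simple
# random-walk model (an assumption of this sketch, not their method), and
# the envelopes are just central quantiles of the simulated prices.

import random

random.seed(2)

def simulate_path(start: float, days: int, daily_vol: float = 0.02) -> list[float]:
    """One possible future trajectory of the stock price."""
    prices = [start]
    for _ in range(days):
        prices.append(prices[-1] * (1.0 + random.gauss(0.0, daily_vol)))
    return prices

paths = [simulate_path(start=100.0, days=60) for _ in range(10_000)]

def envelope(day: int, mass: float) -> tuple[float, float]:
    """Central `mass` probability range of simulated prices on `day`."""
    prices = sorted(path[day] for path in paths)
    lo = prices[int(len(prices) * (0.5 - mass / 2))]
    hi = prices[int(len(prices) * (0.5 + mass / 2))]
    return lo, hi

for day in (20, 40, 60):
    inner = envelope(day, 0.20)  # inner envelope: 20 percent range
    outer = envelope(day, 0.60)  # outer envelope: 60 percent range
    print(f"day {day}: 20% range {inner[0]:.1f}-{inner[1]:.1f}, "
          f"60% range {outer[0]:.1f}-{outer[1]:.1f}")
```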

We also know, however, that at some later time, the stock price will have been revealed—as indicated by the dotted “future” trajectory. At that time, we know the hazy cloud of probabilities defined by the envelope will have been replaced by a single, certain price at each time, just like prices that we can currently see in the past. And knowing this, it’s tempting to take the next step of assuming that this future trajectory has in some cosmic sense already been determined, even if it has not yet been revealed to us. But this last step would be a mistake. Until it is actually realized, all we can say about the future stock price is that it has a certain probability of being within a certain range—not because it actually lies somewhere in this range and we’re just not sure where it is, but in the stronger sense that it only exists at all as a range of probabilities. Put another way, there is a difference between being uncertain about the future and the future itself being uncertain. The former is really just a lack of information—something we don’t know—whereas the latter implies that the information is, in principle, unknowable. The former is the orderly universe of Laplace’s demon, where if we just try hard enough, if we’re just smart enough, we can predict the future. The latter is an essentially random world, where the best we can ever hope for is to express our predictions of various outcomes as probabilities.
