There are, however, two probabilistic reasons to mistrust weather forecasters. The first has to do with the relative likelihood versus the importance of different kinds of weather: it's relatively easy to forecast accurately another day of clear weather in a stable high-pressure system, but that's not a forecast most people notice or remember. Hailstorms and tornadoes are very memorable, but exceedingly difficult to predict for any one place or time: they occupy little space and their genesis is exquisitely sensitive to initial conditions. Even with a dense network of weather data, a sudden squall can thread its way through the observations like a cat through a closing door. This is what happened with the great UK storm of 1987, where the television forecaster squandered a lifetime's credibility by pooh-poohing a viewer's worry that a hurricane was on its way, only a few hours before the trees began to crash down in Hyde Park.
The second disadvantage that probability loads onto the forecaster is what's called the base-rate effect—and here, since prediction involves expectation, we come back to Bayes' theorem. As you'll remember, the theorem describes how our previous assumptions about the probability of an event are modified by evidence with a given intrinsic probability (or, in the case of weather forecasting, credibility). This means our assumptions about forecasting a given event are based on the intrinsic accuracy of the forecast times the intrinsic likelihood of the event itself. So when an event is likely (rain in Bergen), an accurate forecast has a favorable effect on our assumptions. When the predicted event is unlikely, though, it drastically reduces our reasons to believe in any forecast, however accurate. In one celebrated case, the Finley Affair of 1884, a meteorologist claimed that his forecasts of whether a given place would see a tornado or not had an accuracy of 96.6 percent. This sounded very impressive until G. K. Gilbert pointed out that simply putting up a board with “No Tornado” painted on it would have a predictive accuracy of 98.2 percent. As with medical testing, accuracy means less in the context of rarity.
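To make Gilbert's point concrete, here is a small illustrative calculation in Python. The counts and the hit and false-alarm rates are invented to match the percentages quoted above, not taken from Finley's actual records:

```python
# Illustrative counts only, chosen to match the percentages quoted in the
# text (not Finley's actual records): tornadoes are rare, so a board that
# always reads "No Tornado" scores very well on raw accuracy.
n_forecasts = 2800          # assumed place-days forecast
n_tornadoes = 50            # assumed days a tornado actually occurred

always_no = (n_forecasts - n_tornadoes) / n_forecasts
print(f"'No Tornado' board accuracy: {always_no:.1%}")        # about 98.2%

# Bayes' theorem with the same rarity: even a good "tornado" forecast leaves
# only a modest probability that a tornado actually arrives.
base_rate   = n_tornadoes / n_forecasts   # prior P(tornado)
hit_rate    = 0.80                        # assumed P(warning | tornado)
false_alarm = 0.03                        # assumed P(warning | no tornado)

posterior = (hit_rate * base_rate) / (
    hit_rate * base_rate + false_alarm * (1 - base_rate))
print(f"P(tornado | warning issued): {posterior:.1%}")        # roughly a third
```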
 
Richardson's Weather Factory exists, in a sense, as the European Centre for Medium-Range Weather Forecasts, a collaboration among 25 countries anxious to understand the effects of weather. Its meteorologists and supercomputers create deterministic forecasts for the next 3 to 6 days; but they also squarely take on chaos by issuing probabilistic 10-day forecasts for Europe. Here, the butterfly that will or will not create the storm in Spain hovers somewhere over the Northern Pacific. How do the forecasters decide whether it has flapped or not?
The answer is, they don't. Instead, they determine, based on the current weather, where the areas of greatest sensitivity are, and then vary parameters for those areas randomly (but within reasonable limits) in the computer, generating an arbitrary 51 alternative starting points on which to run the simulation. What comes out at the end, therefore, is not a single forecast but a probability distribution: look at any point on the map and you will have a curve of 51 possible values for wind speed, temperature, or pressure. If those values cluster closely around a mean in a statistically normal fashion, you can be sure that whatever the butterfly may be planning for this week, it will have little effect on you. If the values scatter randomly, you can be sure at least of uncertainty. It's called ensemble prediction; it makes predictability a variable, just like temperature or rainfall.
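The following toy sketch shows the idea in miniature. It uses the Lorenz-63 equations as a stand-in for the atmosphere; the real system at the Centre is vastly larger and aims its perturbations at the most sensitive directions rather than perturbing at random:

```python
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system, our toy 'atmosphere'."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run_member(state, steps=1500):
    """Run one ensemble member forward and report a single 'weather' value."""
    x, y, z = state
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

analysis = (1.0, 1.0, 1.0)                     # best estimate of today's state
members = []
for _ in range(51):                            # 51 members, as in the text
    perturbed = tuple(v + random.gauss(0.0, 1e-3) for v in analysis)
    members.append(run_member(perturbed))

mean = sum(members) / len(members)
spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
print(f"ensemble mean {mean:.2f}, spread {spread:.2f}")
# a tight spread means a predictable week; a wide one means the butterfly wins
```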
Of course, this isn't the way most of us are used to thinking about weather. “A mate rang me up on Monday—he was going to have a garden party on Saturday and he wanted to know if it was going to rain.” Tim Palmer, division head at the European Centre, is young, brisk, and fluent. “I said I could give him a probability. ‘I don't want that—just tell me if it's going to rain or not.' I said: ‘Look. Is the Queen coming to your party?' ‘What's that got to do with it?' ‘What's your risk if it rains and you don't have a tent? What probability of rain can you tolerate?' He thought about it and said: ‘It's just friends from work; I can tolerate a sixty percent probability.' In fact, the ensemble forecast gave a probability of around thirty-two percent. He didn't book the tent and it didn't rain. Thank God.”
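Palmer's questions are the classic cost-loss decision in miniature. A minimal sketch, with the tent and party costs invented for illustration (only the sixty and thirty-two percent figures come from the story above):

```python
def should_book_tent(p_rain, tent_cost, washout_loss):
    """Protect whenever the expected loss without the tent exceeds its cost,
    i.e. when p_rain exceeds the cost-loss ratio."""
    return p_rain > tent_cost / washout_loss

# Hypothetical costs: a $300 tent against a $500 spoiled party gives a
# tolerable probability of 60%, as in the story; the forecast said 32%.
print(should_book_tent(p_rain=0.32, tent_cost=300, washout_loss=500))   # False
```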
The weather simulation on which the ensemble forecasts are run is a long way from the frictionless, seasonless world of Lorenz's early program. It includes the effects of ocean temperature, wave friction, and mountain ranges. “If you want to study weather nowadays, you need to have very broad knowledge: radiation, basic quantum physics, chemistry, marine biology, fluid mechanics—they're all involved in how the atmosphere works. The problem when you combine them into one model, though—with its millions and millions and millions of lines of code—is that no one person understands the whole thing anymore.”
The bigger difficulty remains: even though the scale of observation becomes ever smaller, the scale on which weather phenomena begin is always smaller still. Even if the world were surrounded by a lattice of sensors only one meter apart, things would happen unobserved between them, spawning unexpected major weather systems within days. Part of the answer, therefore, is to introduce a little extra randomness on this smallest scale. Take, for instance, inertia-gravity waves: these are tiny shudders in the atmosphere or ocean that arise when, say, air flows over rough ground or tide runs against pressure; you can sometimes see their trace in rows of small, high, chevron-shaped clouds, like nested eyebrows. These waves are usually too small to resolve in existing weather models, but there are circumstances when they can shape larger movements in the atmosphere. How can this rare but significant causation be accommodated in the predictive model? By introducing a random term; in effect, making the picture jiggle slightly at its smallest scale. Most of the time, this jiggling remains too small to affect the larger system; but sometimes it sets off changes that would not appear in a simplified, deterministic model that filtered out small-scale variation. This technique is called stochastic resonance, and it has applications as widely separated as improving human visual perception and giving robots better balance. Adding randomness can actually aid precision.
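A minimal illustration of the effect, not the scheme actually used in the forecast model: a particle in a double-well potential feels a weak, slow push that on its own can never tip it from one well to the other; add a small random jiggle and the transitions appear.

```python
import math
import random

def crossings(noise, steps=200_000, dt=0.01, amp=0.25, period=20.0):
    """Count well-to-well transitions of a particle in a double-well potential
    driven by a weak periodic push plus random noise of the given strength."""
    x, flips = 1.0, 0
    for i in range(steps):
        t = i * dt
        drift = x - x ** 3 + amp * math.cos(2 * math.pi * t / period)
        x_new = x + drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if x_new * x < 0:                 # the particle crossed the barrier
            flips += 1
        x = x_new
    return flips

for noise in (0.0, 0.35):
    print(f"noise {noise}: {crossings(noise)} crossings")
# with no noise the weak push never flips the state; with a little noise it does
```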
Meteorologists are interested in the weather itself as a fascinating, complex system—but they also know the rest of us are primarily interested in its effects. We only want to know about the weather so we can exclude it from our portfolio of uncertainty. Tim Palmer is therefore working on ways to connect the probability gradients that come out of ensemble forecasting to the resource-allocation decisions that people and institutions need to make. “Let's take malaria in Africa. Its appearance and spread is very weather dependent: there are good models that connect the weather to the disease, but it's hard to use these to predict outbreaks because the way in which the weather is integrated into the model is quite complicated. Instead, we can attach the disease model directly to our ensemble prediction and generate, not a weather probability curve, but a malaria probability curve. The weather becomes just an intermediate variable. Health organizations can then make the decision to take preventative action wherever the malaria probability exceeds a particular threshold.”
This is the great difference between probabilistic forecasting and traditional forecasting: the decision is up to us. And as with all problems of free will, getting rid of Fate is psychologically difficult; the world becomes less easy the more we know about it. In the old days, when the smiling face on the evening news simply told us that overnight temperatures were going to be above freezing—and then a frost cracked our newly poured concrete or killed our newly sown wheat—we could blame it on the weatherman. Probabilistic forecasting offers a percentage chance (with, if you want to explore the data further, a confidence spread and a volatility); this will mean little until we can match it with our own personal percentages: our ratio of potential gain to loss, our willingness to assume risk. Once the weather ceases to be fate or fault, it becomes another term in our own constant calculation of uncertainty.
 
One day in 1946 Stanislaw Ulam lay convalescing in bed, playing solitaire. While others might lose themselves in the delicious indolence of illness and the turning cards, Ulam kept wondering exactly what the chances were that a random standard solitaire, laid out with 52 cards, would come out successfully. Perhaps we can count ourselves fortunate that he was not feeling well, since, after trying to solve the problem by purely combinatorial means, Ulam gave up—and discovered a less mathematically elegant but more generally fruitful approach: “I wondered whether a more practical method than ‘abstract thinking' might not be to lay it out, say, one hundred times and simply observe and count the number of successful plays.”
He described his idea to John von Neumann, who, with his dandyish streak, called it the “Monte Carlo method”—because it resembles building a roomful of roulette wheels and setting them all spinning to see how your bet on the whole system fares over the long term. This technique has spread through every discipline that requires an assessment of the sum of many individually unpredictable events, from colliding neutrons in atomic bombs, through financial market trades, to cyclones. When analysts say, “We'll run it through the computer,” they are usually talking about these probabilistic simulations.
Monte Carlo (or, for the less dandyish, stochastic simulation) means inviting the random into the heart of your calculations. It takes the results of observation—statistics—and feeds them back as prior probabilities. Let's say you know statistically how often a neutron is absorbed in a given interaction. You set up in your computer simulation a program that assigns the probability of absorption for that part of the system randomly, but with the randomness weighted according to your observation, like a roulette wheel laid out with red and black distributed according to your statistics. The randomly generated results are then fed into the next stage of the simulation, which is programmed in the same way. The whole linked simulation then becomes like an ensemble prediction: if you run it over and over again, you get a distribution that you can analyze, just as if it came from a real collection of experiments or observations. This may seem conceptually crude, but it can generate results as precise as you can afford with the time you have available and the power of your computer.
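A small sketch of the idea, using the neutron example from the passage; the branch probabilities are invented for illustration, standing in for the observed statistics the text describes:

```python
import random

def neutron_chain(p_absorb=0.3, p_scatter=0.6, cap=10_000):
    """Follow one starting neutron and everything it spawns; return the total
    number of neutrons involved before the chain dies out."""
    active, total = 1, 1
    while active and total < cap:
        active -= 1                   # take one neutron to its next collision
        r = random.random()
        if r < p_absorb:
            continue                  # absorbed: this branch ends
        elif r < p_absorb + p_scatter:
            active += 1               # scattered: the same neutron carries on
        else:
            active += 2               # remaining 10%: fission, two fresh neutrons
            total += 2
    return total

# Run the linked simulation over and over, like an ensemble, and look at the
# distribution of outcomes rather than a single answer.
runs = sorted(neutron_chain() for _ in range(10_000))
print("median chain size:", runs[len(runs) // 2])
print("95th percentile:  ", runs[int(0.95 * len(runs))])
```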
In the world of weather, Monte Carlo simulation gives insurers a handle on catastrophe. Until about 1980, any assessment of exposure to, say, hurricane loss was based on little more than corporate optimism or pessimism. Although cloaked in equations, the reasoning of many insurers ran along these simple lines: “Imagine the worst storm possible in this area. Assume our loss from it will be equal to the worst loss we've ever had, plus a percentage reflecting how much bigger we've grown since then. Then decide how often that worst storm is likely to happen and spread the potential loss over the years it won't.” Straightforward reasoning, but dangerous—not just because it involves guesswork and approximation, but because the wind doesn't work that way. It bloweth where it listeth: its power is released, not smoothly over wide areas, but savagely in narrow confines. It does not wait a decent interval before striking again; the “hundred-year storm” is a deceptively linear form of words disguising a non-linear reality.
Now, therefore, the computers of large insurers are given over to ensemble forecasting—taking, for instance, the data for known U.S. hurricanes and generating from them a simulated 50,000 years' worth of storms yet to come. Each of these thousands of stochastically generated tracks takes its eraser through a different range of insured property, and every major building is modeled separately for its vulnerability to wind from different directions. The result is a distribution not of weather, but of loss, allowing premiums to reflect the full range of potential claims. The apparent randomness of disaster (my house reduced to kindling; yours, next door, with its patio umbrella still in place) is not ignored but taken as the basis for the overall assessment of risk.
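A toy version of such a loss simulation might look like the sketch below. Storm frequency and loss sizes are assumptions chosen for illustration, and the building-by-building track modeling the text describes is collapsed into a single heavy-tailed loss draw per storm:

```python
import math
import random

def simulate_year(storm_rate=2.0, median_loss=5.0, sigma=1.5):
    """One simulated hurricane season; returns total insured loss in $M."""
    # Poisson number of damaging storms, drawn via exponential waiting times
    n, t = 0, random.expovariate(storm_rate)
    while t < 1.0:
        n += 1
        t += random.expovariate(storm_rate)
    # each storm's loss is heavy-tailed: a few big storms dominate the total
    return sum(random.lognormvariate(math.log(median_loss), sigma)
               for _ in range(n))

annual_losses = sorted(simulate_year() for _ in range(50_000))   # 50,000 "years"
print(f"median annual loss:  {annual_losses[len(annual_losses) // 2]:8.1f} $M")
print(f"1-in-100-year loss:  {annual_losses[int(0.99 * len(annual_losses))]:8.1f} $M")
print(f"1-in-1000-year loss: {annual_losses[int(0.999 * len(annual_losses))]:8.1f} $M")
```

Premiums set against the full shape of this distribution, rather than against a single imagined worst case, are what the passage means by taking randomness as the basis for the assessment of risk.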
 
Probabilistic reasoning about weather means a shift from asking “What will happen?” to considering “What difference does what could happen make to me?” This is an obvious calculation for catastrophe, but it also extends into the normal variety of days: cool or balmy, damp or dry. When you are luxuriating in an unseasonably warm fall, think for a moment of the bad news it represents for the woolen industry. Well, if this is the weather the shepherd shuns, maybe he should do something about it.
He could, for instance, call Barney Schauble, whose company insures against things that normally happen. Schauble's background is financial, so he knows how prosaic, day-to-day risks can cause problems for many businesses: “Energy companies created this market, because weather makes such a difference to their demand; a warm winter in Denver or a cool summer in Houston can really throw off your income projections if you sell gas for heating or electricity for air conditioners. If you own a theme park, the weather can make a big difference: rain is bad, but rain on Saturday is worse and rain on the Saturday of Memorial Day weekend is worst of all.” The sums involved in even small variations can add up. The summer of 1995 in England and Wales was, in its modest way, unusually hot: temperatures were between 1°C and 3°C above average. The extra payout for the insurance industry that year, in claims for lost crops, lost energy consumption, lost clothing sales, and building damage through soil shrinkage and subsidence totaled well over $2 billion.
Barney Schauble's company focuses on the variable but generally predictable: those elements of the weather that have normal distributions around definable means. “Heating days, cooling days, precipitation above a threshold during a defined period: things that have many years of accurate data behind them. People come to us because they're exposed to a risk they simply can't avoid, but one that we can diversify. If it were a financial risk, most companies would know right away that they needed to hedge it. Weather is still a new market but it works much the same way; companies come to us to even out their expectations over time and make their balance sheets more predictable.”
