The Age of the Unthinkable
Joshua Cooper Ramo

Of course, none of the scientists who would later make up the biological-risks committee needed this particular bit of news
to begin thinking about the world’s vulnerability to chemical or bioweapons. “Simply enormous,” one of the committee members
told me when I asked him about America’s exposure to the dangers of bioviolence or bioaccident. Such worries had haunted these
scientists for years, with the mental constancy that you or I might devote to worrying about a sick family member or an impending
exam we fear we are bound to fail. Anthrax letters mailed by a rogue lab rat were a small threat compared to the list the
bioscientists carried around in their heads: mutant genes directed at American wheat, say, or superpersistent versions of
viruses such as Ebola, distributed via suicidal, self-infected human biobombs. By the end of 2006 it was possible to download
the complete genetic recipe for smallpox from the Internet; you could order or make most of the deadly virus’s base pairs
with similar ease. Within a few years it would likely be possible to build a homemade, vaccine-resistant version of smallpox
for about the cost of a used car. Earlier in 2001, some of those same scientists at the National Academy meeting had participated
in a war game called Dark Winter, an epidemiological horror show in which a simulated smallpox outbreak, starting with a single
infected patient, killed millions in the span of a few weeks. The results were so upsetting that the Pentagon pulled the plug
before the simulation had run its course. A special presidential panel would warn a few years later, in 2008, that a bioattack by 2013 was all but inevitable.

As the biological-risks committee was working through the usual list of suggestions — stockpile vaccines, develop air sniffers,
distribute medical-crisis kits to American hospitals — one member, a Princeton professor named Simon Levin, sat watching the
proceedings unfold, a quiet worry working up in his mind, ever louder. Levin is an ecologist, but an unlikely one. A mathematician
by training, he had stumbled into ecology when his passion for the environment led him into the particularly complex, hard-to-model
systems that abound in nature. The result was a set of groundbreaking 1970s studies based on his research in mussel shoals
off the coast of Washington State with Robert Paine (a man so fond of remote places and bivalves that he earned a reputation
as a sort of Edmund Hillary of wild mussels). Though Levin confesses to having “the opposite of whatever a green thumb for
fieldwork is,” his particular genius was the ability to take Paine’s fieldwork — extensive data that rarely fit elegantly
into the usual equations of natural science — and develop models in which math was less a straitjacket than a comfortable
robe.

We all remember Charles Darwin for explaining the process of evolution, but his notebooks also contain pages of failed attempts
to bring math to bear on the chaos of ecological development, a reminder that even genius hits a wall from time to time. Levin
was one of the first thinkers to get past Darwin’s wall. Starting in those mussel shoals, he began developing numerical pictures
of nature that had an impressive fidelity, particularly when describing what happens to ecological systems that are hit by
an unexpected shock. The models changed the face of ecology. Yet for all his fame among environmental scientists, Levin retains
the geeky demeanor of a mathematician: long silences punctuated by koan-like insights, an equal penchant for both intellectual
rigor and conversational digression. His Patagonia vests and Birkenstocks are often the only hint of the ecologist’s energy
that animates his thinking.

As Levin listened to the discussions at the National Academy of Sciences meetings, to the lists of things that needed to be
planned and positioned against the most reasonable of paranoias, he became convinced there was a deeper, more fundamental
problem that no one was talking about. Whatever he and the other minds on the committee could think of, Levin concluded, the
terrorists could think of something else. “We could build up stocks of every known vaccine on the planet,” he later recalled.
“But it wouldn’t matter. They could just engineer something we had never seen before.” As soon as civil defense planners stored
up protection against one biothreat, Levin suspected, terrorists would simply uncork a different, more horrible, more surprising
test tube. Or perhaps they would unleash something we did know how to deal with, but only to exhaust our doctors and hospitals and soldiers before releasing, hours later, some horror
we had never imagined. “This wasn’t some dumb game-theory model you were playing with,” he told me one afternoon as we sat
in his office at Princeton. “It was an adaptive enemy. Whatever you did, they still had an ability to think around it and surprise you. There was a limit to how much you
could prepare.” War planners used to look at threats around the world, hundreds of potential nightmares, and, as bad as they
all were, they could at least be numbered, ranked, monitored, anticipated. “Is there anything we haven’t thought of?” they
might ask. And they could feel with some certainty that there was not. But this new world? With destabilizing dangers which
emerged not only from crafty enemies but also from the day-to-day technology that we needed to survive — airplanes or genetic
engineering or commodities markets — it was very hard to find a spot where our normal lives ended and risk began.

Levin noticed something else that worried him. Complex problems like the ones the bioterror team was staring at have a particularly
eerie characteristic: they tend to become more complex as time goes on. The systems never get simpler. There was no moment
at which they would evaporate or condense into a single, easy-to-spot target such as the USSR. The 1979 Islamic revolution
in Iran, for example, was a single very knotty event that, in turn, gave birth to hundreds of jihadist groups, each of which
developed different methods of terror, particular techniques of attack and destruction, which themselves were always changing
and evolving. It was like a cruel scientific version of the old Middle Eastern quip that “friends come and go; enemies accumulate.”
Complexities accumulated. This was a security problem that could never be solved by traditional security alone.

Levin was suddenly very interested.

Recall that Levin’s most brilliant work had not involved the easy, everyday math of natural systems but rather those moments
of radical change, the big shifts caused by storms, extinctions, or new life. And what three decades of mussels, mathematics,
and other science had convinced him of was that when the system changed, you had to change the way you thought about it, or
else even carefully assembled data would appear to be meaningless mush. This was exactly the sort of problem Louis Halle described
when he talked about the dangers of making foreign policy with an image that was out of date and wrong.

One afternoon, as Levin and I were sitting together after lunch at the Princeton Faculty Club, we began wondering if we might
use the models he and others had developed to think about international affairs, to navigate the change to a more complex,
revolutionary world. I began spinning his own insights back to him, but with small changes: “international system” instead
of “nature,” “terrorists” instead of “viruses,” and so on. Levin nodded. The question he and I were facing together is the
one I want to turn to now: is there some model, some intellectual picture, that does a better job of capturing the dynamics
of the complex world around us than the models we saw in the last chapter? Can we find a way of understanding this revolutionary
age that doesn’t require us to do all the rounding and footnoting that doomed the old models?

This isn’t an easy challenge, but Levin was quick to point out that it was exactly the sort of evolution that science itself
had made in recent years. The economist Brian Arthur, a friend of Levin’s who noticed similar phenomena in his field, framed
the problem this way: “The story of the sciences in the twentieth century is one of a steady loss of certainty. Much of what
was real and machine-like and objective and deterministic at the start of the century, by mid-century was a phantom, unpredictable,
subjective and indeterminate.” Of course, during that same time, science had made more progress than it had in all of human
history. It wasn’t just Werner Heisenberg injecting uncertainty into quantum physics. It was Alfred Tarski bringing unpredictability
to mathematics, Kurt Gödel bringing incompleteness to logic, Benoit Mandelbrot doing the same for fluid dynamics and Gregory
Chaitin for information theory. They all proved that once you made the leap to a new model — if it was the right model — then
accepting uncertainty and indeterminacy allowed you to make sense of parts of the world you had never understood before. Problems
that seemed unapproachable by old methods became explainable: radioactivity, antimatter, the movement of light. “Sometimes
in science,” Levin said to me, “we find that we have reached the end of the bookshelf. Then it is time to write new books.”

2. The Sandpile

The Thomas J. Watson Research Center in Yorktown Heights, New York, is testament to the fact that since the founding of the
International Business Machines Corporation, IBM has always tried to produce ideas, not just boxes with plugs. In a way, the
lab is a giant, campus-sized expression of Watson’s famous admonition, once plastered on corporate walls all over the world:
“Think.” One of the virtues of working at the Watson lab is that you can chase down pretty much any idea that seems interesting,
whether or not it has much to do with computers or databases or anything that will ever be sold to anyone. The Watson lab
is a place for “pure” research, which generally means “free from commercial use.” And somehow, despite all the ups and downs
in the computer business, it has held on to this mandate.

The main building at the lab is a giant white limestone crescent that sits atop a small hill and affords a commanding view
of the surrounding Hudson River valley landscape. The Watson Center contains a mix of offices and labs, and the spirit of
fellowship makes it feel more like a university than a company. When I recently visited, the experiments under way in the
labs included everything from studies about how frogs think to why clouds “decide” to rain. The office of Glenn A. Held, a
physicist with a specialty in materials science, was for many years on the second floor. (Held left recently to join a hedge
fund, a switch from science without commerce, “Think,” to its more lucrative opposite, “Earn!”). Held is a small, graying man with a visibly strong natural curiosity. He is the sort who would, as he once did with me,
take you to lunch and then drag you into a lecture on cellular electrical communication under the (incorrect) assumption that
even if you didn’t understand a word, it would still be interesting.

In the late 1980s, a few years after he had joined IBM, Held began noticing a great deal of discussion among scientists about
a conjecture made by a Danish physicist and biologist named Per Bak. Bak’s idea had to do with one of those things in science
that seem simple on the surface but that in fact contain many layers of complexity, layers that go far beyond what current
knowledge can explain. With Galileo, for instance, those famous balls he dropped off the Tower of Pisa to measure gravity’s
pull represented such a case: simple on the surface but loaded, in reality, with deep problems. Why did a heavy ball and a light ball take the same amount of time to reach the ground? This was a puzzle, one of those one-sentence
questions that take centuries to answer. The problem that fascinated Bak also appeared, on the surface, simple enough: if
you piled sand, grain by grain, until it made a cone about the size of your fist, how would you know when that tiny pyramid
would have a little avalanche? After all, as the pile got taller, and the sides became steeper, it was inevitable that some
sand would slide off. Could you predict when? Could you predict how much? Simple question, terribly hard to answer.

Bak, who died in 2002 at age fifty-four, was called by one of his friends “the most American of Danish scientists.” What his
friend meant was that Bak liked arguing and inventing, preferably at the same time. The idea Bak had invented for his sandpile
was a radical one, a new way of looking at physics that, if he was right, had dramatic implications. Bak hypothesized that
after an initial period, in which the sand piled itself into a little cone, the stack would organize itself into instability,
a state in which adding just a single grain of sand could trigger a large avalanche — or nothing at all. What was radical
about his idea was that it implied that these sand cones, which looked relatively stable, were in fact deeply unpredictable,
that you had absolutely no way of knowing what was going to happen next, that there was a mysterious relationship between
input and output. You could see the way physics struggled against the very limits of language when confronted with such a
concept: organized instability? Bak wanted to know what exactly caused an avalanche to occur at any given moment. This was, it emerged, very
difficult to say — at least through traditional science. “Complex behavior in nature,” Bak explained, “reflects the tendency
of large systems to evolve into a poised ‘critical’ state, way out of balance, where minor disturbances may lead to events,
called avalanches, of all sizes.”
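
Bak's sandpile has a standard toy formulation, the Bak-Tang-Wiesenfeld model, and a few lines of code are enough to watch it behave the way he describes. The sketch below is my illustration, not anything from the book; the grid size, the topple threshold of four grains, and the number of drops are arbitrary assumptions. Grains fall one at a time onto a small table, any cell holding four or more grains topples and passes one grain to each neighbor, and the size of each resulting avalanche is recorded. Run it and most drops do nothing at all, while an occasional, identical grain sets off a cascade across much of the table: avalanches of many sizes, with no way to tell in advance which one you will get.

```python
# A minimal sketch of the Bak-Tang-Wiesenfeld sandpile model (an illustration,
# not code from the book). Grid size, threshold, and drop count are assumptions.
import random

SIZE = 25        # side length of the square table
THRESHOLD = 4    # a cell topples once it holds four or more grains

grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain(grid):
    """Drop one grain at a random cell, relax the pile, return the avalanche size."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    toppled = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= THRESHOLD:      # topple until this cell is stable again
            grid[i][j] -= 4
            toppled += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains at the edge fall off
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= THRESHOLD:
                        unstable.append((ni, nj))
    return toppled

# Let the pile organize itself, then watch: most drops are quiet, a few are not.
sizes = [drop_grain(grid) for _ in range(50_000)]
quiet = sum(1 for s in sizes if s == 0)
print(f"quiet drops: {quiet} of {len(sizes)}, largest avalanche: {max(sizes)} topplings")
```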

What Bak was trying to study wasn’t simply stacks of sand, but rather the underlying physics of the world. And this was where
the sandpile got interesting. He believed that sandpile energy, the energy of systems constantly poised on the edge of unpredictable
change, was one of the fundamental forces of nature. He saw it everywhere, from physics (in the way tiny particles amassed
and released energy) to the weather (in the assembly of clouds and the hard-to-predict onset of rainstorms) to biology (in
the stutter-step evolution of mammals). Bak’s sandpile universe was violent — and history-making. It wasn’t that he didn’t
see stability in the world, but that he saw stability as a passing phase, as a pause in a system of incredible — and unmappable
— dynamism. Bak’s world was like a constantly spinning revolver in a game of Russian roulette, one random trigger-pull away
from explosion.
