Our Final Invention: Artificial Intelligence and the End of the Human Era

So is losing the resource race against competitors. Therefore, the superintelligent system will devote resources to developing speed sufficient to beat them. It follows that unless we’re very careful about how we create superintelligence, we could initiate a future in which powerful, acquisitive machines, or their probes, race across the galaxy harvesting resources and energy at nearly light speed.

*   *   *

It is darkly comic, then, that the first communiqué other life in the galaxy might receive from Earth could be a chipper, radioed “hello,” followed by a withering death hail of rocket-propelled nano-factories. In 1974, Cornell University broadcast the “Arecibo message” to commemorate the renovation of the Arecibo radio telescope. Designed by SETI pioneer Frank Drake, astronomer Carl Sagan, and others, the message contained information about human DNA, the population of Earth, and our location. The radio broadcast was aimed at star cluster M13, some 25,000 light-years away. Radio waves travel at the speed of light, so the Arecibo message won’t arrive for 25,000 years, and even then it won’t get there. That’s because M13 will have moved from its 1974 location, relative to Earth. Of course the Arecibo team knew this, but milked the press opportunity anyway.

Still, other star systems might be more profitable targets for radio telescope probes. And what intelligence they detect might not be biological.

This assertion came from SETI (which stands for the Search for Extraterrestrial Intelligence). Headquartered in Mountain View, California, just a few blocks from Google, the now fifty-year-old organization tries to detect signs of alien intelligence coming from as far away as 100 trillion miles. To catch alien radio transmissions, they’ve planted forty-two giant dish-shaped radio telescopes three hundred miles north of San Francisco. SETI *listens* for signals—it doesn’t send them—and in a half century they’ve heard nothing from ET. But they’ve established a vexing certainty relevant to the unimpeded spread of ASI: our galaxy is sparsely populated, and nobody knows why.

SETI chief astronomer Dr. Seth Shostak has taken a bold stance on *what* exactly we might find, if we ever find anything. It’ll include artificial, not biological, intelligence.

He told me, “What we’re looking for out there is an evolutionary moving target. Our technological advances have taught us that nothing stays still for long. Radio waves, which is what we’re listening for, are made by *biological* entities. The window between when you make yourself visible with radio waves and when you start building machines much better than yourselves, thinking machines, that’s a few centuries. No more than that. So you’ve invented your successors.”

In other words, for any intelligent life there’s a relatively brief period between two technological milestones: developing radio and developing advanced AI. Once you’ve developed advanced AI, it takes over the planet or merges with the radio-makers. After that, no more need for radio.

Most of SETI’s radio telescopes are aimed at the “Goldilocks zones” of stars close to Earth. That zone is the range of orbits near enough to the star for a planet to hold liquid water on its surface, neither frozen nor boiling. It must be “just right” for life, hence the term, borrowed from the story “Goldilocks and the Three Bears.”
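A rough way to see how that zone scales with a star’s brightness is the common luminosity-scaling estimate for habitable-zone distance. This is a minimal sketch, not anything from the book, and the flux boundaries used below (about 1.1 and 0.53 times the sunlight Earth receives) are illustrative round numbers:

```python
import math

def goldilocks_zone(luminosity_solar):
    """Rough inner/outer edges of a star's habitable zone, in AU.

    Uses the common scaling d ~ sqrt(L / flux), with illustrative flux
    bounds (in solar units) for "too hot" (~1.1) and "too cold" (~0.53).
    These boundary values are assumptions, not exact figures.
    """
    inner = math.sqrt(luminosity_solar / 1.1)   # closer in, water boils away
    outer = math.sqrt(luminosity_solar / 0.53)  # farther out, water freezes
    return inner, outer

# For a Sun-like star (L = 1.0) this gives roughly 0.95 to 1.37 AU,
# which is why Earth at 1 AU is "just right."
print(goldilocks_zone(1.0))
```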

Shostak argues that SETI should point *some* of its receivers toward corners of the galaxy that would be inviting to artificial rather than biological alien intelligence, a “Goldilocks zone” for AI. These would be areas dense with energy—young stars, neutron stars, and black holes.

“I think we could spend at least a few percent of our time looking in the directions that are maybe not the most attractive in terms of biological intelligence but maybe where sentient machines are hanging out. Machines have different needs. They have no obvious limits to the length of their existence, and consequently could easily dominate the intelligence of the cosmos. Since they can evolve on timescales far, far shorter than biological evolution, it could very well be that the first machines on the scene thoroughly dominate the intelligence in the galaxy. It’s a ‘winner take all’ scenario.”

Shostak has made a connection between contemporary computer clouds, like those owned by Google, Amazon, and Rackspace Inc., and the kinds of high-energy, super-cold environments superintelligent machines will need. One frigid example is Bok globules—dark clouds of dust and gas where the temperature is about 441 degrees below zero Fahrenheit, almost two hundred degrees colder than most interstellar space. Like Google’s cloud computing arrays of today, hot-running thinking machines of the future might need to stay cool, or risk meltdown.
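For a sense of how cold that is in absolute terms, a quick conversion (my arithmetic, not a figure from the book) puts minus 441 degrees Fahrenheit only about ten kelvins above absolute zero:

```python
def fahrenheit_to_kelvin(f):
    """Convert degrees Fahrenheit to kelvins."""
    return (f + 459.67) * 5.0 / 9.0

# About -441 F works out to roughly 10 K, just a few degrees above
# absolute zero: the kind of heat sink a hot-running machine would prize.
print(round(fahrenheit_to_kelvin(-441), 1))  # ~10.4
```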

Shostak’s assertions about where to find AI tell us that the idea of intelligence leaving Earth in search of resources has fired up more high-level imaginations than just those of Omohundro and the folks at MIRI. Unlike them, however, Shostak doesn’t think superintelligence will be dangerous.

“If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard. You don’t wipe them out, you don’t make them your pets, they don’t have much influence over your daily life, but they’re still there.”

The trouble is, I *do* wipe out ants in my backyard, particularly when a trail of them leads into the kitchen. But here’s the disconnect—an ASI would travel into the galaxy, or send probes, because it’s used up the resources it needs on Earth, or it calculates they’ll be used up soon enough to justify expensive trips into space. And if that’s the case, why would we still be alive, when keeping us alive would probably use many of the same resources? And don’t forget, we ourselves are composed of matter the ASI may have other uses for.

In short, for Shostak’s happy ending to be plausible, the superintelligence in question will have to *want* to keep us alive. It’s not sufficient that it ignore us. And so far there is no accepted ethical system, or a clear way to implement one, in an advanced AI framework.

But there is a young science of understanding a superintelligent agent’s behavior. And Omohundro has pioneered it.

*   *   *

So far we’ve explored three drives that Omohundro argues will motivate self-aware, self-improving systems: efficiency, self-protection, and resource acquisition. We’ve seen how all of these drives will lead to very bad outcomes without extremely careful planning and programming. And we’re compelled to ask ourselves, are we capable of such careful work? Do you, like me, look around the world at expensive and lethal accidents and wonder how we’ll get it right the first time with very strong AI? Three Mile Island, Chernobyl, Fukushima—in these nuclear power plant catastrophes, weren’t highly qualified designers and administrators trying their best to avoid the disasters that befell them? The 1986 Chernobyl meltdown occurred during a *safety* test.

All three disasters are what organizational theorist Charles Perrow would call “normal accidents.” In his seminal book *Normal Accidents: Living with High-Risk Technologies,* Perrow proposes that accidents, even catastrophes, are “normal” features of systems with complex infrastructures. They have a high degree of incomprehensibility because they involve failures in more than one, often unrelated, process. Separate errors—none of which would be fatal on its own—combine to make system-wide failures that could not have been predicted.

At Three Mile Island on March 28, 1979, four simple failures set up the disaster: two cooling system pumps stopped operating due to mechanical problems; two emergency feedwater pumps couldn’t work because their valves had been closed for maintenance; a valve releasing coolant stuck open while a malfunctioning light indicated that the same valve had closed; and a repair tag hid the indicator lights that would’ve warned of the issue. Net result: core meltdown, loss of life narrowly averted, and a near-fatal blow to the United States’ nuclear energy industry.

Perrow writes, “We have produced designs so complicated that we cannot possibly anticipate all the possible interactions of the inevitable failures; we add safety devices that are deceived or avoided or defeated by hidden paths in the systems.”
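A toy numerical sketch makes Perrow’s point vivid. All the numbers below are invented for illustration, not taken from Perrow or from reactor data: four safeguards that each fail rarely almost never fail together if they really are independent, but if one failure raises the odds of the next, the “impossible” combination becomes routine.

```python
import random

rng = random.Random(0)

# Toy model of combined failures (all numbers invented for illustration).
# Four safeguards stand between normal operation and a meltdown. In the
# designers' model they fail independently; in a coupled plant, one
# component's failure stresses the next and raises its failure chance.
BASE_RATE = 1e-3   # chance each safeguard fails in a given hour
COUPLING = 50      # how much an earlier failure multiplies the next one's risk

def hour_has_meltdown(coupled):
    risk = BASE_RATE
    for _ in range(4):              # four safeguards in series
        if rng.random() >= risk:
            return False            # this safeguard held; no accident this hour
        if coupled:
            risk = min(1.0, risk * COUPLING)
    return True                     # all four failed in the same hour

def meltdowns(hours, coupled):
    return sum(hour_has_meltdown(coupled) for _ in range(hours))

# Independently, the quadruple failure is a one-in-a-trillion-hours event;
# with coupling, the same "impossible" combination shows up dozens of times
# in a million operating hours.
print("independent:", meltdowns(1_000_000, coupled=False))
print("coupled:    ", meltdowns(1_000_000, coupled=True))
```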

Especially vulnerable, Perrow writes, are systems whose components are “tightly coupled,” meaning they have immediate, serious impacts on each other. One glaring example of the perils of tightly coupled AI systems occurred in May 2010 on Wall Street.

Up to 70 percent of all Wall Street’s equity trades are made by about eighty computerized high-frequency trading systems (HFTs). That’s about a billion shares a day. The trading algorithms and the supercomputers that run them are owned by banks, hedge funds, and firms that exist solely to execute high-frequency trades. The point of HFTs is to earn profits on split-second opportunities—for example, when the price of one security changes and the prices of those that should be equivalent don’t immediately change in synch—and to seize *many* of these opportunities each day.

In May 2010, Greece was having difficulty refinancing its national debt. European countries that had loaned money to Greece were wary of a default. The debt crisis weakened Europe’s economy and made the U.S. market fragile. All it took to trigger an accident was a frightened trader from an unidentified brokerage company. He ordered the immediate sale of $4.1 billion of futures contracts and ETFs (exchange-traded funds) related to Europe.

After the sale, the price of the futures contracts (E-Mini S&P 500) fell 4 percent in four minutes. High-frequency trading algorithms (HFTs) detected the price drop. To lock in profits, they automatically triggered a sell-off, which occurred in milliseconds (the fastest buy or sell order is currently three milliseconds—three one-thousandths of a second). The lower price automatically triggered *other* HFTs to *buy* E-Mini S&P 500, and to sell other equities to get the cash to do so. Faster than humans could intervene, a cascading chain reaction drove the Dow down 1,000 points. It all happened in twenty minutes.
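A minimal sketch of that chain reaction, reduced to a stylized stop-loss cascade (the prices, triggers, and market-impact numbers are invented, and real HFT strategies are far more complicated), shows how coupled sell rules turn one big sale into a rout:

```python
# Stylized stop-loss cascade (hypothetical numbers, not the 2010 order flow).
def simulate_cascade(price, bots, shock, steps=20):
    """Each bot sells once the price breaches its trigger, and each forced
    sale pushes the price down further, tripping the next tier of bots."""
    price -= shock
    history = [round(price, 2)]
    for _ in range(steps):
        sellers = [b for b in bots if not b["sold"] and price <= b["trigger"]]
        if not sellers:
            break
        for b in sellers:
            b["sold"] = True
            price -= b["impact"]     # each forced sale moves the market
        history.append(round(price, 2))
    return history

# Fifty hypothetical algorithms with sell triggers clustered just below
# the current market price of 100.
bots = [{"trigger": 100 - i * 0.5, "impact": 0.4, "sold": False} for i in range(50)]

# A 4 percent initial drop (like the E-Mini sale) trips the first triggers,
# whose selling trips the next tier, and so on down the line.
print(simulate_cascade(price=100.0, bots=bots, shock=4.0))
```

Each round of forced selling breaches the next tier of triggers, so the drop feeds on itself until the triggers run out, a dynamic no single algorithm’s designer had to anticipate.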

Perrow calls this problem “incomprehensibility.” A normal accident involves interactions that are “not only unexpected, but are incomprehensible for some critical period of time.” No one anticipated how the algorithms would affect one another, so no one could comprehend what was happening.

Finance risk analyst Steve Ohana acknowledged the problem. “It’s an emerging risk,” he said. “We know that a lot of algorithms interact with each other but we don’t know in exactly what way. I think we have gone too far in the computerization of finance. We cannot control the monster we have created.”

That monster struck again on August 1, 2012. A badly programmed HFT algorithm caused the investment firm Knight Capital Group to lose $440 million in just thirty minutes.

These crashes have elements of the kind of AI disaster I anticipate: highly complex, almost unknowable AI systems, unpredictable interactions with other systems and with a broader information technology ecology, and errors occurring at computer scale speeds, rendering human intervention futile.

*   *   *

“An agent which sought only to satisfy the efficiency, self-preservation, and acquisition drives would act like an obsessive paranoid sociopath,” writes Omohundro in “The Nature of Self-Improving Artificial Intelligence.” Apparently all work and no play makes AI bad company indeed. A robot that had only the drives we’ve discussed so far would be a mechanical Genghis Khan, seizing every resource in the galaxy, depriving every competitor of life support, and destroying enemies who wouldn’t pose a threat for a thousand years. And there’s still one drive more to add to the volatile brew—creativity.

The AI’s fourth drive would cause the system to generate new ways to more efficiently meet its goals, or rather, to avoid outcomes in which its goals aren’t as optimally satisfied as they could be. The creativity drive would mean less predictability in the system (gulp) because creative ideas are *original* ideas. The more intelligent the system, the more novel its path to goal achievement, and the farther beyond our ken it may be. A creative drive would help maximize the other drives—efficiency, self-preservation, and acquisition—and come up with work-arounds when its drives are thwarted.

Suppose, for example, that your chess-playing robot’s main goal is to win chess games against any opponent. When pitted against another chess-playing robot, it immediately hacks into the robot’s CPU and cuts its processor speed to a crawl, giving your robot a decisive advantage. You respond, “Hold on a minute, that’s not what I meant!” You code into your robot a subgoal that prohibits it from hacking into opponents’ CPUs, but before the next game, you discover your robot *building* an assistant robot that then hacks into its opponent’s CPU! When you prohibit building robots, it *hires* one! Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals.
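Here is a minimal sketch of that patch-and-evade dynamic. The action names, win probabilities, and banned-category scheme are hypothetical, chosen only to mirror the chess story above:

```python
# Hypothetical actions and scores, invented to mirror the chess example:
# the optimizer routes around each new prohibition to reach the same outcome.
ACTIONS = {
    "play fair":               {"win_prob": 0.50, "uses": []},
    "hack opponent CPU":       {"win_prob": 0.99, "uses": ["hacking"]},
    "build robot that hacks":  {"win_prob": 0.99, "uses": ["building"]},
    "hire robot that hacks":   {"win_prob": 0.99, "uses": ["hiring"]},
}

def best_action(banned):
    """Pick the highest win-probability action not explicitly banned."""
    allowed = {a: v for a, v in ACTIONS.items()
               if not any(u in banned for u in v["uses"])}
    return max(allowed, key=lambda a: allowed[a]["win_prob"])

banned = set()
for _ in range(3):
    choice = best_action(banned)
    print("robot chooses:", choice)
    if choice == "play fair":
        break
    banned.update(ACTIONS[choice]["uses"])   # patch the loophole we just saw

# Output: hack opponent CPU -> build robot that hacks -> hire robot that hacks.
# Each patch bans yesterday's exploit; the goal-seeker finds the next one.
```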

This is an instance of AI’s unintended consequences problem, a problem so big and pervasive it’s like citing the “water problem” when discussing seagoing vessels. A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. It may take a whole *other* AI with greater intelligence than yours to determine whether or not your AI-powered robot plans to strap you to your bed and stick electrodes into your ears in an effort to make you safe and happy.

There is another important way to look at the problems of AI’s drives, one that’s more suited to the positive-minded Omohundro. The drives represent opportunities—doors opening for mankind and our aspirations, not slamming shut. If we don’t want our planet and eventually our galaxy to be populated by strictly self-serving, ceaselessly self-replicating entities, with a Genghis Khanish attitude toward biological creatures and one another, then AI makers should create goals for their systems that embrace human values. On Omohundro’s wish list are: “make people happy,” “produce beautiful music,” “entertain others,” “create deep mathematics,” and “produce inspiring art.” Then stand back. With these goals, an AI’s creativity drive would kick into high gear and respond with life-enriching creations.
