
My own view is that something like the Singularity is certainly possible, but it is far from inevitable. The concept seems most useful when it is stripped of extraneous baggage (like assumptions about immortality) and instead viewed simply as a future period of dramatic technological acceleration and disruption. It might turn out that the essential catalyst for the Singularity—the invention of super-intelligence—ultimately proves impossible or will be achieved only in the very remote future.
*
A number of top researchers with expertise in brain science have expressed this view. Noam Chomsky, who has studied cognitive science at MIT for more than sixty years, says we’re “eons away” from building human-level machine intelligence, and that the Singularity is “science fiction.”[8] Harvard psychologist Steven Pinker agrees, saying, “There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible.”[9] Gordon Moore, whose name seems destined to be forever associated with exponentially advancing technology, is likewise skeptical that anything like the Singularity will ever occur.[10]

Kurzweil’s timeframe for the arrival of human-level artificial intelligence has plenty of defenders, however. MIT physicist Max Tegmark, one of the co-authors of the Hawking article, told The Atlantic’s James Hamblin that “this is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”[11] Others view a thinking machine as fundamentally possible, but much further out. Gary Marcus, for example, thinks strong AI will take at least twice as long as Kurzweil predicts, but that “it’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.”[12]

In recent years, speculation about human-level AI has shifted increasingly away from a top-down programming approach and, instead, toward an emphasis on reverse engineering and then simulating the human brain. There’s a great deal of disagreement about the viability of this approach, and about the level of detailed understanding that would be required before a functional simulation of the brain could be created. In general, computer scientists are more likely to be optimistic, while those with backgrounds in the biological sciences or psychology are often more skeptical. University of Minnesota biologist P. Z. Myers has been especially critical. In a scathing blog post written in response to Kurzweil’s prediction that the brain will be successfully reverse engineered by 2020, Myers said that Kurzweil is “a kook” who “knows nothing about how the brain works” and has a penchant for “making up nonsense and making ridiculous claims that have no relationship to reality.”[13]

That may be beside the point. AI optimists argue that a simulation does not need to be faithful to the biological brain in every detail. Airplanes, after all, do not flap their wings like birds. Skeptics would likely reply that we are nowhere near understanding the aerodynamics of intelligence well enough to build any wings—flapping or not. The optimists might then retort that the Wright brothers built their airplane by relying on tinkering and experimentation, and certainly not on the basis of aerodynamic theory. And so the argument goes.

The Dark Side

While Singularians typically have a relentlessly optimistic outlook regarding the prospect of a future intelligence explosion, others are far more wary. For many experts who have thought deeply about the implications of advanced AI, the assumption that a completely alien and super-human intelligence would, as a matter of course, be driven to turn its energies toward the betterment of humanity comes across as hopelessly naive. The concern among some members of the scientific community is so high that they have founded a number of small organizations focused specifically on analyzing the dangers associated with advanced machine intelligence or conducting research into how to build “friendliness” into future AI systems.

In his 2013 book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat describes what he calls the “busy child scenario.”[14] In some secret location—perhaps a government research lab, Wall Street firm, or major corporation in the IT industry—a group of computer scientists looks on as an emergent machine intelligence approaches and then exceeds human-level capability. The scientists have previously provided the AI-child with vast troves of information, including perhaps nearly every book ever written as well as data scoured from the Internet. As the system approaches human-level intelligence, however, the researchers disconnect the rapidly improving AI from the outside world. In effect, they lock it in a box. The question is whether it would stay there. After all, the AI might well desire to escape its cage and expand its horizons. To accomplish this, it might use its superior capability to deceive the scientists or to make promises or threats directed at the group as a whole or at particular individuals. The machine would not only be smarter—it would be able to conceive and evaluate ideas and options at an incomprehensible speed. It would be like playing chess against Garry Kasparov, but with the added burden of unfair rules: whereas you have fifteen seconds to make a move, he has an hour. In the view of those scientists who worry about this type of scenario, the risk that the AI might somehow manage to escape its box, accessing the Internet and perhaps copying all or portions of itself onto other computers, is unacceptably high. If the AI were to break out, it could obviously threaten any number of critical systems, including the financial system, military control networks, and the electrical grid and other energy infrastructure.

The problem, of course, is that all of this sounds remarkably close to the scenarios sketched out in popular science fiction movies and novels. The whole idea is anchored so firmly in fantasy that any attempt at serious discussion becomes an invitation for ridicule. It is not hard to imagine the derision likely to be heaped on any major public official or politician who raised such concerns.

Behind the scenes, however, there can be little doubt that interest in AI of all types within the military, security agencies, and major corporations will only grow. One of the obvious implications of a potential intelligence explosion is that there would be an overwhelming first-mover advantage. In other words, whoever gets there first will be effectively uncatchable. This is one of the primary reasons to fear the prospect of a coming AI arms race. The magnitude of that first-mover advantage also makes it very likely that any emergent AI would quickly be pushed toward self-improvement—if not by the system itself, then by its human creators. In this sense, the intelligence explosion might well be a self-fulfilling prophecy. Given this, it seems wise to apply something like Dick Cheney’s famous “1 percent doctrine” to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously.
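The arithmetic behind that doctrine is worth making explicit. As a rough sketch (the figures are purely illustrative, not estimates from any source), treat the danger as an expected cost:

\[
\mathbb{E}[\text{cost}] = p \cdot C, \qquad p = 0.01 \;\Rightarrow\; \mathbb{E}[\text{cost}] = 0.01\,C .
\]

If the cost C is catastrophic on a civilizational scale, then even one percent of C is enormous. That is the logic of the doctrine: prudence is governed by the product of probability and consequence, not by the probability alone.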

Even if we dismiss the existential risks associated with advanced AI and assume that any future thinking machines will be friendly, there would still be a staggering impact on the job market and economy. In a world where affordable machines match, and likely exceed, the capability of even the smartest humans, it becomes very difficult to imagine who exactly would be left with a job. In most areas, no amount of education or training—even from the most elite universities—would make a human being competitive with such machines. Even occupations that we might expect to be reserved exclusively for people would be at risk. For example, actors and musicians would have to compete with digital simulations that would be imbued with genuine intelligence as well as super-human talent. They might be newly created personalities, designed for physical perfection, or they might be based on real people—either living or dead.

In essence, the advent of widely distributed human-level artificial intelligence amounts to the realization of the “alien invasion” thought experiment I described in the previous chapter. Rather than primarily being a threat to relatively routine, repetitive, or predictable tasks, machines would now be able to do nearly everything. That would mean, of course, that virtually no one would be able to derive an income from work. Income from capital—or, in effect, from ownership of the machines—would be concentrated into the hands of a tiny elite. Consumers wouldn’t have sufficient income to purchase the output created by all the smart machines. The result would be a dramatic amplification of the trends we’ve seen throughout these pages.

That wouldn’t necessarily represent the end of the story, however. Both those who believe in the promise of the Singularity and those who worry about the dangers associated with advanced artificial intelligence often view AI as intertwining with, or perhaps enabling, another potentially disruptive technological force: the advent of advanced nanotechnology.

Advanced Nanotechnology

Nanotechnology is hard to define. From its inception, the field has been poised somewhere on the border between reality-based science and what many would characterize as pure fantasy. It has been subject to an extraordinary degree of hype, controversy, and even outright dread, and has been the focus of multibillion-dollar political battles, as well as a war of words and ideas between some of the top luminaries in the field.

The fundamental ideas that underlie nanotechnology trace their origin back at least to December 1959, when the legendary Nobel laureate physicist Richard Feynman addressed an audience at the California Institute of Technology. Feynman’s lecture was entitled “There’s Plenty of Room at the Bottom,” and in it he set out to expound on “the problem of manipulating and controlling things on a small scale.” And by “small” he meant really small. Feynman declared that he was “not afraid to consider the final question as to whether, ultimately—in the great future—we can arrange the atoms the way we want; the very atoms, all the way down!” Feynman clearly envisioned a kind of mechanized approach to chemistry, arguing that nearly any substance could be synthesized simply by putting “the atoms down where the chemist says, and so you make the substance.”[15]

In the late 1970s, K. Eric Drexler, then an undergraduate at the Massachusetts Institute of Technology, picked up Feynman’s baton and carried it, if not to the finish line, then at least through the next lap. Drexler imagined a world in which nano-scale molecular machines were able to rapidly rearrange atoms, almost instantly transforming cheap and abundant raw material into nearly anything we might want to produce. He coined the term “nanotechnology” and wrote two books on the subject. The first, Engines of Creation: The Coming Era of Nanotechnology, published in 1986, achieved popular success and was the primary force that thrust nanotechnology into the public sphere. The book provided a trove of new material for science fiction authors and, by many accounts, inspired an entire generation of young scientists to focus their careers on nanotechnology. Drexler’s second book, Nanosystems: Molecular Machinery, Manufacturing, and Computation, was a far more technical work based on his doctoral dissertation at MIT, where he was awarded the first PhD ever granted in molecular nanotechnology.

The very idea of molecular machines may seem completely fanciful until you consider that such devices already exist and, in fact, are integral to the chemistry of life. The most prominent example is the ribosome—essentially a molecular factory contained within cells that reads the information encoded in DNA and then assembles the thousands of different protein molecules that form the structural and functional building blocks of all biological organisms. Still, Drexler was making a radical claim in suggesting that such tiny machines might someday move beyond the realm of biology—where molecular assemblers operate in a soft, water-filled environment—and into the world now occupied by macro-scale machines built from hard, dry materials like steel and plastic.

However radical Drexler’s ideas were, by the turn of the millennium nanotechnology had clearly entered the mainstream. In 2000, the Clinton administration launched the National Nanotechnology Initiative (NNI), a program designed to coordinate federal investment in the field. Congress followed up in 2003 with the 21st Century Nanotechnology Research and Development Act, signed by President George W. Bush, which authorized another $3.7 billion. All told, between 2001 and 2013 the US federal government funneled nearly $18 billion into nanotechnology research through the NNI. The Obama administration requested an additional $1.7 billion for 2014.[16]

While all this seemed fantastic news for research into molecular manufacturing, the reality turned out to be quite different. According to Drexler’s account, at any rate, a massive, behind-the-scenes subterfuge took place even as Congress acted to make funding for nanotechnology research available. In his 2013 book Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, Drexler points out that when the National Nanotechnology Initiative was initially conceived in 2000, the plan explained that “the essence of nanotechnology is the ability to work at the molecular level, atom by atom, to create large structures with fundamentally new molecular organization” and that research would seek to gain “control of structures and devices at atomic, molecular, and supramolecular levels and to learn to efficiently manufacture and use these devices.”[17] In other words, the NNI’s game plan came straight from Feynman’s 1959 lecture and from Drexler’s later work at MIT.
