Machines of Loving Grace
by John Markoff

Most people know Doug Engelbart as the inventor of the mouse, but his more encompassing idea was to use a set of computer technologies to make it possible for small groups to “bootstrap” their projects by employing an array of ever more powerful software tools to organize their activities, creating what he described as the “collective IQ” that outstripped the capabilities of any single individual.
The mouse was simply a gadget to improve our ability to interact with computers.

In creating SAIL, McCarthy had an impact on the world that in many ways equaled Engelbart’s.
People like Alan Kay and Larry Tesler, who were both instrumental in the design of the modern personal computer, passed through his lab on their way to Xerox and subsequently to Apple Computer.
Whitfield Diffie took away ideas that would lead to the cryptographic technology that secures modern electronic commerce.

There were, however, two other technologies being developed simultaneously at SRI and SAIL that are only now beginning to have a substantial impact: robotics and artificial intelligence software.
Both of these are not only transforming economies; they are also fostering a new era of intelligent machines that is fundamentally changing the way we live.

The impact of both computing and robotics had been forecast before these laboratories were established.
Norbert Wiener invented the concept of cybernetics at the very dawn of the computing era in 1948.
In his book Cybernetics, he outlined a new engineering science of control and communication that foreshadowed both technologies.
He also foresaw the implications of these new engineering disciplines, and two years after he wrote Cybernetics, his companion book, The Human Use of Human Beings, explored both the value and the danger of automation.

He was one of the first to foresee the twin possibilities that information technology might both escape human control and come to control human beings.
More significantly, he posed an early critique of the arrival of machine intelligence: the danger of passing decisions on to systems that, incapable of thinking abstractly, would make decisions in purely utilitarian terms rather than in consideration of richer human values.

Engelbart worked as an electronics technician at NASA’s Ames Research Center during the 1950s, and he had watched as aeronautical engineers first built small models to test in a wind tunnel and then scaled them up into full-sized airplanes.
He quickly realized that the new silicon computer circuits could be scaled in the opposite direction—down into what would become known as the “microcosm.”
By shrinking the circuitry it would be possible to place more circuits in the same space for the same cost.
And dramatically, each time the circuit density increased, performance improvement would not be additive, but rather multiplicative.
For Engelbart, this was a crucial insight.
Within a year after the invention of the modern computer chip in the late 1950s, he understood that there would ultimately be enough cheap and plentiful computing power to change the face of humanity.

This notion of exponential change—Moore’s law, for example—is one of the fundamental contributions of Silicon Valley.
Computers, Engelbart and Moore saw, would become more powerful ever more quickly.
Equally dramatically, their cost would continue falling, not incrementally but at an accelerating rate, to the point where remarkably powerful computers would soon be affordable to even the world’s poorest people.
During the past half decade that acceleration has led to rapid improvement in technologies that are necessary components for artificial intelligence: computer vision, speech recognition, and robotic touch and manipulation.
Machines now also taste and smell, but recently more significant innovations have come from modeling human neurons in electronic circuits, which has begun to yield advances in pattern recognition—mimicking human cognition.
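
To make the arithmetic of that compounding concrete, here is a minimal Python sketch of a device count growing under an idealized fixed doubling period; the two-year cadence and the starting count of 2,300 devices are illustrative assumptions rather than figures from the book.

```python
# A back-of-the-envelope illustration of the compounding Engelbart and Moore saw:
# each doubling multiplies capacity rather than merely adding to it.
# The starting count and the two-year doubling period are illustrative assumptions.

def devices(years: float, start: int = 2_300, doubling_period: float = 2.0) -> float:
    """Projected device count after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (0, 10, 20, 40):
        print(f"after {years:2d} years: ~{devices(years):,.0f} devices")
    # Ten doublings multiply capacity by 2**10 = 1,024; run in reverse, the same
    # arithmetic is why the cost of a fixed amount of computing falls at an
    # accelerating rate rather than incrementally.
```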

The quickening pace of AI innovation has led some, such as Rice University computer scientist Moshe Vardi, to proclaim the imminent end of human work on a very significant fraction of all tasks, perhaps as soon as 2045.[2]
Even more radical voices argue that computers are evolving at such a rapid pace that they will outstrip the intellectual capabilities of humans in one, or at most two, more generations.
The science-fiction author and computer scientist Vernor Vinge proposed the notion of a computing “singularity,” in which machine intelligence will make such rapid progress that it will cross a threshold and then, in some as yet unspecified leap, become superhuman.

It is a provocative claim, but one that is far too early to assess definitively.
Indeed, it is worthwhile recalling the point made by longtime Silicon Valley observer Paul Saffo when thinking about the compounding impact of computing.
“Never mistake a clear view for a short distance,” he has frequently reminded the Valley’s digerati.
For those who believe that human labor will be obsolete in the space of a few decades, it’s worth remembering that even against the background of globalization and automation, between 1980 and 2010, the U.S. labor force actually continued to expand.
Economists Frank Levy and Richard J. Murnane recently pointed out that since 1964 the economy has actually added seventy-four million jobs.[3]

MIT economist David Autor has offered a detailed explanation of the consequences of the current wave of automation.
Job destruction is not across the board, he argues, but instead has focused on the routinized tasks performed by those in the middle of the job structure—the post–World War II white-collar expansion.
The economy has continued to grow at both the bottom and the top of the pyramid, leaving the middle class vulnerable while the markets for both menial and expert jobs expand.

Rather than extending that debate here, however, I am interested in exploring a different question first posed by Norbert Wiener in his early alarms about the introduction of automation.
What will the outcome of McCarthy’s and Engelbart’s differing approaches be?
What are the consequences of the design decisions made by today’s artificial intelligence researchers and roboticists, who, with ever greater ease, can choose between extending and replacing the “human in the loop” in the systems and products they create?
By the same token, what are the social consequences of building intelligent systems that substitute for or interact with humans in business, entertainment, and day-to-day activities?

Two distinct technical communities with separate traditions, values, and priorities have emerged in the computing world.
One, artificial intelligence, has relentlessly pressed ahead toward the goal of automating the human experience.
The other, the field of human-computer interaction, or HCI, has been more concerned with the evolution of the idea of “man-machine symbiosis” that was foreseen by pioneering psychologist J. C. R. Licklider at the dawn of the modern computing era as an interim step on the way to brilliant machines.
Significantly, Licklider, as director of DARPA’s Information Processing Techniques Office in the mid-1960s, would be an early funder of both McCarthy and Engelbart.
It was the Licklider era that would come to define the period when the Pentagon agency operated as a truly “blue-sky” funding organization, a period when, many argue, the agency had its most dramatic impact.

Wiener had raised an early alert about the relationship between man and computing machines.
A decade later Licklider pointed to the significance of the impending widespread use of computing and how the arrival of computing machines was different from the previous era of industrialization.
In a darker sense Licklider also forecast the arrival of the Borg of Star Trek notoriety.
The Borg, which entered popular culture in 1988, was a fictional cybernetic alien species that assembles into a “hive mind” in which the collective subsumes the individual, intoning the phrase, “You will be assimilated.”

Licklider wrote in 1960 about the distance between “mechanically extended man” and “artificial intelligence,” and warned about the early direction of automation technology: “If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years.
‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.
In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate.”[4]
That observation seems fatalistic in accepting the shift toward automation rather than augmentation.

Licklider, like McCarthy a half decade later, was confident that the advent of “Strong” artificial intelligence—a machine capable of at least matching wits and self-awareness with a human—was likely to arrive relatively soon.
The period of man-machine “symbiosis” might last for less than two decades, he wrote, although he allowed that the arrival of truly smart machines capable of rivaling thinking humans might not happen for a decade, or perhaps fifty years.

Ultimately, although he posed the question of whether humans would be freed or enslaved by the Information Age, he chose not to address it directly.
Instead he drew a picture of what has become known as a “cyborg”—part human, part machine.
In Licklider’s view human operators and computing equipment would blend together seamlessly to become a single entity.
That vision has since been both celebrated and reviled.
But it still leaves the question unanswered: will we be masters, slaves, or partners of the intelligent machines that are appearing today?

Consider the complete spectrum of human-machine interactions from simple “FAQbots” to Google Now and Apple’s Siri.
Moving into the unspecified future in the movie Her, we see an artificial intelligence, voiced by Scarlett Johansson, capable of carrying on hundreds of simultaneous, intimate, human-level conversations.
Google Now and Siri currently represent two dramatically different computer-human interaction styles.
While Siri intentionally and successfully mimics a human, complete with a wry sense of humor, Google Now opts instead to function as a pure information oracle, devoid of personality or humanity.

It is tempting to see the personalities of the two competing corporate chieftains in these contrasting approaches.
At Apple, Steve Jobs saw the potential in Siri before it was even capable of recognizing human speech and focused his designers on natural language as a better way to control a computer.
At Google, Larry Page, by way of contrast, has resisted portraying a computer in human form.

How far will this trend go?
Today it is anything but certain.
Although we are already able to chatter with our cars and other appliances using limited vocabularies, computer speech and voice understanding remain a niche in the world of “interfaces” that control the computers that surround us.
Speech recognition clearly offers a dramatic improvement in busy-hand, busy-eye scenarios for interacting with the multiplicity of Web services and smartphone applications that have emerged.
Perhaps advances in brain-computer interfaces will prove to be useful for those unable to speak or when silence or stealth is needed, such as card counting in blackjack.
The murkier question is whether these cybernetic assistants will eventually pass the Turing test, the metric first proposed by mathematician and computer scientist Alan Turing to determine if a computer is “intelligent.”
Turing’s original 1950 paper has spawned a long-running philosophical discussion and even an annual contest, but today what is more interesting than the question of machine intelligence is what the test implies about the relationship between humans and machines.

Turing’s test consisted of placing a human before a computer terminal to interact with an unknown entity through typewritten questions and answers.
If, after a reasonable period, the questioner was unable to determine whether he or she was communicating with a human or a machine, then the machine could be said to be “intelligent.”
Although it has several variants and has been widely criticized, from a sociological point of view the test poses the right question.
In other words, it is relevant with respect to the human, not the machine.

In the fall of 1991 I covered the first of a series of Turing test contests sponsored by a New York City philanthropist, Hugh Loebner.
The event was first held at the Boston Computer Museum and attracted a crowd of computer scientists and a smattering of philosophers.
At that point the “bots,” software robots designed to participate in the contest, weren’t very far advanced beyond the legendary Eliza program written by computer scientist Joseph Weizenbaum during the 1960s.
Weizenbaum’s program mimicked a Rogerian psychotherapist (a practitioner of a human-centered form of therapy focused on persuading a patient to talk his or her way toward understanding his or her actual feelings), and he was horrified to discover that his students had become deeply immersed in intimate conversations with his first, simple bot.
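
To give a sense of how little machinery such a bot required, here is a minimal Eliza-style sketch of the pattern-matching and pronoun-reflection technique; the rules and canned replies below are invented for illustration and are not Weizenbaum’s original script.

```python
import re

# A toy, Eliza-style responder: match a few patterns and reflect the speaker's
# words back as a question, the way a Rogerian therapist keeps a patient talking.
# These rules are invented for illustration; they are not Weizenbaum's script.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones in the captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned, reflected reply; fall back to a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel ignored by my family"))  # Why do you feel ignored by your family?
    print(respond("My work is falling apart"))     # Tell me more about your work is falling apart.
```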
