BOOK: You are not a Gadget: A Manifesto

Turing replaced the woman with a computer. Can the judge tell which is the man? If not, is the computer conscious? Intelligent? Does it deserve equal rights?

It’s impossible for us to know what role the torture Turing was enduring at the time played in his formulation of the test. But it is undeniable that one of the key figures in the defeat of fascism was destroyed, by our side, after the war, because he was gay. No wonder his imagination pondered the rights of strange creatures.

When Turing died, software was still in such an early state that no one knew what a mess it would inevitably become as it grew. Turing imagined a pristine, crystalline form of existence in the digital realm, and I can imagine it might have been a comfort to imagine a form of life apart from the torments of the body and the politics of sexuality. It’s notable that it is the woman who is replaced by the computer, and that Turing’s suicide echoes Eve’s fall.

The Turing Test Cuts Both Ways

Whatever the motivation, Turing authored the first trope to support the idea that bits can be alive on their own, independent of human observers. This idea has since appeared in a thousand guises, from artificial intelligence to the hive mind, not to mention many overhyped Silicon Valley start-ups.

It seems to me, however, that the Turing test has been poorly interpreted by generations of technologists. It is usually presented to support the idea that machines can attain whatever quality it is that gives people consciousness. After all, if a machine fooled you into believing it was conscious, it would be bigoted for you to still claim it was not.

What the test really tells us, however, even if it’s not necessarily what Turing hoped it would say, is that machine intelligence can only be known in a relative sense, in the eyes of a human beholder.
*

The AI way of thinking is central to the ideas I’m criticizing in this book. If a machine can be conscious, then the computing cloud is going to be a better and far more capacious consciousness than is found in an individual person. If you believe this, then working for the benefit of the cloud over individual people puts you on the side of the angels.

But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?

People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.

The same ambiguity that motivated dubious academic AI projects in the past has been repackaged as mass culture today. Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever? While it’s to be expected that the human perspective will be changed by encounters with profound new technologies, the exercise of treating machine intelligence as real requires people to reduce their mooring to reality.

A significant number of AI enthusiasts, after a protracted period of failed experiments in tasks like understanding natural language, eventually found consolation in the adoration of the hive mind, which yields better results because there are real people behind the curtain.

Wikipedia, for instance, works on what I call the Oracle illusion, in which knowledge of the human authorship of a text is suppressed in order to give the text superhuman validity. Traditional holy books work in precisely the same way and present many of the same problems.

This is another of the reasons I sometimes think of cybernetic totalist culture as a new religion. The designation is much more than an approximate metaphor, since it includes a new kind of quest for an afterlife. It’s so weird to me that Ray Kurzweil wants the global computing cloud to scoop up the contents of our brains so we can live forever in virtual reality. When my friends and I built the first virtual reality machines, the whole point was to make this world more creative, expressive, empathic, and interesting. It was not to escape it.

A parade of supposedly distinct “big ideas” that amount to the worship of the illusions of bits has enthralled Silicon Valley, Wall Street, and other centers of power. It might be Wikipedia or simulated people on the other end of the phone line. But really we are just hearing Turing’s mistake repeated over and over.

Or Consider Chess

Will trendy cloud-based economics, science, or cultural processes outpace old-fashioned approaches that demand human understanding? No, because it is only encounters with human understanding that allow the contents of the cloud to exist.

Fragment liberation culture breathlessly awaits future triumphs of technology that will bring about the Singularity or other imaginary events. But there are already a few examples of how the Turing test has been approximately passed, and has reduced personhood. Chess is one.

The game of chess possesses a rare combination of qualities: it is easy to understand the rules, but it is hard to play well; and, most important, the urge to master it seems timeless. Human players achieve ever higher levels of skill, yet no one will claim that the quest is over.

Computers and chess share a common ancestry. Both originated as tools of war. Chess began as a battle simulation, a mental martial art. The design of chess reverberates even further into the past than that—all the way back to our sad animal ancestry of pecking orders and competing clans.

Likewise, modern computers were developed to guide missiles and break secret military codes. Chess and computers are both direct descendants of the violence that drives evolution in the natural world, however sanitized and abstracted they may be in the context of civilization. The drive to compete is palpable in both computer science and chess, and when they are brought together, adrenaline flows.

What makes chess fascinating to computer scientists is precisely that we’re bad at it. From our point of view, human brains routinely do things that seem almost insuperably difficult, like understanding sentences—yet we don’t hold sentence-comprehension tournaments, because we find that task too easy, too ordinary.

Computers fascinate and frustrate us in a similar way. Children can learn to program them, yet it is extremely difficult for even the most accomplished professional to program them well. Despite the evident potential of computers, we know full well that we have not thought of the best programs to write.

But all of this is not enough to explain the outpouring of public angst on the occasion of Deep Blue’s victory in May 1997 over world chess champion Garry Kasparov, just as the web was having its first major influences on popular culture. Regardless of all the old-media hype, it was clear that the public’s response was genuine and deeply felt. For millennia, mastery of chess had indicated the highest, most refined intelligence—and now a computer could play better than the very best human.

There was much talk about whether human beings were still special, whether computers were becoming our equal. By now, this sort of thing wouldn’t be news, since people have had the AI way of thinking pounded into their heads so much that it sounds like believable old news. The AI way of framing the event was unfortunate, however. What happened was primarily that a team of computer scientists built a very fast machine and figured out a better way to represent the problem of how to choose the next move in a chess game. People, not machines, performed this accomplishment.

The Deep Blue team’s central victory was one of clarity and elegance of thought. In order for a computer to beat the human chess champion, two kinds of progress had to converge: an increase in raw hardware power and an improvement in the sophistication and clarity with which the decisions of chess play are represented in software. This dual path made it hard to predict the year, but not the eventuality, that a computer would triumph.

If the Deep Blue team had not been as good at the software problem, a computer would still have become the world champion at some later date, thanks to sheer brawn. So the suspense lay in wondering not whether a chess-playing computer would ever beat the best human chess player, but to what degree programming elegance would play a role in the victory. Deep Blue won earlier than it might have, scoring a point for elegance.

The public reaction to the defeat of Kasparov left the computer science community with an important question, however. Is it useful to portray computers themselves as intelligent or humanlike in any way? Does this presentation serve to clarify or to obscure the role of computers in our lives?

Whenever a computer is imagined to be intelligent, what is really happening is that humans have abandoned aspects of the subject at hand in order to remove from consideration whatever the computer is blind to. This happened to chess itself in the case of the Deep Blue-Kasparov tournament.

There is an aspect of chess that is a little like poker—the staring down of an opponent, the projection of confidence. Even though it is relatively easier to write a program to “play” poker than to play chess, poker is really a game centering on the subtleties of nonverbal communication between people, such as bluffing, hiding emotion, understanding your opponents’ psychologies, and knowing how to bet accordingly. In the wake of Deep Blue’s victory, the poker side of chess has been largely overshadowed by the abstract, algorithmic aspect—while, ironically, it was in the poker side of the game that Kasparov failed critically.

Kasparov seems to have allowed himself to be spooked by the computer, even after he had demonstrated an ability to defeat it on occasion. He might very well have won if he had been playing a human player with exactly the same move-choosing skills as Deep Blue (or at least as Deep Blue existed in 1997). Instead, Kasparov detected a sinister stone face where in fact there was absolutely nothing. While the contest was not intended as a Turing test, it ended up as one, and Kasparov was fooled.

As I pointed out earlier, the idea of AI has shifted the psychological projection of adorable qualities from computer programs alone to a different target: computer-plus-crowd constructions. So, in 1999 a wikilike crowd of people, including chess champions, gathered to play Kasparov in an online game called “Kasparov versus the World.” In this case Kasparov won, though many believe that it was only because of back-stabbing between members of the crowd. We technologists are ceaselessly intrigued by rituals in which we attempt to pretend that people are obsolete.

The attribution of intelligence to machines, crowds of fragments, or other nerd deities obscures more than it illuminates. When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful. People already tend to defer to computers, blaming themselves when a digital gadget or online service is hard to use.

Treating computers as intelligent, autonomous entities ends up standing the process of engineering on its head. We can’t afford to respect our own designs so much.

The Circle of Empathy

The most important thing to ask about any technology is how it changes people. And in order to ask that question I’ve used a mental device called the “circle of empathy” for many years. Maybe you’ll find it useful as well. (The Princeton philosopher often associated with animal rights, Peter Singer, uses a similar term and idea, seemingly a coincidental coinage.)

An imaginary circle of empathy is drawn by each person. It circumscribes the person at some distance, and corresponds to those things in the world that deserve empathy. I like the term “empathy” because it has spiritual overtones. A term like “sympathy” or “allegiance” might be more precise, but I want the chosen term to be slightly mystical, to suggest that we might not be able to fully understand what goes on between us and others, that we should leave open the possibility that the relationship can’t be represented in a digital database.

If someone falls within your circle of empathy, you wouldn’t want to see him or her killed. Something that is clearly outside the circle is fair game. For instance, most people would place all other people within the circle, but most of us are willing to see bacteria killed when we brush our teeth, and certainly don’t worry when we see an inanimate rock tossed aside to keep a trail clear.

The tricky part is that some entities reside close to the edge of the circle. The deepest controversies often involve whether something or someone should lie just inside or just outside the circle. For instance, the idea of slavery depends on the placement of the slave outside the circle, to make some people nonhuman. Widening the circle to include all people and end slavery has been one of the epic strands of the human story—and it isn’t quite over yet.

A great many other controversies fit well in the model. The fight over abortion asks whether a fetus or embryo should be in the circle or not, and the animal rights debate asks the same about animals.

When you change the contents of your circle, you change your conception of yourself. The center of the circle shifts as its perimeter is changed. The liberal impulse is to expand the circle, while conservatives tend to want to restrain or even contract the circle.

Empathy Inflation and Metaphysical Ambiguity

Are there any legitimate reasons not to expand the circle as much as possible? There are.

To expand the circle indefinitely can lead to oppression, because the rights of potential entities (as perceived by only some people) can conflict with the rights of indisputably real people. An obvious example of this is found in the abortion debate. If outlawing abortions did not involve commandeering control of the bodies of other people (pregnant women, in this case), then there wouldn’t be much controversy. We would find an easy accommodation.

