Tucker wishes he himself were stronger and projects this wish onto AIBO: he likes to talk about the robot as a superhero dog that shows up the limitations of his biological dog. Tucker says, “AIBO is probably as smart as Reb and at least he isn’t as scared as my dog.” While freely celebrating AIBO’s virtues, Tucker avoids answering any questions about what Reb can do that AIBO cannot. I am reminded of Chelsea, who, once having decided that a calm robot might be more comforting to her grandmother than her own anxious and talkative self, could not be engaged on what only she had to offer.
So, it is not uncommon for AIBO to do a trick and for Tucker to comment, “My dog couldn’t do that.” AIBO is the better dog, and we hear why. AIBO is alive even if his heart is made of batteries and wires. AIBO will never get sick or die. In fact, AIBO is everything that Tucker wishes to be. Tucker identifies with AIBO as a being that can resist death through technology. AIBO gives Tucker the idea that people, like this robot, may someday be recharged and rewired. Just as no blood is needed for AIBO’s heart to feel emotion, batteries and wires might someday keep a person alive. Tucker uses care for AIBO to dream himself into a cyborg future.
At one point Tucker says that he “would miss AIBO as much as Reb if either of them died.” Tucker seems startled when he realizes that in fantasy he has allowed that AIBO could die. He immediately explains that AIBO could die but does not have to die. And AIBO will not die if Tucker protects him. In this moment of poignant identification, Tucker sees AIBO as both potentially immortal and a creature like him, someone who needs to be kept out of harm’s way. In Tucker’s case, precautions have often been futile. Despite the best of care, he has often landed in the hospital. In AIBO’s case, Tucker believes that precautions will work. They will require vigilance. Tucker tells us his elaborate plans to care for the robot when he takes it home. As he speaks, Tucker’s anxiety about AIBO’s possible death comes through: “He’ll probably be in my room most of the time. And I’m probably going to keep him downstairs so he doesn’t fall down the stairs. Because he probably, in a sense he would die if he fell down the stairs. Because he could break.”
After the robot goes home with him, Tucker reports on their progress. On AIBO’s first day, Tucker says, “AIBO was charging and probably didn’t miss me.” By the second day, Tucker is sure that AIBO cares. But of course, AIBO is not always at his best, something that helps Tucker identify with the robot, for Tucker, too, has good and bad days. Tucker says that after he returns his AIBO, he will miss the robot and that the robot “will probably miss me.”
With AIBO at home, Tucker dreams up duels between the robot and his Bio Bugs. Bio Bugs are robot creatures that can walk and engage in combat with each other, gaining “survival skills” along the way. They can end up very aggressive. With great excitement, Tucker describes their confrontations with AIBO. The battles between AIBO and the Bio Bugs seem to reassure him that, no matter what, AIBO will survive. They reinforce the image of the robot as a life form able to defy death, something Tucker would like to become. The “bugs” are perfect stand-ins for the bacteria and viruses that Tucker continually fights off. AIBO easily defeats them.
When it is time to return the robot, Tucker seems concerned that his healthy older brother, Connor, twelve, barely played with AIBO during the weeks they had the robot at home. Tucker brings this up with a shaky voice. He explains that his brother didn’t play with the robot because “he didn’t want to get addicted to him so he would be sad when we had to give him back.” Tucker wishes that he had more of his brother’s attention; the two are not close. Tucker fears that his brother does not spend time with him because he is so frail. In general, he worries that his illness keeps people away because they don’t want to invest in him. AIBO, too, is only passing through their home. Tucker is upset by Connor’s hesitancy to bond with something “only passing in his life.” Tucker tells us that he is making the most of his time with AIBO.
Callie and Tucker nurture robots that offer a lot more room for relationship than Furbies and Tamagotchis. Yet, both My Real Baby and AIBO are commercially available pastimes. I’ve studied other children who come to MIT laboratories to visit more advanced robots. These robots are not toys; they have their own toys. Grown-ups don’t just play with them; these robots have their own grown-up attendants. Is this a game for grown-ups or a more grown-up game? Is it a game at all? To treat these robots as toys is to miss the point—and even the children know it.
CHAPTER 5
 
Complicities
 
I first met Cog in July 1994, in Rodney Brooks’s Artificial Intelligence Laboratory at MIT. The institute was hosting an artificial-life workshop, a conference that buzzed with optimism about science on its way to synthesizing what contributors called “the living state.” Breathtaking though they were in capturing many of the features of living systems, most of the “life forms” this field had developed had no physical presence more substantial than images on a computer screen; these creatures lived in simulation. Not so Cog, a life-size human torso, with mobile arms, neck, and head.
Cog grew out of a long research tradition in Brooks’s lab. He and his colleagues work with the assumption that much of what we see as complex behavior is made up of simple responses to a complex environment. Consider how artificial intelligence pioneer Herbert Simon describes an ant walking across a sand dune: the ant is not thinking about getting from point A to point B. Instead, the ant, in its environment, follows a simple set of rules: keep moving and avoid obstacles. After more than fifteen years of using this kind of strategy to build robots that aspired to insect-level intelligence, Brooks said he was ready “to go for the whole iguana.”[1]
In the early 1990s, Brooks and his team began to build Cog, a robotic two-year-old. The aspiration was to have Cog “learn” from its environment, which included the many researchers who dedicated themselves to its education. For some, Cog was a noble experiment in the possibilities of embodied, “emergent” intelligence. For others, it was a grandiose fantasy. I decided to see for myself.
I went to Brooks’s lab with Christopher Langton, one of the founders of the field of artificial life—indeed, the man who had coined the term. In town from New Mexico for the A-Life conference, Langton was as eager as I to see the robot. At the AI lab, some robot parts were stacked in compartments and containers; others were strewn about in riots of color. In the midst of it all was Cog, on a pedestal, immobile, almost imperial—a humanoid robot, one of the first, its face rudimentary, but with piercing eyes.
Trained to track the movement of human beings (typically those objects whose movements are not constant), Cog “noticed” me soon after I entered the room. Its head turned to follow me, and I was embarrassed to note that this made me happy—unreasonably happy. In fact, I found myself competing with Langton for the robot’s attention. At one point, I felt sure that Cog’s eyes had “caught” my own, and I experienced a sense of triumph. It was noticing me, not its other guest. My visit left me surprised—not so much by what Cog was able to accomplish but by my own reaction to it. For years, whenever I had heard Brooks speak about his robotic “creatures,” I had always been careful to mentally put quotation marks around the word. But now, with Cog, I had an experience in which the quotation marks disappeared. There I stood in the presence of a robot and I wanted it to favor me. My response was involuntary, I am tempted to say visceral. Cog had a face, it made eye contact, and it followed my movements. With these three simple elements in play, although I knew Cog to be a machine, I had to fight my instinct to react to “him” as a person.
MECHANICAL TODDLERS
 
Cog’s builders imagined a physically agile toddler that responds to what it sees, touches, and hears. An adjacent laboratory houses another robot designed to simulate that toddler’s emotions. This is the facially and vocally expressive Kismet, with large doll eyes and eyelashes and red rubber tubing lips. It speaks in a soft babble that mimics the inflections of human speech. Kismet has a range of “affective” states and knows how to take its turn in conversation. It can repeat a requested word, most often to say its own name or to learn the name of the person talking to it.[2]
Like Cog, Kismet learns through interaction with people. Brooks and his colleagues hoped that by building learning systems, they would learn about learning.[3]
And robots that learn through social interaction are the precursors to machines that can actively collaborate with people. A sociable robot would, for example, know how to interpret human signaling. So, to warn an astronaut of danger, a robot working alongside could lift the palm of its hand in that universal cue that says “stop.” And the person working with the robot could also communicate with simple gestures.[4]
But more than marking progress toward such practical applications, Cog and Kismet generate feelings of kinship. We’ve already seen that when this happens, two ideas become more comfortable. The first is that people are not so different from robots; that is, people are built from information. The second is that robots are not so different from people; that is, robots are more than the sum of their machine parts.
From its very beginnings, artificial intelligence has worked in this space between a mechanical view of people and a psychological, even spiritual, view of machines. Norbert Wiener, the founder of cybernetics, dreamed in the 1960s that it was “conceptually possible for a human being to be sent over a telegraph line,” while in the mid-1980s, one MIT student mused that his teacher, AI pioneer Marvin Minsky, really wanted to “create a computer beautiful enough that a soul would want to live in it.”[5]
Whether or not a soul is ready to inhabit any of our current machines, reactions to Cog and Kismet bring this fantasy to mind. A graduate student, often alone at night in the lab with Kismet, confides, “I say to myself it’s just a machine, but then after I leave, I want to check on it at night, just to make sure it’s okay.” Not surprisingly, for we have seen this as early as the ELIZA program, both adults and children are drawn to do whatever it takes to sustain a view of these robots as sentient and even caring.[6]
This complicity enlivens the robots, even as the people in their presence are enlivened, sensing themselves in a relationship.
Over the years, some of my students have even spoken of time with Cog and Kismet by referring to a robotic “I and thou.”[7]
Theologian Martin Buber coined this phrase to refer to a profound meeting of human minds and hearts. It implies a symmetrical encounter. There is no such symmetry between human beings and even the most advanced robots. But even simple actions by Cog and Kismet inspire this extravagance of description, touching, I think, on our desire to believe that such symmetry is possible. In the case of Cog, we build a “thou” through the body. In the case of Kismet, an expressive face and voice do the work. And both robots engage with the power of the gaze. A robotic face is an enabler; it encourages us to imagine that robots can put themselves in our place and that we can put ourselves in theirs.[8]
When a robot holds our gaze, the hardwiring of evolution makes us think that the robot is interested in us. When that happens, we feel a possibility for deeper connection. We want it to happen. We come to sociable robots with the problems of our lives, with our needs for care and attention. They promise satisfactions, even if only in fantasy. Getting satisfaction means helping the robots, filling in where they are not yet ready, making up for their lapses. We are drawn into necessary complicities.
I join Brian Scassellati and Cynthia Breazeal, the principal designers of Cog and Kismet, respectively, in a study of children’s encounters with these robots.[9]
We introduce them to sixty children, ages five to fourteen, drawn from a culturally and economically diverse cross section of local communities. We call it our “first-encounters” study because in most cases, the children meet Cog or Kismet just once and have never previously seen anything like them.
When children meet these robots, they quickly understand that these machines are not toys—indeed, as I have said, these robots have their own toys, an array of stuffed animals, a slinky, dolls, and blocks. The laboratory setting in which adults engage with the robots says, “These robots don’t belong to you, they belong with you.” It says, “They are not for you; in some important way, they are like you.” Some children wonder, if these robots belong with people, then what failings in people require robots? For one thirteen-year-old boy, Cog suggests that “humans aren’t good enough so they need something else.”
In our first-encounters study, children’s time with the robots is unstructured. We ask questions, but not many. The children are encouraged to say whatever comes to mind. Our goal is to explore some rather open questions: How do children respond to an encounter with a novel form of social intelligence? What are they looking for?
To this last, the answer is, most simply, that children want to connect with these machines, to teach them and befriend them. And they want the robots to like, even love, them. Children speak of this directly (“Cog loves me”; “Kismet is like my sister; she loves me”; “He [Cog] is my pal; he wants to do things with me, everything with me. Like a best friend.”). Even the oldest children are visibly moved when Kismet “learns” their names, something the robot can do but only rarely accomplishes. Children get unhappy if Kismet says the name of another child, which they often take as evidence of Kismet’s indifference.