Reclaiming Conversation
Sherry Turkle
The idea has passed to a new generation: Robots offer relationship without risk and “nothing bad is going to happen” from having a robot as a friend or, as this girl imagines it, a romantic partner. But it's helpful to challenge the simple salvations of robot companionship. We will surely confront a first problem: The time we spend with robots is time we're not spending with each other. Or with our children. Or with ourselves.
And a second problem: Although always-available robot chatter is a way to never feel alone, we will be alone, engaged in “as-if” conversations. What if practice makes perfect and we forget what real conversation is and why it matters? That's why I worry so much about the “crowdsourced” therapist. It is presented as a path toward an even more automated stand-in and is not afraid to use the word “therapist” or “conversation” to describe what it offers.
In the late 1970s, when I began my studies of computers and people, I started with children. A first generation of electronic toys and games (with their assertive displays of smarts) was just entering the mass market. In children's eyes, the new toys shared intelligence with people, but as the children saw it, people, in contrast to computers, had emotions. People were special because they had feelings.
A twelve-year-old said, “When there are computers who are just as smart as the people, the computers will do a lot of the jobs, but there will still be things for the people to do. They will run the restaurants, taste the food, and they will be the ones who will love each other, have families, and love each other. I guess they'll still be the only ones who will go to church.” And in fact, in the mid-1980s and early 1990s, people of all ages found a way of saying that although simulated thinking might be thinking, simulated feeling is never feeling, simulated love is never love.
And then, in the late 1990s, there was a sea change. Now computer objects presented themselves as having feelings. Virtual pets such as Tamagotchis, Furbies, and AIBOs proposed themselves as playmates that asked to be cared for and behaved as though it mattered. And it was clear that it did matter to the children who cared for them. We are built to nurture what we love but also to love what we nurture.
Nurturance turns out to be a “killer app.” Once we take care of a digital creature or teach or amuse it, we become attached to it, and then behave “as if” the creature cares for us in return.
Children become so convinced that sociable robots have feelings that they are no longer willing to see people as special because of their emotional lives. I've interviewed many adults who say of children's attachment to as-if relationships: “Well, that's cute, they'll grow out of it.” But it is just as likely, more likely in fact, that children are not growing out of patterns of attachment to the inanimate, but growing into them.
What are children learning when they turn toward machines as confidants? A fifteen-year-old boy remarks that every person is limited by his or her life experience, but “robots can be programmed with an unlimited amount of stories.” So in his mind, as confidants, the robots win on expertise. And, tellingly, they also win on reliability. His parents are divorced. He's seen a lot of fighting at home. “People,” he says, are “risky.” Robots are “safe.” The kind of reliability they will provide is emotional reliability, which comes from their having no emotions at all.
To recall Marvin Minsky's student, these days we're not trying to create machines that souls would want to live in but machines that we would want to live with.
From earliest childhood, Thomas, now seventeen, says that he used video games as a place of emotional comfort, “a place to go.” Thomas came to the United States from Morocco when he was eight. His father had to stay behind, and now Thomas lives with his mother and sister in a town that is more than an hour from his suburban private school. He has family all over the world and he keeps up with them through email and messaging. His relationship with his mother is quite formal. She holds down several jobs and Thomas says he doesn't want to upset her with his problems. Now, he says that when he has a problem, the characters in his video games offer concrete advice.
Thomas provides an example of how this works. One of his friends at school gave him a stolen collector's card of considerable value. Thomas was tempted to keep it but remembered that a character in one of his favorite games was also given stolen goods. In the game, Thomas says, the character returned the stolen items and so he did too. “The character went and did the right thing and returned it. And in the end, it would turn out good. So I just said, ‘Yeah, that's good. I should probably return it, yeah.'”
Inspired by the character's actions, Thomas returned the stolen card to its rightful owner. The game helped Thomas do the right thing, but it did not offer a chance to talk about what had happened and how to move forward with his classmates, who steal with apparently no consequence and who now have reason to think he steals as well. Thomas says that at school he feels “surrounded by traitors.” It's a terrible feeling and one where talking to a person might help. But Thomas doesn't see that happening any time soon. On the contrary, in the future, he sees himself increasingly turning to machines for companionship and advice. When he says this, I feel that I've missed a beat. How did he make the leap to artificial friendship? Thomas explains: Online, he plays games where he sometimes can't tell people and programs apart.
Thomas has a favorite computer game in which there are a lot of “non-player characters.” These are programmed agents that are designed to act as human characters in the game. These characters can be important: They can save your life, and sometimes, to proceed through the game, you have to save theirs. But every once in a while, those who designed Thomas's game turn its world upside down: The programmers of the game take the roles of the programmed characters they've created. “So, on day one, you meet some characters and they're just programs. On day two, they are people. . . . So, from day to day, you can't keep the robots straight from the people.”
When we meet, Thomas is fresh from an experience of mistaking a program for a person. It's left a big impression. He's wondering how he would feel if a “true bot,” that is, a character played by a computer program, wanted to be his friend. He cannot articulate any objection. “If the true bot actually asked me things and acted like a natural person,” says Thomas, “then I would take it as a friend.”
In the Turing “imitation game,” to be considered intelligent, a computer had to communicate with a person (via keyboards and a teletype) and leave that person unable to tell if behind the words was a person or a machine. Turing's test is all about behavior, the ability to perform humanness. Thomas lives in this behaviorist world. There is a “Thomas test” for friendship. To be a friend, you have to act like a friend, like a “natural person.”
For Thomas makes it clear: He is ready to take the performance of friendship for friendship itself. He tells me that if a bot asked him, “How are you? What are you feeling? What are you thinking?” he would answer. And from there Thomas has an elaborate fantasy of what personalities would be most pleasing in his machine friends. Unlike the kids he doesn't get along with at school, his machine friends will be honest. They will offer companionship without tension and difficult moral choices. The prospect seems, as he puts it, “relaxing.”
This is the robotic moment, “relaxing” to a seventeen-year-old who has been befriended by young thugs. If Thomas accepts programs as confidants, it is because he has so degraded what he demands of conversation that he will accept what a game bot can offer: the performance of honesty and companionate interest.
And then there is the question of how much we value “information.” By the first decade of the 2000s, it was easy to find high school students who thought it would be better to talk to computer programs about the problems of high school dating than to talk to their parents. The programs, these students explained, would have larger databases to draw on than any parent could have. But giving advice about dating involves identifying with another person's feelings. So that conversation with your father about girls might also be an occasion to discuss empathy and ethical behavior. If your father's advice about dating doesn't work out, hopefully you'll still learn things from talking to him that will help things go better when you have your next crush.
Saying that you'll let a machine “take care” of a conversation about dating means that this larger conversation won't take place. It can't. And the more we talk about conversation as something machines can do, the more we can end up devaluing conversations with people, because they don't offer what machines provide.
I hear adults and adolescents talk about infallible “advice machines” that will work with masses of data and well-tested algorithms. When we treat people's lives as ready to be worked on by algorithm, when machine advice becomes the gold standard, we learn not to feel safe with fallible people.
When I hear young people talk about the advantages of turning to robots instead of their parents, I hear children whose parents have disappointed them. A disengaged parent leaves children less able to relate to others. And when parents retreat to their phones, they seem released from the anxieties that should come from ignoring their children. In this new world, adding a caretaker robot to the mix can start to seem like not that big a deal. It may even seem like a solution. Robots appeal to distracted parents because they are already disengaged. Robots appeal to lonely children because the robots will always be there.
The most important job of childhood and adolescence is to learn attachment to and trust in other people. That happens through human attention, presence, and conversation. When we think about putting children in the care of robots, we forget that what children really need to learn is that adults are there for them in a stable and consistent way.
The bonds of attachment and the expression of emotion are one for the child. When children talk with people, they come to recognize, over time, how vocal inflection, facial expression, and bodily movement flow together. Seamlessly. Fluidly. And they learn how human emotions play in layers, again seamlessly and fluidly.
Children need to learn what complex human feelings and human ambivalence look like. And they need other people to respond to their own expressions of that complexity. These are the most precious things that people give to children in conversation as they grow up. No robot has these things to teach.
These are the things that we forget when we think about children spending any significant amount of time talking with machines, looking into robotic faces, trusting in their care. Why would we play with fire when it comes to such delicate matters?
But we do. It's part of a general progression that I've called “from better than nothing to better than anything.” We begin with resignation, with the idea that machine companionship is better than nothing, as in “there are no people for these jobs.” From there, we exalt the possibilities of what simulation can offer until, in time, we start to talk as though what we will get from the artificial may actually be better than what life could ever provide. Child-care workers might be abusive. Nurses or well-meaning mothers might make mistakes. Children say that a robotic dog like the AIBO pet will never get sick, and can be turned off when you want to put your attention elsewhere. And, crucially, it will never die.
Grown-ups have similar feelings. A robot dog, says an older woman, “won't die suddenly, abandon you, and make you very sad.”
In our new culture of connection, we are lonely but afraid of intimacy. Fantasies of “conversation” with artificial beings solve a dilemma. They propose the illusion of companionship without the demands of friendship. They allow us to imagine a friction-free version of friendship. One whose demands are in our control, perhaps literally.
I've said that part of what makes our new technologies of connection so seductive is that they respond to our fantasies, our wishes, that we will always be heard, that we can put our attention wherever we want it to be, and that we will never have to be alone. And, of course, they respond to an implied fourth fantasy: that we will never have to be bored.
When people voice these fantasies, they are also describing, often without realizing it, a relationship with a robot. The robot would always be at attention, and it would be tolerant of wherever your attention might take you. It certainly wouldn't mind if you interrupted your conversation to answer a text or take a call. And it would never abandon you, although there is the question of whether it was ever really there in the first place. As for boredom, well, it would do its best to make boredom, for you, a thing of the past.
If, like Tara, we choose to share our frustrations with robot friends because we don't want to upset our human friends with who we really are and what we're really feeling, the meaning of human friendship will change. It may become the place you go for small talk. You'd be afraid that people would be tired out by big talk. This means that there won't be any more big talk because robots won't understand it.
Yet so many people talk to me about their hope that someday, not too far down the road, an advanced version of Siri will be like a best friend. One who will listen when others won't. I believe this wish reflects a painful truth I've learned in my years of research: The feeling that “no one is listening to me” plays a large part in our relationships with technology. That's why it is so appealing to have a Facebook page or a Twitter feed: so many automatic listeners. And that feeling that “no one is listening to me” makes us want to spend time with machines that seem to care about us. We are willing to take their performances of caring and conversation at “interface value.”