Emotional Design


Author: Donald A. Norman

Byron Reeves and Clifford Nass have done numerous experiments that demonstrate—as the subtitle of their book puts it—“how people treat computers, television, and new media like real people and places.” B. J. Fogg shows how people think of “computers as social actors,” in his chapter of that title in his Persuasive Technology. Fogg proposes five primary social cues that people use to infer sociability with the person, or device, with whom, or which, they are interacting:
Physical: Face, eyes, body, movement
Psychological: Preferences, humor, personality, feelings, empathy, “I'm sorry”
Language: Interactive language use, spoken language, language recognition
Social Dynamics: Turn taking, cooperation, praise for good work, answering questions, reciprocity
Social Roles: Doctor, teammate, opponent, teacher, pet, guide
With the chair in figure 5.1, we succumb to the physical side. With computers, we often fall for the social dynamics (or, as is more often the case, the inept social dynamics). Basically, if something interacts with us, we interpret that interaction; the more responsive it is to us through its body actions, its language, its taking of turns, and its general responsiveness, the more we treat it like a social actor. This list applies to everything, human or animal, animate or inanimate.
Note that just as we infer the mental intentions of a chair without any real basis, we do the same for animals and other people. We don't have any more access to another person's mind than we do to the mind of an animal or chair. Our judgments of others are private interpretations based on observation and inference, not much different, really, than the evidence that makes us feel sorry for the poor chair. In fact, we don't have all that much information about the workings of our own minds. Only the reflective level is conscious: most of our motivations, beliefs, and feelings operate at the visceral and behavioral levels, below the level of awareness. The reflective level tries hard to make sense of the actions and behavior of the subconscious. But in fact, most of our behavior is subconscious and unknowable. Hence the need for others to aid us in times of trouble, for psychiatrists, psychologists, and analysts. Hence Sigmund Freud's historically impressive descriptions of the workings of id, ego, and superego.
So interpret we do, and over the many thousands or millions of years of evolution, we have coevolved muscle systems that display our emotions, and perceptual systems that interpret those of others. And with that interpretation also comes emotional judgment and empathy. We interpret, we emote. We can thereby believe that the object of our interpretations is sad or happy, angry or calm, sneaky or embarrassed. And, in turn, we ourselves can become emotional just by our interpretation of others. We cannot control those initial interpretations, for they come automatically, built in at the visceral level. We can control the final emotions through reflective analysis, but those initial impressions are subconscious and automatic. But, more important, it is this behavior that greases the wheels of social interaction, that makes it possible.
Designers take note. Humans are predisposed to anthropomorphize, to project human emotions and beliefs into anything. On the one hand, the anthropomorphic responses can bring great delight and pleasure to the user of a product. If everything works smoothly, fulfilling expectations, the affective system responds positively, bringing pleasure to the user. Similarly, if the design itself is elegant, beautiful, or perhaps playful and fun, once again the affective system reacts positively. In both cases, we attribute our pleasure to the product, so we praise it, and in extreme cases become emotionally attached to it. But when the behavior is frustrating, when the system appears to be recalcitrant, refusing to behave properly, the result is negative affect, anger, or worse, even rage. We blame the product. The principles for designing pleasurable, effective interaction between people and products are the very same ones that support pleasurable and effective interaction between individuals.
Blaming Inanimate Objects
It starts out with slight annoyance, then the hairs on your neck start to prickle and your hands begin to sweat. Soon you are banging your computer or yelling at the screen, and you might well end up belting the person sitting next to you.

—Newspaper article on “Computer Rage”
Many of us have experienced the computer rage described in the epigraph. Computers can indeed be infuriating. But why? And why do we get so angry at inanimate objects? The computer—or for that matter, any machine—doesn't intend to anger; machines have no intentions at all, at least not yet. We get angry because that's how our mind works. As far as we are concerned, we have done everything right, so the inappropriate behavior is therefore the fault of the computer. The “we” who faults the computer comes from the reflective level of our minds, the level that observes and passes judgment. Negative judgments lead to negative emotions, which can then inflame the judgments. The system for making judgments—cognition—is tightly coupled with the emotional system: each reinforces the other. The longer a problem lasts, the worse it becomes. Mild unhappiness is transformed into strong unhappiness. Unhappiness is transformed into anger, and anger into rage.
Note that when we get angry at our computer, we are assigning blame. Blame and its opposite, credit, are social judgments, assigning responsibility. This requires a more complex affective assessment than the dissatisfaction or pleasure one gets from a well- or ill-designed product. Blame or credit can come about only if we are treating the machine as if it were a causal agent, as if it made choices, in other words, as a human does.
How does this happen? Neither the visceral nor the behavioral level can determine causes. It is the role of reflection to understand, to interpret and find reasons, and to assign causes. Most of our rich, deepest emotions are ones where we have attributed a cause to an occurrence. These emotions originate from reflection. For example, two of the simpler emotions are hope and anxiety, hope resulting from expectation of a positive result, anxiety from expectation of something negative. If you are anxious, but the expected negative outcome doesn't happen, your emotion is one of relief. If you expect something positive, you are hopeful, and if it doesn't happen, then you feel disappointment.
So far, this is pretty simple, but suppose you—at your reflective level, to be more precise—decide that the result was someone's fault? Now we get into the complex emotions. Whose fault was it? When the result is negative and the blame put on yourself, you get remorse, self-anger, and shame. If you blame someone else, then you feel anger and reproach.
When the result is positive and the credit yours, you get pride and gratification. When the credit is someone else's, you get gratitude and admiration. Note how emotions reflect the interaction with others. Affect and emotion constitute a complex subject, involving all three levels, with the most complex emotions dependent upon just how the reflective level attributes causes. Reflection, therefore, is at the heart of the cognitive basis of emotions. The important point is that these emotions apply equally well to things as to people, and why not? Why distinguish between animate and inanimate things? You build up expectations of behavior based upon prior experience, and if the items with which you interact fail to live up to expectations, that is a violation of trust, for which you assign blame, which can soon lead to anger.
Cooperation relies on trust. For a team to work effectively each individual needs to be able to count on team members to behave as expected. Establishing trust is complex, but it involves, among other things, implicit and explicit promises, then clear attempts to deliver, and, moreover, evidence. When someone fails to deliver as expected, whether or not trust is violated depends upon the situation and upon where the blame falls.
Simple mechanical objects can be trusted, if only because their behavior is so simple that our expectations are apt to be accurate. Yes, a support or a knife blade may break unexpectedly, but that is about the largest transgression a simple object can commit. Complex mechanical devices can go wrong in many more ways, and many a person has fallen in love—or become outraged—over the transgressions of automobiles, shop equipment, or other complex machinery.
When it comes to a lack of trust, the worst offenders of all are today's electronic devices, especially the computer (although the cell phone is rapidly gaining ground). The problem here is that you don't know what to expect. The manufacturers promise all sorts of wonderful results; but, in fact, the technology and its operations are invisible, mysteriously hidden from view, and often completely arbitrary, secretive, and sometimes even contradictory. Without any way of understanding how they operate or what actions they are doing, you can feel out of control and frequently disappointed. Trust eventually gives way to rage.
I believe that those of us who become angry with today's technology are justified. It may be an automatic result of our affective and emotional systems. It may not be rational, but so what? It is appropriate. Is it the computer's fault, or is it the software that runs within it? Is it really the software's fault, or is it the programmers who neglected to understand our real needs? As users of the technology, we don't care. All we care about is that our lives are made more frustrating. It is “their fault,” “their” being everyone and everything involved in the computer's development. After all, these systems do not do a very good job of gathering trust. They lose files and they crash, oftentimes for no apparent reason. Moreover, they express no shame, no blame. They don't apologize or say they are sorry. Worse, they appear to blame us, the poor unwitting users. Who is “they”? Why does it matter? We are angered, and appropriately so.
Trust and Design
My 10-inch Wusthof chef knife. I could go on about the feel and aesthetic beauty, but upon further introspection I think my emotional attachment is substantially based on trust that comes from experience.
I know that my knife is up to whatever task I use it for. It is not going to slip out of my hand, the blade is not going to snap or break no matter how much pressure I apply; it is sharp enough to cut bones; it is not going to mutilate the meal I am about to serve to guests. I hate cooking in other people's kitchens and using their cutlery, even if it is good quality stuff.
This is a durable good, meaning I will only need to buy chef knives once or twice in a lifetime. I liked it OK when I purchased it, but my emotional attachment to it has developed over time through literally hundreds of consecutive positive experiences. This object is my friend.
The response above, one of many I received offering examples of products that people have learned to love or hate, vividly demonstrates the importance and power and properties of trust. Trust implies several qualities: reliance, confidence, and integrity. It means that one can count on a trusted system to perform precisely according to expectation. It implies integrity and, in a person, character. In artificial devices, trust means having it perform reliably, time after time after time. But there is more. In particular, we have high expectations of systems we trust: we expect them “to perform precisely according to expectation,” which, of course, implies that we have built up particular expectations. These expectations come from multiple sources: the advertisements and recommendations that led us to buy the item in the first place; the reliability with which it has been performing since we got it; and, perhaps most important of all, the conceptual model we have of the item.
Your conceptual model of a product or service—and the feedback that you receive—is essential in establishing and maintaining trust. As I discussed in chapter 3, a conceptual model is your understanding of what an item is and how it works. If you have a good, accurate conceptual model, especially if the item keeps you informed about what it is doing—what stage in the operations it has reached, and whether things are proceeding smoothly or not—then you are not surprised by the result.
Consider what happens when your car runs out of gasoline. Whose fault is it? It depends. Most people's conceptual model of a car includes a fuel gauge that says what percentage of the tank is filled with gasoline. Many people also expect a warning such as a flashing light when the tank is close to empty. Some people even rely upon their assumption that the gas gauge is conservative, indicating that the tank is emptier than it really is, giving some leeway.
Suppose that the gas gauge has been reading close to empty, the warning light has flashed, but you procrastinate, not wanting to take the time to refill. If you run out of gasoline, you will blame yourself. Not only will you not be upset at the car, you might even now trust it more than ever. After all, it indicated you were going to run out of fuel, and you did. What if the warning light never came on? In that case you would blame the car. What if the gas gauge fluctuated up and down, continually varying? Then you wouldn't know how to interpret it: you wouldn't trust it.
Do you trust the gas gauge of your car? Most people are wary at first. When they drive in a new car, they have to do some tests to discover how much to trust the gas gauge. The typical way is to drive the car to lower and lower fuel estimates before refilling. The true test, of course, would be to run out of fuel deliberately in order to see how that corresponded to the meter reading, but most people don't need that much reassurance. Rather, they do enough driving to determine how much to trust the indicator, whether it be the meter reading or the low-fuel warning light in some cars, or for those with trip computers, the miles of driving the computer predicts can be done with the remaining fuel. With sufficient experience, people learn how to interpret the readings and, thus, how much to trust the gauge. Trust has to be earned.
