The Design of Future Things

Because sound can be both informative and annoying, it poses a difficult design problem: how to enhance its value while minimizing its annoyance. In some cases, this can be done by softening distasteful sounds, lowering their intensity, minimizing the use of rapid transients, and trying to create a pleasant ambience. Subtle variations in this background ambience can then yield effective communication.
One designer, Richard Sapper, created a kettle whose whistle produced a pleasant musical chord: the musical notes E and B. Note that even annoyance has its virtues: emergency signals, such as those of ambulances, fire trucks, and alarms for fire, smoke, or other potential disasters, are deliberately loud and annoying, the better to attract attention.

Sound should still be used where it appears to be a natural outgrowth of the interaction, but arbitrary, meaningless sounds are almost always annoying. Because sound, even when cleverly used, can be so irritating, in many cases its use should be avoided. Sound is not the only alternative: sight and touch provide alternative modalities.

Mechanical knobs can contain tactile cues, a kind of implicit communication, for their preferred settings. For example, in some rotating tone controls you can feel a little “blip” as you turn them past the preferred, neutral position. The controls in some showers will not go above a preset temperature unless the user manipulates a button that enables higher temperatures. The “blip” in the tone control lets someone set it to the neutral position rapidly and efficiently. The stop in the shower serves as a warning that higher temperatures might be uncomfortable, or even dangerous, for some people. Some commercial airplanes use a similar stop on their throttles: when the throttles are pushed forward, they stop at the point where higher settings might damage the engines. In an emergency, however, if the pilot believes it is necessary to go beyond that point to avoid a crash, the throttle can be forced past the first stop. In such a case, damage to the engine is clearly of secondary importance.
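The logic of the throttle's two-stage stop can be captured in a few lines: a soft limit that resists, plus a deliberate override for emergencies. Here is a minimal sketch in Python; the function name and the threshold values are hypothetical, since the book describes the behavior rather than an implementation:

```python
def clamp_throttle(requested: float, override: bool = False) -> float:
    """Hold a throttle request at a soft limit unless the pilot
    deliberately overrides it (hypothetical thresholds)."""
    SOFT_LIMIT = 0.85   # beyond this, engine damage becomes possible
    HARD_LIMIT = 1.00   # the physical maximum

    if requested <= SOFT_LIMIT:
        return requested
    if not override:
        # The soft stop: the request is held at the safe limit,
        # communicating the risk through felt resistance.
        return SOFT_LIMIT
    # In an emergency the pilot can push past the stop, accepting
    # possible engine damage as the lesser harm.
    return min(requested, HARD_LIMIT)
```

The point of the design is that the same control communicates two messages: the detent says "danger beyond here," while the ability to push through says "but you remain in charge."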

Physical marks provide another possible direction. When we read paper books and magazines, we leave marks of our progress, whether through normal wear and tear or by deliberately folding pages, inserting sticky notes, highlighting, underlining, and writing margin notes. In electronic documents, none of these cues need be lost. After all, the computer knows which pages have been displayed, which sections have been scrolled to, and what has been read. Why not put wear marks on the software, letting the reader discover which sections have been edited, commented upon, or read the most? The research team of Will Hill, Jim Hollan, Dave Wroblewski, and Tim McCandless has done just that, adding marks to electronic documents so that viewers can see which sections have been looked at the most. Dirt and wear have their virtues as natural indicators of use, relevance, and importance. Electronic documents can borrow these virtues without the deficits of dirt, squalor, and damage to the material. Implicit interaction is an interesting way to develop intelligent systems. No language, no forcing: simple clues in both directions indicate recommended courses of action.
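The "read wear" idea can be sketched directly: record how often each section is viewed, then render that history as a visible mark. The following Python illustration is a hypothetical sketch, not the actual Hill, Hollan, Wroblewski, and McCandless system; the class and method names are invented:

```python
from collections import Counter

class ReadWear:
    """Accumulate view counts per section and render them as
    simple wear marks (a hypothetical sketch of the idea)."""

    def __init__(self) -> None:
        self.views = Counter()

    def record_view(self, section: str) -> None:
        # Each viewing leaves a trace, like thumbing a page.
        self.views[section] += 1

    def wear_marks(self, max_width: int = 10) -> dict:
        # Scale each section's count to a bar of marks, so the
        # most-read sections visibly stand out in the margin.
        heaviest = max(self.views.values(), default=1)
        return {s: "|" * max(1, round(max_width * n / heaviest))
                for s, n in self.views.items()}
```

As with dirt on a paper page, no one has to say anything: the marks accumulate as a side effect of ordinary use, and any reader can interpret them at a glance.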

Implicit communication can be a powerful tool for informing without annoying. Another important direction is to exploit the power of affordances, the subject of the next section.

 

Affordances as Communication

It started off with an e-mail: Clarisse de Souza, a professor of informatics in Rio de Janeiro, wrote to disagree with my definition of “affordance.” “Affordance,” she told me, “is really communication between the designer and the user of a product.” “No,” I wrote back. “An affordance is simply a relationship that exists in the world: it is simply there. It has nothing to do with communication.”

I was wrong. She was not only right, but she got me to spend a delightful week in Brazil, convincing me, then went on to expand upon her idea in an important book, Semiotic Engineering. I ended up a believer: “Once designs are thought of as shared communication and technologies as media, the entire design philosophy changes radically, but in a positive and constructive way,” is what I wrote about the book for its back cover.

To understand this discussion, let me back up a bit and explain the original concept of an affordance and how it became part of the vocabulary of design. Let me start with a simple question: how do we function in the world? As I was writing The Design of Everyday Things, I pondered this question: when we encounter something new, most of the time we use it just fine, not even noticing that it is a unique experience. How do we do this? We encounter tens of thousands of different objects throughout our lives, yet in most cases we know just what to do with them, without instruction, without any hesitation. When faced with a need, we are often capable of devising quite novel solutions, sometimes called “hacks”: folded paper under a table leg to stabilize the table, newspapers pasted over a window to block the sun. Years ago, as I pondered this question, I realized that the answer had to do with a form of implicit communication, a form of communication that today we call “affordances.”

The term affordance was invented by the great perceptual psychologist J. J. Gibson to explain our perceptions of the world. Gibson defined affordances as the range of activities that an animal or person can perform upon an object in the world. Thus, a chair affords sitting and support for an adult human, but not for a young child, an ant, or an elephant. Affordances are not fixed properties: they are relationships that hold between objects and agents. Moreover, to Gibson, affordances existed whether they were obvious or not, visible or not, or even whether anyone had ever discovered them. Whether or not you knew about an affordance was irrelevant.

I took Gibson's term and showed how it can be applied to the practical problems of design. Although Gibson didn't think they needed to be visible, to me, the critical thing was their visibility. If you didn't know that an affordance existed, I argued, then it was worthless, at least in the moment. In other words, the ability of a person to discover and make use of affordances is one of the important ways that people function so well, even in novel situations when encountering novel objects.

Providing effective, perceivable affordances is important in the design of today's things, whether they be coffee cups, toasters, or websites, but these attributes are even more important for the design of future things. When devices are automatic, autonomous, and intelligent, we need perceivable affordances to show us how we might interact with them and, equally importantly, how they might interact with the world. We need affordances that communicate: hence the importance of de Souza's discussion with me and of her semiotic approach to affordances.

The power of visual, perceivable affordances is that they guide behavior, and in the best of cases, they do so without the person's awareness of the guidance—it just feels natural. This is how we can interact so well with most of the objects around us. They are passive and responsive: they sit there quietly, awaiting our activity. In the case of appliances, such as a television set, we push a button, and the channel changes. We walk, turn, push, press, lift, and pull, and something happens. In all these cases, the design challenge is to let us know beforehand what range of operations is possible, which operation we need to perform, and how to go about doing it. While carrying out the action, we want to know how it is progressing. Afterward, we want to know what change took place.

This pretty much describes all the designed objects with which we interact today, from household appliances to office tools, from computers to older automobiles, from websites and computer applications to complex mechanical devices. The design challenges are large and not always met successfully, hence our frustrations with so many everyday objects.

Communicating with Autonomous, Intelligent Devices

The objects of the future will pose problems that cannot be solved simply by making the affordances visible. Autonomous, intelligent machines pose particular challenges, in part because the communication has to go both ways, from person to machine and from machine to person. How do we communicate back and forth with these machines? To answer this question, let's look at the wide range of machine+person coupling—an automobile, bicycle, or even a horse—and examine how that machine+person entity communicates with another machine+person entity.

In chapter 1, I described my discovery that my description of the symbiotic coupling of horse and rider was a topic of active research by scientists at the National Aeronautics and Space Administration's (NASA) Langley Research Center in Virginia and the Institut für Verkehrsführung und Fahrzeugsteuerung in Braunschweig, Germany. Their goal, like mine, is to enhance human-machine interaction.

When I visited Braunschweig to learn about their research, I also learned more about how to ride a horse. A critically important aspect both of horseback riding and of a driver's controlling a horse and carriage, Frank Flemisch, the director of the German group, explained to me, is the distinction between “loose-rein” and “tight-rein” control. Under tight reins, the rider controls the horse directly, the tightness itself communicating this intention to the horse. In loose-rein riding, the horse has more autonomy, allowing the rider to perform other activities or even to sleep. Loose and tight are the extremes of a continuum of control, with various intermediate stages. Moreover, even under tight-rein control, where the rider is in charge, the horse can balk or otherwise resist the commands. Similarly, under loose-rein control, the person can still provide some oversight through the reins, verbal commands, pressure from the thighs and legs, and heel kicks.
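The loose-to-tight continuum can be modeled as a single authority parameter that blends human and machine commands. The sketch below is an illustrative assumption, not the German group's actual controller; the linear weighting and the function name are invented for the example:

```python
def blend_command(human_cmd: float, machine_cmd: float, rein: float) -> float:
    """Blend human and machine control commands.

    rein = 1.0 means tight rein (the human is fully in control);
    rein = 0.0 means loose rein (the machine is fully in control);
    intermediate values share authority, as on a real rein.
    The linear weighting here is a simplifying assumption.
    """
    if not 0.0 <= rein <= 1.0:
        raise ValueError("rein must lie in [0, 1]")
    # A weighted average: tightening the rein shifts authority
    # smoothly from machine to human, with no abrupt handoff.
    return rein * human_cmd + (1.0 - rein) * machine_cmd
```

The design point is that authority is continuous rather than binary: either party can resist or assist at any setting, just as the horse can balk under tight reins and the rider can nudge under loose ones.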

An even closer analog of the interaction between horse and driver is that of a wagon or carriage, as in Figure 3.2. Here, the driver is not as tightly coupled to the horse as a rider who sits on its back, so the relationship is closer to that between the average, nonprofessional driver and a modern automobile. The coupling between horse and wagon driver, or between driver and automobile, is restricted. Even here, though, the continuum between “tight-rein” and “loose-rein” control still applies. Note how the degree of animal autonomy, or of human control, is communicated by exploiting the implicit communication made possible through the affordances of the reins. Combining implicit communication with affordances is a powerful, very natural concept. This aspect of working with a horse is the critical component that can be borrowed for the design of machine+human systems: designing the system so that the amount of independence and interaction can vary in a natural manner, capitalizing upon the affordances of the controller and the communicative capabilities it provides.

FIGURE 3.2

Loose-rein guidance of a horse and carriage.
With an intelligent horse providing the power and guidance, the driver can relax and not even pay much attention to the driving. This is loose-rein control, where the horse has taken over.

Photograph by the author in Brugge, Belgium.

When I drove the automobile simulator at Braunschweig, the difference between “loose-” and “tight-rein” control was apparent. Under tight-rein conditions, I did most of the work, determining the force on the accelerator, brake, and steering wheel, but the car nudged me this way or that, trying to keep me on a steady course within the highway's lane boundaries. If I got too close to the car ahead of me, the steering wheel pushed back, indicating that I should back off. Similarly, if I lagged behind too much, the steering wheel moved forward, urging me to speed up a bit. Under loose-rein conditions, the car was more aggressive in its actions, so much so that I hardly had to do anything at all. I had the impression that I could close my eyes and simply let the car guide me through the driving. Unfortunately, during the limited time available for my visit, I wasn't able to try everything I now realize I should have. The one thing missing from the demonstration was a way for the driver to select how much control to give to the system. This transition in the amount of control is important: when an emergency arises, control may have to be transferred very rapidly, without diverting attention from the situation that must be dealt with.

The horse+rider conceptualization provides a powerful metaphor for the development of machine+human interfaces, but the metaphor alone is not enough. We need to learn more about these interfaces, and it is reassuring to see that research has already begun, with scientists studying how a person's intentions might best be communicated to the system, and vice versa.
