The Design of Future Things
Don Norman
The scientific community calls this approach “automagical”: automatic plus magical. The manufacturer wants us to believe in, and trust, the magic. Even when things work well, it is somewhat discomforting to have no idea of how or why. The real problems begin when things go wrong, for then we have no idea how to respond. We are in the horrors of the in-between world. On the one hand, we are far from the science fiction, movieland world populated by autonomous, intelligent robots that always work perfectly. On the other hand, we are moving rapidly away from the world of manual control, one with no automation, where people operate equipment and get the task done.
“We are just making your life easier,” the companies tell me, “healthier, safer, and more enjoyable. All those good things.” Yes, if the intelligent, automatic devices worked perfectly, we would be happy. If they really were completely reliable, we wouldn't have to know how they work: automagic would then be just fine. If we had manual control over a task with manual devices that we understood, we would be happy. When, however, we get stuck in the in-between world of automatic devices we don't understand and that don't work as expected, not doing the task we wish to have done, well, then our lives are not made easier, and certainly not more enjoyable.
The history of intelligent machines starts with early attempts to develop mechanical automatons, including clockworks and
chess-playing machines. The most successful early chess-playing automaton was Wolfgang von Kempelen's “Turk,” introduced with much fanfare and publicity to the royalty of Europe in 1769. In reality, it was a hoax, with an expert chess player cleverly concealed inside the mechanism, but the fact that the hoax succeeded so well indicates people's willingness to believe that mechanical devices could indeed be intelligent. The real growth in the development of smart machines didn't start until the mid 1900s with the development of control theory, servomechanisms and feedback, cybernetics, and information and automata theory. This occurred along with the rapid development of electronic circuits and computers, whose power has doubled roughly every two years. Because we've been doing this for more than forty years, today's circuits are one million times more powerful than those first, early “giant brains.” Think of what will happen in twenty years, when machines are a thousand times more powerful than they are today, or in forty years, when they will be a million times more powerful.
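(The arithmetic behind these projections is straightforward: doubling every two years means ten doublings in twenty years and twenty doublings in forty, and 2^10 is roughly 1,000 while 2^20 is roughly 1,000,000, hence the thousandfold and millionfold figures.)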
The first attempts to develop a science of artificial intelligence (AI) also began in the mid 1900s. AI researchers moved the development of intelligent devices from the world of cold, hard, mathematical logic and decision making into the world of soft, ill-defined, human-centered reasoning that uses commonsense reasoning, fuzzy logic, probabilities, qualitative reasoning, and heuristics (“rules of thumb”) rather than precise algorithms. As a result, today's AI systems can see and recognize objects, understand some spoken and written language, speak, move about the environment, and do complex reasoning.
Perhaps the most successful use of AI today for everyday activities is in computer games, developing intelligent characters
who play against people, creating those intelligent, exasperating personalities in simulation games that seem to enjoy doing things to frustrate their creator, the game player. AI is also used successfully to catch bank and credit card fraud and other suspicious activities. Automobiles use AI for braking, stability control, lane keeping, automatic parking, and other features. In the home, simple AI controls the washing machines and driers, sensing the type of clothing and how dirty the load is, adjusting things appropriately. In the microwave oven, AI can sense when food is cooked. Simple circuits in digital cameras and camcorders help control focus and exposure, including detecting faces, the better to track them even if they are moving and to adjust the exposure and focus to them appropriately. With time, the power and reliability of these AI circuits will increase, while their cost will decrease, so they will show up in a wide variety of devices, not just the most expensive ones. Remember, computer power increases a thousandfold every twenty years, a millionfold every forty.
Machine hardware is, of course, very different from that of animals. Machines are mostly made of parts with lots of straight lines, right angles, and arcs. There are motors and displays, control linkages and wires. Biology prefers flexibility: tissue, ligaments, and muscles. The brain works through massively parallel computational mechanisms, probably both chemical and electrical, and by settling into stable states. Machine brains, or, more accurately, machine information processing, operates much more quickly than biological neurons but with far less parallelism. Human brains are robust, reliable, and creative, marvelously adept at recognizing patterns. We humans tend
to be creative, imaginative, and very adaptable to changing circumstances. We find similarities among events, and we use metaphorical expansion of concepts to develop whole new realms of knowledge. Furthermore, human memory, although imprecise, finds relationships and similarities among items that machines would not think of as similar at all. And, finally, human common sense is fast and powerful, whereas machine common sense does not exist.
The evolution of technology is very different from the natural evolution of animals. With mechanical systems, the evolution is entirely up to the designer, who analyzes existing systems and makes modifications. Machines have evolved over the centuries, in part because our understanding of technology and our ability to invent and develop it have continually improved, in part because the sciences of the artificial have developed, and in part because human needs, and the environment itself, have changed.
There is, however, one interesting parallel between the evolution of humans and that of intelligent, autonomous machines. Both must function effectively, reliably, and safely in the real world. The world itself, therefore, imposes the same demands and requirements upon all creatures: animal, human, and artificial. Animals and people have evolved complex systems of perception and action, emotion and cognition. Machines need analogous systems. They need to perceive the world and act upon it. They need to think and make decisions, to solve problems and reason. And yes, they need something akin to the emotional processes of people. No, not the same emotions that people have but the machine equivalents, the better to survive the hazards and dangers of the world, take advantage of opportunities, anticipate
the consequences of their actions, and reflect upon what has happened and what is yet to come, thereby learning and improving performance. This is true for all autonomous, intelligent systems, animal, human, and machine.
FIGURE 2.1 Car+driver: a new hybrid organism. Rrrun, a sculpture by Marta Thoma. Photographed by the author from the Palo Alto, California, art collection at Bowden Park.
For years, researchers have shown that a three-level description of the brain is useful for many purposes, even if it is a radical simplification of its evolution, biology, and operation. These three-level descriptions have all built upon the early, pioneering description of the “triune” brain by Paul MacLean, where the
three levels move up from lower structures of the brain (the brainstem) to higher ones (the cortex and frontal cortex), tracing both the evolutionary history and the power and sophistication of brain processing. In my book Emotional Design, I further simplified that analysis for use by designers and engineers. Think of the brain as having three levels of processing:
• Visceral: The most basic level; processing here is automatic and subconscious, determined by our biological heritage.
• Behavioral: This is the home of learned skills, but still mostly subconscious. This processing level initiates and controls much of our behavior. One important contribution is to manage expectations of the results of our actions.
• Reflective: This is the conscious, self-aware part of the brain, the home of the self and one's self-image. It analyzes our past and constructs prospective fantasies that we hope, or fear, might happen.
Were we to build these emotional states into machines, they would provide the same benefits to machines that our own emotional states provide to us: rapid responses to avoid danger and accident, safety features for both the machines and any people who might be near, and powerful learning cues to improve expectations and enhance performance. Some of this is already happening. Elevators quickly jerk back their doors when they detect an obstacle (usually a recalcitrant human) in their path. Robotic vacuum cleaners avoid sharp drop-offs: fear of falling is built into their
circuitry. These are visceral responses: the automatic fear responses prewired into humans through biology and prewired into machines by their designers. The reflective level of emotions places credit or blame upon our experiences. Machines are not yet up to this level of processing, but some day they will be, which will add even more power to their ability to learn and to predict.
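To make the idea concrete, here is a minimal sketch of what such a visceral-level reflex might look like in software for a hypothetical robotic vacuum. The sensor names, thresholds, and action labels are illustrative assumptions for this example, not any manufacturer's actual design.

    # Illustrative sketch: a "visceral" reflex layer for a hypothetical robot vacuum.
    # Sensor names, thresholds, and actions are invented for the example.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SensorReadings:
        cliff_depth_cm: float    # how far the floor drops away just ahead
        bumper_pressed: bool     # True if the bumper has struck an obstacle

    def visceral_reflex(readings: SensorReadings) -> Optional[str]:
        # These checks run first and automatically, before any planned
        # (behavioral- or reflective-level) action, like prewired fear responses.
        if readings.cliff_depth_cm > 5.0:   # a drop-off ahead: "fear of falling"
            return "stop_and_back_away"
        if readings.bumper_pressed:         # collision: retreat, as elevator doors reopen
            return "reverse"
        return None                         # no danger sensed: defer to higher levels

    def choose_action(readings: SensorReadings, planned_action: str) -> str:
        # The reflex always wins; the planned action runs only when it is safe.
        return visceral_reflex(readings) or planned_action

    # Example: a sensed drop-off overrides the plan to keep moving forward.
    print(choose_action(SensorReadings(cliff_depth_cm=12.0, bumper_pressed=False),
                        planned_action="continue_forward"))   # stop_and_back_away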
The future of everyday things lies in products with knowledge, with intelligence, products that know where they are located and who their owners are and that can communicate with other products and the environment. The future of products is all about the capabilities of machines that are mobile, that can physically manipulate the environment, that are aware of both the other machines and the people around them and can communicate with them all.
By far the most exciting of our future technologies are those that enter into a symbiotic relationship with us: machine+person. Is the car+driver a symbiosis of human and machine in much the same way as the horse+rider might be? After all, the car+driver splits the processing levels, with the car taking over the visceral level and the driver the reflective level, both sharing the behavioral level in analogous fashion to the horse+rider.
Just as the horse is intelligent enough to take care of the visceral aspects of riding (avoiding dangerous terrain, adjusting its pace to the quality of the terrain, avoiding obstacles), so too is the modern automobile able to sense danger, controlling the car's stability, braking, and speed. Similarly, horses learn behaviorally complex routines for navigating difficult terrain or jumping obstacles, for changing canter when required and
maintaining appropriate distance and coordination with other horses or people. So, too, does the modern car behaviorally modify its speed, keep to its own lane, brake when it senses danger, and control other aspects of the driving experience.
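As a rough sketch of that behavioral level in software, consider how a simplified adaptive-cruise rule might scale speed with the gap to the car ahead. The distances, speeds, and function below are invented for illustration and are not taken from any real vehicle.

    # Illustrative sketch: a simplified "behavioral"-level speed rule, loosely in the
    # spirit of adaptive cruise control. All numbers are invented for the example.
    def desired_speed(set_speed_kmh: float, gap_m: float, safe_gap_m: float = 40.0,
                      min_gap_m: float = 5.0) -> float:
        if gap_m >= safe_gap_m:
            return set_speed_kmh              # road clear: hold the driver's chosen speed
        if gap_m <= min_gap_m:
            return 0.0                        # dangerously close: brake to a stop
        fraction = (gap_m - min_gap_m) / (safe_gap_m - min_gap_m)
        return set_speed_kmh * fraction       # scale speed with the remaining gap

    # Example: set speed 100 km/h, lead car 22.5 m ahead is halfway into the safe gap,
    # so the rule asks for 50 km/h.
    print(desired_speed(100.0, 22.5))         # 50.0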
FIGURE 2.2 Horse+rider and car+driver as symbiotic systems.
A horse+rider can be treated as a symbiotic system, with the horse providing visceral-level guidance and the rider the reflective level, with both overlapping at the behavioral level. So, too, can a car+driver be thought of as a symbiotic system, with the car increasingly taking over the visceral level, the driver the reflective level. And, once again, with a lot of overlap at the behavioral level. Note that in both cases, the horse or the intelligent car also tries to exert control at the reflective level.
Reflection is mostly left to the rider or driver, but not always, as when the horse decides to slow down or go home, or, not liking the interaction with its rider, decides to throw the rider off or just simply to ignore him or her. It is not difficult to imagine some future day when the car will decide which route to take and steer its way there or to pull off the road when it thinks it time to purchase gasoline or for its driver to eat a meal or take a break, or perhaps when it has been enticed to do so by messages sent to it by the roadway and commercial establishments along the path.