Emotional Design

Author: Donald A. Norman
Although the automatic application of brakes in an automobile is a partial implementation of the second law, a full implementation would have the automobile examine the roadway ahead and decide for itself just how much speed, braking, or steering ought to be applied. Once that happens, we will indeed have a full first and second law implementation. Once again, this is starting to happen: some cars automatically slow down if they are too close to the car in front, even if the driver has not acted to slow the vehicle.
We don't yet have the case of conflicting orders, but soon we will have interacting robots, where the requests of one robot might conflict with the requests of the human supervisors. Then, determining precedence and priority will become important.
Once again, these are easy cases. Asimov had in mind situations where a car would refuse to drive: “I'm sorry, but the road conditions are too dangerous tonight.” We haven't yet reached that point—but we will. Asimov's second law will be useful.
Least important of all the laws, so Asimov thought, was self-preservation—“a robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law”—so it is numbered three, last in the series. Of course, given the limited capability of today's machines, where laws one and two seldom apply, this law is of most importance today, for we would be most annoyed if our expensive robot damaged or destroyed itself. As a result, this law is easy to find in action within many existing machines. Remember those sensors built into robot vacuum cleaners to prevent them from falling down stairs? And how they—and robot lawn mowers—have bump and obstacle detectors to avoid damage from collisions? In addition, many robots monitor their energy state and either go into “sleep” mode or return to a charging station when their energy level drops. Resolution of conflicts with the other laws is not well handled, except by the presence of human operators who can override safety parameters when circumstances warrant.
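The kind of self-preservation described above amounts to a simple priority check over sensor readings. Here is a minimal sketch of such a "law three" monitor; all of the names and thresholds (RobotState, SLEEP_THRESHOLD, and so on) are invented for illustration and are not taken from any real robot's API:

```python
# A sketch of "law three" as found in today's robots: monitor the battery
# and sensors, and retreat to safety when self-preservation demands it.
# A human override trumps the safety behaviors, as the text describes.
from dataclasses import dataclass

SLEEP_THRESHOLD = 0.10     # below 10% charge: enter "sleep" mode
RECHARGE_THRESHOLD = 0.25  # below 25%: head back to the charging station

@dataclass
class RobotState:
    battery: float        # 0.0 (empty) .. 1.0 (full)
    cliff_detected: bool  # stair-edge sensor, as in robot vacuums
    bump_detected: bool   # obstacle/bump sensor
    human_override: bool  # operator may suspend self-preservation

def next_action(state: RobotState) -> str:
    """Pick the self-preserving action; a human operator overrides it."""
    if state.human_override:
        return "obey_operator"      # people resolve conflicts with the other laws
    if state.cliff_detected:
        return "back_away"          # don't fall down the stairs
    if state.bump_detected:
        return "change_course"      # avoid collision damage
    if state.battery < SLEEP_THRESHOLD:
        return "sleep"
    if state.battery < RECHARGE_THRESHOLD:
        return "return_to_charger"
    return "continue_task"
```

Note how the human operator sits at the very top of the checks: the machine itself never resolves a conflict between self-preservation and the other laws; it simply defers.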
Asimov's Laws cannot be fully implemented until machines have a powerful and effective capability for reflection, including meta-knowledge (knowledge of their own knowledge) and self-awareness of their state, activities, and intentions. These raise deep issues of philosophy and science as well as complex implementation problems for engineers and programmers. Progress in this area is happening, but slowly.
Even with today's rather primitive devices, having some of these capabilities would be useful. Thus, in cases of conflict, there would be sensible overriding of commands. Automatic controls in airplanes would look ahead to determine the implications of the path they are following, and change course if it would lead to danger. Some planes have indeed flown into mountains while on automatic control, so this capability would have saved lives. In actuality, many automated systems are already beginning to do this kind of checking.
Even today's toy pet robots have some self-awareness. Consider a robot whose operation is controlled both by its “desire” to play with its human owner and by its need to conserve battery power. When low on energy, it will return to its charging station, even if the human wishes to continue playing with it.
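One simple way to realize this kind of conflict, sketched below under invented assumptions, is to give each competing motivation a numeric "drive" strength and let the strongest one win; the particular weighting functions here are purely illustrative:

```python
# Hypothetical sketch of the toy-robot conflict described above: a drive to
# play with the owner competes with a drive to conserve battery power, and
# whichever drive is currently stronger determines the behavior.

def play_drive(owner_present: bool) -> float:
    """Desire to play: high only when the owner is there to play with."""
    return 0.8 if owner_present else 0.0

def energy_drive(battery: float) -> float:
    """Urge to recharge grows as the battery drains (battery in 0..1)."""
    return 1.0 - battery

def choose_behavior(owner_present: bool, battery: float) -> str:
    drives = {
        "play": play_drive(owner_present),
        "return_to_charger": energy_drive(battery),
        "idle": 0.2,  # baseline: do nothing unless some drive beats it
    }
    # The robot "wants" whichever drive is currently strongest.
    return max(drives, key=drives.get)
```

With these numbers, a nearly full battery yields play when the owner is present; but once the battery drains far enough, the energy drive exceeds the play drive and the robot heads for its charger regardless of what the human wants, exactly as the text describes.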
The greatest hurdles to our ability to implement something akin to Asimov's Laws are his underlying assumptions of autonomous operation and central control mechanisms that may not apply in today's systems.
Asimov's robots worked as individuals. Give a robot a task to do, and off it would go. In the few cases where he had robots work as a group, one robot was always in charge. Moreover, he never had people and robots working together as a team. We are more likely to want cooperative robots, systems in which people and robots or teams of
robots work together, much as a group of human workers can work together at a task. Cooperative behavior requires a different set of assumptions than Asimov had. Thus, cooperative robots need rules that provide for full communication of intentions, current state, and progress.
Asimov's main failure, however, was his assumption that someone had to be in control. When he wrote his novels, it was common to assume that intelligence required a centralized coordinating and control mechanism with a hierarchical organizational structure beneath it. This is how human institutions—armies, governments, corporations—have been organized for thousands of years, so it was natural to assume that the same principle applied to all intelligent systems. But this is not the way of nature. Many natural systems, from the actions of ants and bees, to the flocking of birds, and even the growth of cities and the structure of the stock market, occur as a natural result of the interaction of multiple bodies, not through some central, coordinated control structure. Modern control theory has likewise moved away from the assumption of a central command post: distributed control is the hallmark of today's systems. Asimov assumed a central decision structure for each robot that decided how to act, guided by his laws. In fact, that is probably not how it will work: the laws will be part of the robot's architecture, distributed throughout the many modules of its mechanisms, and lawful behavior will emerge from the interactions of those modules. This is a modern concept, not understood while Asimov was writing, so it is no wonder he missed this development in our understanding of complex systems.
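A rough sketch of this distributed idea, in the spirit of Brooks-style subsumption architectures: no central decision maker encodes "the laws"; instead, independent modules each watch the world and propose actions, and lawful precedence emerges from how the modules suppress one another. All module names and percept keys here are invented for illustration:

```python
# Each module maps percepts to a proposed action, or None when it has
# nothing to say. Higher-priority modules suppress lower ones; the layer
# ordering, not any single module, encodes the precedence among the laws.
from typing import Callable, Optional

Module = Callable[[dict], Optional[str]]

def avoid_harm(percepts: dict) -> Optional[str]:
    # "First law" flavor: never proceed toward a detected person.
    return "stop" if percepts.get("person_ahead") else None

def obey_order(percepts: dict) -> Optional[str]:
    # "Second law" flavor: carry out a pending human command, if any.
    return percepts.get("command")

def preserve_self(percepts: dict) -> Optional[str]:
    # "Third law" flavor: recharge when energy runs low.
    return "return_to_charger" if percepts.get("battery", 1.0) < 0.2 else None

def wander(percepts: dict) -> Optional[str]:
    return "explore"  # default behavior when nothing else fires

LAYERS: list[Module] = [avoid_harm, obey_order, preserve_self, wander]

def act(percepts: dict) -> str:
    for module in LAYERS:
        action = module(percepts)
        if action is not None:
            return action
    return "idle"
```

Here a human command is obeyed unless the harm-avoidance layer fires first, and self-preservation yields to both, yet no module ever consults a central list of laws: the ordering of independent layers does the work.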
Still, Asimov was ahead of his time, thinking far ahead to the future. His stories were written in the 1940s and '50s, but in I, Robot, he quotes the three laws of robotics from the 2058 edition of the Handbook of Robotics; thus, he looked ahead more than 100 years. By 2058, we may indeed need his laws. Moreover, as the analyses indicate, the laws are indeed relevant, and many systems today follow them, even if inadvertently. The difficult aspects have to do with damage due to lack of action, as well as with properly assessing the relative importance of following orders versus damage or harm to oneself, others, or humanity.
As machines become more capable, as they take over more and more human activities, working autonomously, without direct supervision, they will get entangled in the legal system, which will try to determine fault when accidents arise. Before this happens, it would be useful to have some sort of ethical procedure in place. There already are some safety regulations that apply to robots, but they are very primitive. We will need more.
It is not too early to think about the future difficulties that intelligent and emotional machines may give rise to. There are numerous practical, moral, legal, and ethical issues to think about. Most are still far in the future, but that is a good reason to start now—so that when problems arrive, we will be ready.
The Future of Emotional Machines and Robots: Implications and Ethical Issues
The development of smart machines that will take over some tasks now done by people has important ethical and moral implications. This point becomes especially critical when we talk about humanoid robots that have emotions and to which people might form strong emotional attachments.
What is the role of emotional robots? How will they interact with us? Do we really want machines that are autonomous, self-directed, with a wide range of behavior, a powerful intelligence, and affect and emotion? I think we do, for they can provide many benefits. Obviously, as with all technologies, there are dangers as well. We need to ensure that people always maintain oversight and control, so that these machines serve human needs appropriately.
Will robot teachers replace human teachers? No, but they can complement them. Moreover, they could be sufficient in situations where there is no alternative—to enable learning while traveling, or in remote locations, or when one wishes to study a topic for which there is no easy access to teachers. Robot teachers will help make lifelong learning a practicality. They can make it possible to learn no matter where one is in the world, no matter the time of day. Learning should take place when it is needed, when the learner is interested, not according to some arbitrary, fixed school schedule.
Many are bothered by these possibilities, so much so that they reject them out of hand as unethical, immoral. Although I do not do so, I do sympathize with their concerns. However, I see the development of intelligent machines as both inevitable and beneficial. Where will there be benefits? In such areas as doing dangerous tasks, driving automobiles, piloting commercial vessels, in education, in medicine, and in taking over routine work. Where might there be moral and ethical concerns? Pretty much in the same list of activities. Let me explore the beneficial aspects in more detail.
Consider some of the benefits. Robots could be—and to some extent already are—used in dangerous tasks, where people's lives are at risk. This includes such things as search-and-rescue operations, exploration, and mining. What are the problems? The major ones are likely to come from the use of robots to enhance illegal or unethical activities: robbery, murder, and terrorism.
Will robot cars replace the need for human drivers? I hope so. Every year, tens of thousands of people are killed and hundreds of thousands seriously injured in motor vehicle accidents. Wouldn't it be nice if automobiles were as safe as commercial aviation? Here is where automated vehicles could be a wonderful saving. Moreover, automated vehicles could drive more closely to one another, helping to reduce traffic congestion, and they could drive more efficiently, helping to solve some of the energy issues associated with driving.
Driving an automobile is deceptively simple: most of the time it takes little skill. As a result, many are lulled into a false sense of security and self-confidence. But when danger arises, it does so rapidly, and then the distracted, the semiskilled, the untrained, and those temporarily impaired by drugs, alcohol, illness, fatigue, or sleep deprivation are often incapable of reacting properly in time. Even well-trained commercial drivers have accidents: automated vehicles will not eliminate all accidents and injuries, but they stand a good chance of dramatically reducing the present toll. Yes, some people truly enjoy the sport of driving, but they could be accommodated on special roads, recreational areas, and race tracks. Automation of everyday driving would lead to a loss of jobs for drivers of commercial vehicles, but with a saving of life overall.
Robot tutors have great potential for changing the way we teach. Today's model is far too often that of a pedant lecturing at the front of the classroom, forcing students to listen to material they have no interest in and that appears irrelevant to their daily lives. Lectures and textbooks are the easiest way to teach from the point of view of the teacher, but the least effective for the learner. The most powerful learning takes place when well-motivated students get excited by a topic and then struggle with the concepts, learning how to apply them to issues they care about. Yes, struggle: learning is an active, dynamic process, and struggle is a part of it. But when students care about something, the struggle is enjoyable. This is how great teaching has always taken place—not through lecturing, but through apprenticeship, coaching, and mentoring. This is how athletes learn. This is the essence of the attraction of video games, except that in games, what students learn is of little practical value. These methods are well known in the learning sciences, where they go by names such as problem-based learning, inquiry learning, and constructivism.
Here is where emotion plays its part. Students learn best when motivated, when they care. They need to be emotionally involved, to be drawn to the excitement of the topic. This is why examples, diagrams, illustrations, videos, and animations are so powerful. Learning need not be a dull and dreary exercise, not even learning about what are normally considered dull and dreary topics: every topic can be made exciting, for every topic excites the emotions of someone, so why not excite everyone? It is time for lessons to come alive, for history to be seen as a human struggle, for students to understand and appreciate the structure of art, music, science, and mathematics. How can these topics be made exciting? By making them relevant to the lives of each individual student, often most effectively by having students put their skills to immediate use. Developing exciting, emotionally engaging, and intellectually effective learning experiences is truly a design challenge worthy of the best talent in the world.
Robots, machines, or computers can be of great assistance in instruction by providing the framework for motivated, problem-based learning. Computer learning systems can provide simulated worlds in which students can explore problems in science, literature, history, or the arts. Robot teachers can make it easy to search the world's libraries and knowledge bases. Human teachers will no longer have to lecture, but instead can spend their time as coaches and mentors, helping to teach not only the topic, but also how best to learn, so that the students will maintain their curiosity through life, as well as the ability to teach themselves when necessary. Human teachers are still essential, but they can play a different, much more supportive and constructive role than they do today.
Moreover, although I believe strongly that we could develop efficient robot tutors, perhaps as effective as Stephenson's The Young Lady's Illustrated Primer (see page 171), we would not have to abandon human teachers: automated tutors—whether books, machines, or robots—should act as supplements to human instruction. Even Stephenson writes in his novel that his star pupil knew nothing of the real world and of real people because she had spent far too much time locked up in the fantasy world of the Primer.
Robots in medicine? Yes, they could be used in all its aspects. In medicine, however, as in many other activities, I foresee this as a partnership, where well-trained human medical personnel work with specialized robotic assistants to increase the quality and reliability of care.
