What Technology Wants
by Kevin Kelly

With few exceptions, technologies don't know what they want to be when they grow up. An invention requires many encounters with early adopters and collisions with other inventions to refine its role in the technium. Like people, young technologies often experience failure in their first careers before they find a better livelihood later. It's a rare technology that remains in its original role right from the start. More commonly, a new invention is peddled by its inventors for one expected (and lucrative!) use, which is quickly proven wrong, and then advertised for a series of alternative (and less lucrative) uses, few of which work, until reality steers the technology toward a marginal, unexpected use. Sometimes that marginal use blossoms into an exceptionally disruptive case that becomes the norm. When that kind of success happens, it obscures the earlier failures.
One year after Edison constructed the first phonograph, he was still trying to figure out what his invention might be used for. Edison knew more about this invention than anyone, but his speculations were all over the map. He thought his idea might birth dictation machines or audiobooks for the blind or talking clocks or music boxes or spelling lessons or recording devices for dying words or answering machines. In a list he drew up of possible uses for the phonograph, Edison added at the end, almost as an afterthought, the idea of playing recorded music.
Lasers were developed to industrial strength to shoot missiles down, but they are made in the billions primarily to read bar codes and movie DVDs. Transistors were created to replace vacuum tubes in room-sized computers, but most transistors manufactured today fill the tiny brains in cameras, phones, and communication equipment. Mobile phones began as . . . well, mobile phones. And for the first few decades that's what they were. But in its maturity, cell-phone technology is becoming a mobile computing platform for tablets, e-books, and video players. Switching occupations is the norm for technology.
The greater the number of ideas and technologies already in the world, the more possible combinations and secondary reactions there will be when we introduce a new one. Forecasting consequences in a technium where millions of new ideas are introduced each year becomes mathematically intractable.
We make prediction more difficult because our immediate tendency is to imagine the new thing doing an old job better. That's why the first cars were called "horseless carriages." The first movies were simply straightforward documentary films of theatrical plays. It took a while to realize the full dimensions of cinematography as its own new medium that could achieve new things, reveal new perspectives, do new jobs. We are stuck in the same blindness. We imagine e-books today as being regular books that appear on electronic paper instead of as radically powerful threads of text woven into the one shared universal library. We think genetic testing is like blood testing, something you do once in your life to get an unchanging score, when sequencing our genes may instead become something we do hourly as our genes mutate, shift, and interact with our environment.
The predictability of most new things is very low. The Chinese inventor of gunpowder most likely did not foresee the gun. William Sturgeon, inventor of the electromagnet, did not predict electric motors. Philo Farnsworth did not imagine the television culture that would burst forth from his cathode-ray tube. Advertisements at the beginning of the last century tried to sell hesitant consumers the newfangled telephone by stressing ways it could send messages, such as invitations, store orders, or confirmation of their safe arrival. The advertisers pitched the telephone as if it were a more convenient telegraph. None of them suggested having a conversation.
The automobile today, embedded in its matrix of superhighways, drive-through restaurants, seat belts, navigation tools, and digital hypermiling dashboards, is a different technology from the Ford Model T of 100 years ago. And most of those differences are due to secondary inventions rather than the enduring internal combustion engine. In the same way, aspirin today is not the aspirin of yesteryear. Put into the context of other drugs in the body, changes in our longevity and pill-popping habits (one per day!), cheapness, etc., it is a different technology from either the folk medicines derived from the essence of willow bark or the first synthesized version brought out by Bayer 100 years ago, even though they are all the same chemical, acetylsalicylic acid. Technologies shift as they thrive. They are remade as they are used. They unleash second- and third-order consequences as they disseminate. And almost always, they bring completely unpredicted effects as they near ubiquity.
On the other hand, most initial grand ideas for a technology fade into obscurity. An unfortunate few become an immense problem—a greatness wholly different from what their inventors intended. Thalidomide was a great idea for pregnant women but a horror for their unborn children. Internal combustion engines are great for mobility but awful for breathing. Freon kept things cold cheaply but took out the protective UV filter around the planet. In some cases this change in effect is a mere unintended side effect; in many cases it is a wholesale change of career.
If we examine technologies honestly, each one has its faults as well as its virtues. There are no technologies without vices and none that are neutral. The consequences of a technology expand with its disruptive nature. Powerful technologies will be powerful in both directions—for good and bad. There is no powerfully constructive technology that is not also powerfully destructive in another direction, just as there is no great idea that cannot be greatly perverted for great harm. After all, the most beautiful human mind is still capable of murderous ideas. Indeed, an invention or idea is not really tremendous unless it can be tremendously abused. This should be the first law of technological expectation: The greater the promise of a new technology, the greater its potential for harm as well. That's also true for beloved new technologies such as the internet search engine, hypertext, and the web. These immensely powerful inventions have unleashed a level of creativity not seen since the Renaissance, but when (not if) they are abused, their ability to track and anticipate individual behavior will be awful. If a new technology is likely to birth a never-before-seen benefit, it will also likely birth a never-before-seen problem.
The obvious remedy for this dilemma is to expect the worst. That's the result of a commonly used approach to new technologies called the Precautionary Principle.
The Precautionary Principle was first crafted at the 1992 Earth Summit as part of the Rio Declaration. In its original form it advised that a “lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” In other words, even if you can't prove scientifically that harm is happening, this uncertainty should not prevent you from stopping the suspected harm. This principle of precaution has undergone many revisions and variations in the years since and has become more prohibitive over time. A recent version states: “Activities that present an uncertain potential for significant harm should be prohibited unless the proponent of the activity shows that it presents no appreciable risk of harm.”
One version or another of the Precautionary Principle informs legislation in the European Union (it is included in the Maastricht Treaty) and appears in the United Nations Framework Convention on Climate Change. The U.S. Environmental Protection Agency (EPA) relies on the approach in establishing pollution control levels under the Clean Air Act. The principle is also written into parts of the municipal codes of green cities such as Portland, Oregon, and San Francisco. It is a favorite standard for bioethicists and critics of rapid technological adoption.
All versions of the Precautionary Principle hold this axiom in common: A technology must be shown to do no harm before it is embraced. It must be proven to be safe before it is disseminated. If it cannot be proven safe, it should be prohibited, curtailed, modified, junked, or ignored. In other words, the first response to a new idea should be inaction until its safety is established. When an innovation appears, we should pause. Only after a new technology has been deemed okay by the certainty of science should we try to live with it.
On the surface, this approach seems reasonable and prudent. Harm must be anticipated and preempted. Better safe than sorry. Unfortunately, the Precautionary Principle works better in theory than in practice. “The precautionary principle is very, very good for one thing—stopping technological progress,” says philosopher and consultant Max More. Cass R. Sunstein, who devoted a book to debunking the principle, says, “We must challenge the Precautionary Principle not because it leads in bad directions, but because read for all it is worth, it leads in no direction at all.”
Every good produces harm somewhere, so by the strict logic of an absolute Precautionary Principle no technologies would be permitted. Even a more liberal version would not permit new technologies in a timely manner. Whatever the theory, as a practical matter we cannot address every risk, however improbable, and efforts to address all improbable risks hinder more likely potential benefits.
For example, malaria infects 300 million to 500 million people worldwide, causing 2 million deaths per year. It is debilitating to those who don't die and leads to cyclic poverty. But in the 1950s the level of malaria was reduced by 70 percent by spraying the insecticide DDT around the insides of homes. DDT was so successful as an insecticide that farmers eagerly sprayed it by the ton on cotton fields—and the molecule's by-products made their way into the water cycle and eventually into fat cells in animals. Biologists blamed it for a drop in reproduction rates in some predatory birds, as well as local die-offs of some fish and other aquatic species. Its use and manufacture were banned in the United States in 1972. Other countries followed suit. Without DDT spraying, however, malaria cases in Asia and Africa began to rise again to deadly pre-1950s levels. Plans to reintroduce programs for household spraying in malarial Africa were blocked by the World Bank and other aid agencies, which refused to fund them. A treaty signed in 2001 by 91 countries and the EU agreed to phase out DDT altogether. They were relying on the Precautionary Principle: DDT was probably bad; better safe than sorry. In fact, DDT had never been shown to hurt humans, and the environmental harm from the minuscule amounts of DDT applied inside homes had not been measured. But nobody could prove it did not cause harm, despite its proven ability to do good.
When it comes to risk aversion, we are not rational. We select which risks we want to contend with. We may focus on the risks of flying but not driving. We may react to the small risks of dental X-rays but not to the large risk of undetected cavities. We might respond to the risks of vaccination but not the risks of an epidemic. We may obsess about the risks of pesticides but not the risks of organic foods.
Psychologists have learned a fair amount about risk. We now know that people will accept a thousand times as much risk for technologies or situations that are voluntary rather than mandatory. You don't have a choice about where your tap water comes from, so you are far less tolerant of risks to its safety than you are of risks from a cell phone you chose to use. We also know that acceptance of a technology's risk is proportional to its perceived benefits. More gain is worth more risk. And, finally, we know that the acceptability of risk is directly influenced by how easy it is to imagine both the worst case and the best benefits, and that this ease is shaped by education, advertising, rumor, and imagination. The risks the public considers most significant are those for which it is easy to imagine the worst-case scenario coming true. If it can plausibly lead to death, it's "significant."
In a letter Orville Wright wrote to his inventor friend Henry Ford, Wright recounts a story he heard from a missionary stationed in China. Wright told Ford the story for the same reason I tell it here: as a cautionary tale about speculative risks. The missionary wanted to improve the laborious way the Chinese peasants in his province harvested grain. The local farmers clipped the stalks with some kind of small hand shear. So the missionary had a scythe shipped in from America and demonstrated its superior productivity to an enthralled crowd. “The next morning, however, a delegation came to see the missionary. The scythe must be destroyed at once. What, they said, if it should fall into the hands of thieves; a whole field could be cut and carried away in a single night.” And so the scythe was banished, progress stopped, because nonusers could imagine a possible—but wholly improbable—way it could significantly harm their society. (Much of the hugely disruptive theater around “national security” today is based on similarly improbable scenarios of worst-case dangers.)
In its efforts to be “safe rather than sorry,” precaution becomes myopic. It tends to maximize only one value: safety. Safety trumps innovation. The safest thing to do is to perfect what works and never try anything that could fail, because failure is inherently unsafe. An innovative medical procedure will not be as safe as the proven standard. Innovation is not prudent. Yet because precaution privileges only safety, it not only diminishes other values but also actually reduces safety.
Big accidents in the technium usually don't start out as wings falling off or massive pipeline breaks. One of the largest shipping disasters in modern times began with a burning coffeepot in the crew kitchen. A regional electric grid can shut down not because a tower is toppled but because a gasket breaks in a minor pump. In cyberspace a rare, trivial bug on a web-page order form can take a whole site down. In each case the minor error triggers, or combines with, other unforeseen consequences in the system, also minor. But because of the tight interdependence of parts, minor glitches in the right improbable sequence cascade until the trouble becomes an unstoppable wave and reaches catastrophic proportions. Sociologist Charles Perrow calls these "normal accidents" because they "naturally" emerge from the dynamics of large systems. The system is to blame, not the operators. Perrow did an exhaustive minute-by-minute study of 50 large-scale technological accidents (such as Three Mile Island, the Bhopal disaster, Apollo 13, Exxon Valdez, Y2K, etc.) and concluded, "We have produced designs so complicated that we cannot anticipate all the possible interactions of the inevitable failures; we add safety devices that are deceived or avoided or defeated by hidden paths in the systems." In fact, Perrow concludes, safety devices and safety procedures themselves often create new accidents. Safety components can become one more opportunity for things to go wrong. For instance, adding security forces at an airport can increase the number of people with access to critical areas, which is a decrease in security. Redundant systems, normally a safety backup, can easily breed new types of errors.