
Consider the case of cesarean sections. Close to one out of every four Americans is now born by C-section. Through modern technology, American doctors can deliver babies who would have died otherwise. As Dr. Laurence Horowitz notes in Taking Charge of Your Medical Fate, “… the proper goal of C-sections is to improve the chances of babies at risk, and that goal has been achieved.”[10] But C-sections are a surgical procedure, and when they are done routinely as an elective option, there is considerable and unnecessary danger; the chances of a woman’s dying during a C-section delivery are two to four times greater than during a normal vaginal delivery. In other words, C-sections can and do save the lives of babies at risk, but when they are done for other reasons—for example, for the convenience of doctor or mother—they pose an unnecessary threat to health, and even life.

To take another example: a surgical procedure known as carotid endarterectomy is used to clean out clogged arteries, thus reducing the likelihood of stroke. In 1987, more than one hundred thousand Americans had this operation. It is now established that the risks involved in such surgery outweigh the risks of suffering a stroke. Horowitz again: “In other words, for certain categories of patients, the operation may actually kill more people than it saves.”[11] To take still another example: about seventy-eight thousand people every year get cancer from medical and dental X-rays. In a single generation, it is estimated, radiation will induce 2.34 million cancers.[12]
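The second figure follows from the first if a generation is taken to be thirty years, an assumption the text leaves unstated:

    78,000 cancers per year × 30 years = 2,340,000 cancers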

Examples of this kind can be given with appalling ease. But in the interests of fairness the question about the value of technology in medicine is better phrased in the following way: Would American medicine be better were it not so totally reliant on the technological imperative? Here the answer is clearly, yes. We know, for example, from a Harvard Medical School study which focused on the year 1984 (no Orwellian reference intended), that in New York State alone there were thirty-six thousand cases of medical negligence, including seven thousand deaths related in some way to negligence. Although the study does not give figures on what kinds of negligence were found, the example is provided of doctors prescribing penicillin without asking the patients whether they were hypersensitive to the drug. We can assume that many of the deaths resulted not only from careless prescriptions and the doctors’ ignorance of their patients’ histories but also from unnecessary surgery. In other words, iatrogenics (treatment-induced illness) is now a major concern for the profession, and an even greater concern for the patient. Doctors themselves feel restricted and dominated by the requirement to use all available technology.
And patients may be justifiably worried by reports that quite possibly close to 40 percent of the operations performed in America are not necessary. In Health Shock, Martin Weitz cites the calculations of Professor John McKinlay that more deaths are caused by surgery each year in the United States than the annual number of deaths during the wars in Korea and Vietnam. As early as 1974, a Senate investigation into unnecessary surgery reported that American doctors had performed 2.4 million unnecessary operations, causing 11,900 deaths and costing about $3.9 billion.[13]
We also know that, in spite of advanced technology (quite possibly because of it), the infant-survival rate in the United States ranks only fourteenth in the world, and it is no exaggeration to say that American hospitals are commonly regarded as among the most dangerous places in the nation. It is also well documented that, wherever doctor strikes have occurred, the mortality rate declines.

There are, one may be sure, very few doctors who are satisfied with technology’s stranglehold on medical practice. And there are far too many patients who have been its serious victims. What conclusions may we draw? First, technology is not a neutral element in the practice of medicine: doctors do not merely use technologies but are used by them. Second, technology creates its own imperatives and, at the same time, creates a wide-ranging social system to reinforce its imperatives. And third, technology changes the practice of medicine by redefining what doctors are, redirecting where they focus their attention, and reconceptualizing how they view their patients and illness.

Like some well-known diseases, the problems that have arisen as a result of the reign of technology came slowly and were barely perceptible at the start. As technology grew, so did the influence of drug companies and the manufacturers of medical instruments. As the training of doctors changed, so did the expectations of patients. As surgical procedures multiplied, so did the diagnoses which made them seem necessary. Through it all, the question of what was being undone had a low priority if it was asked at all. The Zeitgeist of the age placed such a question in a range somewhere between peevishness and irrelevance. In a growing Technopoly, there is no time or inclination to speak of technological debits.

7
The Ideology of Machines:
Computer Technology

That American Technopoly has now embraced the computer in the same hurried and mindless way it embraced medical technology is undeniable, was perhaps inevitable, and is certainly most unfortunate. This is not to say that the computer is a blight on the symbolic landscape; only that, like medical technology, it has usurped powers and enforced mind-sets that a fully attentive culture might have wished to deny it. Thus, an examination of the ideas embedded in computer technology is worth attempting. Others, of course, have done this, especially Joseph Weizenbaum in his great and indispensable book Computer Power and Human Reason. Weizenbaum, however, ran into some difficulties, as everyone else has, because of the “universality” of computers, meaning (a) that their uses are infinitely various, and (b) that computers are commonly integrated into the structure of other machines. It is, therefore, hard to isolate specific ideas promoted by computer technology. The computer, for example, is quite unlike the stethoscope, which has a limited function in a limited context. Except for safecrackers, who, I am told, use stethoscopes to hear the tumblers of locks click into place, stethoscopes are used only by doctors. But everyone uses or is used by computers, and for purposes that seem to know no boundaries.

Putting aside such well-known functions as electronic filing, spreadsheets, and word-processing, one can make a fascinating list of the innovative, even bizarre, uses of computers. I have before me a report from The New York Times that tells us how computers are enabling aquatic designers to create giant water slides that mimic roller coasters and eight-foot-high artificial waves.[1] In my modest collection, I have another article about the uses of personal computers for making presentations at corporate board meetings.[2] Another tells of how computer graphics help jurors to remember testimony better. Gregory Mazares, president of the graphics unit of Litigation Sciences, is quoted as saying, “We’re a switched-on, tuned-in, visually oriented society, and jurors tend to believe what they see. This technology keeps the jury’s attention by simplifying the material and by giving them little bursts of information.”[3] While Mr. Mazares is helping switched-on people to remember things, Morton David, chief executive officer of Franklin Computer, is helping them find any word in the Bible with lightning speed by producing electronic Bibles. (The word “lightning,” by the way, appears forty-two times in the New International Version and eight times in the King James Version. Were you so inclined, you could discover this for yourself in a matter of seconds.) This fact so dominates Mr. David’s imagination that he is quoted as saying, “Our technology may have made a change as momentous as the Gutenberg invention of movable type.”[4] And then there is an article that reports a computer’s use to make investment decisions, which helps you, among other things, to create “what-if” scenarios, although with how much accuracy we are not told.[5] In Technology Review, we find a description of how computers are used to help the police locate the addresses of callers in distress; a prophecy is made that in time police officers will have so much instantly available information about any caller that they will know how seriously to regard the caller’s appeal for help.
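The concordance search that so impresses Mr. David is, computationally, a simple word count. Here is a minimal sketch in Python, assuming the text is available as a plain file; the file name and the tokenization rule are illustrative, not Franklin Computer’s actual method:

    import re
    from collections import Counter

    def word_frequencies(path):
        """Count how often each word occurs in a plain-text file."""
        with open(path, encoding="utf-8") as f:
            text = f.read().lower()
        # Treat any run of letters, with internal apostrophes, as a word.
        words = re.findall(r"[a-z]+(?:'[a-z]+)*", text)
        return Counter(words)

    # Hypothetical usage; "kjv.txt" is assumed to hold the King James text.
    counts = word_frequencies("kjv.txt")
    print(counts["lightning"])  # the passage above reports 8 for the King James Version

Nothing more than this is needed to find any word “with lightning speed.”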

One may well wonder if Charles Babbage had any of this in mind when he announced in 1822 (only six years after the appearance of Laënnec’s stethoscope) that he had invented a machine capable of performing simple arithmetical calculations. Perhaps he did, for he never finished his invention and started work on a more ambitious machine, capable of doing more complex tasks. He abandoned that as well, and in 1833 put aside his calculator project completely in favor of a programmable machine that became the forerunner of the modern computer. His first such machine, which he characteristically never finished, was to be controlled by punch cards adapted from devices French weavers used to control thread sequences in their looms.

Babbage kept improving his programmable machine over the next thirty-seven years, each design being more complex than the last.[6] At some point, he realized that the mechanization of numerical operations gave him the means to manipulate non-numerical symbols. It is not farfetched to say that Babbage’s insight was comparable to the discovery by the Greeks in the third century B.C. of the principle of alphabetization—that is, the realization that the symbols of the alphabet could be separated from their phonetic function and used as a system for the classification, storage, and retrieval of information. In any case, armed with his insight, Babbage was able to speculate about the possibility of designing “intelligent” information machinery, though the mechanical technology of his time was inadequate to allow the fulfillment of his ideas. The computer as we know it today had to await a variety of further discoveries and inventions, including the telegraph, the telephone, and the application of Boolean algebra to relay-based circuitry, resulting in Claude Shannon’s creation of digital logic circuitry. Today, when the word “computer” is used without a modifier before it, it normally refers to some version of the machine invented by John von Neumann in the 1940s. Before that, the word “computer” referred to a person (similarly to the early use of the word “typewriter”) who performed some kind of mechanical calculation. As calculation shifted from people to machines, so did the word, especially because of the power of von Neumann’s machine.
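Shannon’s insight, that Boolean algebra could describe the behavior of relay circuits, is easy to illustrate. Here is a minimal sketch in Python rather than in relays: a half adder, which adds two binary digits using nothing but the logical operations XOR and AND:

    def half_adder(a, b):
        """Add two one-bit numbers using only Boolean operations."""
        total = a ^ b   # XOR produces the sum bit
        carry = a & b   # AND produces the carry bit
        return total, carry

    print(half_adder(1, 1))  # (0, 1): one plus one is 10 in binary

Chains of such gates, whether built from relays, vacuum tubes, or transistors, supply all the arithmetic a digital computer has.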

Certainly, after the invention of the digital computer, it was abundantly clear that the computer was capable of performing functions that could in some sense be called “intelligent.” In 1936, the great English mathematician Alan Turing showed that it was possible to build a machine that would, for many practical purposes, behave like a problem-solving human being. Turing claimed that he would call a machine “intelligent” if, through typed messages, it could exchange thoughts with a human being—that is, hold up its end of a conversation. In the early days of MIT’s Artificial Intelligence Laboratory, Joseph Weizenbaum wrote a program called ELIZA, which showed how easy it was to meet Turing’s test for intelligence. When asked a question with a proper noun in it, ELIZA’s program could respond with “Why are you interested in,” followed by the proper noun and a question mark. That is, it could invert statements and seek more information about one of the nouns in the statement. Thus, ELIZA acted much like a Rogerian psychologist, or at least a friendly and inexpensive therapist. Some people who used ELIZA refused to believe that they were conversing with a mere machine. Having, in effect, built a machine that could pass Turing’s test, Weizenbaum eventually pulled the program off the computer network and was stimulated to write Computer Power and Human Reason, in which, among other things, he raised questions about the research programs of those working in artificial intelligence; the assumption that whatever a computer can do, it should do; and the effects of computer technology on the way people construe the world—that is, the ideology of the computer, to which I now turn.
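The reflection rule described above is nearly as simple to state in code as in prose. Here is a minimal sketch in Python of that single rule; the real ELIZA used a much larger script of ranked keyword patterns, so this is an illustration, not Weizenbaum’s program:

    def eliza_reply(statement):
        """Reflect a proper noun back at the speaker as a question."""
        words = statement.rstrip(".?!").split()
        # Skip the first word, which is capitalized anyway, and bare "I".
        for word in words[1:]:
            if len(word) > 1 and word[0].isupper() and word[1:].islower():
                return "Why are you interested in " + word + "?"
        return "Please tell me more."

    print(eliza_reply("My mother is afraid of Harold."))
    # Why are you interested in Harold?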

The most comprehensive idea conveyed by the computer is suggested by the title of J. David Bolter’s book, Turing’s Man. His title is a metaphor, of course, similar to what would be suggested by saying that from the sixteenth century until recently we were “Gutenberg’s Men.” Although Bolter’s main practical interest in the computer is in its function as a new kind of book, he argues that it is the dominant metaphor of our age; it defines our age by suggesting a new relationship to information, to work, to power, and to nature itself. That relationship can best be described by saying that the computer redefines humans as “information processors” and nature itself as information to be processed. The fundamental metaphorical message of the computer, in short, is that we are machines—thinking machines, to be sure, but machines nonetheless. It is for this reason that the computer is the quintessential, incomparable, near-perfect machine for Technopoly. It subordinates the claims of our nature, our biology, our emotions, our spirituality. The computer claims sovereignty over the whole range of human experience, and supports its claim by showing that it “thinks” better than we can. Indeed, in his almost hysterical enthusiasm for artificial intelligence, Marvin Minsky has been quoted as saying that the thinking power of silicon “brains” will be so formidable that “If we are lucky, they will keep us as pets.”[7] An even giddier remark, although more dangerous, was offered by John McCarthy, the inventor of the term “artificial intelligence.” McCarthy claims that “even machines as simple as thermostats can be said to have beliefs.” To the obvious question, posed by the philosopher John Searle, “What beliefs does your thermostat have?,” McCarthy replied, “My thermostat has three beliefs—it’s too hot in here, it’s too cold in here, and it’s just right in here.”[8]
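McCarthy’s three “beliefs” can be made perfectly literal, which may be Searle’s point: they reduce to a three-way comparison. A minimal sketch in Python, with the setpoint and tolerance invented for illustration:

    def thermostat_belief(temperature, setpoint=20.0, tolerance=1.0):
        """Return the one 'belief' McCarthy attributes to a thermostat.

        The entire belief system is a three-way comparison; whether that
        deserves the word 'belief' is exactly what Searle disputes.
        """
        if temperature > setpoint + tolerance:
            return "it's too hot in here"
        if temperature < setpoint - tolerance:
            return "it's too cold in here"
        return "it's just right in here"

    print(thermostat_belief(25.0))  # it's too hot in here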
