
In the United States, the NTSB analyzes an airline disaster, relying on objective data such as cockpit voice and flight data recorders and damage to the aircraft, to determine its cause, and then makes the obvious recommendations. The theory is that this relatively modest investment in safety will pay for itself through future airplane design and pilot training that minimize accidents. In reality, everything works fine until the recommendation stage, when economic interests intervene to thwart the process. This is well demonstrated by the FAA’s inability to respond appropriately to the problem of ice buildup on smaller, commuter airplanes, a problem well known for more than twenty years yet claiming a new set of lives about every eight years, most recently on February 12, 2009, near Buffalo, New York, leaving fifty dead.

A deeper problem within the FAA was its unwillingness to reconsider basic standards for flying under icing conditions, as indeed had been requested by the pilots’ union. The FAA based its position on work done in the 1940s that had concluded that the chief problem was tiny droplets, not freezing rain (larger droplets), but science did not stop in the ’40s, and there was now plenty of evidence that freezing rain could be a serious problem. But this is one of the most difficult changes to make: revising one’s underlying system of analysis and logic. This could lead to wholesale redesign at considerable cost to—whom?—the airlines. So it was patchwork all the way around. There is also an analogy here to the individual. The deeper changes are the more threatening because they are more costly: they require more of our internal anatomy, behavior, and logic to be changed, which demands resources and may be experienced as painful.

The very symbol of a patch-up approach to safety is the fix the FAA approved for these planes’ well-documented tendency to flip over in freezing rain. The fix was a credit-card-size piece of metal to be attached to each wing of a several-ton airplane (not counting passengers—or ice). This tiny piece of metal allegedly would alter airflow over the wings so as to give extra stability. No wonder the pilots’ union (representing those at greatest risk) characterized this as a Band-Aid fix and pointed out (correctly) that the FAA had “not gone far enough in assuring that the aircrafts can be operated safely under all conditions.” The union went on to say that the ATR airplanes had an “unorthodox, ill-conceived and inadequately designed” de-icing system. This was brushed aside by the FAA a full six years before the Indiana crash, in which the airplane was fully outfitted with the FAA-approved credit-card-size stabilizers.

By the way, to outfit the entire US fleet of commuter turboprops with ice boots twice as large as those in use before the Indiana crash would cost about $2 million. To appreciate how absurdly low this cost is, imagine simply dividing it by the number of paying customers on the ill-fated Indianapolis-to-Chicago flight and asking each customer in midair, “Would you be willing to spend $50,000 to outfit the entire American fleet of similar planes with the larger boot, or would you rather die within the next hour?” But this is not how the public-goods game works. The passengers on the Chicago flight do not know that it is their flight out of 100,000 that will go down. Rather, the passengers know they have a 0.99999 chance of being perfectly safe even if they do nothing. Let someone else pay. Even so, I bet everyone would get busy figuring out how to raise the full amount. I certainly would. Of course, if each passenger only had to help install the boots on his or her own plane, about $300 per passenger would suffice. The point is that for trivial sums of money, the airlines routinely put passengers at risk. Of course, they can’t put it this way, so they generate assertions and “evidence” by the bushel to argue that all is well, indeed that every reasonable safety precaution is being taken. Six years before this crash, British scientists measured airflow over icy wings and warned that it tended to put the craft at risk, but these findings were vehemently derided as wholly unscientific, even though they were confirmed exactly by the NTSB analysis of the Indianapolis–Chicago crash.
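To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python based on the figures in the passage; the implied passenger count, per-plane cost, and fleet size are rough inferences from those figures, not numbers reported in the text.

# Back-of-envelope sketch of the public-goods arithmetic above.
# Dollar figures and the 1-in-100,000 odds come from the passage;
# the derived quantities are rough inferences, not reported numbers.

FLEET_UPGRADE_COST = 2_000_000      # ~$2 million to fit larger ice boots fleet-wide
SHARE_IF_ONE_FLIGHT_PAYS = 50_000   # the "would you spend $50,000?" figure
SHARE_FOR_OWN_PLANE = 300           # ~$300 each to outfit only one's own plane
CRASH_ODDS = 1 / 100_000            # "their flight out of 100,000"

# Implied number of paying passengers on the doomed flight.
passengers = FLEET_UPGRADE_COST / SHARE_IF_ONE_FLIGHT_PAYS      # = 40

# Implied cost of outfitting a single plane, and the implied fleet size.
per_plane_cost = SHARE_FOR_OWN_PLANE * passengers               # = $12,000
implied_fleet_size = FLEET_UPGRADE_COST / per_plane_cost        # ~ 167 planes

# The free-rider logic: each passenger is "99.999 percent safe" doing nothing,
# so the privately rational move is to let someone else pay.
chance_of_being_safe = 1 - CRASH_ODDS                           # 0.99999

print(f"implied passengers per flight: {passengers:.0f}")
print(f"implied cost per plane: ${per_plane_cost:,.0f}")
print(f"implied fleet size: {implied_fleet_size:.0f}")
print(f"chance of being safe with no upgrade: {chance_of_being_safe:.5f}")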

Finally, a series of trivial devices were installed in the cockpit and new procedures were mandated for pilot behavior. For example, a device giving earlier warning of icing was installed, and pilots were told not to fly on autopilot when its light is on, precisely to avoid being surprised by a sudden roll to one side as the autopilot disengages. But of course this does not address the problem of loss of control under icing. From the very first Italian crash over the Alps, when one of the pilots, finding that the control system failed to respond to his efforts, lashed out with an ancient curse on the system’s designers and their ancestors, it has been known that conscious effort to maintain control is not sufficient. And of course, pilots may make matters worse for themselves in a bad situation. In the Buffalo crash, the pilots apparently made a couple of errors, including keeping the plane on autopilot when they lowered the landing gear and deployed the flaps that increase lift. Suddenly there was a severe pitch and roll, suggestive of ice, which in fact had built up on both the wings and the windshield, blocking sight. Although the NTSB attributed the crash to pilot error, the fact that ice had built up, followed by the familiar pitch and roll, suggests a poorly designed airplane as well.

In short, a system has developed in which the pilot may make no errors—and yet the plane can still spin out of control. It is ironic, to say the least, that a basic design problem depriving the pilot of control of the airplane is being solved by repeatedly refining the pilot’s behavior in response to this fatal design flaw. A pilot’s failure to make any of the required moves (for example, disengaging the autopilot) will then be cited as the cause. No problem with the airplane; it’s the pilot! But is this not a general point regarding self-deception? In pursuing a path of denial and minimization, the FAA traps itself in a world in which each successive recommendation increasingly concerns pilot behavior rather than actual changes to aircraft design. Thus does self-deception lay the foundations for disaster.

Consider an international example.

THE US APPROACH TO SAFETY HELPS CAUSE 9/11

 

The tragedy of 9/11 had many fathers. But few have been as consistent in this role as the airlines themselves, at least when it came to preventing the actual aircraft takeovers on which the disaster was based. This is typical of US industrial policy: any proposed safety change comes with an immediate threat of bankruptcy. Thus, the automobile industry claimed that seat belts would bankrupt it, then airbags, then child-safety door latches, and whatnot. The airlines’ lobbying organization, the Air Transport Association, has a long and distinguished record of opposing almost all improvements in security, especially if the airlines have to pay for them. From 1996 to 2000 alone, the association spent $70 million opposing a variety of sensible (and inexpensive) measures, such as matching passengers with bags (routine in Europe at the time) or improving security checks of airline workers. It opposed reinforced cabin doors and even the presence of occasional marshals (since the marshals would occupy nonpaying seats). It was common knowledge that the vital role of airport screening was performed poorly by people paid at McDonald’s wages—but without their training—yet the airlines spent millions fighting any change in the security status quo. Of course, a calamity such as 9/11 could have severe economic effects as people en masse avoided a manifestly dangerous mode of travel, but the airlines merely turned around and beseeched the government for emergency aid, which they got.

It seems likely that much of this is done “in good conscience,” that is, the lobbyists and airline executives easily convince themselves that safety is not being compromised to any measurable degree, because otherwise they would have to live with the knowledge that they were willing to kill other people in the pursuit of profit. From an outsider’s viewpoint this is, of course, exactly what they are doing. The key fact is that there is an economic incentive to obscure the truth from others—and simultaneously from self.

Only four years after 9/11, the airlines were loudly protesting legislation that would increase a federal security fee from $2.50 to $5.50, despite numerous surveys showing that people would happily pay $3 more per flight to enhance security. Here the airlines did not pay directly but feared only the indirect adverse effects of this trivial price increase. Note that corporate titans appear willing to slightly increase their own chances of death in order to hoard money, but with the increasing use of corporate jets, even this is not certain.

We see again patterns of deceit and self-deception at the institutional and group levels that presumably also entrain individual self-deception within the groups. Powerful economic interests—the airlines—prevent safety improvements of vital importance to a larger economic unit, the “flying public,” but this unit does not act as a unit. The pilots have their own organization and so of course do the (individually) powerful airlines, but the flying public exerts its effects one by one, in choice of airline, class of travel, destination, and so on—not in the relative safety of the flight, about which the public typically knows nothing. The theory is that the government will act on its behalf. Of course, as we have seen, it does not. Individuals within two entities should be tempted toward self-deception: within the airlines, which argue strenuously for the continuation of their defective products, and within the FAA, which, lacking a direct economic self-interest, is co-opted by the superior power of the airlines and acts as their rationalizing agent. In the case of NASA, as we will see, those who sell space capsules to the public and to themselves never actually ride in them.

Regarding the specific event of 9/11 itself, although the United States already had a general history of inattention to safety, the George W. Bush administration even more dramatically dropped the ball in the months leading up to 9/11—first by downgrading Richard Clarke, the internal authority on possible terrorist attacks, including specifically those from Osama bin Laden. The administration stated that it was interested in a more aggressive approach than merely “swatting at flies” (bin Laden here being, I think, the fly). Bush himself joked about the August 2001 memo saying that bin Laden was planning an attack within the United States. Indeed, he denigrated the CIA officer who had relentlessly pressed (amid code-red terrorist chatter) to give the president the briefing at his Texas home. “All right,” Bush said when the man finished. “You’ve covered your ass now,” as indeed he had, but Bush left his own exposed. So his administration had a particular interest in focusing only on the enemy, not on any kind of missed signals or failure to exercise due caution. Absence of self-criticism shifts attention from defense to offense.

THE CHALLENGER DISASTER

 

On January 28, 1986, the Challenger space vehicle took off from Florida’s Kennedy Space Center and seventy-three seconds later exploded over the Atlantic Ocean, killing all seven astronauts aboard. The disaster was the subject of a brilliant analysis by the famous physicist Richard Feynman, who had been placed on the board that investigated and reported on the crash. He was known for his propensity to think everything through for himself and hence was relatively immune to conventional wisdom. It took him little more than a week (with the help of an air force general) to locate the defective part (the O-ring, a simple seal in the rocket), and he spent the rest of his time trying to figure out how an organization as large, well funded, and (apparently) sophisticated as NASA could produce such a shoddy product.

Feynman concluded that the key was NASA’s deceptive posture toward the United States as a whole. This had bred self-deception within the organization. When NASA was given the assignment and the funds to travel to the moon in the 1960s, society, for better or worse, gave full support to the objective: beat the Russians to the moon. As a result, NASA could design the space vehicle in a rational way, from the bottom up—with multiple alternatives tried at each step—giving maximum flexibility should problems arise as the spacecraft was developed. Once the United States reached the moon, NASA was a $5 billion bureaucracy in need of employment. Its subsequent history, Feynman argued, was dictated by the need to create employment, and this generated an artificial system for justifying space travel—a system that inevitably compromised safety. Put more generally, when an organization practices deception toward the larger society, this may induce self-deception within the organization, just as deception between individuals induces individual self-deception.

The space program, Feynman argued, was dominated by the need to generate funds, and critical design features, such as manned rather than unmanned flight, were chosen precisely because they were costly. The very concept of a reusable vehicle—the so-called shuttle—was designed to appear inexpensive but was in fact just the opposite (more expensive, it turned out, than using a brand-new capsule each time). In addition, manned flight had glamour appeal, which might generate enthusiasm for the expense. But since there was very little scientific work to do in space (that wasn’t better done by machines or on Earth), most of it was make-work: showing how plants grow absent gravity (gravity-free zones can be produced on Earth at a fraction of the cost) and so on. This was a little self-propelled balloon with unfortunate downstream effects. Since it was necessary to sell this project to Congress and the American people, the requisite dishonesty led inevitably to internal self-deception. Means and concepts were chosen for their ability to generate cash flow, and the apparatus was then designed top-down. This had the unfortunate effect that when a problem surfaced, such as the fragile O-rings, there was little parallel exploration and knowledge with which to solve it. Thus NASA chose to minimize the problem, and the NASA unit assigned to deal with safety became an agent of rationalization and denial rather than of careful study of safety factors. Presumably it functioned to supply higher-ups with talking points in their sales pitches to others and to themselves.
