How We Decide
Jonah Lehrer

Unfortunately, the mind often surrenders to the temptation of shoddy top-down thinking. Just look at politics. Voters with strong partisan affiliations are a case study in how not to form opinions: their brains are stubborn and impermeable, since they already know what they believe. No amount of persuasion or new information is going to change the outcome of their mental debates. For instance, an analysis of five hundred voters with "strong party allegiances" during the 1976 campaign found that during the heated last two months of the contest, only sixteen people were persuaded to vote for the other party. Another study tracked voters from 1965 to 1982, tracing the flux of party affiliation over time. Although it was an extremely tumultuous era in American politics—there was the Vietnam War, stagflation, the fall of Richard Nixon, oil shortages, and Jimmy Carter—nearly 90 percent of people who identified themselves as Republicans in 1965 ended up voting for Ronald Reagan in 1980. The happenings of history didn't change many minds.

It's now possible to see why partisan identities are so persistent. Drew Westen, a psychologist at Emory University, imaged the brains of ordinary voters with strong party allegiances during the run-up to the 2004 election. He showed the voters multiple, clearly contradictory statements made by each candidate, John Kerry and George Bush. For example, the experimental subject would read a quote from Bush praising the service of soldiers in the Iraq war and pledging "to provide the best care for all veterans." Then the subject would learn that on the same day Bush made this speech, his administration cut medical benefits for 164,000 veterans. Kerry, meanwhile, was quoted making contradictory statements about his vote to authorize war in Iraq.

After being exposed to the political inconsistencies of both candidates, the subject was asked to rate the level of contradiction on a scale of 1 to 4, with 4 signaling a strong level of contradiction. Not surprisingly, the reactions of voters were largely determined by their partisan allegiances. Democrats were troubled by Bush's inconsistent statements (they typically rated them a 4) but found Kerry's contradictions much less worrisome. Republicans responded in a similar manner; they excused Bush's gaffes but almost always found Kerry's statements flagrantly incoherent.

By studying each of these voters in an fMRI machine, Westen was able to look at the partisan reasoning process from the perspective of the brain. He could watch as Democrats and Republicans struggled to maintain their political opinions in the face of conflicting evidence. After being exposed to the inconsistencies of their preferred candidate, the party faithful automatically recruited brain regions that are responsible for controlling emotional reactions, such as the prefrontal cortex. While this data might suggest that voters are rational agents calmly assimilating the uncomfortable information, Westen already knew that wasn't happening, since the ratings of Kerry and Bush were entirely dependent on the subjects' party affiliations. What, then, was the prefrontal cortex doing? Westen realized that voters weren't using their reasoning faculties to analyze the facts; they were using reason to preserve their partisan certainty. And then, once the subjects had arrived at favorable interpretations of the evidence, blithely excusing the contradictions of their chosen candidate, they activated the internal reward circuits in their brains and experienced a rush of pleasurable emotion. Self-delusion, in other words, felt really good. "Essentially, it appears as if partisans twirl the cognitive kaleidoscope until they get the conclusions they want," Westen says, "and then they get massively reinforced for it, with the elimination of negative emotional states and activation of positive ones."

This flawed thought process plays a crucial role in shaping the opinions of the electorate. Partisan voters are convinced that they're rational—it's the other side that's irrational—but actually, all of us are rationalizers. The Princeton political scientist Larry Bartels analyzed survey data from the 1990s to prove this point. During the first term of Bill Clinton's presidency, the budget deficit declined by more than 90 percent. However, when Republican voters were asked in 1996 what had happened to the deficit under Clinton, more than 55 percent said that it had increased. What's interesting about this data is that so-called high-information voters—the Republicans who read the newspaper, watch cable news, and can identify their representatives in Congress—were no more accurate about the deficit than low-information voters. (Many low-information voters struggled to name the vice president.) According to Bartels, the reason knowing more about politics doesn't erase partisan bias is that voters tend to assimilate only those facts that confirm what they already believe. If a piece of information doesn't follow Republican talking points—and Clinton's deficit reduction didn't fit the tax-and-spend liberal stereotype—then the information is conveniently ignored. "Voters think that they're thinking," Bartels says, "but what they're really doing is inventing facts or ignoring facts so that they can rationalize decisions they've already made." Once you identify with a political party, the world is edited to fit with your ideology.

At such moments, rationality actually becomes a liability, since it allows us to justify practically any belief. The prefrontal cortex is turned into an information filter, a way to block out disagreeable points of view. Let's look at an experiment done in the late 1960s by the cognitive psychologists Timothy Brock and Joe Balloun. Half of the subjects involved in the experiment were regular churchgoers, and half were committed atheists. Brock and Balloun played a tape-recorded message attacking Christianity, and, to make the experiment more interesting, they added an annoying amount of static—a crackle of white noise—to the recording. However, the listener could reduce the static by pressing a button, at which point the message suddenly became easier to understand.

The results were utterly predictable and rather depressing: the nonbelievers always tried to remove the static, while the religious subjects actually preferred the message that was harder to hear. Later experiments by Brock and Balloun, in which smokers listened to a speech on the link between smoking and cancer, demonstrated a similar effect. We all silence the cognitive dissonance through self-imposed ignorance.

This sort of blinkered thinking isn't just a problem for partisan voters and devout believers. In fact, research suggests that the same flaw also afflicts the people who are supposed to be most immune to such cognitive errors: political pundits. Even though pundits are trained professionals, presumably able to evaluate the evidence and base their opinions on the cold, hard facts—that's why we listen to them—they are still vulnerable to cognitive mistakes. Like partisan voters, they selectively interpret the data so that it proves them right. They'll distort their thought process until it leads to the desired conclusion.

In 1984, the University of California at Berkeley psychologist Philip Tetlock began what he thought would be a brief research project. At the time, the Cold War was flaring up again—Reagan was talking tough to the "evil empire"—and political pundits were sharply divided on the wisdom of American foreign policy. The doves thought Reagan was needlessly antagonizing the Soviets, while the hawks were convinced that the USSR needed to be aggressively contained. Tetlock was curious which group of pundits would turn out to be right, and so he began monitoring their predictions.

A few years later, after Reagan left office, Tetlock revisited the opinions of the pundits. His conclusion was sobering: everyone was wrong. The doves had assumed that Reagan's bellicose stance would exacerbate Cold War tensions and had predicted a breakdown in diplomacy as the USSR hardened its geopolitical stance. The reality, of course, was that the exact opposite happened. By 1985, Mikhail Gorbachev was in power. The Soviet Union began implementing a stunning series of internal reforms. The "evil empire" was undergoing glasnost.

But the hawks didn't do much better. Even after Gorbachev began the liberalizing process, hawks tended to disparage the changes to the Soviet system. They said the evil empire was still evil; Gorbachev was just a tool of the Politburo. Hawks couldn't imagine that a sincere reformer might actually emerge from a totalitarian state.

The dismal performance of these pundits inspired Tetlock to turn his small case study into an epic experimental project. He picked 284 people who made their living "commenting or offering advice on political and economic trends" and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be reelected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought processes so he could better understand how they'd made up their minds. By the end of the study, Tetlock had quantified 82,361 different predictions.

After Tetlock tallied the data, the predictive failures of the pundits became obvious. Although they were paid for their keen insights into world affairs, they tended to perform worse than random chance. Most of Tetlock's questions had three possible answers; on average, the pundits had selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. Tetlock also found that the most famous pundits in his study tended to be the least accurate, consistently churning out overblown and overconfident forecasts. Eminence was a handicap.

Why were these pundits (especially the prominent ones) so bad at forecasting the future? The central error diagnosed by Tetlock was the sin of certainty, which led the "experts" to mistakenly impose a top-down solution on their decision-making processes. In chapter 2, we saw examples of the true expertise that occurs when experience is internalized by the dopamine system. This results in a person who has a set of instincts that respond quickly to the situation at hand, regardless of whether that's playing backgammon or staring at a radar screen. The pundits in Tetlock's study, however, distorted the verdicts of their emotional brains, cherry-picking the feelings they wanted to follow. Instead of trusting their gut feelings, they found ways to disregard the insights that contradicted their ideologies. When pundits were convinced that they were right, they ignored any brain areas that implied they might be wrong. This suggests that one of the best ways to distinguish genuine from phony expertise is to look at how a person responds to dissonant data. Does he or she reject the data out of hand? Perform elaborate mental gymnastics to avoid admitting error? Everyone makes mistakes; the object is to learn from these mistakes.

Tetlock notes that the best pundits are willing to state their opinions in "testable form" so that they can "continually monitor their forecasting performance." He argues that this approach makes pundits not only more responsible—they are forced to account for being wrong—but also less prone to bombastic convictions, a crucial sign that a pundit isn't worth listening to. (In other words, ignore those commentators that seem too confident or self-assured. The people on television who are most certain are almost certainly going to be wrong.) As Tetlock writes, "The dominant danger [for pundits] remains hubris, the vice of closedmindedness, of dismissing dissonant possibilities too quickly." Even though practically all of the professionals in Tetlock's study claimed that they were dispassionately analyzing the evidence—everybody wanted to be rational—many of them were actually indulging in some conveniently cultivated ignorance. Instead of encouraging the arguments inside their heads, these pundits settled on answers and then came up with reasons to justify those answers. They were, as Tetlock put it, "prisoners of their preconceptions."
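To make the idea of a "testable form" concrete, here is a minimal sketch, not drawn from the book, of how a forecaster's track record might be monitored. It scores probability forecasts with the Brier score, a standard measure in forecasting research (essentially the squared error between the probabilities a forecaster assigned and what actually happened), and compares a hypothetical, overconfident pundit against a dart-throwing chimp that spreads its probability evenly across three outcomes. All of the forecasts and outcomes in the example are invented for illustration.

```python
# A sketch (not from the book) of scoring probability forecasts so that a pundit's
# performance can be monitored over time. Lower Brier scores are better; 0.0 is perfect.

def brier_score(forecast, outcome_index):
    """Squared-error score for one question.

    forecast: probabilities over the possible outcomes (should sum to 1.0)
    outcome_index: index of the outcome that actually occurred
    """
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast)
    )

def average_score(forecasts, outcomes):
    """Average Brier score across a set of questions."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

if __name__ == "__main__":
    # Hypothetical three-outcome questions; the pundit makes very confident calls.
    pundit_forecasts = [
        [0.80, 0.15, 0.05],
        [0.70, 0.20, 0.10],
        [0.10, 0.10, 0.80],
    ]
    actual_outcomes = [1, 2, 0]  # in this invented example, the confident calls all miss

    # The "dart-throwing chimp" assigns equal probability to every outcome.
    chimp_forecasts = [[1 / 3, 1 / 3, 1 / 3]] * len(actual_outcomes)

    print("pundit:", round(average_score(pundit_forecasts, actual_outcomes), 3))
    print("chimp: ", round(average_score(chimp_forecasts, actual_outcomes), 3))
```

Run on these invented numbers, the evenhanded chimp earns a far better (lower) score than the confidently wrong pundit, a toy version of the pattern described above: overconfidence, more than ignorance, is what sinks the forecasts.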

2

It feels good to be certain. Confidence is comforting. This desire to always be right is a dangerous side effect of having so many competing brain regions inside one's head. While neural pluralism is a crucial virtue—the human mind can analyze any problem from a variety of different angles—it also makes us insecure. You never know which brain area you should obey. It's not easy to make up your mind when your mind consists of so many competing parts.

This is why being sure about something can be such a relief. The default state of the brain is indecisive disagreement; various mental parts are constantly insisting that the other parts are wrong. Certainty imposes consensus on this inner cacophony. It lets you pretend that your entire brain agrees with your behavior. You can now ignore those annoying fears and nagging suspicions, those statistical outliers and inconvenient truths. Being certain means that you aren't worried about being wrong.

The lure of certainty is built into the brain at a very basic level. This is most poignantly demonstrated by split-brain patients. (These are patients who have had the corpus callosum—the nerve tissue that connects the two hemispheres of the brain—severed. The procedure is performed only rarely, usually to treat intractable seizures.) A typical experiment goes like this: using a special instrument, researchers flash a different picture to each of the split-brain patient's visual fields. (Because of our neural architecture, all information about the left visual field is sent to the right hemisphere, and all information about the right visual field is sent to the left hemisphere.) For example, the right visual field might see a picture of a chicken claw and the left visual field might see a picture of a snowy driveway. The patient is then shown a variety of images and asked to pick out the one that is most closely associated with what he or she has just seen. In a tragicomic display of indecisiveness, the split-brain patient's hands point to two different objects. The right hand points to a chicken (this matches the chicken claw that the left hemisphere witnessed), while the left hand points to a shovel (the right hemisphere wants to shovel the snow). The patient's conflicting reactions reveal the inner contradictions in each of us. The same brain has come up with two very different answers.
