The Victory Lab


Author: Sasha Issenberg


The two had had little interaction in the intervening years, but Malchow’s impressions of his old boss were confirmed as soon as Gore arrived for the presentation. Gore announced that he had just come from a meeting in his West Wing office with physicist Stephen Hawking, with whom the vice president enjoyed discussing such arcana as how cosmology supercomputers could measure previously imperceptible antigravitational forces. Manna from heaven, Malchow thought. “That’s great,” he told Gore. “Because I am here to talk about putting some science into this campaign.”

Malchow described his CHAID technique, and he thought he saw Gore react approvingly. But when Malchow tried to follow up afterward with the campaign manager, Craig Smith, he never heard back. A simultaneous effort to convince the DNC to impose experimental controls on its mail program—what had become a quadrennial quest for Malchow—came up empty, too. Malchow became convinced that the political profession could never muster the skepticism to examine its own practices. The revolution would have to find its momentum elsewhere.

THE NEW HAVEN EXPERIMENTS

Don Green was still new to the Yale political science department when he began to suspect that his chosen discipline was intellectually bankrupt. In the 1950s, political scientists had started talking like economists, describing politicians and citizens as rational beings who acted to maximize their self-interest. Voters were believed to peruse a ballot the same way they examined a store shelf, calculating the benefits each product presented and checking the box next to the one offering the best value. “Voters and consumers are essentially the same people,” the economist Gordon Tullock wrote in his 1976 book The Vote Motive. “Mr. Smith buys and votes; he is the same man in the supermarket and in the voting booth.” By the time Green began teaching in 1989, such thinking was pervasive among his peers. They saw politics as a marketplace where people and institutions compete for scarce power and resources with the clear, consistent judgment of accountants.

This detached view of human behavior was particularly galling to Green, who was trained as a political theorist but found his greatest joy amid sophisticated board games. Growing up in Southern California, Green had played Civil War and World War II games with his brothers, a diversion he partly credits for his later interest in politics and history. When he first arrived at Yale, Green bonded with students and colleagues through games, which filled the interstices between classes and office hours, with a single competitive session often stretching over weeks. In the late 1990s, Green was playing at his colonial home in New Haven with his seven-year-old son and five-year-old daughter, using the plastic construction toy K’nex to build a lattice-like structure. The kids imagined spiderlike monsters moving from one square to the next. Green started to visualize from this a new board game, in which Erector-set-like limbs could be grafted onto basic checkers-style coins and every piece would become dynamic. Tinkering in his spare time, Green created a deceptively simple two-player game on a two-dimensional grid. At each turn, a participant could move one of his or her starting pieces or add a limb that would increase its power by allowing it to move in a new direction. “When you’re playing chess, you play the hand you’re dealt, where here you build your own pieces,” says Green. “Imagine a game of chess where all the pieces start out as pawns.” To bring his game to market, Green needed a prototype, so he taught himself woodworking and built a studio in his basement—the first time in his life, he realized, that he had done anything truly physical. Within a year, a Pennsylvania company had agreed to produce Octi—in which each turn required a player to make a choice between moving and building, all while trying to anticipate the opponent’s response. Green described it as “an abstract idea of a game about mobilization.”

Watching people play Octi only illustrated what Green already believed about their behavior. Even in a board game, human beings were incapable of logically assessing all of their options and making the optimal decision each time. Yet rational-choice scholars thought this was what people did every time they participated in politics—and what frustrated Green most was that these claims were purely speculative. The rational-choicers had built entire theoretical models to explain how institutions from Congress to the military were supposed to function. The more closely the rational-choice model was applied to the way politics actually worked, the less it seemed able to explain. In 1994, along with his colleague Ian Shapiro, Green coauthored a book titled Pathologies of Rational Choice Theory, in which they argued that the ascendant movement in political science rested on a series of assumptions that had not been adequately demonstrated through any real-world research. “There was reason to think the whole thing might be a house of cards,” says Green.

When political scientists did try to explain real-world events, Green didn’t think the results were much better. The principal tool of so-called observational research was correlation, a statistical method for seeking out connections between sets of data. Academics relied on a declaration of “statistical significance” to explain just about everything, yet demonstrating a correlation rarely illuminated much. For instance, one element that defined twentieth-century politics was the fact that people who lived in urban areas voted overwhelmingly Democratic. Were cities pulling their inhabitants to the left, or were liberal people drawn to cities? Or was there some other explanation altogether for the pattern? Perhaps most frustrating of all to Green and a junior colleague, Alan Gerber, was the inability of their discipline even to justify the individual decision to vote at all. Casting a ballot is the basic act of political behavior in a democracy, and yet political science offered little to explain why people would bother when there was no legal requirement. After all, considering the economic logic favored by rational-choicers, voting carried a known set of costs (the time and inconvenience of registering, learning about the candidates and going to the polling station) and little in the way of benefits (a tiny probability that an individual’s vote would affect government policies). “There was good reason to think no one should vote,” Green says.

Political scientists had toyed with this question for a generation, and by 1998 the most sophisticated thinking relied on the proposition that, as election day approached, voters calculated the likelihood they might be the pivotal vote deciding the race. In other words, before changing her plans to stop at a local firehouse on a rainy Tuesday in November, a harried working mother paused to assess the likelihood that she would cast the tie-breaking vote in a race with thousands, or even millions, of other citizens each making his or her own simultaneous calculations. “Is that how a typical voter thinks when he’s casting his ballot?” Gerber asked.
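The cost-benefit arithmetic being mocked here is conventionally written in the rational-choice literature as a simple expected-utility condition. The notation below is the standard textbook form, not something Green or Gerber derive in this passage:

```latex
% A citizen votes only if the expected benefit outweighs the cost:
%   p = probability of casting the pivotal, tie-breaking ballot
%   B = benefit of seeing one's preferred candidate win
%   C = cost of registering, learning about candidates, and getting to the polls
\text{vote if } \; pB > C
% In an electorate of thousands or millions, p is vanishingly small,
% so pB is effectively zero while C is not -- which is why, by this
% logic, "there was good reason to think no one should vote."
```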

If we can’t explain what makes people vote, he and Green thought, let’s see if we can change the calculus behind the decision. To bolster their claim that basic political theories were unproven in the real world, Green and Gerber decided to do something political scientists were not supposed to do. They would conduct an experiment.

IN THE LATE SUMMER of 1998, Green and Gerber sat in adjacent, wood-paneled offices at Yale’s Institution for Social and Policy Studies, sheltered in a Richardsonian Romanesque building that was once a clubhouse for the secret society Wolf’s Head, and scoured all they could find of the experimental tradition in political science. As an undergraduate at Yale, Gerber had learned how field experiments had been taken up by policymakers, notably those developing Lyndon B. Johnson’s Great Society, to test the effects of new social programs. Perhaps the most famous were a series of experiments coordinated in 1968 by the White House to test the viability of a so-called negative income tax. The experiment, designed by a graduate student at the Massachusetts Institute of Technology, would randomize households below the poverty line to receive bonus payments and then measure their levels of employment afterward. The target was a major behavioral riddle that vexed the welfare state—how could the government give aid without undercutting the motivation to work?—and an empirical approach to solving it proved popular across the ideological divide. Running the federal Office of Economic Opportunity for the two years in which it oversaw the experiments were its director, Donald Rumsfeld, and his assistant, Dick Cheney. “In the deep recesses of my mind was the notion that some kind of large-scale experimentation was a thing that social scientists at one point or another did,” says Gerber.

But that interest had never really pervaded the study of elections. Gerber and Green were surprised to find that the use of field experiments had begun, and effectively ended, with the publication of Harold Gosnell’s Getting Out the Vote in 1927, and they were eager to explore the possibilities that opened up for them. “There are very few things in academia that are more exciting,” says Green, “than doing things that either haven’t been done before or haven’t been done in a very long time.” So he and Gerber began to read more generally about the origins of field experiments in other areas. The term hinted at the history: the earliest randomized trials grew out of searches for fertilizer compounds conducted by nineteenth-century researchers for the nascent chemical industry.

Each season, scientists at the Rothamsted Agricultural Experimentation Station in England would take a blend of compounds such as phosphate and nitrogen salts, alter the ratio of the chemicals, and sprinkle it over plots of rye, wheat, and potato planted in the clay soil of the estate north of London. One year’s plant growth would be compared with the next, and the difference was recorded as an index of fertility for each chemical mixture. When the pipe-smoking mathematician R. A. Fisher arrived in 1919 and examined ninety years of experiments, he realized that the weather probably had had more to do with the variations in growth than the chemical blend. Even though Rothamsted researchers tried to control for the volume of rain in a given season, there were many other things that varied unpredictably and even imperceptibly from year to year, like soil quality or sun or insect activity. Fisher redrew the experiment so that different chemical ratios could be compared with one another simultaneously. He split existing plots into many small slivers and then randomly assigned them different types and doses of fertilizer that could be dispensed at the same time. The size and proximity of the plots ensured that, beyond the varying fertilizer treatments, they would all experience the same external factors.
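Fisher’s redesign can be sketched in a few lines of code. This is an illustrative toy, not Rothamsted’s actual procedure or data—the sliver counts, treatment names, and yields below are invented. The point it demonstrates is the core of his method: treatments are assigned to slivers at random and in equal numbers, so weather, soil, and insects bear on every treatment group alike, and differences in average yield can be credited to the fertilizer itself.

```python
import random
from statistics import mean

def randomize_plots(n_slivers, treatments, seed=0):
    """Assign each treatment to an equal number of slivers, in random order.

    Because assignment is random, every external factor (rain, soil, insects)
    is expected to fall evenly across the treatment groups.
    """
    assert n_slivers % len(treatments) == 0, "need equal-sized groups"
    assignment = treatments * (n_slivers // len(treatments))
    random.Random(seed).shuffle(assignment)  # seeded for reproducibility
    return assignment

def treatment_means(assignment, yields):
    """Average observed yield for each treatment group."""
    groups = {}
    for treatment, y in zip(assignment, yields):
        groups.setdefault(treatment, []).append(y)
    return {t: mean(ys) for t, ys in groups.items()}

# Example: 12 slivers, three fertilizer blends applied in the same season.
assignment = randomize_plots(12, ["phosphate", "nitrogen", "mixed"], seed=42)
```

Contrast this with the station’s earlier practice, which the code cannot rescue: comparing one whole season’s growth with the next confounds the fertilizer with everything else that changed between years.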

Not far from Fisher, a young economist named Austin Bradford Hill was growing similarly impatient with the limits of statistics to account for cause and effect in health care. In 1923, for example, Hill received a grant from Britain’s Medical Research Council that sent him to the rural parts of Essex, east of London, to investigate why the area suffered uncommonly high mortality rates among young adults. Hill returned from Essex with an explanation that had little to do with the quality of medical care: the healthiest members of that generation quickly left the country to live in towns and cities. The whole British medical system was built on similarly misleading statistics, and Hill worried that the faulty inferences drawn from them put people’s health at risk. Hill joined the Medical Research Council’s scientific staff and began writing articles in the Lancet explaining to doctors in straightforward language what concepts like mean, median, and mode meant.

But even as he worked to educate the medical community about how to use the statistics it had—most from the rolls of life and death maintained by national registrars—Hill knew the quality of the numbers themselves was a potentially bigger problem. In medicine, “chance was regarded as an enemy of knowledge rather than an ally,” writes historian Harry M. Marks. When clinicians ran controlled experiments, they looked to find two patients as similar as possible in every measurable respect, treat them differently, and attribute the outcome to the care they received. But Hill thought that this matching process—or alternating treatments on patients in the order they were admitted to a hospital—would always let uncontrolled variables leak in. “It is obvious that no statistician can be aware of all the factors that are, or may be, relevant,” he wrote.
