Super Crunchers

Author: Ian Ayres

By the way, this is a question you can answer, too. What do you think is the probability that a woman who has a positive mammography has breast cancer? Think about it for a minute.

In study after study, most physicians tend to estimate that the probability of cancer is about 75 percent. Actually, this answer is about ten times too high. Most physicians don't know how to apply Bayes' equation.

We can actually work out the probability (and learn Bayes to boot) if we translate the probabilities into frequencies. First, imagine a sample of 1,000 women who are screened for breast cancer. From the 1 percent (prior) probability, we know that 10 out of every 1,000 women who get screened will actually have breast cancer. Of these 10 women with breast cancer, 8 will have a positive mammogram. We also know that of the 990 women without breast cancer who take the test, 99 will have a false positive result. Can you figure out the probability that a woman with a positive test will have breast cancer now?

It's a pretty straightforward calculation. Eight of the 107 women with positive tests (8 true positives plus 99 false positives) will actually have cancer. So what statisticians call the posterior or updated probability of cancer conditioned upon a positive mammogram becomes 7.5 percent (8 divided by 107). Bayes' theorem tells us that the prior 1 percent probability of cancer doesn't jump to 70 or 75 percent—it increases to 7.5 percent.
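The frequency calculation above can be sketched in a few lines of code. This is a minimal illustration using the numbers from the text (a 1 percent prior, an 80 percent detection rate, and 99 false positives among the 990 healthy women); the variable names are mine, not from any particular statistics library.

```python
# Bayes' theorem via natural frequencies, using the mammogram numbers from the text.
population = 1000
prior = 0.01                 # 1 percent of screened women actually have breast cancer
sensitivity = 0.80           # 80 percent of women with cancer test positive
false_positive_rate = 0.10   # 99 of the 990 women without cancer test positive

with_cancer = population * prior                                     # 10 women
true_positives = with_cancer * sensitivity                           # 8 women
false_positives = (population - with_cancer) * false_positive_rate   # 99 women

# Posterior probability: of all positive tests, what fraction are true positives?
posterior = true_positives / (true_positives + false_positives)
print(round(posterior, 3))  # 0.075 — the 1 percent prior rises to about 7.5 percent
```

The same three lines of arithmetic work for any screening test: multiply the prior through the population, count the true and false positives, and divide.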

People who don't understand Bayes tend to put too much emphasis on the 80 percent chance that a woman with cancer will test positive. Most physicians studied seem to think that if 80 percent of women with breast cancer have positive mammographies, then the probability of a woman with a positive mammography having breast cancer must be around 80 percent. But Bayes' equation tells us why this intuition is wrong. We have to put a lot more weight on the original, unconditional fraction of women with breast cancer (the prior probability), as well as the possibility that women without breast cancer will receive false positives.

Can you figure out what the probability of cancer is for the women whose mammogram comes back negative? If you can (and the answer is provided below),*7 you're well on your way to mastering the updating idea.

When All Is Said and Done

Knowing about the 2SD rule and Bayes' theorem can improve the quality of your own decisions. Yet there are many more tools that you would need to master to become a bona fide Super Cruncher or even a reasonable consumer of Super Crunching. You'd need to become comfortable with terms like heteroskedasticity and omitted variable bias. This book isn't an end; it's an invitation. If you're hooked, the endnotes contain suggested readings for the future.

Like me, Ben Polak is passionate about the need to inculcate a basic understanding of statistics in the general public. “We have to get students to learn this stuff,” he says. “We have to get over this phobia and we have to get over this view that somehow statistics is illiberal. There is this crazy view out there that statistics are right-wing.” The stories in this book refute the idea that Super Crunching is part of some flattening right-wing conspiracy (or any other ideological hegemony). Super Crunching is what gives the Poverty Action Lab its persuasive power to improve the world. One can crunch numbers and still have a passionate and caring soul. You can still be creative. You just have to be willing to put your creativity and your passions to the test to see if they really work.

I've been speculating about how in the future intuition, expertise, and data-based analysis will interact. We'll see proliferation and deep resistance. We'll see both progress and advantage-taking. But the seeds of the future can be seen in the past. These are the themes that we've already seen play out time and again in the book. They were also present way back in 1957, in one of the lesser-known Katharine Hepburn–Spencer Tracy movies,
Desk Set.

In the movie, Bunny Watson (played by Hepburn) is the supersmart head of a reference library for a large TV network. She's responsible for researching and answering questions on all manner of topics, such as the names of Santa's reindeer. Onto the scene comes Richard Sumner (played by Spencer Tracy), the Super Crunching inventor of the EMERAC computer, which Sumner affectionately nicknames “Emmy.” The movie pits Bunny's encyclopedic memory against the immense “electronic brain,” playing on the same kind of fears that we see today—the fear that in an increasingly automated world, traditional expertise will become irrelevant. Bunny and others are worried that they'll lose their jobs.

It's useful to reflect on just how lopsided the Bunny/Emmy competition has become. We now take for granted that computers are better at retrieving bits of information. Indeed, one need look no further than Google to realize that this is a fight no human researcher has a prayer of winning. The Wikipedia page for the movie lists the informational challenges posed in the film, with links to open source answers. Need to know the third stanza of the poem “Curfew Must Not Ring Tonight,” by Rose Hartwick Thorpe? We all know the fastest way to find the answer (and it's not by phoning a friend).

I predict that we will increasingly come to look at the prediction competition between experts and Super Crunching equations in much the same way. It will just be accepted that it's not a fair fight. Super Crunching computers are much better than humans at figuring out what predictive weights to put on particular causal factors. When there are enough data, Super Crunching is going to win.

But
Desk Set
is also instructive in the way that it resolves the technological tension. Hepburn's character is not as quick at data retrieval as the EMERAC. In the end, however, the computer doesn't render people like Bunny useless. Her usefulness just changes, and the computer ultimately helps make her and other reference librarians more effective and efficient. The moral is that computers are tools that will make life easier and more fun. The movie isn't very subtle about this point and, indeed, right after the credits, there is a message about how helpful IBM was in making the movie.

Subtle or not, I think the same point applies to the rise of Super Crunching. In the end, Super Crunching is not a substitute for intuition, but a complement. This new way to be smart will not relegate humans to the trash heap of history. I'm not quite as sanguine about the future of traditional expertise. You don't need to watch the movie to know that technological dinosaurs who eschew the web are at a severe disadvantage in trying to retrieve information. The same thing goes for experts who resist the siren songs of Super Crunched predictions. The future belongs to those who can comfortably inhabit both worlds.

Our intuitions, our experiences, and, yes, statistics should work together to produce better choices. Of course, intuitions and experiential rules of thumb will still drive many of our day-to-day decisions. I doubt that we will see quantitative studies on the best way to fry an egg or peel a banana. Still, the experiences of thousands of other similarly situated people reduced to analyzable numbers can help us in ways that will be increasingly hard to ignore.

ACKNOWLEDGMENTS

The book has only my name on the spine. But let me raise a series of toasts to its many, many coauthors:

To Joyce Finan, my high-school math teacher, who told me that I'd never be any good at numbers because my handwriting was too messy.

To Jerry Hausman, my MIT econometrics professor, who taught me that there are some proofs that aren't worth knowing.

To Bob Bennett, Bill Felstiner, and the American Bar Foundation, who helped fund my first Super Crunching tests of car negotiations.

To my full-time data-crunching assistants, Fred Vars, Nasser Zakariya, Heidee Stoller, and most recently, Isra Bhatty. These trusting souls signed on for a year of nonstop crunching—usually on a dozen or more projects that were progressing simultaneously. Arlo Guthrie once said that it was hard for him to keep up his guitar skills when he had such talented side musicians. I know how he feels.

To Orley Ashenfelter, Judy Chevalier, Dick Copaken, Esther Duflo, Zig Engelmann, Paul Gertler, Dean Karlan, Larry Katz, Steve Levitt, Jennifer Ruger, Lisa Sanders, Nina Sassoon, Jody Sindelar, Petra Todd, Joel Waldfogel, and the many other people who gave of their time to make this book better.

To my research assistants at Yale Law School, Rebecca Kelly, Adam Banks, and Adam Goldfarb, who with unreasonable energy and care read and reread every word in this book.

To Lynn Chu and Glen Hartley, my agents, who beat me about the head until I had a reasonable proposal. Thank you for not giving up on me.

To John Flicker, my editor, who knows the value of carrots and sticks. Rarely has my writing improved so much after the initial draft, and you are the reason why.

To Peter Siegelman and John Donohue, to whom this book is dedicated. My memory of our years in Chicago still burns bright.

To Jennifer Brown, my best friend, who has sat with me in the wee hours of the morning as these pages grew into a manuscript.

And finally, to my other coauthors, Bruce Ackerman, Barry E. Adler, Antonia Ayres-Brown, Henry Ayres-Brown, Katharine Baker, Joe Bankman, John Braithwaite, Richard Brooks (who also gave detailed comments on the manuscript), Jeremy Bulow, Stephen Choi, Peter Cramton, Arnold Diethelm, Laura Dooley, Aaron Edlin, Sydney Foster, Matthew Funk, Robert Gaston, Robert Gertner, Paul M. Goldbart, Gregory Klass, Paul Klemperer, Sergey I. Knysh, Steven D. Levitt, Jonathan Macey, Kristin Madison, F. Clayton Miller, Edward J. Murphy, Barry Nalebuff, Eric Rasmussen, Stephen F. Ross, Colin Rowat, Peter Schuck, Stewart Schwab, Richard E. Speidel, and Eric Talley, who through the years have provided me with both intellectual and spiritual succor. Because of you, this is one gearhead who has found that a life with numbers and passion really can mix.

NOTES

INTRODUCTION

Ashenfelter vs. Parker:
Stephanie Booth, “Princeton Economist Judges Wine by the Numbers: Ashenfelter's Analyses in ‘Liquid Assets' Rarely off the Mark,”
Princeton Packet
, Apr. 14, 1998, http://www.pacpubserver.com/new/news/4-14-98/wine.htm; Andrew Cassel, “An Economic Vintage that Grows on the Vine,”
Phila. Inquirer
, Jul. 23, 2006; Jay Palmer, “A Return Visit to Earlier Stories: Magnifique! The Latest Bordeaux Vintage Could Inspire Joyous Toasts,”
Barron's
, Dec. 15, 1997, p. 14; Jay Palmer, “Grape Expectations: Why a Professor Feels His Computer Predicts Wine Quality Better than All the Tasters in France,”
Barron's
, Dec. 30, 1996, p. 17; Peter Passell, “Wine Equation Puts Some Noses Out of Joint,”
N.Y. Times
, Mar. 4, 1990, p. A1; Marcus Strauss, “The Grapes of Math,”
Discover Magazine
, Jan. 1991, p. 50; Lettie Teague, “Is Global Warming Good for Wine?”
Food and Wine
, Mar. 2006, http://www.foodandwine.com/articles/is-global-warming-good-for-wine; Thane Peterson, “The Winemaker and the Weatherman,”
Bus. Wk. Online
, May 28, 2002, http://www.businessweek.com/bwdaily/dnflash/may2002/nf20020528_2081.htm.

Bill James and baseball expertise:
Michael Lewis,
Moneyball: The Art of Winning an Unfair Game
(2003). For James's most recent statistical collection, see Bill James,
The Bill James Handbook 2007
(2006).

Brown debuts for A's:
official site of Oakland A's player information, http://oakland.athletics.mlb.com/team/player-career.jsp?player_id=425852.

On Kasparov and Deep Blue:
Murray Campbell, “Knowledge Discovery in Deep Blue: A Vast Database of Human Experience Can't Be Used to Direct a Search,” 42
Comm. ACM
65 (1999).

Testing which policies work:
Daniel C. Esty and Reece Rushing,
Data- Driven Policymaking
, Center for American Progress (Dec. 2005).

The impact of LoJack:
Ian Ayres and Steven D. Levitt, “Measuring the Positive Externalities from Unobservable Victim Precaution: An Empirical Analysis of LoJack,” 113
Q. J. Econ
. 43 (1998).

CHAPTER 1

Preference engine problems:
Alex Pham and Jon Healey, “Telling You What You Like: ‘Preference Engines' Track Consumers' Choices Online and Suggest Other Things to Try. But Do They Broaden Tastes or Narrow Them?”
L.A. Times
, Sep. 20, 2005, p. A1; Laurie J. Flynn, “Amazon Says Technology, Not Ideology, Skewed Results,”
N.Y. Times
, Mar. 20, 2006, p. 8; Laurie J. Flynn, “Like This? You'll Hate That. (Not All Web Recommendations Are Welcome.),”
N.Y. Times
, Jan. 23, 2006, p. C1; Ylan Q. Mui, “Wal-Mart Blames Web Site Incident on Employee's Error,”
Wash. Post
, Jan. 7, 2006, p. D1.

The exploitation of preference distributions:
Chris Anderson,
The Long Tail: Why the Future of Business Is Selling Less of More
(2006); Cass Sunstein,
Republic.com
(2001); Nicholas Negroponte,
Being Digital
(1995); Carl S. Kaplan, “Law Professor Sees Hazard in Personalized News,”
N.Y. Times
, Apr. 13, 2001; Cass R. Sunstein, “Boycott the Daily Me!: Yes, the Net Is Empowering. But It Also Encourages Extremism—and That's Bad for Democracy,”
Time
, Jun. 4, 2001, p. 84; Cass R. Sunstein, “The Daily We: Is the Internet Really a Blessing for Democracy?”
Boston Rev.
, Summer 2001.

Accuracy of crowd predictions:
James Surowiecki,
The Wisdom of Crowds
(2004); Michael S. Hopkins, “Smarter Than You,” Inc.com, Sep. 2005, http://www.inc.com/magazine/20050901/mhopkins.htm.

Internet dating loves Super Crunching:
Steve Carter and Chadwick Snow, eHarmony.com, “Helping Singles Enter Better Marriages Using Predictive Models of Marital Success,” Presentation to 16th Annual Convention of the American Psychological Society (May 2004), http://static.eharmony.com/images/eHarmony-APS-handout.pdf; Jennifer Hahn, “Love Machines,” AlterNet, Feb. 23, 2005; Rebecca Traister, “My Date with Mr. eHarmony,” Salon.com, Jun. 10, 2005, http://dir.salon.com/story/mwt/feature/2005/06/10/warren/index.htm; “Dr. Warren's Lonely Hearts Club: EHarmony Sheds its Mom-and-Pop Structure, Setting the Stage for an IPO,”
Bus. Wk.
, Feb. 20, 2006; Press Release, eHarmony.com, “Over 90 Singles Marry Every Day on Average at eHarmony,” Jan. 30, 2006, http://www.eharmonyreviews.com/news2.htm; Garth Sundem,
Geek Logik: 50 Foolproof Equations for Everyday Life
(2006).

Prohibition of race discrimination in contracting:
Civil Rights Act of 1866, codified as amended at 42 U.S.C. 1981 (2000).

Super Crunching in employment decisions:
Barbara Ehrenreich,
Nickel and Dimed: On (Not) Getting By in America
(2001).

Companies use Super Crunching to maximize profitability:
Thomas H. Davenport, “Competing on Analytics,”
Harv. Bus. Rev.
, Jan. 2006; telephone interview with Scott Gnau, Teradata vice president and general manager, Oct. 18, 2006.

Corporate defection detection:
Keith Ferrell, “Landing the Right Data: A Top-Flight CRM Program Ensures the Customer Is King at Continental Airlines,”
Teradata Magazine
, June 2005; “Continental Airlines Case Study: ‘Worst to First,'” Teradata White Paper (2005), http://www.teradata.com/t/page/133201/index.htm; Deborah J. Smith, “Harrah's CRM Leaves Nothing to Chance,”
Teradata Magazine
, Spring 2001; Mike Freeman, “Data Company Helps Wal-Mart, Casinos, Airlines Analyze Customers,”
San Diego Union Trib.
, Feb. 24, 2006.

Ideas for increasing consumer information:
Barry Nalebuff and Ian Ayres,
Why Not?: How to Use Everyday Ingenuity to Solve Problems Big and Small
(2003); Peter Schuck and Ian Ayres, “Car Buying, Made Simpler,”
N.Y. Times
, Apr. 13, 1997, p. F12.

Farecast provides Super Crunching services to the consumer:
Damon Darlin, “Airfares Made Easy (or Easier),”
N.Y. Times
, Jul. 1, 2006, p. C1; Bruce Mohl, “While Other Sites List Airfares, Newcomer Forecasts Where They're Headed,”
Boston Globe
, Jun. 4, 2006, p. D1; telephone interview with Henry Harteveldt, vice president and principal analyst at Forrester Research, Oct. 6, 2006; Bob Tedeschi, “An Insurance Policy for Low Airfares,”
N.Y. Times
, Jan. 22, 2007, p. C10.

Home price predictions:
Marilyn Lewis, “Putting Home-Value Tools to the Test,” MSN Money, www.moneycentral.msn.com/content/Banking/Homefinancing/P150627.asp.

Accenture price predictions:
Daniel Thomas, “Accenture Helps Predict the Unpredictable,”
Fin. Times
, Jan. 24, 2006, p. 6; Rayid Ghani and Hillery Simmons, “Predicting the End-Price of Online Auctions,” ECML Workshop Paper (2004), available at http://www.accenture.com/NR/exeres/FO469E82-E904-4419-B34F-88D4BA53E88E.htm; telephone interview with Rayid Ghani, Accenture Labs researcher, Oct. 12, 2006.

Catching a cell phone thief:
Ian Ayres, “Marketplace Radio Commentary: Cellphone Sleuth,” Aug. 20, 2004; Ian Ayres and Barry Nalebuff, “Stop Thief!”
Forbes
, Jan. 10, 2005, p. 88.

Counterterrorism social network analysis:
Patrick Radden Keefe, “Can Network Theory Thwart Terrorists?”
N.Y. Times Magazine
, Mar. 13, 2006, p. 16; Valdis Krebs, “Social Network Analysis of the 9-11 Terrorist Network,” 2006, http://orgnet.com/hijackers.htm; “Cellular Phone Had Key Role,”
N.Y. Times
, Aug. 16, 1993, p. C11.

Beating the magic number scams:
Allan T. Ingraham, “A Test for Collusion Between a Bidder and an Auctioneer in Sealed-Bid Auctions,” 4
Contributions to Econ. Analysis and Pol'y
, Article 10 (2005).

CHAPTER 2

Fisher proposes randomization:
Ronald Fisher,
Statistical Methods for Research Workers
(1925); Ronald Fisher,
The Design of Experiments
(1935).

CapOne Super Crunching:
Charles Fishman, “This Is a Marketing Revolution,” 24
Fast Company
204 (1999), www.fastcompany.com/online/24/capone.htm.

Point shaving in college basketball:
Justin Wolfers, “Point Shaving: Corruption in NCAA Basketball,” 96
Am. Econ. Rev.
279 (2006); David Leonhardt, “Sad Suspicions About Scores in Basketball,”
N.Y. Times
, Mar. 8, 2006.

Other Super Crunching companies:
Haiyan Shui and Lawrence M. Ausubel, “Time Inconsistency in the Credit Card Market,” 14th Ann. Utah Winter Fin. Conf. (May 3, 2004), http://ssrn.com/abstract=586622; Marianne Bertrand et al., “What's Psychology Worth? A Field Experiment in the Consumer Credit Market,” Nat'l Bureau of Econ. Research, Working Paper No. 11892 (2005).

Monster.com randomizes:
“Monster.com Scores Millions: Testing Increases Revenue Per Visitor,” Offermatica, http://www.offermatica.com/stories-1.7.htm (last visited Mar. 1, 2007).

Randomizing ads for Jo-Ann Fabrics:
“Selling by Design: How Joann.com Increased Category Conversions by 30% and Average Order Value by 137%,” Offermatica, http://www.offermatica.com/learnmore-1.2.5.2.htm (last visited Mar. 1, 2007).

Randomization goes non-profit:
Dean Karlan and John A. List, “Does Price Matter in Charitable Giving? Evidence from a Large-Scale Natural Field Experiment,” Yale Econ. Applications and Pol'y Discussion Paper No. 13, 2006, http://ssrn.com/abstract=903817; Armin Falk, “Charitable Giving as a Gift Exchange: Evidence from a Field Experiment,” CEPR Discussion Paper No. 4189, 2004, http://ssrn.com/abstract=502021.

Continental copes with transportation events:
Keith Ferrell, “Teradata QandA: Continental—Landing the Right Data,”
Teradata Magazine
, Jun. 2005, http://www.teradata.com/t/page/133723/index.htm.

Amazon apologizes:
Press release, Amazon.com, “Amazon.com Issues Statement Regarding Random Price Testing,” Sep. 27, 2000, http://phx.corporate-ir.net/phoenix.zhtml?c=97664&p=irol-newsArticle_Print&ID=229620&highlight=.

CHAPTER 3

The $5 million thesis:
David Greenberg et al.,
Social Experimentation and Public Policy-making
(2003). For more on income maintenance experiments, see Gary Burtless, “The Work Response to a Guaranteed Income: A Survey of Experimental Evidence 28,” in
Lessons from the Income Maintenance Experiments
, Alicia H. Munnell, ed. (1986); Government Accountability Office, Rep. No. HRD 81-46, “Income Maintenance Experiments: Need to Summarize Results and Communicate Lessons Learned” 15 (1981); Joseph P. Newhouse,
Free for All?: Lessons from the Rand Health Insurance Experiment
(1993); Family Support Act of 1988, Pub. L. No. 100–485, 102 Stat. 2343 (1988).

Super Crunching in the code and the UI tests:
Omnibus Budget Reconciliation Act of 1989, Pub. L. No. 101–239, § 8015, 103 Stat. 2470 (1989); Bruce D. Meyer, “Lessons From the U.S. Unemployment Insurance Experiments,” 33
J. Econ. Literature
91 (1995).

Search-assistance regressions:
Peter H. Schuck and Richard J. Zeckhauser,
Targeting in Social Programs: Avoiding Bad Bets, Removing Bad Apples
(2006).

Alternatives to job-search assistance:
Instead of job-search assistance, other states tested whether reemployment bonuses could be effective in shortening the period of unemployment. These bonuses were essentially bribes for people to find work faster. A random group of the unemployed would be paid between $500 and $1,500 (between three and six times the weekly UI benefit) if they could find a job fast. The reemployment bonuses, however, were not generally successful in reducing the government's overall UI expenditures. The amount spent on the bonuses and administering the program often was larger than the amount saved in shorter unemployment spells. Illinois also tested whether it would be more effective to give the bonus to the employer or to the employee. Marcus Stanley et al.,
Developing Skills: What We Know About the Impacts of American Employment and Training Programs on Employment, Earnings, and Educational Outcomes
, October 1998 (working paper).

Good control groups needed:
Susan Rose-Ackerman, “Risk Taking and Reelection: Does Federalism Promote Innovation?” 9
J. Legal Stud.
593 (1980).

Other randomized studies that are impacting real-world decisions:
The randomized trial has been especially effective at taking on the most intransigent and entrenched problems, like cocaine addiction. A series of randomized trials has shown that paying cocaine addicts to show up for drug treatment increases the chance that they'll stay clean. Offering the addicts lotteries is an even cheaper way to induce the same result. Rather than paying addicts a fixed amount for staying clean, participants in the lottery studies (who showed up and provided a clean urine sample) earned the chance to draw a slip of paper from a bowl. The paper would tell the addict the size of a prize varying from $1 to $100 (a randomized test about a randomized lottery). See Todd A. Olmstead et al., “Cost-Effectiveness of Prize-Based Incentives for Stimulant Abusers in Outpatient Psychosocial Treatment Programs,”
Drug and Alcohol Dependence
(2006), http://dx.doi.org/10.1016/j.drugalcdep.2006.08.012.
