Super Crunchers

Ian Ayres

Google has developed a Personalized Search feature that uses your past search history to further refine what you really have in mind. If Bill Gates and Martha Stewart both Google “blackberry,” Gates is more likely to see web pages about the email device at the top of his results list, while Stewart is more likely to see web pages about the fruit. Google is pushing this personalized data mining into almost every one of its features. Its new web accelerator dramatically speeds up access to the Internet—not by some breakthrough in hardware or software technology—but by predicting what you are going to want to read next. Google's web accelerator is continually prefetching web pages from the net. So while you're reading the first page of an article, it's already downloading pages two and three. And even before you fire up your browser tomorrow morning, simple data mining helps Google predict what sites you're going to want to look at (hint: it's probably the same sites that you look at most days).

Yahoo! and Microsoft are desperately trying to play catch-up in this analytic competition. Google has deservedly become a verb. I'm frankly in awe of how it has improved my life. Nonetheless, we Internet users are fickle friends. The search engine that can best guess what we're really looking for is likely to win the lion's share of our traffic. If Microsoft or Yahoo! can figure out how to outcrunch Google, they will very quickly take its place. To the Super Crunching victor go the web traffic spoils.

Guilt by Association

The granddaddy of all of Google's Super Crunching is its vaunted PageRank. Among all the web pages that include the word “kumquat,” Google will rank a page higher if more other web pages link to it. To Google, every link to a page is a kind of vote for that web page. And not all votes are equal. Votes cast by web pages that are themselves important are weighted more heavily than links from web pages that have low PageRanks (because no one else links to them).
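
To see how the vote weighting works, here is a stripped-down PageRank sketch in Python. The toy link graph, the damping factor, and the fifty iterations are illustrative assumptions for this example; Google's actual system runs on billions of pages with many refinements.

# Simplified PageRank by power iteration on a made-up link graph (illustration only).
links = {
    "kumquat-fans.example": ["fruit-wiki.example"],
    "fruit-wiki.example": ["kumquat-fans.example", "recipes.example"],
    "recipes.example": ["fruit-wiki.example"],
    "link-farm.example": ["kumquat-fans.example"],    # links out, but nobody links to it
}
damping = 0.85                                        # damping factor from the original PageRank paper
pages = list(links)
rank = {page: 1.0 / len(pages) for page in pages}     # start every page with equal rank

for _ in range(50):                                   # iterate until the ranks settle down
    new_rank = {page: (1 - damping) / len(pages) for page in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share                 # each link passes on a share of its page's rank
    rank = new_rank

print(sorted(rank.items(), key=lambda item: -item[1]))

In this toy run the page that nobody links to ends up with the lowest score: it casts votes but receives none, which is the sense in which a link is a weighted vote rather than a simple tally.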

Google found that web pages with higher PageRanks were more likely to contain the information that users are actually seeking. And it's very hard for users to manipulate their own PageRank. Merely creating a bunch of new web pages that link to your home page won't work because only links from web pages that themselves have reasonably high PageRanks will have an impact. And it's not so easy to create web pages that other sites will actually link to.

The PageRank system is a form of what webheads call “social network analysis.” It's a good kind of guilt by association. Social network analysis can also be used as a forensic tool by law enforcement to help identify actual bad guys.

I've used this kind of data mining myself.

A couple of years ago, my cell phone was stolen. I hopped on the Internet and downloaded the record of telephone calls that were made both to and from my phone. This is where network analysis came into play. The thief made more than a hundred calls before my service was cut off. Yet most of the calls were to or from just a few phone numbers. The thief made more than thirty calls to one phone number, and that phone number had called into the phone several times as well. When I called that number, a voice mailbox told me that I'd reached Jessica's cell phone. The third most frequent number connected me with Jessica's mother (who was rather distraught to learn that her daughter had been calling a stolen phone).

Not all the numbers were helpful. The thief had called a local weather recording a bunch of times. By the fifth call, however, I found someone who said he'd help me get my phone back. And he did. A few hours later, he handed it back to me at a McDonald's parking lot. Just knowing the telephone numbers that a bad guy calls can help you figure out who the bad guy is. In fact, cell phone records were used in just this way to finger the two men who killed Michael Jordan's father.
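
The arithmetic behind that kind of sleuthing is nothing fancier than counting. A minimal sketch, with an invented call log standing in for the real billing records:

from collections import Counter

# Invented call records: (direction, other party's number).
calls = [
    ("out", "555-0134"), ("out", "555-0134"), ("in", "555-0134"),
    ("out", "555-0188"), ("in", "555-0188"),
    ("out", "555-0101"),   # say, the weather recording
    ("out", "555-0134"), ("out", "555-0177"),
]

frequency = Counter(number for _, number in calls)
for number, count in frequency.most_common(3):
    print(number, count)   # the thief's frequent contacts rise to the top of the list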

This kind of network analysis is also behind one of our nation's efforts to smoke out terrorists. USA Today reported that the National Security Agency has been amassing a database with the records of two trillion telephone calls since 2001. We're talking thousands of terabytes of information. By finding out who “people of interest” are calling, the NSA may be able to identify the players in a terrorist network and the structure of the network itself.

Just like I used the pattern of phone records to identify the bad guy who stole my phone, Valdis Krebs used network analysis of public information to show that all nineteen of the 9/11 hijackers were within two email or phone call connections to two al-Qaeda members whom the CIA already knew about before the attack. Of course, it's a lot easier to see patterns after the fact, but just knowing a probable bad guy may be enough to put statistical investigators on the right track.
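
Krebs's “within two connections” finding is a claim about paths in a contact graph, and checking such a claim is mechanical once the contacts are tabulated. A rough sketch, with entirely fictitious names in place of the real data:

# Fictitious, undirected contact graph: who has emailed or phoned whom.
contacts = {
    "known_operative": {"hijacker_a", "hijacker_b"},
    "hijacker_a": {"known_operative", "hijacker_c"},
    "hijacker_b": {"known_operative"},
    "hijacker_c": {"hijacker_a", "acquaintance"},
    "acquaintance": {"hijacker_c"},
}

def within_two_hops(start, graph):
    """Everyone reachable from `start` in at most two email or phone links."""
    one_hop = graph.get(start, set())
    two_hop = set()
    for person in one_hop:
        two_hop |= graph.get(person, set())
    return (one_hop | two_hop) - {start}

print(within_two_hops("known_operative", contacts))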

The 64,000-terabyte question is whether it's possible to start with just a single suspect and credibly identify a prospective conspiracy based on an analysis of social network patterns. The Pentagon is understandably not telling whether its data-mining contractors—which include our friend Teradata—have succeeded or not. Still, my own experience as a forensic economist working to smoke out criminal fraud makes me more sanguine that Super Crunching may prospectively contribute to homeland security.

Looking for Magic Numbers

A few years ago, Peter Pope, who was then the inspector general of the New York City School Construction Authority, called me and asked for help. The Construction Authority was spending about a billion dollars a year in a ten-year plan to renovate New York City schools. Many of the schools were in terrible disrepair and a lot of the money was being used on “envelope” work—roof and exterior repairs to maintain the integrity of the shell of the building. New York City had a long and sordid history of construction corruption and bid rigging, so the New York state legislature had created a new position of inspector general to put an end to inflated costs and waste.

Peter was a recent law grad who was interested in doing a very different kind of public interest law. Making sure that construction auctions and contract change-orders are on the up-and-up is not as glamorous as taking on a death penalty case or making a Supreme Court oral argument, but Peter was trying to make sure that thousands of schoolchildren had a decent place to go to school. He and his staff were literally risking their lives. Organized crime is not happy when someone comes in and rocks their boat. Once Peter was on the scene, nothing was business as usual.

Peter called me because he had discovered a specific type of fraud that had been taking place in some of his construction auctions. He called it the “magic number” scam.

During the summer of 1992, Elias Meris, the principal owner of the Meris Construction Corporation, was under investigation by the Internal Revenue Service. Meris agreed, in exchange for IRS leniency, to wear a wire and provide information on a bid-rigging scam involving School Construction Authority employees and other contractors. Working undercover for prosecutors, Meris taped conversations with senior project officer John Dransfield and a contract specialist named Mark Parker.

The contract specialist is the person who publicly opens the sealed bids of contractors one at a time at a procurement auction and reads out loud the price that a contractor has bid.

In the “magic number” scam, the bribing bidder would submit a sealed bid with the absolute lowest price at which it would be willing to do the project. At the public bid openings, Parker would save the dishonest contractor's bid for last and, knowing the current low bid, he would read aloud a false bid just below this price, so that the briber would win but would be paid just slightly less than the bidder who honestly should have won. Then Dransfield would use Wite-Out to doctor the briber's bid—writing in the amount that Parker had read out loud. (If the lowest real bid turned out to be below the lowest amount at which the dishonest bidder wanted the job, the contract specialist wouldn't use the Wite-Out and would just read the dishonest bidder's written bid.) This “magic number” scam allowed dishonest bidders to win the contract whenever they were willing to do the job for less than the lowest true bid, but they would be paid the highest possible price.
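
Worked through with invented numbers, the arithmetic looks like this. Suppose the honest sealed bids are $2.45 million, $2.61 million, and $2.80 million, and the briber's secret floor is $2.30 million. The sketch below is only an illustration of the scheme, not a reconstruction of any actual auction:

# The "magic number" scam with made-up bids (in dollars).
honest_bids = [2_450_000, 2_610_000, 2_800_000]
briber_floor = 2_300_000           # the lowest price the dishonest bidder will accept

lowest_honest = min(honest_bids)
if briber_floor < lowest_honest:
    # Read aloud a fake bid just under the honest low bid; doctor the paper bid to match.
    announced_bid = lowest_honest - 1_000
    winner, price = "briber", announced_bid
else:
    # The briber's floor is too high; read the sealed bid exactly as written.
    winner, price = "honest low bidder", lowest_honest

print(winner, price)   # briber wins at $2,449,000

The honest low bidder, who should have won at $2.45 million, walks away with nothing, and the briber pockets $149,000 more than the minimum it was willing to accept.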

Pope's investigation eventually implicated eleven individuals at seven contracting firms in the scam. Next time you're considering renovating your New York pied-à-terre, you might want to avoid Christ Gatzonis Electrical Contractor Inc., GTS Contracting Corp., Batex Contracting Corp., American Construction Management Corp., Wolff & Munier Inc., Simins Falotico Group, and CZK Construction Corp. These seven firms used the “magic number” scam to win at least fifty-three construction auctions with winning bids totaling over twenty-three million dollars.

Pope found these bad guys, but he called me to see if statistical analysis could point the finger toward other examples of “magic number” fraud. Together with auction guru Peter Cramton and a talented young graduate student named Alan Ingraham, we ran regressions to see if particular contract specialists were cheating.

This is really looking for needles in a haystack. It is doubtful that a specialist would rig all of his auctions. The key for us was to look for auctions where the difference between the lowest and second-lowest bid was unusually small. Using statistical regressions that controlled for a host of other variables—including the number of bidders and an engineer's pre-auction estimate of cost as well as the third-lowest bid placed in the auction—Alan Ingraham identified two new contract specialists who presided over auctions where there was a disturbingly small difference between the winning and the second-lowest bid. Without knowing even the names of the contract specialists (the inspector general's data referred to them by number only), we were able to point the inspector general's office in a new direction. Alan turned the work into two chapters of his doctoral dissertation. While the results of the inspector general's investigation are confidential, Peter was deeply appreciative and earlier this year thanked me for “helping us catch two more crooks.”
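
The flavor of that analysis can be sketched in a few lines. The synthetic data, the variable names, and the plain least-squares fit below are illustrative assumptions; the actual study used the Authority's confidential auction records and controlled for more than this:

import numpy as np

rng = np.random.default_rng(0)
n = 500                                              # synthetic auctions, for illustration
num_bidders = rng.integers(3, 10, size=n)
engineer_estimate = rng.uniform(0.5, 5.0, size=n)    # pre-auction cost estimate, $ millions
third_lowest_bid = engineer_estimate * rng.uniform(1.0, 1.3, size=n)
specialist_id = rng.integers(0, 5, size=n)           # anonymized contract specialists 0..4

# Outcome: gap between the second-lowest and the lowest (winning) bid, $ millions.
gap = 0.05 * engineer_estimate - 0.005 * num_bidders + rng.normal(0, 0.02, size=n)
gap[specialist_id == 3] *= 0.2                       # pretend specialist 3 presides over suspiciously tight gaps

# Regress the gap on the controls plus an indicator for each specialist (0 is the baseline).
specialist_dummies = np.eye(5)[specialist_id]
X = np.column_stack([np.ones(n), num_bidders, engineer_estimate,
                     third_lowest_bid, specialist_dummies[:, 1:]])
coefficients, *_ = np.linalg.lstsq(X, gap, rcond=None)
print(coefficients[4:])   # a strongly negative coefficient flags a specialist whose auctions are too close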

This “magic number” story shows how Super Crunching can reveal the past. Super Crunching also can predict what you will want and what you will do. The stories of eHarmony and Harrah's, magic numbers, and Farecast are all stories of how regressions have slipped the bounds of academia and are being used to predict all kinds of things.

The regression formula is “plug and play”—plug in the specified attributes and, voilà, out pops your prediction. Of course, not all predictions are equally valuable. A river can't rise above its source and regression predictions can't overcome insufficient data. If your dataset is too small, no regression in the world is going to make very accurate predictions. Still, unlike intuitivists, regressions know their own limitations and can answer Ed Koch's old campaign question, “How Am I Doing?”
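
“Plug and play” really is that mechanical: once the coefficients have been estimated, a prediction is just a weighted sum. The coefficients and attributes below are invented for illustration:

# A hypothetical fitted regression for predicting a winning bid, in $ millions.
coefficients = {"intercept": 0.12, "engineer_estimate": 0.95, "num_bidders": -0.03}

def predict(attributes):
    """Plug in the specified attributes and out pops the prediction."""
    return coefficients["intercept"] + sum(
        coefficients[name] * value for name, value in attributes.items()
    )

print(predict({"engineer_estimate": 2.4, "num_bidders": 6}))   # 0.12 + 0.95*2.4 - 0.03*6 = 2.22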

CHAPTER 2

Creating Your Own Data with the Flip of a Coin

In 1925, Ronald Fisher, the father of modern statistics, formally proposed using random assignments to test whether particular medical interventions had some predicted effect. The first randomized trial on humans (of an early antibiotic against tuberculosis) didn't take place until the late 1940s. But now, with the encouragement of the Food and Drug Administration, randomized tests have become the gold standard for proving whether or not medical treatments are efficacious.

This chapter is about how business is playing catch-up. Smart businesses know that regression equations can help them make better predictions. But for the first time, we're also starting to see businesses combine regression predictions with predictions based on their own randomized trials. Businesses are starting to go out and create their own data by flipping coins. We'll see that randomized testing is becoming an important tool for data-driven decision making. Like the new regression studies, it's Super Crunching to answer the bottom-line questions of what works. The poster child for the power of combining these two core Super Crunching tools is a company that made famous the question “What's in Your Wallet?”

Capital One, one of the nation's largest issuers of credit cards, has been at the forefront of the Super Crunching revolution. More than 2.5 million people call CapOne each month. And they're ready for your call.

When you call CapOne, a recording immediately prompts you to enter your card number. Even before the service representative's phone rings, a computer algorithm kicks in and analyzes dozens of characteristics about the account and about you, the account holder. Super Crunching sometimes lets them answer your question even before you ask it.

CapOne found that some customers call each month just to find out their balance or to see whether their payment has arrived. The computer keeps track of who makes these calls, and routes them to an automated system that answers the phone this way: “The amount now due on your account is $164.27. If you have a billing question, press 1….” Or: “Your last payment was received on February 9. If you need to speak with a customer-service representative, press 1….” A phone call that might have taken twenty or thirty seconds, or even a minute, now lasts just ten seconds. Everyone wins.

Super Crunching also has transformed customer service calls into a sales opportunity. Data analysis of customer characteristics generates a list of products and services that this kind of consumer is most willing to buy, and the service rep sees the list as soon as she takes the call. It's just like Amazon's “customers who like this, also like this” feature, but transmitted through the rep. Capital One now makes more than a million sales a year through customer-service marketing—and their data-mining predictions are the big reason why. Again, everybody wins.

But maybe not equally. CapOne gives itself the lion's share of the gains whenever possible. For example, a statistically validated algorithm kicks in whenever a customer tries to cancel her account. If the customer is not so valuable, she is routed to an automated service where she can press a few buttons and cancel. If the customer has been (or is predicted to be) profitable, the computer routes her to a “retention specialist” and generates a list of sweeteners that can be offered.
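
Stripped to its skeleton, that routing decision is a prediction followed by a branch. The scoring rule, the threshold, and the sweeteners below are invented for illustration; CapOne's real model is proprietary and far richer:

def route_cancellation_call(customer):
    """Route a would-be canceller based on a hypothetical predicted-profit score."""
    predicted_annual_profit = (
        0.02 * customer["avg_monthly_balance"]        # interest and fees
        + 0.5 * customer["purchases_per_month"]       # interchange revenue
        - 40 * customer["late_payments_last_year"]    # charge-off risk
    )
    if predicted_annual_profit < 50:                  # not worth fighting for
        return "automated cancellation line", []
    # A profitable customer goes to a retention specialist, along with a list of sweeteners.
    sweeteners = ["lower APR to 12.9%", "lower APR to 10.9%", "waive annual fee"]
    return "retention specialist", sweeteners

print(route_cancellation_call(
    {"avg_monthly_balance": 4200, "purchases_per_month": 35, "late_payments_last_year": 0}
))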

When Nancy from North Carolina called to close her account because she felt her 16.9 percent interest rate was too high, CapOne routed her call to a retention specialist named Tim Gorman. CapOne's computer automatically showed Tim a range of three lower interest rates—ranging from 9.9 percent to 12.9 percent—that he could offer to keep Nancy's business.

When Nancy claimed on the phone that she just got a card with a 9.9 percent rate, Tim responded with “Well, ma'am, I could lower your rate to 12.9 percent.” Because of Super Crunching, CapOne knows that a lot of people will be satisfied with this reduction (even when they say they've been offered a lower rate from another card). And when Nancy accepts the offer, Tim gets an immediate bonus. Everyone wins. But because of data mining, CapOne wins a bit more.

CapOne Rolls the Dice

What really sets CapOne apart is its willingness to literally experiment. Instead of being satisfied with a historical analysis of consumer behavior, CapOne proactively intervenes in the market by running randomized experiments.

In 2006, it ran more than 28,000 experiments—28,000 tests of new products, new advertising approaches, and new contract terms.

Is it more effective to print on the outside envelope “LIMITED TIME OFFER” or “2.9 Percent Introductory Rate!”? CapOne answers this question by randomly dividing prospects into two groups and seeing which approach has the highest success rate.

It seems too simple. Yet having a computer flip a coin and treating prospects who come up heads differently than the ones who come up tails is the core idea behind one of the most powerful Super Crunching techniques ever devised.
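
In code, the coin flip and the comparison fit in a dozen lines. The response rates below are fabricated so the example can run on its own; in a real test the responses would come from the actual mailing:

import random

random.seed(1)
# [letters mailed, responses received] for each envelope message.
results = {"LIMITED TIME OFFER": [0, 0], "2.9 Percent Introductory Rate!": [0, 0]}

for prospect in range(100_000):
    envelope = random.choice(list(results))           # the coin flip
    results[envelope][0] += 1
    # Fabricated behavior: pretend the rate message pulls a slightly higher response.
    true_rate = 0.012 if envelope.startswith("2.9") else 0.010
    if random.random() < true_rate:
        results[envelope][1] += 1

for envelope, (mailed, responded) in results.items():
    print(envelope, round(responded / mailed, 4))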

When you rely on historical data, it is much harder to tease out causation. A miner of historical data who wants to find out whether chemotherapy worked better than radiation would need to control for everything else, such as patient attributes, environmental factors—really anything that might affect the outcome. In a large random study, however, you don't need these controls. Instead of controlling for whether the patients smoked or had a prior stroke, we can trust that in a large randomized division, about the same proportion of smokers will show up in each treatment type.

The sample size is the key. If we get a large enough sample, we can be pretty sure that the group coming up heads will be statistically identical to the group coming up tails. If we then intervene to “treat” the heads differently, we can measure the pure effect of the intervention. Super Crunchers call this the “treatment effect.” It's the causal holy grail of number crunching: after randomization makes the two groups identical on every other dimension, we can be confident that any change in the two groups' outcome was caused by their different treatment.
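
Measuring the treatment effect then comes down to comparing two averages and asking whether the gap is bigger than chance alone would produce. A minimal sketch, with fabricated response lists standing in for real outcomes:

from statistics import mean, variance

# Fabricated outcomes: 1 = responded, 0 = did not, for the heads and tails groups.
heads = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0] * 500    # received the new treatment
tails = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0] * 500    # received the status quo

treatment_effect = mean(heads) - mean(tails)
standard_error = (variance(heads) / len(heads) + variance(tails) / len(tails)) ** 0.5

print(round(treatment_effect, 3), round(standard_error, 4))
# A gap more than roughly two standard errors from zero is very unlikely to be luck.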

CapOne has been running randomized tests for a long time. Way back in 1995, it ran an even larger experiment by generating a mailing list of 600,000 prospects. It randomly divided this pool of people into groups of 100,000 and sent each group one of six different offers that varied the size and duration of the teaser rate. Randomization let CapOne create two types of data. Initially the computerized coin flip was itself a type of data that CapOne created and then relied upon to decide whether to assign a prospect to a particular group. More importantly, the response of these groups was new data that only existed because the experiment artificially perturbed the status quo. Comparing the average response rate of these statistically similar groups let CapOne see the impact of making different offers. Because of this massive study, CapOne learned that offering a teaser rate of 4.9 percent for six months was much more profitable than offering a 7.9 percent rate for twelve months.

Academics have been running randomized experiments inside and outside of medicine for years. But the big change is that businesses are relying on them to reshape corporate policy. They can see what works best and immediately change their corporate strategy. When an academic publishes a paper showing that there's point shaving in basketball, nothing much changes. Yet when a business invests tens of thousands of dollars on a randomized test, they're doing it because they're willing to be guided by the results.

Other companies are starting to get in on the act. In South Africa, Credit Indemnity is one of the largest micro-lenders, with over 150 branches throughout the country. In 2004, it used randomized trials to help market its “cash loans.” Like payday loans in the U.S., cash loans are short-term, high-interest credit for the “working poor.” These loans are big business in South Africa, where at any time as many as 6.6 million people borrow. The typical loan is only R1000 ($150), about a third of the borrower's monthly income.

Credit Indemnity sent out more than 50,000 direct-mail solicitations to former customers. Like CapOne's mailings, these solicitations offered random interest rates that varied from 3.25 percent to 11.75 percent. As an economist, I was comforted to learn from Credit Indemnity's experiment that yes, there was larger demand for lower-priced loans.

Still, price wasn't everything. What was really interesting about the test is that Credit Indemnity simultaneously randomized other aspects of the solicitations. The bank learned that simply adding a photo of a smiling woman in the corner of the solicitation letter raised the response rate of male customers by as much as dropping the interest rate 4.5 percentage points. They found an even bigger effect when they had a marketing research firm call the client a week before the solicitation and simply ask questions: “Would you mind telling us if you anticipate making large purchases in the next few months, things like home repairs, school fees, appliances, ceremonies (weddings, etc.), or even paying off expensive debt?”
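
Randomizing the rate and the photo independently is what lets a single mailing answer both questions at once. A schematic of the assignment step (the rate grid and the fifty-fifty photo split are illustrative choices, not the bank's exact design):

import random

random.seed(7)
interest_rates = [3.25, 5.25, 7.75, 9.75, 11.75]      # percent; illustrative grid

def assign_solicitation(customer_id):
    """Independently randomize the interest rate and the photo for one letter."""
    return {
        "customer_id": customer_id,
        "rate": random.choice(interest_rates),
        "photo": random.random() < 0.5,               # half the letters get the smiling photo
    }

letters = [assign_solicitation(i) for i in range(50_000)]
with_photo = sum(1 for letter in letters if letter["photo"])
print(with_photo / len(letters))                      # close to 0.5 at every interest rate

Because the two draws are independent, comparing photo against no-photo averages isn't contaminated by the interest rate, and vice versa.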

Talk about your power of suggestion. Priming people with a pleasant picture or bringing to mind their possible need for a loan in a non-marketing context dramatically increased their likelihood of responding to the solicitation.

How do we know that the picture or the phone call really caused the higher response rate? Again, the answer is coin flipping. Randomizing over 50,000 people makes sure that, on average, those shown pictures and those not shown pictures were going to be pretty much the same on every other dimension. So any differences in the average response rate between the two groups must be caused by the difference in their treatment.

Of course, randomization doesn't mean that those who were sent photos are each exactly the same as those who were not sent photos. If we looked at the heights of people who received photo solicitations, we would see a bell curve of heights. The point is that we would see the same bell curve of heights for those who received solicitations without photos. Since the distribution of both groups becomes increasingly identical as the sample size increases, we can attribute any differences in the average group response to the difference in treatment.

In lab experiments, researchers create data by carefully controlling for everything to create matched pairs that are identical except for the thing being tested. Outside of the lab, it's sometimes simply impossible to create pairs that are the same on all the peripheral dimensions. Randomization is how businesses can create data without creating perfectly matched individual pairs. The process of randomization instead creates matched distributions. Randomization thus allows Super Crunchers to run the equivalent of a controlled test without actually having to laboriously match up and control for dozens or hundreds of potentially confounding variables.
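
You can watch the matched distributions emerge by randomizing a big group and then checking a trait the experimenter never controlled for. The simulated heights below are purely illustrative:

import random
from statistics import mean, stdev

random.seed(3)
heights = [random.gauss(170, 10) for _ in range(100_000)]   # simulated heights in centimeters

photo_group, no_photo_group = [], []
for height in heights:
    (photo_group if random.random() < 0.5 else no_photo_group).append(height)

# Nobody matched anyone on height, yet the two distributions come out nearly identical.
print(round(mean(photo_group), 2), round(stdev(photo_group), 2))
print(round(mean(no_photo_group), 2), round(stdev(no_photo_group), 2))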

The implications of the randomized marketing trials for lending profitability are pretty obvious. Instead of dropping the interest rate five percentage points, why not simply include a picture? When Credit Indemnity learned the results of the study, they were about to start doing just that. But shortly after the tests were analyzed, the bank was taken over. The new bank not only shut down future testing, it also laid off tons of the Credit Indemnity employees—including those who had been the strongest proponents of testing. Ironically, some of these former employees have taken the lessons of the testing to heart and are now implementing the results in their new jobs working for Credit Indemnity's competitors.

You May Be Watching a Random Web Page

Testing with chance isn't limited to banks and credit companies; indeed, Offermatica.com has turned Internet randomization into a true art form. Two brothers, Matt and James Roche, started Offermatica in 2003 to capitalize on the ease of randomizing on the Internet. Matt is its CEO and James works as the company's president. As their name suggests, Offermatica has automated the testing of offers. Want to know whether one web page design works better than another? Offermatica will set up software so that when people click on your site, either one page or the other will be sent at random. The software then can tell you, in real time, which page gets more “click throughs” and which generates more purchases.

What's more, they can let you conduct multiple tests at once. Just as Credit Indemnity randomly selected the interest rate and independently decided whether or not to include a photo, Offermatica can randomize over multiple dimensions of a web page's design.

For example, Monster.com wanted to test seven different elements of the employers' home pages. They wanted to know things like whether a link should say “Search and Buy Resumes” or just “Search Resumes” or whether there should be a “Learn More” link or not. All in all, Monster had 128 different page permutations that it wanted to test. But by using the “Taguchi Method,” Offermatica was able to test just eight “recipe” pages and still make accurate predictions about how the untested 120 other web pages would fare.
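
The trick behind testing eight pages instead of 128 is a fractional factorial design: choose the eight “recipes” so that each element's on/off setting is balanced against every other element's. One standard construction, sketched here for illustration (it builds the classic L8 orthogonal array rather than reproducing Offermatica's exact recipes), derives seven balanced columns from a three-factor full factorial:

from itertools import product

# Build an L8 orthogonal array: 8 recipe pages covering 7 two-level page elements.
recipes = []
for a, b, c in product([1, -1], repeat=3):            # 2 x 2 x 2 = 8 base runs
    # Seven mutually balanced columns: the three base factors and their products.
    recipes.append((a, b, a * b, c, a * c, b * c, a * b * c))

for recipe in recipes:
    print(recipe)   # +1 = include that element (say, the "Learn More" link); -1 = leave it out

# Every column is half +1 and half -1, and every pair of columns shows all four
# combinations equally often, which is enough to estimate each element's main
# effect from these eight pages instead of all 2**7 = 128 permutations.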
