Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Author: Cathy O'Neil
Tags: #Business & Economics, #General, #Social Science, #Statistics, #Privacy & Surveillance, #Public Policy, #Political Science
Math, in the form of complex models, fuels the predatory advertising that brings in prospects for these colleges. But by the time a recruiter is hounding prospective students on their cell phones, we’ve left the world of numbers behind. The sales pitches, with their promises of affordable tuition, bright career prospects, and upward mobility, aren’t that different from the promotions for magic elixirs, baldness cures, and vibrating belts that reduce waistline fat. They’re not new.
Yet a crucial component of a WMD is that it is damaging to many people’s lives. And with these types of predatory ads, the damage doesn’t begin until students start taking out big loans for their tuition and fees.
The crucial metric is the so-called 90-10 rule, included in the Higher Education Act of 1965. It stipulates that colleges cannot get more than 90 percent of their funding from federal aid. The thinking was that as long as the students had some “skin in the game” they would tend to take their education more seriously. But for-profit colleges quickly worked this ratio into their business plan. If students could scrape together a few thousand dollars, either from savings or bank loans, the universities could line them up for nine times that sum in government loans, making each student incredibly profitable.
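The arithmetic behind that business plan can be sketched in a few lines (a hypothetical illustration; the function name and the dollar figures in the example are mine, not the statute's):

```python
def max_federal_loans(student_contribution):
    """Under the 90-10 rule, at most 90 percent of a for-profit college's
    revenue may come from federal aid -- so each dollar a student pays out
    of pocket can be matched by up to nine federal-loan dollars."""
    return 9 * student_contribution

# A student who scrapes together $2,000 is worth up to $18,000 more
# to the college in government-backed loans.
print(max_federal_loans(2000))  # → 18000
```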
To many of the students, the loans sound like free money, and the school doesn’t take pains to correct this misconception. But it is debt, and many of them quickly find themselves up to their necks in it. The outstanding debt for students at the bankrupt Corinthian Colleges amounted to $3.5 billion. Almost all of it was backed by taxpayers and will never be repaid.
Some people no doubt attend for-profit colleges and emerge with knowledge and skills that serve them well. But do they fare better than graduates from community colleges, whose degrees cost a fraction as much? In 2014, investigators at CALDER/American Institutes for Research created nearly nine thousand fictitious résumés. Some of their fake job applicants held associate degrees from for-profit universities, others had similar diplomas from community colleges, while a third group had no college education at all. The researchers sent their résumés to job postings in seven major cities and then measured the response rate. They found that diplomas from for-profit colleges were worth less in the workplace than those from community colleges and about the same as a high school diploma. And yet these colleges cost on average 20 percent more than flagship public universities.
The feedback loop for this WMD is far less complicated than it is nefarious. The poorest 40 percent of the US population is in desperate straits. Many industrial jobs have disappeared, either replaced by technology or shipped overseas. Unions have lost their punch.
The top 20 percent of the population controls 89 percent of the wealth in the country, and the bottom 40 percent controls none of it. Their assets are negative: the average household in this enormous and struggling underclass has a net debt of $14,800, much of it in extortionate credit card accounts. What these people need is money. And the key to earning more money, they hear again and again, is education.
Along come the for-profit colleges with their highly refined WMDs to target and fleece the population most in need. They sell them the promise of an education and a tantalizing glimpse of upward mobility—while plunging them deeper into debt. They locate the pressing need in poor households, along with their ignorance and their aspirations, and then exploit it. And they do this at great scale. This leads to hopelessness and despair, along with skepticism about the value of education more broadly, and it exacerbates our country’s vast wealth gap.
It’s worth noting that these diploma mills drive inequality in both directions. The presidents of the leading for-profit universities make millions of dollars every year. For example, Gregory W. Cappelli, CEO of Apollo Education Group, the parent company of the University of Phoenix, took home $25.1 million in total compensation in 2011. At public universities, which have their own distortions, only football and basketball coaches can hope to make that much.
For-profit colleges, sadly, are hardly alone in deploying predatory ads. They have plenty of company. If you just think about where people are hurting, or desperate, you’ll find advertisers wielding their predatory models. One of the biggest opportunities, naturally, is for loans. Everyone needs money, but some more urgently than others. These people are not hard to find. The neediest are far more likely to reside in impoverished zip codes. And from a predatory advertiser’s perspective, they practically shout out for special attention with their queries on search engines and their clicks on coupons.
Like for-profit colleges, the payday loan industry operates WMDs. Some of them are run by legal operations, but the industry is fundamentally predatory, charging outrageous interest rates that average 574 percent on short-term loans that are flipped on average eight times—making them much more like long-term loans. They are critically supported by legions of data brokers and lead generators, many of them scam artists. Their advertisements pop up on computers and phones, offering fast access to cash. When the prospects fill out the applications, often including their bank information, they open themselves to theft and abuse.
In 2015, the Federal Trade Commission charged two data brokers with selling the loan applications of more than half a million consumers. According to the suit, the companies, Sequoia One of Tampa, Florida, and Gen X Marketing Group of nearby Clearwater, made off with customers’ phone numbers, employer details, social security numbers, and bank account information—and then sold them for about fifty cents each. The companies that bought the information, according to the regulators, raided the consumers’ bank accounts for “at least” $7.1 million. Many of the victims were subsequently charged bank fees for emptying out their account or bouncing checks.
If you think about the numbers involved, they’re almost pathetically low. Spread over a half million accounts, $7.1 million comes to barely $14 each. Even if the thieves failed to access many of these accounts, much of the money they stole no doubt came in small sums, the last $50 or $100 that some poor people keep in their accounts.
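The back-of-the-envelope division is easy to check, using the figures from the FTC case as reported above:

```python
stolen_total = 7_100_000   # "at least" $7.1 million raided, per the suit
applications = 500_000     # more than half a million applications sold

per_account = stolen_total / applications
print(f"${per_account:.2f} per account")  # → $14.20 per account
```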
Now regulators are pushing for new laws governing the market for personal data—a crucial input for all sorts of WMDs. To date, a couple of federal laws, such as the Fair Credit Reporting Act and the Health Insurance Portability and Accountability Act, or HIPAA, establish some limits on health and credit data. Maybe, with an eye on lead generators, they’ll add more.
However, as we’ll see in coming chapters, some of the most effective and nefarious WMDs manage to engineer work-arounds. They study everything from neighborhoods to Facebook friends to predict our behavior—and even lock us up.
The small city of Reading, Pennsylvania, has had a tough go of it in the postindustrial era. Nestled in the green hills fifty miles west of Philadelphia, Reading grew rich on railroads, steel, coal, and textiles. But in recent decades, with all of those industries in steep decline, the city has languished. By 2011, it had the highest poverty rate in the country, at 41.3 percent. (The following year, it was surpassed, if barely, by Detroit.) As the recession pummeled Reading’s economy following the 2008 market crash, tax revenues fell, which led to a cut of forty-five officers in the police department—despite persistent crime.
Reading police chief William Heim had to figure out how to get the same or better policing out of a smaller force. So in 2013 he invested in crime prediction software made by PredPol, a Big Data start-up based in Santa Cruz, California. The program processed historical crime data and calculated, hour by hour, where crimes were most likely to occur. The Reading policemen could view the program’s conclusions as a series of squares, each one just the size of two football fields. If they spent more time patrolling these squares, there was a good chance they would discourage crime. And sure enough, a year later, Chief Heim announced that burglaries were down by 23 percent.
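The kind of hot-spot scoring described here can be sketched in miniature. PredPol's actual model is proprietary, so this is only a toy illustration of the general approach, with invented data and names:

```python
from collections import Counter

def hotspot_squares(crime_records, top_n=3):
    """Count historical crimes per (grid square, hour-of-day) bucket and
    return the buckets with the most incidents -- the 'squares' a patrol
    would be directed to at that hour."""
    counts = Counter((rec["square"], rec["hour"]) for rec in crime_records)
    return [bucket for bucket, _ in counts.most_common(top_n)]

# Invented sample data: each record is a past crime with its grid square
# (roughly two football fields on a side) and the hour it occurred.
history = [
    {"square": "A3", "hour": 22}, {"square": "A3", "hour": 22},
    {"square": "B1", "hour": 14}, {"square": "A3", "hour": 23},
]
print(hotspot_squares(history, top_n=1))  # → [('A3', 22)]
```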
Predictive programs like PredPol are all the rage in budget-strapped police departments across the country. Departments from Atlanta to Los Angeles are deploying cops in the shifting squares and reporting falling crime rates. New York City uses a similar program, called CompStat. And Philadelphia police are using a local product called HunchLab that includes risk terrain analysis, which incorporates certain features, such as ATMs or convenience stores, that might attract crimes. Like those in the rest of the Big Data industry, the developers of crime prediction software are hurrying to incorporate any information that can boost the accuracy of their models.
If you think about it, hot-spot predictors are similar to the shifting defensive models in baseball that we discussed earlier. Those systems look at the history of each player’s hits and then position fielders where the ball is most likely to travel. Crime prediction software carries out similar analysis, positioning cops where crimes appear most likely to occur. Both types of models optimize resources. But a number of the crime prediction models are more sophisticated, because they predict progressions that could lead to waves of crime. PredPol, for example, is based on seismic software: it looks at a crime in one area, incorporates it into historical patterns, and predicts when and where it might occur next.
(One simple correlation it has found: if burglars hit your next-door neighbor’s house, batten down the hatches.)
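The seismic analogy can be made concrete with a minimal self-exciting model: each past crime temporarily raises the expected crime rate nearby, and the boost decays over time. This is a toy, Hawkes-style sketch with made-up parameters, not PredPol's actual model:

```python
import math

def crime_intensity(t, past_event_times, base_rate=0.1,
                    boost=0.5, decay=1.0):
    """Expected crime rate at time t in one grid square: a constant
    background rate plus an exponentially decaying 'aftershock' boost
    from each earlier crime in or near that square."""
    return base_rate + sum(
        boost * math.exp(-decay * (t - s))
        for s in past_event_times if s < t
    )

# A burglary at t=0 raises the predicted rate just afterward, much like
# an earthquake raises aftershock risk; the effect fades as time passes.
```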
Predictive crime models like PredPol have their virtues. Unlike the crime-stoppers in Steven Spielberg’s dystopian movie Minority Report (and some ominous real-life initiatives, which we’ll get to shortly), the cops don’t track down people before they commit crimes.
Jeffrey Brantingham, the UCLA anthropology professor who founded PredPol, stressed to me that the model is blind to race and ethnicity. And unlike other programs, including the recidivism risk models we discussed, which are used for sentencing guidelines, PredPol doesn’t focus on the individual. Instead, it targets geography. The key inputs are the type and location of each crime and when it occurred. That seems fair enough. And if cops spend more time in the high-risk zones, foiling burglars and car thieves, there’s good reason to believe that the community benefits.
But most crimes aren’t as serious as burglary and grand theft auto, and that is where serious problems emerge. When police set up their PredPol system, they have a choice. They can focus exclusively on so-called Part 1 crimes. These are the violent crimes, including homicide, arson, and assault, which are usually reported to them. But they can also broaden the focus by including Part 2 crimes, including vagrancy, aggressive panhandling, and selling and consuming small quantities of drugs. Many of these “nuisance” crimes would go unrecorded if a cop weren’t there to see them.
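The department's choice amounts to a data-selection switch: restrict the model's inputs to serious Part 1 crimes, or also feed it Part 2 "nuisance" offenses. A minimal sketch of that choice (the category sets are abbreviated from the examples in the text; the function is hypothetical):

```python
# Offense categories, abbreviated from the examples above.
PART1 = {"homicide", "arson", "assault"}
PART2 = {"vagrancy", "aggressive panhandling", "drug possession"}

def training_data(records, include_part2=False):
    """Select which recorded crimes are allowed to feed the predictive
    model: Part 1 only, or Part 1 plus Part 2 nuisance offenses."""
    allowed = PART1 | PART2 if include_part2 else PART1
    return [r for r in records if r["offense"] in allowed]

records = [{"offense": "homicide"}, {"offense": "vagrancy"}]
print(len(training_data(records)))                      # → 1
print(len(training_data(records, include_part2=True)))  # → 2
```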
These nuisance crimes are endemic to many impoverished neighborhoods. In some places police call them antisocial behavior, or ASB. Unfortunately, including them in the model threatens to skew the analysis. Once the nuisance data flows into a predictive model, more police are drawn into those neighborhoods, where they’re more likely to arrest more people. After all, even if their objective is to stop burglaries, murders, and rape, they’re bound to have slow periods. It’s the nature of patrolling. And if a patrolling cop sees a couple of kids who look no older than sixteen guzzling from a bottle in a brown bag, he stops them. These types of low-level crimes populate their models with more and more dots, and the models send the cops back to the same neighborhood.
This creates a pernicious feedback loop. The policing itself spawns new data, which justifies more policing. And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes. Most of them come from impoverished neighborhoods, and most are black or Hispanic. So even if a model is color blind, the result of it is anything but. In our largely segregated cities, geography is a highly effective proxy for race.
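The loop can be made explicit in a few lines of simulation. This is a deliberately crude sketch with invented numbers: two neighborhoods with identical true rates of nuisance crime, where crimes are only recorded when a patrol is present and patrols chase the recorded counts:

```python
def simulate_feedback(rounds=5, seed=(2, 1), patrols=(5, 5)):
    """Two neighborhoods with identical true nuisance-crime rates. Each
    patrol present records one nuisance crime per round, and each round
    one patrol shifts toward the neighborhood with more recorded crime."""
    recorded, patrols = list(seed), list(patrols)
    for _ in range(rounds):
        # Recorded crime grows wherever patrols are watching.
        recorded = [r + p for r, p in zip(recorded, patrols)]
        # Reallocate policing toward the apparent "hot" neighborhood.
        if recorded[0] != recorded[1]:
            hot = 0 if recorded[0] > recorded[1] else 1
            if patrols[1 - hot] > 0:
                patrols[hot] += 1
                patrols[1 - hot] -= 1
    return recorded, patrols

# One extra recorded incident at the start is enough to pull every
# patrol into that neighborhood within a few rounds.
print(simulate_feedback())  # → ([37, 16], [10, 0])
```

With identical starting records the allocation stays even; a single extra recorded incident tips the loop, and the data the patrols generate then "justifies" sending them back.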
If the purpose of the models is to prevent serious crimes, you might ask why nuisance crimes are tracked at all. The answer is that the link between antisocial behavior and crime has been an article of faith since 1982, when a criminologist named George Kelling teamed up with a public policy expert, James Q. Wilson, to write a seminal article in the Atlantic Monthly on so-called broken-windows policing. The idea was that low-level crimes and misdemeanors created an atmosphere of disorder in a neighborhood. This scared law-abiding citizens away. The dark and empty streets they left behind were breeding grounds for serious crime. The antidote was for society to resist the spread of disorder. This included fixing broken windows, cleaning up graffiti-covered subway cars, and taking steps to discourage nuisance crimes.