Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

by Cathy O'Neil

One of the people on the list, a twenty-two-year-old high school dropout named Robert McDaniel, answered his door one summer day in 2013 and found himself facing a police officer. McDaniel later told the Chicago Tribune that he had no history of gun violations and had never been charged with a violent crime. Like most of the young men in Austin, his dangerous West Side neighborhood, McDaniel had had brushes with the law, and he knew plenty of people caught up in the criminal justice system. The policewoman, he said, told him that the force had its eye on him and to watch out.

Part of the analysis that led police to McDaniel involved his social network. He knew criminals. And there is no denying that people are statistically more likely than not to behave like the people they spend time with. Facebook, for example, has found that friends who communicate often are far more likely to click on the same advertisement. Birds of a feather, statistically speaking, do fly together.

And to be fair to Chicago police, they’re not arresting people like Robert McDaniel, at least not yet. The goal of the police in this exercise is to save lives. If the four hundred people who appear most likely to commit violent crimes receive a knock on the door and a warning, maybe some of them will think twice before packing a gun.

But let’s consider McDaniel’s case in terms of fairness. He happened to grow up in a poor and dangerous neighborhood. In this, he was unlucky. He has been surrounded by crime, and many of his acquaintances have gotten caught up in it. And largely because of these circumstances—and not his own actions—he has been deemed dangerous. Now the police have their eye on him. And if he behaves foolishly, as millions of other Americans do on a regular basis, if he buys drugs or gets into a barroom fight or carries an unregistered handgun, the full force of the law will fall down on him, and probably much harder than it would on most of us. After all, he’s been warned.

I would argue that the model that led police to Robert McDaniel’s door has the wrong objective. Instead of simply trying to eradicate crime, police should be attempting to build relationships in the neighborhood. This was one of the pillars of the original “broken-windows” study. The cops were on foot, talking to people, trying to help them uphold their own community standards. But that objective, in many cases, has been lost, steamrollered by models that equate arrests with safety.

This isn’t the case everywhere. I recently visited Camden, New Jersey, which was the murder capital of the country in 2011. I found that the police department in Camden, rebuilt and placed under state control in 2012, had a dual mandate: lowering crime and engendering community trust. If building trust is the objective, an arrest may well become a last resort, not the first. This more empathetic approach could lead to warmer relations between the police and the policed, and fewer of the tragedies we’ve seen in recent years—the police killings of young black men and the riots that follow them.

From a mathematical point of view, however, trust is hard to quantify. That’s a challenge for people building models. Sadly, it’s far simpler to keep counting arrests, to build models that assume we’re birds of a feather and treat us as such. Innocent people surrounded by criminals get treated badly, and criminals surrounded by a law-abiding public get a pass. And because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets. The rest of us barely have to think about them.

 

A few years ago, a young man named Kyle Behm took a leave from his studies at Vanderbilt University. He was suffering from bipolar disorder and needed time to get treatment. A year and a half later, Kyle was healthy enough to return to his studies at a different school. Around that time, he learned from a friend about a part-time job at Kroger. It was just a minimum-wage job at a supermarket, but it seemed like a sure thing. His friend, who was leaving the job, could vouch for him. For a high-achieving student like Kyle, the application looked like a formality.

But Kyle didn’t get called back for an interview. When he inquired, his friend explained that he had been “red-lighted” by the personality test he’d taken when he applied for the job. The test was part of an employee selection program developed by Kronos, a workforce management company based outside of Boston. When Kyle told his father, Roland, an attorney, what had happened, his father asked him what kind of questions had appeared on the test. Kyle said that they were very much like the “Five Factor Model” test, which he’d been given at the hospital. That test grades people on extraversion, agreeableness, conscientiousness, neuroticism, and openness to ideas.

At first, losing one minimum-wage job because of a questionable test didn’t seem like such a big deal. Roland Behm urged his son to apply elsewhere. But Kyle came back each time with the same news. The companies he was applying to were all using the same test, and he wasn’t getting offers. Roland later recalled: “Kyle said to me, ‘I had an almost perfect SAT and I was at Vanderbilt a few years ago. If I can’t get a part-time minimum-wage job, how broken am I?’ And I said, ‘I don’t think you’re that broken.’ ”

But Roland Behm was bewildered. Questions about mental health appeared to be blackballing his son from the job market. He decided to look into it and soon learned that the use of personality tests for hiring was indeed widespread among large corporations. And yet he found very few legal challenges to this practice. As he explained to me, people who apply for a job and are red-lighted rarely learn that they were rejected because of their test results. Even when they do, they’re not likely to contact a lawyer.

Behm went on to send notices to seven companies—Finish Line, Home Depot, Kroger, Lowe’s, PetSmart, Walgreen Co., and Yum Brands—informing them of his intent to file a class-action suit alleging that the use of the exam during the job application process was unlawful.

The suit, as I write this, is still pending. Arguments are likely to focus on whether the Kronos test can be considered a medical exam, the use of which in hiring is illegal under the Americans with Disabilities Act of 1990. If it can, the court will have to determine whether the hiring companies themselves are responsible for running afoul of the ADA, or whether Kronos is.

The question for this book is how automatic systems judge us when we seek jobs and what criteria they evaluate. Already, we’ve seen WMDs poisoning the college admissions process, both for the rich and for the middle class. Meanwhile, WMDs in criminal justice rope in millions, the great majority of them poor, most of whom never had the chance to attend college at all. Members of each of these groups face radically different challenges. But they have something in common, too. They all ultimately need a job.

Finding work used to be largely a question of whom you knew. In fact, Kyle Behm was following the traditional route when he applied for work at Kroger. His friend had alerted him to the opening and put in a good word. For decades, that was how people got a foot in the door, whether at grocers, the docks, banks, or law firms. Candidates then usually faced an interview, where a manager would try to get a feel for them. All too often this translated into a single basic judgment: Is this person like me (or others I get along with)? The result was a lack of opportunity for job seekers without a friend inside, especially if they came from a different race, ethnic group, or religion. Women also found themselves excluded by this insider game.

Companies like Kronos brought science into corporate human resources in part to make the process fairer. Kronos, founded in the 1970s by MIT graduates, started with a new kind of punch clock, one equipped with a microprocessor, which added up employees’ hours and reported them automatically. This may sound banal, but it was the beginning of the electronic push (now blazing along at warp speed) to track and optimize a workforce.

As Kronos grew, it developed a broad range of software tools for workforce management, including a program, Workforce Ready HR, that promised to eliminate “the guesswork” in hiring. According to its web page: “We can help you screen, hire, and onboard candidates most likely to be productive—the best-fit employees who will perform better and stay on the job longer.”

Kronos is part of a burgeoning industry. The hiring business is automating, and many of the new programs include personality tests like the one Kyle Behm took. Personality testing is now a $500 million annual business and is growing by 10 to 15 percent a year, according to Hogan Assessment Systems Inc., a testing company. Such tests are now used on 60 to 70 percent of prospective workers in the United States, up from 30 to 40 percent about five years ago, estimates Josh Bersin of the consulting firm Deloitte.

Naturally, these hiring programs can’t incorporate information about how the candidate would actually perform at the company. That’s in the future, and therefore unknown. So like many other Big Data programs, they settle for proxies. And as we’ve seen, proxies are bound to be inexact and often unfair. In fact, the Supreme Court ruled in a 1971 case, Griggs v. Duke Power Company, that intelligence tests for hiring were discriminatory and therefore illegal. One would think that case might have triggered some soul-searching. But instead the industry simply opted for replacements, including personality tests like the one that red-lighted Kyle Behm.

Even putting aside the issues of fairness and legality, research suggests that personality tests are poor predictors of job performance. Frank Schmidt, a business professor at the University of Iowa, analyzed a century of workplace productivity data to measure the predictive value of various selection processes. Personality tests ranked low on the scale—they were only one-third as predictive as cognitive exams, and also far below reference checks. This is particularly galling because certain personality tests, research shows, can actually help employees gain insight into themselves. They can also be used for team building and for enhancing communication. After all, they create a situation in which people think explicitly about how to work together. That intention alone might end up creating a better working environment. In other words, if we define the goal as a happier worker, personality tests might end up being a useful tool.

But instead they’re being used as a filter to weed out applicants. “The primary purpose of the test,” said Roland Behm, “is not to find the best employee. It’s to exclude as many people as possible as cheaply as possible.”

You might think that personality tests would be easy to game. If you go online to take a Five Factor Personality Test, it looks like a cinch. One question asks: “Have frequent mood swings?” It would probably be smart to answer “very inaccurate.” Another asks: “Get mad easily?” Again, check no. Not too many companies want to hire hotheads.

In fact, companies can get in trouble for screening out applicants on the basis of such questions. Regulators in Rhode Island found that CVS Pharmacy was illegally screening out applicants with mental illnesses when a personality test required respondents to agree or disagree with statements such as “People do a lot of things that make you angry” and “There’s no use having close friends; they always let you down.” More intricate questions, which are harder to game, are more likely to keep the companies out of trouble. Consequently, many of the tests used today force applicants to make difficult choices, likely leaving them with a sinking feeling of “Damned if I do, damned if I don’t.”

McDonald’s, for example, asked prospective workers to choose which of the following best described them:

“It is difficult to be cheerful when there are many problems to take care of”

or

“Sometimes, I need a push to get started on my work.”

The Wall Street Journal asked an industrial psychologist, Tomas Chamorro-Premuzic, to analyze thorny questions like these. The first item, Chamorro-Premuzic said, captured “individual differences in neuroticism and conscientiousness”; the second, “low ambition and drive.” So the prospective worker is pleading guilty to being either high-strung or lazy.

A Kroger question was far simpler: Which adjective best describes you at work, unique or orderly?

Answering “unique,” said Chamorro-Premuzic, captures “high self-concept, openness and narcissism,” while “orderly” expresses conscientiousness and self-control.

Note that there’s no option to answer “all of the above.” Prospective workers must pick one option, without a clue as to how the program will interpret it. And some of the analysis will draw unflattering conclusions. If you go to a kindergarten class in much of the country, for example, you’ll often hear teachers emphasize to the children that they’re unique. It’s an attempt to boost their self-esteem and, of course, it’s true. Yet twelve years later, when that student chooses “unique” on a personality test while applying for a minimum-wage job, the program might read the answer as a red flag: Who wants a workforce peopled with narcissists?
