Authors: Tobias Moskowitz
On its face, it seems an easy call, right? You’d choose to do it because not vaccinating has twice the mortality rate of the vaccination. However, most parents in the survey opted not to vaccinate their children. Why? Because it caused 5 deaths per 10,000; never mind that without the vaccine, their children faced twice the risk of death from the flu. Those who would not permit vaccinations indicated that they would “feel responsible if anything happened because of [the] vaccine.” The same parents tended to dismiss the notion that they would “feel responsible if anything had happened because I failed to vaccinate.” In other words, many parents felt more responsible for a bad outcome if it followed their own actions than if it simply resulted from lack of action.
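To see just how lopsided the arithmetic is, here is a minimal back-of-the-envelope sketch; the 5-per-10,000 figure is the vaccine risk from the survey scenario, and the flu risk is simply twice that, as stated above.

```python
# Expected deaths per 10,000 children under each choice,
# using the rates described in the survey scenario above.
vaccine_deaths_per_10k = 5                          # deaths caused by the vaccine itself
flu_deaths_per_10k = 2 * vaccine_deaths_per_10k     # "twice the risk" without the vaccine

print(f"Vaccinate:       {vaccine_deaths_per_10k} expected deaths per 10,000")
print(f"Don't vaccinate: {flu_deaths_per_10k} expected deaths per 10,000")
# Vaccinating halves the expected death toll, yet most parents chose not to vaccinate.
```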
In other studies, subjects consistently view various actions taken as less moral than actions not taken—even when the results are the same or worse. Subjects, for instance, were asked to assess the following situation: John, a tennis player, has to face a tough opponent tomorrow in a decisive match. John knows his opponent is allergic to a particular food. In the first scenario, John recommends the food containing the allergen to hurt his unknowing opponent’s performance. In the second, the opponent mistakenly orders the allergenic food, and John, knowing his opponent might get sick, says nothing. A majority of people judged that John’s action of recommending the allergenic food was far more immoral than John’s inaction of not informing the opponent of the allergenic substance. But are they really different?
Think about how we act in our daily lives. Most of us probably would contend that telling a direct lie is worse than withholding the truth. Missing the opportunity to pick the right spouse is bad but not nearly as bad as actively choosing the wrong one. Declining to eat healthy food may be a poor choice; eating junk food is worse. You might feel a small stab of regret over not raising your hand in class to give the correct answer, but raise your hand and provide the wrong answer and you feel much worse.
Psychologists have found that people view inaction as less causal, less blameworthy, and less harmful than action even when the outcomes are the same or worse. Doctors subscribe to this philosophy. The first principle imparted to all medical students is “Do no harm.” It’s not, pointedly, “Do some good.” Our legal system draws a similar distinction, seldom assigning an affirmative duty to rescue. Submerge someone in water and you’re in trouble. Stand idly by while someone flails in the pool before drowning and—unless you’re the lifeguard or a doctor—you won’t be charged with failing to rescue that person.
In business, we see the same omission bias. When is a stockbroker in bigger trouble? When she neglects to buy a winning stock and, say, misses getting in on the Google IPO? Or when she invests in a dog, buying shares of Lehman Brothers with your retirement nest egg? Ask hedge fund managers and, at least in private, they’ll confess that losing a client’s money on a wrong pick gets them fired far more easily than missing out on the year’s big winner. And they act accordingly.
In most large companies, managers are obsessed with avoiding actual errors rather than with missed opportunities. Errors of commission are often attributed to an individual, and responsibility is assigned. People rarely are held accountable for failing to act, though those errors can be just as costly. As Jeff Bezos, the founder of Amazon, put it during a 2009 management conference: “People overfocus on errors of commission. Companies overemphasize how expensive failure’s going to be. Failure’s not that expensive.… The big cost that most companies incur is much harder to notice, and those are errors of omission.”
This same thinking extends to sports officials. When referees are trained and evaluated in the NBA, they are told that there are four basic kinds of calls: correct calls, incorrect calls, correct noncalls, and incorrect noncalls. The goal, of course, is to be correct on every call and noncall. But if you make a call, you’d better be right. “It’s late in the game and, let’s say, there’s goaltending and you miss it. That’s an incorrect noncall and that’s bad,” says Gary Benson, an NBA ref for 17 years. “But let’s say it’s late in the game and you call goaltending on a play and the replay shows it was an incorrect call. That’s when you’re in a really deep mess.”
Especially during crucial intervals, officials often take pains not to insinuate themselves into the game. In the NBA, there’s an unwritten directive: “When the game steps up, you step down.” “As much as possible, you gotta let the players determine who wins and loses,” says Ted Bernhardt, another longtime NBA ref. “It’s one of the first things you learn on the job. The fans didn’t come to see you. They came to see the athletes.”
It’s a noble objective, but it expresses an unmistakable bias, and one could argue that it is worse than the normal, random mistakes officials make during a game. Random referee errors, though annoying, can’t be predicted and tend to balance out over time, not favoring one team over the other. With random errors, the system can’t be gamed. A systematic bias is different, conferring a clear advantage (or disadvantage) on one type of player or team over another and enabling us—to say nothing of savvy teams, players, coaches, executives, and, yes, gamblers—to predict who will benefit from the officiating in which circumstances. As fans, sure, we want games to be officiated accurately, but what we should really want is for games to be officiated without bias. Yet that’s not the case.
Start with baseball. In 2007, Major League Baseball’s website, mlb.com, installed cameras in ballparks to track the location of every pitch, accurate to within a centimeter, so that fans could follow games on their handhelds, pitch by pitch. The data—called Pitch f/x—track not only the location but also the speed, movement, and type of pitch. We used the data, containing nearly 2 million pitches and 1.15 million called pitches, for a different purpose: to evaluate the accuracy of umpires. First, the data reveal that umpires are staggeringly accurate. On average, umpires make erroneous calls only 14.4 percent of the time. That’s impressive, especially considering that the average pitch starts out at 92 mph, crosses the plate at more than 85 mph, and usually has been garnished with all sorts of spin and movement.
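For readers who want the mechanics, here is a rough sketch of how an overall error rate like this could be computed from pitch-location data. The column names (plate_x, plate_z, umpire_call) and the zone boundaries are illustrative assumptions, not the actual Pitch f/x field names, and the zone check ignores batter-specific zone heights.

```python
import pandas as pd

def is_in_zone(pitch, half_width=0.83, sz_bot=1.5, sz_top=3.5):
    """Rule-book strike zone check, in feet; boundaries here are rough, illustrative values."""
    return abs(pitch["plate_x"]) <= half_width and sz_bot <= pitch["plate_z"] <= sz_top

def overall_error_rate(called: pd.DataFrame) -> float:
    """Share of called pitches where the umpire's call disagrees with the rule-book zone."""
    in_zone = called.apply(is_in_zone, axis=1)
    called_strike = called["umpire_call"] == "strike"
    errors = in_zone != called_strike   # true strike called a ball, or true ball called a strike
    return errors.mean()
```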
But those numbers change dramatically depending on the situation. Suppose a batter is facing a two-strike count; one more called strike and he’s out. Looking at all called pitches in baseball over the last three years that are actually within the strike zone on two-strike counts (and removing full counts where there are two strikes and three balls on the batter), we observed that umpires make the correct call only 61 percent of the time. That is, umpires erroneously call these pitches balls 39 percent of the time. So on a two-strike count, umpires have more than twice their normal error rate—and in the batters’ favor.
What about the reverse situation, when the batter has a three-ball count and the next pitch could result in a walk? Omission bias suggests that umpires will be more reluctant to call the fourth ball, which would give the batter first base. Looking at all pitches that are actually outside the strike zone, the normal error rate for an umpire is 12.2 percent. However, when there are three balls on the batter (excluding full counts), the umpire will erroneously call strikes on the same pitches 20 percent of the time.
In other words, rather than issue a walk or strikeout, umpires seem to want to prolong the at-bat and let the players determine the outcome. They do this even if it means making an incorrect call—or, at the very least, refraining from making a call they would make under less pressured circumstances.
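Continuing the sketch above (same assumed columns, plus hypothetical balls and strikes count fields and the is_in_zone helper, with called standing in for a DataFrame of called pitches), the two conditional error rates described here could be computed roughly like this:

```python
def conditional_error_rate(called, count_mask, true_strikes=True):
    """Error rate on called pitches in a given count situation.
    true_strikes=True  restricts to pitches inside the rule-book zone (an error is a 'ball' call);
    true_strikes=False restricts to pitches outside the zone (an error is a 'strike' call)."""
    sub = called[count_mask]
    in_zone = sub.apply(is_in_zone, axis=1)
    sub = sub[in_zone] if true_strikes else sub[~in_zone]
    wrong_call = "ball" if true_strikes else "strike"
    return (sub["umpire_call"] == wrong_call).mean()

# `called` is assumed to be a DataFrame of called pitches with the columns described above.
# Two strikes on the batter, full counts excluded: how often is a true strike called a ball?
two_strike = (called["strikes"] == 2) & (called["balls"] < 3)
rate_two_strike = conditional_error_rate(called, two_strike, true_strikes=True)

# Three balls on the batter, full counts excluded: how often is a true ball called a strike?
three_ball = (called["balls"] == 3) & (called["strikes"] < 2)
rate_three_ball = conditional_error_rate(called, three_ball, true_strikes=False)
```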
The graph on this page plots the actual strike zone according to MLB rules, represented by the box outlined in black. Taking all called pitches, we plot the “empirical” strike zone based on calls the umpire is actually making in two-strike and three-ball counts. Using the Pitch f/x data, we track the location of every called pitch and define any pitch that is called a strike more than half the time to be within the empirical strike zone. The strike zone for two-strike counts is represented by the dashed lines, and for three-ball counts it is represented by the darker solid area.
The graph shows that the umpire’s strike zone shrinks considerably when there are two strikes on the batter. Many pitches that are technically within the strike zone are not called strikes when that would result in a called third strike. Conversely, the umpire’s strike zone expands significantly when there are three balls on the batter, going so far as to include pitches that are more than several inches outside the strike zone. To give a sense of the difference, the strike zone on three-ball counts is 93 square inches larger than the strike zone on two-strike counts.
* Box represents the rules-mandated strike zone. Tick marks represent a half inch.
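One plausible way to construct the empirical zone described above and to measure its area: grid the region around the plate into small cells, mark a cell as part of the zone if a majority of the called pitches landing in it were called strikes, and add up the cell areas. This continues the earlier sketch and is only an illustration of the idea, not the authors’ exact procedure.

```python
import numpy as np

def empirical_zone_area(called, cell_inches=1.0, x_range=(-18, 18), z_range=(6, 54)):
    """Approximate area (square inches) of the region where a called pitch
    is ruled a strike more than half the time."""
    x = called["plate_x"].to_numpy() * 12.0   # feet -> inches from the center of the plate
    z = called["plate_z"].to_numpy() * 12.0   # feet -> inches above the ground
    strike = (called["umpire_call"] == "strike").to_numpy().astype(float)

    xbins = np.arange(x_range[0], x_range[1] + cell_inches, cell_inches)
    zbins = np.arange(z_range[0], z_range[1] + cell_inches, cell_inches)

    strikes, _, _ = np.histogram2d(x, z, bins=[xbins, zbins], weights=strike)
    totals, _, _ = np.histogram2d(x, z, bins=[xbins, zbins])

    strike_rate = np.divide(strikes, totals, out=np.zeros_like(strikes), where=totals > 0)
    in_zone = strike_rate > 0.5                # majority-strike cells form the empirical zone
    return in_zone.sum() * cell_inches ** 2

# Applied separately to the two-strike and three-ball subsets defined earlier, the
# difference between the two areas is the kind of gap reported in the text.
```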
The omission bias should be strongest when making the right call would have a big influence on the game but missing the call would not. (Call what should be a ball a strike on a 3–0 pitch and, big deal, the count is only 3–1.) Keeping that in mind, look at the next graph. The strike zone is smallest when there are two strikes and no balls (count is 0–2) and largest when there are three balls and no strikes (count is 3–0).
Box represents the rules-mandated strike zone. Tick marks represent a half inch.
The strike zone on 3–0 pitches is 188 square inches larger than it is on 0–2 counts. That’s an astonishing difference, and it can’t be a random error.
We also can look at the specific location of pitches. Even for obvious pitches, such as those in the dead center of the plate or those waaay outside the strike zone—which umpires rarely miss—the pitch will be called differently depending on the strike count. The umpire will make a bad call to prolong the at-bat even when the pitch is obvious. So what happens with the less obvious pitches? On the most ambiguous pitches, those just on or off the corners of the strike zone that are not clearly balls or strikes, umpires have the most discretion. And here, not surprisingly, omission bias is the most extreme. The table below shows how strike-ball calls vary considerably depending on the situation.