How We Know What Isn't So
Thomas Gilovich

First, people tend to be insufficiently conservative or “regressive” when making predictions. Parents expect a child who excels in school one year to do as well or better the following year; shareholders expect a company that has had a banner year to earn as much or more the next. In each case, the predicted performance is simply matched to initial performance without taking into account the likely effects of regression. This tendency for people’s predictions to be insufficiently regressive has been implicated in the high rate of business failures, in disastrous personnel hiring decisions, and in non-conservative risk estimates made by certified public accountants.

A particularly striking demonstration of people’s insensitivity to regression effects was provided by an experiment in which the participants were asked to predict the grade-point averages (GPAs) of ten hypothetical students on the basis of one of two types of information.[16] Some were given information that is perfectly predictive of GPA (the targets’ GPA not in “raw” form such as “4.0,” but in “percentile” form such as “99th percentile”). Others were given information that was described as less diagnostic of GPA (the targets’ score on a test of sense of humor). Statistical theory dictates that the better one’s basis of prediction, the less regressive one needs to be. Thus, those who based their estimates on the perfectly predictive information need not have been regressive at all; in contrast, the estimates based on the students’ sense of humor should have been regressed considerably (i.e., a nearly-average GPA should have been predicted for each student, regardless of the student’s score on the relatively uninformative test of sense of humor).
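
To make the statistical point concrete, here is a minimal sketch in Python of what appropriately regressive prediction looks like; the mean GPA, its standard deviation, and the correlation assumed for the humor test are invented numbers, not figures from the study.

    # Appropriately regressive prediction: an extreme predictor value is pulled
    # back toward the mean in proportion to its correlation with the outcome.
    # The mean, standard deviation, and correlations below are illustrative only.
    from statistics import NormalDist

    def predicted_gpa(percentile, r, mean_gpa=2.8, sd_gpa=0.6):
        """Predict GPA from a predictor reported in percentile form.

        percentile: the target's standing on the predictor (0-100)
        r: correlation between the predictor and GPA
           (1.0 for GPA-as-percentile, much lower for a humor test)
        """
        z_predictor = NormalDist().inv_cdf(percentile / 100)  # percentile to z-score
        z_predicted = r * z_predictor                         # regression toward the mean
        return mean_gpa + z_predicted * sd_gpa

    # A student at the 90th percentile:
    print(predicted_gpa(90, r=1.0))  # about 3.57: no regression needed
    print(predicted_gpa(90, r=0.2))  # about 2.95: close to the average GPA

With a perfectly diagnostic predictor the prediction may legitimately be as extreme as the input; with a weakly diagnostic one, the appropriate prediction is barely distinguishable from the class average, which is exactly the adjustment the respondents failed to make.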

That is not what happened. The predictions made by the respondents in the two groups were nearly identical, and only minimally regressive. Students who supposedly scored at the 90th percentile, for example, were predicted to have the same GPA, regardless of whether their percentile ranking referred to their GPA or their sense of humor. The regression effect was just not incorporated into the participants’ predictions.

This tendency to make non-regressive predictions, like the clustering illusion, can be attributed to the compelling nature of judgment by representativeness. In this case, people’s judgments reflect the intuition that the prediction ought to resemble the predictor as much as possible, and thus that it should deviate from the average to the same extent. The most representative son of a 6′5″ father is one who is 6′5″ himself—a height that is reached by only a minority of such fathers’ sons. Once again, judgment by representativeness produces overgeneralization. In this case, people correctly recognize that if variables x and y are related, the value of x is helpful in predicting y, and that therefore relatively extreme values of y should be predicted for extreme values of x (e.g., we expect tall parents to have tall children, and our expectation is usually confirmed). However, this intuition is often taken too far, and the predictions made about y tend to be as extreme as the input variable x rather than regressed toward the average of y (e.g., few parents who are 6′5″ have children as tall as they are).

A second, related problem that people have with regression is known as the regression fallacy. The regression fallacy refers to the tendency to fail to recognize statistical regression when it occurs, and instead to “explain” the observed phenomena with superfluous and often complicated causal theories. A lesser performance that follows a brilliant one is attributed to slacking off; a slight improvement in felony statistics following a crime wave is attributed to a new law enforcement policy. The regression fallacy is analogous to the clustering illusion: Both represent cases of people extracting too much meaning from chance events. By developing elaborate explanations for phenomena that are the predictable result of statistical regression, people form spurious beliefs about phenomena and causal relations in everyday life.

Examples of erroneous beliefs produced by the regression fallacy pervade many walks of life. There are many such examples in the sports world, for instance, one of the best being the widespread belief in the “Sports Illustrated jinx.” Many individuals associated with the world of athletics believe that it is bad luck to be pictured on the cover of Sports Illustrated magazine.[17] Doing so is thought to spell doom for whatever success was responsible for getting oneself or one’s team on the cover in the first place. Olympic medalist Shirley Babashoff, for example, reportedly balked at getting her picture taken for Sports Illustrated before the 1976 Olympics because of her fear of the jinx (she was eventually persuaded to pose when reminded that a cover story on Mark Spitz had not prevented him from winning seven gold medals in the previous Olympic games).

It does not take much statistical sophistication to see how regression effects may be responsible for the belief in the Sports Illustrated jinx. Athletes’ performances at different times are imperfectly correlated. Thus, due to regression alone, we can expect an extraordinarily good performance to be followed, on the average, by a somewhat less extraordinary performance. Athletes appear on the cover of Sports Illustrated when they are newsworthy—i.e., when their performance is extraordinary. Thus, an athlete’s superior performance in the weeks preceding a cover story is very likely to be followed by somewhat poorer performance in the weeks after. Those who believe in the jinx, like those who believe in the hot hand, are mistaken, not in what they observe, but in how they interpret what they see. Many athletes do suffer a deterioration in their performance after being pictured on the cover of Sports Illustrated; the mistake lies in citing a jinx, rather than regression, as the interpretation of this phenomenon.
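
The regression account of the jinx is easy to check with a small simulation, a sketch under assumed numbers rather than data about real athletes: model each period’s performance as stable ability plus independent luck, put whoever did best in the first period “on the cover,” and their average performance in the second period drops even though nothing about them has changed.

    # Simulating the "Sports Illustrated jinx" as nothing but regression.
    # Assumed model: performance = stable ability + independent chance; all
    # numbers are invented for illustration.
    import random

    random.seed(1)
    N = 10_000  # hypothetical athletes

    ability = [random.gauss(0, 1) for _ in range(N)]
    period1 = [a + random.gauss(0, 1) for a in ability]  # before the cover
    period2 = [a + random.gauss(0, 1) for a in ability]  # after the cover

    # "Cover athletes": the top 1 percent of performers in the first period.
    cutoff = sorted(period1, reverse=True)[N // 100]
    cover = [i for i in range(N) if period1[i] > cutoff]

    before = sum(period1[i] for i in cover) / len(cover)
    after = sum(period2[i] for i in cover) / len(cover)
    print(f"average performance before the cover: {before:.2f}")
    print(f"average performance after the cover:  {after:.2f}")  # reliably lower

The drop appears on every run, even though appearing “on the cover” has, by construction, no effect on anything.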

The regression fallacy also plays a role in shaping parents’ and teachers’ beliefs about the relative effectiveness of reward and punishment in producing desired behavior and learning. Psychologists have known for some time that rewarding desirable responses is generally more effective in shaping behavior than punishing undesirable responses.[19] However, the average person tends to find this fact surprising, and punishment has been the preferred reinforcer for the majority of parents both in modern society[19] and in earlier periods.[20] One explanation for this discrepancy between common practice and the recommendation of psychologists is that regression effects may mask the true effectiveness of reward, and spuriously boost the apparent effectiveness of punishment. Rewards are most likely to be given following another person’s extraordinarily good performance. However, regression guarantees that on the average such extraordinary performances will be followed by deterioration. The reward will thus appear ineffective or counterproductive. In contrast, because bad performances tend to be followed by improvement, any punishment meted out after a disappointing performance will appear to have been beneficial. Regression effects, in other words, serve to “punish the administration of reward, and to reward the administration of punishment.”[21]

An intriguing demonstration of this phenomenon was provided by an experiment in which the participants played the role of a teacher trying to encourage a hypothetical student to arrive for school on time at 8:30 A.M.[22] A computer displayed the “student’s” arrival time, which varied from 8:20 to 8:40, for each of 15 consecutive days, one at a time. On each day, the participants were allowed to praise, reprimand, or issue no comment to the student. Predictably, the participants elected to praise the student whenever he was early or on time, and to reprimand him when he was late. The student’s arrival time, however, was pre-programmed and thus was not connected to the participant’s response on the previous day. Nevertheless, due to regression alone, the student’s arrival time tended to improve (to regress toward 8:30) after he was punished for being late, and to deteriorate (again, by regressing to 8:30) after being praised for arriving early. As a result, 70% of the participants concluded that reprimand was more effective than praise in producing prompt attendance by the student. Regression effects teach us specious lessons about the relative effectiveness of reward and punishment.
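
A rough re-creation of that demonstration, sketched in Python with an assumed arrival-time distribution and far more trials than the original 15 days, shows the same trap: feedback has no effect whatsoever on the pre-programmed arrival times, yet the day after a reprimand looks better and the day after praise looks worse.

    # Arrival times are random around 8:30 and completely unaffected by feedback,
    # yet reprimands appear to "work" and praise appears to "backfire" because of
    # regression alone. The distribution and number of days are assumptions.
    import random

    random.seed(7)
    DAYS = 100_000  # many simulated days so the averages are stable

    arrival = [random.uniform(-10, 10) for _ in range(DAYS)]  # minutes relative to 8:30

    change_after_praise, change_after_reprimand = [], []
    for today, tomorrow in zip(arrival, arrival[1:]):
        if today <= 0:   # early or on time: praise
            change_after_praise.append(tomorrow - today)
        else:            # late: reprimand
            change_after_reprimand.append(tomorrow - today)

    print(sum(change_after_praise) / len(change_after_praise))        # about +5: later after praise
    print(sum(change_after_reprimand) / len(change_after_reprimand))  # about -5: earlier after reprimand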

CODA
 

Perhaps the reader has anticipated how the two difficulties discussed in this chapter—the clustering illusion and the regression fallacy—can combine to produce firmly-held, but questionable beliefs. In particular, they may combine to produce a variety of superstitious beliefs about how to end a bad streak or how to prolong a good one. A modest “streak” of good or bad performance may be assigned too much significance initially, making its likely regression even more salient and in even greater need of explanation. An episode I witnessed during a recent trip to Israel provides a good example.

A flurry of deaths by natural causes in the northern part of the country led to speculation about some new and unusual threat. It was not determined whether the increase in the number of deaths was within the normal fluctuation in the death rate that one can expect by chance. Instead, remedies for the problem were quickly put in place. In particular, a group of rabbis attributed the problem to the sacrilege of allowing women to attend funerals, formerly a forbidden practice. The remedy was a decree that subsequently barred women from funerals in the area. The decree was quickly enforced, and the rash of unusual deaths subsided—leaving one to wonder what the people in this area have concluded about the effectiveness of their remedy.[23]

Examples like this illustrate how the misperception of random sequences and the misinterpretation of regression can lead to the formation of superstitious beliefs. Furthermore, such beliefs, and the explanations offered for them, do not remain isolated convictions, but serve to bolster or create more general beliefs—in this case about the wisdom of religious officials, the “proper” role of women in society, and even the existence of a powerful and watchful god.

*
The sequence is random in the sense that there is no correlation between the outcomes of consecutive shots. The number of adjacent shots with the same outcome (i.e., xx or oo) in the sequence is equal to the number of adjacent shots with different outcomes (i.e., xo or ox).

*
The appropriate test in this case is the chi-square test, and the obtained chi-square value is 20.69. The probability of obtaining a chi-square value this large by chance alone is less than 1 in 1,000.

*
To understand why regression occurs, consider the relation between a person’s scores on the Scholastic Aptitude Test (SAT) on two occasions. Each score can be thought of as a reflection of the person’s true ability level plus some “chance error” that either improves or lowers the observed result (e.g., some answers may have been mere guesses that turned out to be correct or incorrect, the room might be unusually noisy or quiet, the person might have slept poorly or well the previous evening, etc.). A very high score is more likely to be the result of a less extraordinary true ability that has been helped by chance error, than of an even more extraordinary true ability that has been hurt by it—simply because there are more of the former than the latter (truly extraordinary ability is rare by definition). As a consequence, an extraordinarily high score at one time will tend to be less extreme the next time because it is unlikely to be paired again with such a favorable chance error. To see this more clearly, consider the case in which someone receives the highest score possible on the SAT, 800 points. Because those who receive such scores cannot score any higher the next time, their scores on a subsequent test will either be the same (the person has true 800 “aptitude”) or lower (the person has less “aptitude” but was lucky the first time). On average, then, the SAT scores of those getting an 800 the first time will be lower than 800 the second. Analogous logic explains why those who do poorly the first time tend to do better the second.
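
The same argument can be verified with a quick simulation; the ability and error distributions below are invented for illustration, with scores clipped to the 200-800 scale.

    # True-score-plus-error model behind regression to the mean on the SAT.
    # Distributions are invented for illustration.
    import random

    random.seed(3)
    N = 200_000

    def one_sitting(true_ability):
        """One test administration: true ability plus chance error, clipped to the scale."""
        return max(200, min(800, round(true_ability + random.gauss(0, 50))))

    true_ability = [random.gauss(500, 100) for _ in range(N)]
    first = [one_sitting(t) for t in true_ability]
    second = [one_sitting(t) for t in true_ability]

    top = [i for i in range(N) if first[i] >= 750]
    print(sum(first[i] for i in top) / len(top))   # at least 750 by construction
    print(sum(second[i] for i in top) / len(top))  # noticeably lower on average

Conditioning on an extreme first score selects mostly people whose luck, not just whose ability, was unusually good, so their second scores fall back toward the mean.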

3
Too Much from Too Little
The Misinterpretation of Incomplete and Unrepresentative Data

They still cling stubbornly to the idea that the only good answer is a yes answer. If they say, “Is the number between 5,000 and 10,000?” and I say yes, they cheer; if I say no, they groan, even though they get exactly the same amount of information in either case.

John Holt, Why Children Fail
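
Holt’s point about equal information can be made precise with a two-line calculation; the sketch below assumes the secret number is drawn uniformly from 1 to 10,000, a range the quote itself does not specify.

    # If the number is uniform on 1..10,000 (an assumed range), then "yes" and
    # "no" to "Is it between 5,000 and 10,000?" each cut the possibilities
    # roughly in half and therefore carry essentially the same information.
    from math import log2

    total = 10_000
    yes_region = 5_001           # 5,000 through 10,000 inclusive
    no_region = total - yes_region

    print(log2(total / yes_region))  # about 1.00 bit gained from a "yes"
    print(log2(total / no_region))   # about 1.00 bit gained from a "no"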

 

“I’ve seen it happen.” “I know someone who did.” “You see it all the time.” What these statements have in common is that they are often cited in support of a person’s beliefs. “I know horoscopes can predict the future, because I’ve seen it happen.” “I am convinced you can cure cancer with positive thinking because I know somebody who whipped the Big C after practicing mental imagery.” “Of course there’s a second-year slump, you see it all the time.” Sometimes these statements are offered as justifications for the speaker’s own beliefs; at other times they are designed to convince the listener of some important truth. In either case, they represent a conviction that a particular belief is warranted in light of the evidence presented.

Such convictions are on the right track. Evidence of the type mentioned in these statements is certainly necessary for the beliefs to be true. If a phenomenon exists, there must be some positive evidence of its existence—“instances” of its existence must be visible to oneself or to others. But it should be clear that such evidence is hardly sufficient to warrant such beliefs. Instances of cancer remission in patients who practice mental imagery do not constitute sufficient evidence that mental imagery helps ameliorate cancer (after all, some people get better without practicing visualization and some who practice it do not get better). Unfortunately, people do not always appreciate this distinction between necessary and sufficient evidence, and they can be overly impressed by data that, at best, only suggests that a belief may be true. The main thrust of this chapter is that this willingness to base conclusions on incomplete or unrepresentative information is a common cause of people’s questionable and erroneous beliefs. Because people often fail to recognize that a particular belief rests on inadequate evidence, the belief enjoys an “illusion of validity”[1] and is considered, not a matter of opinion or values, but a logical conclusion from the objective evidence that any rational person would make.
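
One way to see why such instances are not sufficient is to lay out the full comparison the anecdotes omit; the counts in the sketch below are made up purely for illustration.

    # Evaluating "mental imagery cures cancer" requires all four cells of the
    # comparison, not just the memorable one. Counts are invented for illustration.
    cases = {
        ("imagery", "remission"): 30,       # the instances people cite
        ("imagery", "no remission"): 170,
        ("no imagery", "remission"): 45,
        ("no imagery", "no remission"): 255,
    }

    def remission_rate(group):
        improved = cases[(group, "remission")]
        total = improved + cases[(group, "no remission")]
        return improved / total

    print(remission_rate("imagery"))     # 0.15
    print(remission_rate("no imagery"))  # 0.15: the same rate, so the instances alone show nothing

Positive instances establish only that remissions occur among those who practice imagery, which is the necessary evidence; whether imagery helps depends on comparing the two rates, which is exactly what the anecdotes never supply.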

