
The Single Cheap Solution that will Solve all the Problems in the Entire World

 

What’s truly extraordinary is that almost all these problems—the suppression of negative results, data dredging, hiding unhelpful data, and more—could largely be solved with one very simple intervention that would cost almost nothing: a clinical trials register, public, open, and properly enforced. This is how it would work. You’re a drug company. Before you even start your study, you publish the protocol for it, the methods section of the paper, somewhere public. This means that everyone can see what you’re going to do in your trial, what you’re going to measure, how, in how many people, and so on, before you start.
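
To make the idea concrete, here is a minimal sketch, in Python, of the kind of information a public register entry could record before a trial starts. The field names and example values are illustrative assumptions, not the schema of any real registry.

```python
# A minimal, hypothetical sketch of a pre-registered trial record: the
# primary outcome, sample size, and analysis plan are all declared in
# public before any data are collected, so they cannot be swapped later.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrialRegistration:
    trial_id: str          # hypothetical identifier, not a real registry number
    sponsor: str
    intervention: str
    primary_outcome: str   # the single pre-declared test of whether the drug works
    sample_size: int
    analysis_plan: str
    date_registered: date

registration = TrialRegistration(
    trial_id="EXAMPLE-0001",
    sponsor="Example Pharma",
    intervention="cholesterol-lowering drug vs. placebo",
    primary_outcome="change in LDL cholesterol at 12 months",
    sample_size=720,
    analysis_plan="intention-to-treat, pre-specified two-sample test",
    date_registered=date(2010, 1, 1),
)
print(registration.primary_outcome)  # anyone can check this against the published paper
```

If the finished paper reports a different primary outcome, or never appears at all, the mismatch with this public record is immediately visible.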

The problems of publication bias, duplicate publication, and hidden data on side effects, which all cause unnecessary death and suffering, would be eradicated overnight, in one fell swoop. If you registered a trial, and conducted it, but it didn’t appear in the literature, it would stick out like a sore thumb. Everyone, basically, would assume you had something to hide, because you probably would. There are trials registers at present, but they are a mess.

How much of a mess is illustrated by this last drug company ruse: “moving the goalposts.” In 2002 Merck and Schering-Plough began a trial to look at ezetimibe, a drug to reduce cholesterol. They started out saying they were going to measure one thing as their test of whether the drug worked, but then announced, after the results were in, that they were going to count something else as the real test instead. This was spotted, and they were publicly rapped. Why? Because if you measure lots of things (as they did), some might be positive simply by chance. You cannot find your starting hypothesis in your final results. It makes the stats go all wonky.
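
The statistical point behind “moving the goalposts” can be shown with a short simulation. This is a minimal sketch, assuming a drug with no effect at all, twenty measured outcomes per trial, and a simple two-sample test; the exact numbers are illustrative only.

```python
# Simulate a drug with NO real effect, measured on many outcomes per trial,
# and count how often at least one outcome looks "significant" by chance.
import random
from math import sqrt, erf
from statistics import mean, stdev

def p_value_two_sample(a, b):
    """Two-tailed p-value from a simple two-sample z-test (normal approximation)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(1)
n_trials, n_outcomes, n_patients = 1000, 20, 100
trials_with_false_positive = 0
for _ in range(n_trials):
    found_positive = False
    for _ in range(n_outcomes):
        drug = [random.gauss(0, 1) for _ in range(n_patients)]
        placebo = [random.gauss(0, 1) for _ in range(n_patients)]  # identical population
        if p_value_two_sample(drug, placebo) < 0.05:
            found_positive = True
    trials_with_false_positive += found_positive

# With 20 independent outcomes each tested at the 5% level, roughly
# 1 - 0.95**20, or about two-thirds, of trials contain a fluke "positive".
print(f"trials with at least one chance positive: {trials_with_false_positive / n_trials:.0%}")
```

Declaring a single primary outcome in advance is what stops a fluke like this from being presented afterwards as the real test of the drug.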

Advertisements

 

Direct-to-consumer drug ads are properly bizarre, especially the TV ones. Your life is in disarray; your restless legs/migraine/cholesterol have taken over; all is panic; there is no sense anywhere. Then, when you take the right pill, suddenly the screen brightens up into a warm yellow, granny’s laughing, the kids are laughing, the dog’s tail is wagging, some nauseating child is playing with the hose on the lawn, spraying a rainbow of water into the sunshine while absolutely laughing his head off as all your relationships suddenly become successful again. All you have to do is “ask your doctor” and life will be good. It’s worth noting that drug adverts aimed directly at the public are legally allowed only in the United States and New Zealand, as pretty much everywhere else in the developed world has banned them, for the simple reason that they work. Patients are so much more easily misled by drug company advertising than doctors that the budget for direct-to-consumer advertising in America has risen twice as fast as the budget for addressing doctors directly. These ads have been closely studied by medical academic researchers and have been repeatedly shown to increase patients’ requests for the advertised drugs, as well as doctors’ prescriptions for them. Even ads “raising awareness of a condition” under tighter Canadian regulations have been shown to double demand for a specific drug to treat that condition.

This is why drug companies are keen to sponsor patient groups, or to exploit the media for their campaigns, as has been seen recently in the news stories singing the praises of the breast cancer drug Herceptin or Alzheimer’s drugs of borderline efficacy.

These advocacy groups demand vociferously in the media that the companies’ drugs should be funded. I know people associated with these patient advocacy groups—academics—who have spoken out and tried to change their stance, without success; in the case of the British Alzheimer’s campaign in particular, it struck many people that the demands were rather one-sided. The National Institute for Clinical Excellence (NICE), which gives advice on whether drugs should be funded by the government, concluded that it couldn’t justify paying for Alzheimer’s drugs, partly because the evidence for their efficacy was weak and often looked only at soft, surrogate outcomes. The evidence is often weak because the drug companies have failed to subject their medications to sufficiently rigorous testing on real-world outcomes: rigorous testing that would be much less likely to guarantee a positive result. Do patient organizations challenge the manufacturers to do better research? Do their members walk around with large placards campaigning against “surrogate outcomes in drugs research,” demanding “More Fair Tests”? No. Oh, God. Everybody’s bad. How did things get so awful?

10
 
Why Clever People Believe Stupid Things
 

The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.


—Robert Pirsig, Zen and the Art of Motorcycle Maintenance

 

Why do we have statistics, why do we measure things, and why do we count? If the scientific method has any authority—or, as I prefer to think of it, value—it is because it represents a systematic approach, but this is valuable only because the alternatives can be misleading. When we reason informally—call it intuition, if you like—we use rules of thumb that simplify problems for the sake of efficiency. Many of these shortcuts have been well characterized in a field called heuristics, and they are efficient ways of knowing in many circumstances.

This convenience comes at a cost—false beliefs—because there are systematic vulnerabilities in these truth-checking strategies that can be exploited. This is not dissimilar to the way that paintings can exploit shortcuts in our perceptual system: as objects become more distant, they appear smaller, and “perspective” can trick us into seeing three dimensions where there are only two, by taking advantage of this strategy used by our depth-checking apparatus. When our cognitive system—our truth-checking apparatus—is fooled, then, much like seeing depth in a flat painting, we come to erroneous conclusions about abstract things. We might misidentify normal fluctuations as meaningful patterns, for example, or ascribe causality where in fact there is none.

These are cognitive illusions, a parallel to optical illusions. They can be just as mind-boggling, and they cut to the core of why we do science, rather than base our beliefs on intuition informed by a “gist” of a subject acquired through popular media: because the world does not provide you with neatly tabulated data on interventions and outcomes. Instead it gives you random, piecemeal data in dribs and drabs over time, and trying to construct a broad understanding of the world from a memory of your own experiences would be like looking at the ceiling of the Sistine Chapel through a long, thin cardboard tube: you can try to remember the individual portions you’ve spotted here and there, but without a system and a model, you’re never going to appreciate the whole picture.

Let’s begin.

Randomness

 

As human beings we have an innate ability to make something out of nothing. We see shapes in the clouds and a man in the moon; gamblers are convinced that they have “runs of luck”; we take a perfectly cheerful heavy metal record, play it backward, and hear hidden messages about Satan. Our ability to spot patterns is what allows us to make sense of the world, but sometimes, in our eagerness, we are oversensitive and trigger-happy and mistakenly spot patterns where none exist.

In science, if you want to study a phenomenon, it is sometimes useful to reduce it to its simplest and most controlled form. There is a prevalent belief among sporting types that sportsmen, like gamblers (except more plausibly), have runs of luck. People ascribe this to confidence, “getting your eye in,” “warming up,” and so on, and while it might exist in some games, statisticians have looked in various places where people have claimed it to exist and found no relationship between, say, hitting a home run in one inning and hitting another in the next.

Because the “winning streak” is such a prevalent belief, it is an excellent model for looking at how we perceive random sequences of events. This was used by an American social psychologist named Thomas Gilovich in a classic experiment. He took basketball fans and showed them a random sequence of Xs and Os, explaining that they represented a player’s hits and misses, and then asked them if they thought the sequences demonstrated streak shooting.

Here is a random sequence of figures from that experiment. You might think of it as being generated by a series of coin tosses.

 

 

OXXXOXXXOXXOOOXOOXXOO

 

 

The subjects in the experiment were convinced that this sequence exemplified streak shooting or runs of luck, and it’s easy to see why, if you look again: six of the first eight shots were hits. No, wait: eight of the first eleven shots were hits. No way is that random…

What this ingenious experiment shows is how bad we are at correctly identifying random sequences. We are wrong about what they should look like: we expect too much alternation, so truly random sequences seem somehow too lumpy and ordered. Our intuitions about the most basic observation of all, distinguishing a pattern from mere random background noise, are deeply flawed.
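
A short simulation makes the same point, under the assumption that each shot is an independent 50/50 event with no streakiness built in: genuinely random sequences contain longer runs than intuition expects, and a hit makes the next shot no more likely to be a hit.

```python
# Generate genuinely random hit/miss sequences and measure two things our
# intuition gets wrong: how long runs of identical outcomes get, and whether
# a hit makes the next shot any more likely to be a hit.
import random

def longest_run(seq):
    """Length of the longest run of identical symbols in the sequence."""
    best = current = 1
    for prev, nxt in zip(seq, seq[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)
run_lengths = []
hits_after_hit = shots_after_hit = 0
hits_after_miss = shots_after_miss = 0
for _ in range(10_000):
    seq = [random.choice("XO") for _ in range(21)]  # same length as the sequence above
    run_lengths.append(longest_run(seq))
    for prev, nxt in zip(seq, seq[1:]):
        if prev == "X":
            shots_after_hit += 1
            hits_after_hit += nxt == "X"
        else:
            shots_after_miss += 1
            hits_after_miss += nxt == "X"

print("average longest run in 21 tosses:", sum(run_lengths) / len(run_lengths))  # typically 4-5
print("hit rate after a hit: ", hits_after_hit / shots_after_hit)    # close to 0.5
print("hit rate after a miss:", hits_after_miss / shots_after_miss)  # close to 0.5
```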

This is our first lesson in the importance of using statistics instead of intuition. It’s also an excellent demonstration of how strong the parallels are between these cognitive illusions and the perceptual illusions with which we are more familiar. You can stare at a visual illusion all you like, talk or think about it, but it will still look “wrong.” Similarly, you can look at that random sequence above as hard as you like: it will still look lumpy and ordered, in defiance of what you now know.

 

Regression to the Mean

 

We have already looked at regression to the mean in our section on homeopathy; it is the phenomenon whereby when things are at their extremes, they are likely to settle back down to the middle, or regress to the mean.

We saw this with reference to the Sports Illustrated jinx, but also applied it to the matter in hand, the question of people getting better; we discussed how people will do something when their back pain is at its worst—visit a homeopath, perhaps—and how, although it was going to get better anyway (because when things are at their worst, they generally do), they ascribe their improvement to the treatment.

There are two discrete things happening when we fall prey to this failure of intuition. First, we have failed to spot correctly the pattern of regression to the mean. Second, and crucially, we have then decided that something must have caused this illusory pattern: a homeopathic remedy, for example. Simple regression is confused with causation, and this is perhaps quite natural for animals like humans, whose success in the world depends on our being able to spot causal relationships rapidly and intuitively: we are inherently oversensitive to them.
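
Here is a minimal sketch of the first failure, assuming back pain that simply fluctuates at random around a stable average, with no treatment at all: single out the worst days, and the days that follow are markedly better on average, purely through regression to the mean.

```python
# Back pain that fluctuates randomly around a stable average, with no
# treatment whatsoever: days selected *because* they were the worst are,
# on average, followed by better days, purely by regression to the mean.
import random

random.seed(42)
days = 365
pain = [random.gauss(5, 2) for _ in range(days)]  # arbitrary pain scores, roughly 0-10

# "Visit the homeopath" only on the worst days (the top 10% of pain scores)
threshold = sorted(pain)[int(days * 0.9)]
worst_days = [d for d in range(days - 1) if pain[d] >= threshold]

avg_on_worst_days = sum(pain[d] for d in worst_days) / len(worst_days)
avg_day_after = sum(pain[d + 1] for d in worst_days) / len(worst_days)
print(f"average pain on the worst days: {avg_on_worst_days:.1f}")
print(f"average pain the day after:     {avg_day_after:.1f}")  # lower, with no treatment at all
```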

To an extent, when we discussed the subject earlier, I relied on your goodwill and on the likelihood that from your own experience you could agree that this explanation made sense. But it has been demonstrated in another ingeniously pared-down experiment, in which all the variables were controlled, but people still saw a pattern and causality where there was none.

The subjects in the experiment played the role of a teacher trying to make a child arrive punctually at school for 8:30 a.m. They sat at a computer on which it appeared that each day, for fifteen consecutive days, the supposed child would arrive sometime between 8:20 and 8:40, but unbeknownst to the subjects, the arrival times were entirely random and predetermined before the experiment began. Nonetheless, all the subjects were allowed to use punishments for lateness and rewards for punctuality, in whatever permutation they wished. When they were asked at the end to rate their strategy, 70 percent concluded that reprimand was more effective than reward in producing punctuality from the child.

These subjects were convinced that their intervention had an effect on the punctuality of the child, despite the child’s arrival time being entirely random and exemplifying nothing more than regression to the mean. By the same token, when homeopathy has been shown to elicit no more improvement than placebo, people are still convinced that it has a beneficial effect on their health.
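
The setup of that experiment is easy to reproduce in a simulation. In this sketch the child’s arrival times are pure noise, a “reprimand” follows any late arrival, and “praise” follows any early one; reprimand still looks effective, and praise looks counterproductive, for no reason other than regression to the mean.

```python
# The child's arrival time is pure noise: random minutes around 8:30,
# unaffected by anything the "teacher" does. Yet reprimands (issued after
# late arrivals) appear to work, and praise (issued after early arrivals)
# appears to backfire, purely through regression to the mean.
import random

random.seed(7)
change_after_reprimand, change_after_praise = [], []
for _ in range(10_000):
    arrivals = [random.uniform(-10, 10) for _ in range(15)]  # minutes relative to 8:30
    for today, tomorrow in zip(arrivals, arrivals[1:]):
        if today > 0:                                  # late today -> reprimand
            change_after_reprimand.append(tomorrow - today)
        else:                                          # early or on time -> praise
            change_after_praise.append(tomorrow - today)

print("average change after a reprimand:",
      sum(change_after_reprimand) / len(change_after_reprimand))  # negative: "improvement"
print("average change after praise:     ",
      sum(change_after_praise) / len(change_after_praise))        # positive: "got worse"
```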

To recap:

 
  1. We see patterns where there is only random noise.
  2. We see causal relationships where there are none.
 

These are two very good reasons to measure things formally. It’s bad news for intuition already. Can it get much worse?

The Bias Toward Positive Evidence

 

It is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than negatives.

—Francis Bacon

 

It gets worse. It seems we have an innate tendency to seek out and overvalue evidence that confirms a given hypothesis. To try to remove this phenomenon from the controversial arena of CAM—or the MMR scare, which is where this is headed—we are lucky to have more pared-down experiments that illustrate the general point.

Imagine a table with four cards on it, marked “A,” “B,” “2,” and “3.” Each card has a letter on one side and a number on the other. Your task is to determine whether all cards with a vowel on one side have an even number on the other. Which two cards would you turn over? Everybody chooses the “A” card, obviously, but like many people—unless you really forced yourself to think hard about it—you would probably choose to turn over the “2” card as well. That’s because these are the cards that would produce information consistent with the hypothesis you are supposed to be testing. But in fact, the cards you need to flip are the “A” and the “3,” because finding a vowel on the back of the “2” would tell you nothing about “all cards”; it would just confirm “some cards,” whereas finding a vowel on the back of the “3” would comprehensively disprove your hypothesis. This modest brainteaser demonstrates our tendency, in our unchecked intuitive reasoning style, to seek out information that confirms a hypothesis, and it demonstrates the phenomenon in a value-neutral situation.
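
The logic of the four-card task can also be checked mechanically. This sketch enumerates the possible hidden faces for each card and asks which cards could ever falsify the rule “every card with a vowel on one side has an even number on the other”; only the “A” and the “3” can.

```python
# For each visible card face, enumerate what could be on the hidden side and
# ask: could any hidden value break the rule "if a card has a vowel on one
# side, it has an even number on the other"? Only such cards are worth turning.
VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def can_falsify(visible, possible_hidden):
    """True if some hidden face would make this card break the rule."""
    for hidden in possible_hidden:
        faces = (visible, hidden)
        if any(is_vowel(f) for f in faces) and any(is_odd_number(f) for f in faces):
            return True
    return False

letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
numbers = [str(n) for n in range(10)]

for visible in ["A", "B", "2", "3"]:
    hidden_options = numbers if visible in letters else letters
    print(visible, "worth turning over:", can_falsify(visible, hidden_options))
# Prints True only for "A" and "3": "B" can never show a vowel, and "2" can
# never hide an odd number, so neither card is capable of disproving the rule.
```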

This same bias in seeking out confirmatory information has been demonstrated in more sophisticated social psychology experiments. When trying to determine if someone is an “extrovert,” for example, many subjects will ask questions for which a positive answer would confirm the hypothesis (“Do you like going to parties?”) rather than refute it.

We show a similar bias when we interrogate information from our own memory. In one experiment, subjects first read a vignette about a woman who exemplified various introverted and extroverted behaviors and then were divided into two groups. One group was asked to consider the woman’s suitability for a job as a librarian, while the other was asked to consider her suitability for a job as a real estate agent. Both groups were asked to come up with examples of both her extroversion and her introversion. The group considering her for the librarian job recalled more examples of introverted behavior, while the group considering her for a job selling real estate cited more examples of extroverted behavior.

This tendency is dangerous, because if you ask only questions that confirm your hypothesis, you will be more likely to elicit information that confirms it, giving a spurious sense of confirmation. It also means—if we think more broadly—that the people who pose the questions already have a head start in popular discourse.

So we can add to our running list of cognitive illusions, biases, and failings of intuition:

 
  3. We overvalue confirmatory information for any given hypothesis.
  4. We seek out confirmatory information for any given hypothesis.
 

Biased by Our Prior Beliefs

 

[I] followed a golden rule, whenever a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favorable ones.

—Charles Darwin

 

This is the reasoning flaw that everybody does know about, and even if it’s the least interesting cognitive illusion—because it’s an obvious one—it has been demonstrated in experiments that are so close to the bone that you may find them, as I do, quite unnerving.

The classic demonstration of people’s being biased by their prior beliefs comes from a study looking at beliefs about the death penalty. A large number of proponents and opponents of state executions were recruited. They were all shown two pieces of evidence on the deterrent effect of capital punishment: one supporting a deterrent effect, the other providing evidence against it.

The evidence they were shown was as follows:

 
  • A comparison of murder rates in one U.S. state before the death penalty was brought in, and after.
  • A comparison of murder rates in different states, some with and some without the death penalty.
 

But there was a very clever twist. The proponents and opponents of capital punishment were each further divided into two smaller groups. So overall, half the proponents and opponents of capital punishment had their opinions reinforced by before/after data but challenged by state/state data, and vice versa.
