Authors: Ben Goldacre
Tags: #General, #Life Sciences, #Health & Fitness, #Errors, #Health Care Issues, #Essays, #Scientific, #Science
You take two hundred patients, say, all suitable for homeopathic treatment, currently in a doctor’s clinic, and all willing to be referred on for homeopathy, then split them randomly into two groups of one hundred. One group gets treated by a homeopath as normal, pills, consultation, smoke, and voodoo, on top of whatever other treatment they are having, same as in the real world. The other group just sits on the homeopathy waiting list, so they get “treatment as usual,” whether that is “neglect,” “family doctor treatment,” or whatever, but they get no homeopathy. Then you measure outcomes and compare who gets better the most.
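(For anyone tempted to run it: the machinery really is simple. Below is a minimal sketch in Python of the allocation and comparison steps; the outcome values are invented placeholders, not data from any real trial.)

```python
import random

def randomize(n_patients, seed=1):
    """Shuffle patient indices and split them into two equal arms."""
    ids = list(range(n_patients))
    random.Random(seed).shuffle(ids)
    half = n_patients // 2
    return ids[:half], ids[half:]

# 200 consenting patients: one arm gets homeopathy on top of usual care,
# the other sits on the waiting list and gets treatment as usual.
homeopathy_arm, waiting_list_arm = randomize(200)

# After follow-up, outcome[i] = 1 if patient i improved, 0 otherwise.
# Placeholder values only.
outcome = [random.randint(0, 1) for _ in range(200)]

rate_h = sum(outcome[i] for i in homeopathy_arm) / len(homeopathy_arm)
rate_w = sum(outcome[i] for i in waiting_list_arm) / len(waiting_list_arm)
print(f"improved: homeopathy {rate_h:.0%} vs waiting list {rate_w:.0%}")
```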
You could argue that it would be a trivial positive finding, and that it’s obvious the homeopathy group would do better; but it’s the only piece of research really waiting to be done. This is a “pragmatic trial.” The groups aren’t blinded, but they couldn’t possibly be in this kind of trial, and sometimes we have to accept compromises in experimental methodology. It would be a legitimate use of public money (or perhaps money from Boiron, the homeopathic pill company valued at five hundred million dollars), but there’s nothing to stop homeopaths from just cracking on and doing it for themselves. Despite the homeopaths’ fantasies, born of a lack of knowledge, that research is difficult, magical, and expensive, such a trial would in fact be very cheap to conduct.
But it’s not really money that’s missing from the alternative therapy research community working with the ideas of this billion-dollar industry; it’s knowledge of evidence-based medicine and expertise in how to do a trial. Their literature and debates drip with ignorance and vitriolic anger at anyone who dares to appraise the trials. Their university courses, as far as they ever even dare to admit what they teach on them (it’s all suspiciously hidden away), seem to skirt around such explosive and threatening questions. I’ve suggested in various places, including at British academic conferences, that the single thing that would most improve the quality of evidence in CAM would be funding for a simple, evidence-based medicine hotline that anyone thinking about running a trial in a clinic could phone up and get advice on how to do it properly, to avoid wasting effort on an “unfair test” that would rightly be regarded with contempt by all outsiders.
In my pipe dream (I’m completely serious, if you’ve got the money) you’d need a handout, maybe a short course that people did to cover the basics, so they weren’t asking stupid questions, and phone support. In the meantime, if you’re a sensible homeopath and you want to do a pragmatic, “waiting-list-controlled trial” as I described above, you could maybe try the badscience website forums, where there are people who might be able to give some pointers (among the childish fighters and trolls…).
But would the homeopaths buy it? I think it would offend their sense of professionalism. You often see homeopaths trying to nuance their way through this tricky area, and they can’t quite make their minds up. Here, for example, is a Radio 4 interview, archived in full online, in which Dr. Elizabeth Thompson (consultant homeopathic physician and honorary senior lecturer at the Department of Palliative Medicine at the University of Bristol) has a go.
She starts off with some sensible stuff: homeopathy does work, but through nonspecific effects, the cultural meaning of the process, the therapeutic relationship, it’s not about the pills, and so on. She practically comes out and says that homeopathy is all about cultural meaning and the placebo effect. “People have wanted to say homeopathy is like a pharmaceutical compound,” she says, “and it isn’t, it is a complex intervention.”
Then the interviewer asks: “What would you say to people who go along to their high street pharmacy, where you can buy homeopathic remedies, they have hay fever and they pick out a hay-fever remedy, I mean presumably that’s not the way it works?” There is a moment of tension. Forgive me, Dr. Thompson, but I felt you didn’t want to say that the pills work, as pills, in isolation, when you buy them in a shop; apart from anything else, you’d already said that they don’t.
But she doesn’t want to break ranks and say the pills don’t work, either. I’m holding my breath. How will she do it? Is there a linguistic structure complex enough, passive enough, to negotiate through this? If there is, Dr. Thompson doesn’t find it: “They might flick through and they might just be spot-on…[but] you’ve got to be very lucky to walk in and just get the right remedy.” So the power is, and is not, in the pill: “P, and not-P,” as philosophers of logic would say.
If they can’t finesse it with the “power is not in the pill” paradox, how else do the homeopaths get around all this negative data? Dr. Thompson—from what I have seen—is a fairly clear-thinking and civilized homeopath. She is, in many respects, alone. Homeopaths have been careful to keep themselves outside the civilizing environment of the university, where the influence and questioning of colleagues can help refine ideas and weed out the bad ones. On their rare forays into universities, they operate secretively, walling themselves and their ideas off from criticism or review, refusing to share even what is in their exam papers with outsiders.
It is rare to find a homeopath engaging on the issue of the evidence, but what happens when they do? I can tell you. They get angry; they threaten to sue; they scream and shout at you at meetings; they complain spuriously and with ludicrous misrepresentations—time-consuming to expose, of course, but that’s the point of harassment—to the Press Complaints Commission and your editor; they send hate mail and accuse you repeatedly of somehow being in the pocket of big pharma (falsely, although you start to wonder why you bother having principles when faced with this kind of behavior). They bully, they smear, to the absolute top of the profession, and they do anything they can in a desperate bid to shut you up and avoid having a discussion about the evidence. They have even been known to threaten violence (I won’t go into it here, but I take these issues extremely seriously).
I’m not saying I don’t enjoy a bit of banter. I’m just pointing out that you don’t get anything quite like this in most other fields, and homeopaths, among all the people in this book, with the exception of the odd nutritionist, seem to me to be a uniquely angry breed. Experiment for yourself by chatting with them about evidence, and let me know what you find.
By now your head is hurting, because of all those mischievous, confusing homeopaths and their weird, labyrinthine defenses; you need a lovely science massage. Why is evidence so complicated? Why do we need all these clever tricks, these special research paradigms? The answer is easy: the world is much more complicated than simple stories about pills making people get better. We are human, we are irrational, we have foibles, and the power of the mind over the body is greater than anything you have previously imagined.
For all the dangers of complementary and alternative medicine, to me the greatest disappointment is the way it distorts our understanding of our bodies. Just as the big bang theory is far more interesting than the creation story in Genesis, so the story that science can tell us about the natural world is far more interesting than any fable about magic pills concocted by an alternative therapist. To redress that balance, I’m offering you a whirlwind tour of one of the most bizarre and enlightening areas of medical research: the relationship between our bodies and our minds, the role of meaning in healing, and in particular the placebo effect.
Much like quackery, placebos became unfashionable in medicine once the biomedical model started to produce tangible results. An editorial in 1890 sounded its death knell, describing the case of a doctor who had injected his patient with water instead of morphine; she recovered perfectly well, but then discovered the deception, disputed the bill in court, and won. The editorial was a lament, because doctors have known that reassurance and a good bedside manner can be very effective for as long as medicine has existed. “Shall [the placebo] never again have an opportunity of exerting its wonderful psychological effects as faithfully as one of its more toxic congeners?” asked the Medical Press at the time.
Luckily, its use survived. Throughout history, the placebo effect has been particularly well documented in the field of pain, and some of the stories are striking. Henry Beecher, an American anesthetist, wrote about operating on a soldier with horrific injuries in a World War II field hospital, using salt water because the morphine was all gone, and to his astonishment the patient was fine. Peter Parker, an American missionary, described performing surgery without anesthesia on a Chinese patient in the mid-nineteenth century; after the operation, she “jumped upon the floor,” bowed, and walked out of the room as if nothing had happened.
Theodor Kocher performed sixteen hundred thyroidectomies without anesthesia in Switzerland in the 1890s, and I take my hat off to a man who can do complicated neck operations on conscious patients. Mitchel in the early twentieth century was performing full amputations and mastectomies, entirely without anesthesia; and surgeons from before the invention of anesthesia often described how some patients could tolerate knife cutting through muscle, and saw cutting through bone, perfectly awake and without even clenching their teeth. You might be tougher than you think.
These are just stories, and the plural of “anecdote” is not data. Everyone knows about the power of the mind—whether it’s stories of mothers enduring biblical pain to avoid dropping a boiling kettle on their babies or people lifting cars off their girlfriends like the Incredible Hulk—but devising an experiment that teases the psychological and cultural benefits of a treatment away from the biomedical effects is trickier than you might think. After all, what do you compare a placebo against? Another placebo? Or no treatment at all?
The Placebo on Trial
In most studies we don’t have a “no treatment” group to compare both the placebo and the drug with, and for a very good ethical reason: if your patients are ill, you shouldn’t be leaving them untreated simply because of your own mawkish interest in the placebo effect. In fact, in most cases today it is considered wrong even to use a placebo in a trial; whenever possible you should compare your new treatment with the best preexisting, current treatment.
This is not just for ethical reasons (although it is enshrined in the Declaration of Helsinki, the international ethics bible). Placebo-controlled trials are also frowned upon by the evidence-based medicine community, because it knows it’s an easy way to cook the books and get easy positive trial data to support your company’s big new investment. In the real world of clinical practice, patients and doctors aren’t so interested in whether a new drug works better than nothing; they’re interested in whether it works better than the best treatment they already have.
There have been occasions in medical history when researchers were more cavalier. The Tuskegee Syphilis Study, for example, is one of America’s most shaming hours: 399 poor, rural African-American men were recruited by the U.S. Public Health Service in 1932 for an observational study to see what happened if syphilis was left, very simply, untreated. Astonishingly, the study ran right through to 1972. In 1949 penicillin was introduced as an effective treatment for syphilis. These men did not receive that drug, nor did they receive Salvarsan, nor indeed did they receive an apology until 1997, from Bill Clinton.
If we don’t want to do unethical scientific experiments with “no treatment” groups on sick people, how else can we determine the size of the placebo effect on modern illnesses? First, and rather ingeniously, we can compare one placebo with another.
The first experiment in this field was a meta-analysis by Daniel Moerman, an anthropologist who has specialized in the placebo effect. He took the trial data from placebo-controlled trials of gastric ulcer medication, which was his first cunning move, because gastric ulcers are an excellent thing to study: their presence or absence is determined very objectively, with a gastroscopy camera passed down into the stomach, to avoid any doubt.
Moerman took only the placebo data from these trials, and then, in his second ingenious move, from all these studies, of all the different drugs, with their different dosing regimes, he took the ulcer-healing rate from the placebo arm of trials in which the placebo treatment was two sugar pills a day, and compared that with the ulcer-healing rate in the placebo arm of trials in which the placebo was four sugar pills a day. He found, spectacularly, that four sugar pills are better than two (these findings have also been replicated in a different data set, for those who are switched on enough to worry about the replicability of important clinical findings).
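(To make the shape of that comparison concrete, here is a small Python sketch of pooling healing rates from placebo arms grouped by dosing regime; the trial figures are invented for illustration and merely stand in for Moerman’s real data.)

```python
# Invented placebo-arm figures; Moerman's actual data came from
# published, gastroscopy-verified gastric ulcer trials.
placebo_arms = [
    {"pills_per_day": 2, "healed": 31, "patients": 100},
    {"pills_per_day": 2, "healed": 36, "patients": 120},
    {"pills_per_day": 4, "healed": 52, "patients": 110},
    {"pills_per_day": 4, "healed": 47, "patients": 90},
]

def pooled_healing_rate(arms, pills):
    """Pool healed/total across all placebo arms with this dosing regime."""
    healed = sum(a["healed"] for a in arms if a["pills_per_day"] == pills)
    total = sum(a["patients"] for a in arms if a["pills_per_day"] == pills)
    return healed / total

for pills in (2, 4):
    rate = pooled_healing_rate(placebo_arms, pills)
    print(f"{pills} sugar pills a day: {rate:.0%} of ulcers healed")
```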
What the Treatment Looks Like
So four pills are better than two, but how can this be? Does a placebo sugar pill simply exert an effect like any other pill? Is there a dose-response curve, as pharmacologists would find for any other drug? The answer is that the placebo effect is about far more than just the pill; it is about the cultural meaning of the treatment. Pills don’t simply manifest themselves in your stomach; they are given in particular ways, they take varying forms, and they are swallowed with expectations, all of which have an impact on a person’s beliefs about his own health and, in turn, on outcome. Homeopathy is a perfect example of the value of ceremony.
I understand this might well seem improbable to you, so I’ve corralled some of the best data on the placebo effect into one place, and the challenge is this: see if you can come up with a better explanation for what is, I guarantee, a seriously strange set of experimental results.
First up, Blackwell (1972) did a set of experiments on fifty-seven college students to determine the effect of color—as well as the number of tablets—on the effects elicited. The subjects were sitting through a boring hourlong lecture and were given either one or two pills, which were either pink or blue. They were told that they could expect to receive either a stimulant or a sedative. Since these were psychologists, and this was back when you could do whatever you wanted to your subjects—even lie to them—the treatment that all the students received consisted simply of sugar pills, but of different colors.
Afterward, when they measured alertness—as well as any subjective effects—the researchers found that two pills were more effective than one, as we might have expected (and two pills were better at eliciting side effects too). They also found that color had an effect on outcome: the pink sugar tablets were better at maintaining concentration than the blue ones. Since colors in themselves have no intrinsic pharmacological properties, the difference in effect could only be due to the cultural meanings of pink and blue: pink is alerting; blue is cool. Another study suggested that oxazepam, a drug similar to Valium (which was once unsuccessfully prescribed to me by our doctor as a hyperactive child), was more effective at treating anxiety in a green tablet and more effective for depression in a yellow one.
Drug companies, more than most, know the benefits of good branding; they spend more on PR, after all, than they do on research and development. As you’d expect from men of action with large houses in the country, they put these theoretical ideas into practice, so Prozac, for example, is white and blue, and in case you think I’m cherry-picking here, a survey of the color of pills currently on the market found that stimulant medication tends to come in red, orange, or yellow tablets, while antidepressants and tranquilizers are generally blue, green, or purple.
Issues of form go much deeper than color. In 1970 a sedative—chlordiazepoxide—was found to be more effective in capsule form than pill form, even for the very same drug, in the very same dose; capsules at the time felt newer, somehow, and more sciencey. Maybe you’ve caught yourself splashing out and paying extra for ibuprofen capsules in the pharmacy.
Route of administration has an effect as well: saltwater injections have been shown in three separate experiments to be more effective than sugar pills for blood pressure, for headaches, and for postoperative pain, not because of any physical benefit of saltwater injection over sugar pills—there isn’t one—but because, as everyone knows, an injection is a much more dramatic intervention than just taking a pill.
Closer to home for the alternative therapists, the British Medical Journal recently published an article comparing two different placebo treatments for arm pain, one of which was a sugar pill, and one of which was a ritual, a treatment modeled on acupuncture. The trial found that the more elaborate placebo ritual had a greater benefit.
But the ultimate testament to the social construction of the placebo effect must be the bizarre story of packaging. Pain is an area where you might suspect that expectation would have a particularly significant effect. Most people have found that they can take their minds off pain—to at least some extent—with distraction, or have had a toothache that got worse with stress.
Branthwaite and Cooper did a truly extraordinary study in 1981, looking at 835 women with headaches. It was a four-armed study, in which the subjects were given either aspirin or placebo pills, and these pills in turn were packaged either in blank, bland, neutral boxes or in full, flashy, brand-name packaging. They found—as you’d expect—that aspirin had more of an effect on headaches than sugar pills, but more than that, they found that the packaging itself had a beneficial effect, enhancing the benefit of both the placebo and the aspirin.
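(The nice property of a 2×2 design like that is that each effect can be read off by averaging over the other factor. A minimal sketch, with made-up relief scores standing in for Branthwaite and Cooper’s data:)

```python
# Mean headache-relief score per arm; the numbers are invented placeholders.
relief = {
    ("aspirin", "branded"): 3.2,
    ("aspirin", "plain"): 2.8,
    ("placebo", "branded"): 2.3,
    ("placebo", "plain"): 1.9,
}

def mean(*values):
    return sum(values) / len(values)

# Average over packaging to isolate the drug effect, and vice versa.
drug_effect = (
    mean(relief[("aspirin", "branded")], relief[("aspirin", "plain")])
    - mean(relief[("placebo", "branded")], relief[("placebo", "plain")])
)
packaging_effect = (
    mean(relief[("aspirin", "branded")], relief[("placebo", "branded")])
    - mean(relief[("aspirin", "plain")], relief[("placebo", "plain")])
)

print(f"drug effect: {drug_effect:+.2f}, packaging effect: {packaging_effect:+.2f}")
```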
People I know still insist on buying brand-name painkillers. As you can imagine, I’ve spent half my life trying to explain to them why this is a waste of money, but in fact, the paradox of Branthwaite and Cooper’s experimental data is that they were right all along. Whatever pharmacology theory tells you, that brand-named version is better, and there’s just no getting away from it. Part of that might be the cost; a recent study looking at pain caused by electric shocks showed that a pain relief treatment was stronger when subjects were told it cost $2.50 than when they were told it cost 10 cents. (And a paper currently in press shows that people are more likely to take advice when they have paid for it.)
It gets better—or worse, depending on how you feel about your worldview slipping sideways. Montgomery and Kirsch (1996) told college students they were taking part in a study on a new local anesthetic called trivaricaine. Trivaricaine is brown, you paint it on your skin, it smells like a medicine, and it’s so potent you have to wear gloves when you handle it: or that’s what they implied to the students. In fact, it’s made of water, iodine, and thyme oil (for the smell), and the experimenters (who also wore white coats) were using rubber gloves only for a sense of theater. None of these ingredients will affect pain.
The trivaricaine was painted onto one or other of the subjects’ index fingers, and the experimenters then applied painful pressure with a vise. One after another, in varying orders, pain was applied, trivaricaine was applied, and as you would expect by now, the subjects reported less pain, and less unpleasantness, for the fingers that were pretreated with the amazing trivaricaine. This is a placebo effect, but the pills have gone now.
It gets stranger. Sham ultrasound is beneficial for dental pain, placebo operations have been shown to be beneficial in knee pain (the surgeon just makes fake keyhole surgery holes in the side and mucks about for a bit as if she were doing something useful), and placebo operations have even been shown to improve angina.
That’s a pretty big deal. Angina is the pain you get when there’s not enough oxygen getting to your heart muscle for the work it’s doing. That’s why it gets worse with exercise: because you’re demanding more work from the heart muscle. You might get a similar pain in your thighs after bounding up ten flights of stairs, depending on how fit you are.
Treatments that help angina usually work by dilating the blood vessels to the heart, and a group of chemicals called nitrates are used for this purpose very frequently. They relax the smooth muscle in the body, dilating the arteries so more blood can get through (they also relax other bits of smooth muscle in the body, including your anal sphincter, which is why a variant is sold as “liquid gold” in sex shops).
In the 1950s there was an idea that you could get blood vessels in the heart to grow back, and thicker, if you tied off an artery on the front of the chest wall that wasn’t very important, but that branched off the main heart arteries. The idea was that this would send messages back to the main branch of the artery, telling it that more artery growth was needed, so the body would be tricked.