
For medicine, the placebo effect is a two-edged sword. Despite Hróbjartsson and Gøtzsche’s results, it seems undeniably useful—but
it also takes away our certainties. We can’t tell what the chemical constituents of a drug actually do to the biochemistry
of our bodies, because even the sight of the approaching needle starts to disturb the biochemical environment. It is, Benedetti
says, like the uncertainty principle of physics: anytime you measure something, you necessarily disturb it, so you can’t ever
be sure that your measurement is accurate. As a result, it seems that we may have to redesign drug trials.

Our slowly unfolding understanding of the placebo effect means we may need to reinterpret all our pharmaceutical data. In
some cases, clinical trial results will seem invalid, or will at least need to be taken with a pinch of salt. It has taken
decades to refine our clinical trial process and, with more money than ever in pharmaceuticals, pulling down that edifice
is not for the fainthearted. Though Colloca and Benedetti wrote that these revolutions in our understanding of placebo “will
lead to fundamental insights into human biology,” it is surely in this radical overhaul of medicine that the anomaly of placebo
will create a Kuhnian paradigm shift.

Testing drugs has progressed enormously since Franklin’s day. The modern apogee is the randomized controlled trial (RCT), where a large group of people is split into (usually two) groups on an entirely random basis. One group will receive the
drug; the other will receive something that seems the same but is entirely inert: the placebo. The idea of randomization is
to create as little natural difference between the groups as possible, thus maximizing the chances of seeing some effect the
drug produces that the placebo doesn’t. Systematic effects, such as gender, age, preexisting health issues, or a natural swing
into or out of good health, should be the same for both groups. Any major differences in outcome between the groups, then,
should be due to the drug.

There are other factors at work, though, which is where blinding comes in. Obviously, none of the patients should know whether
they are getting the drug under test or the placebo. This single blinding isn’t enough; the people giving out the drugs might
offer some nonverbal or subconscious clues to the patients. Hence the “double-blinding”: the doctors and nurses involved also
ought not to know which are the placebo pills.

Such a double-blinded RCT is considered the best way to tell whether a drug is effective or not, but there are still more
refinements that can improve things. Adding a third “arm” to the study—a group that receives no treatment whatsoever—can help.
Patients are most likely to seek a doctor’s help when their symptoms are most acute; any follow-up is likely to encounter
improvements in health. A group that has received no treatment will help weed out this “regression to the mean” effect. Similarly,
there is the problem of “natural history”: the normal variation in symptoms. A headache comes and goes, for example; if a
patient takes a placebo just before a spontaneous swing toward less pain happens, the reporting could end up skewed. Observing
a no-treatment control group should enable this effect to be taken into account.

Nevertheless, there are subtle effects that no amount of care seems to nullify. Just telling patients they might get a placebo alters the outcome. Telling them the likely potency of the drug will also skew things. A patient’s own assessment
of whether he is in the placebo or the active arm of the trial affects his response; two trials—one in Parkinson’s patients,
one in acupuncture—have been reported where the “perceived assignment” had more effect on the patients than the treatment
on offer.

Because of all these factors (and there are others), the National Institutes of Health is sponsoring many different research
groups to find a new way to test the efficacy of drugs. One group, led by researchers from Harvard Medical School, is attempting
a new style of trial using “wait lists” to give them a control group that receives no treatment. Another way forward is through
hidden treatments: covert versus overt treatment. The level of placebo response—and thus the effectiveness of the drug—can
be determined by the difference in outcome between the group that knew they were getting the drug and the group that didn’t
know they were getting it.
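As a rough illustration of that logic, here is a minimal sketch in Python; the pain scores are invented purely for illustration and do not come from any actual trial.

    # Hypothetical post-operative pain scores on a 0-10 scale, invented only
    # to illustrate the overt-versus-covert logic described above.
    open_dose_pain = 4.0     # patients saw the injection: drug effect + placebo effect
    hidden_dose_pain = 6.0   # same dose given covertly: drug effect only

    # The placebo component is simply the extra relief produced by knowing
    # the drug is being given.
    placebo_response = hidden_dose_pain - open_dose_pain
    print(f"placebo component of the relief: {placebo_response} points")

Whatever the real numbers turn out to be, the placebo component falls out as the gap between the overt and covert arms.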

So far, these trials have provided rather striking outcomes. An openly administered dose of the painkiller Metamizol, for
instance, relieved postoperative pain much better than a hidden dose; all of the open-administration group’s relief was from
placebo. When researchers injected a different set of patients with a hidden dose of the painkiller buprenorphine, this did
have a pain-reducing effect—though not as much, or as fast, as giving it through an overt injection. Though buprenorphine
works, it works better when used in conjunction with the placebo effect. This kind of trial, which allows physicians to see
the total effect of drug plus placebo, can help them give a reduced dose of potentially toxic or addictive substances.

Skeptics might argue that pharmaceutical companies will fight anything that casts their products in a dubious light—especially
if it results in people using lower doses across the board—but the truth is that, for many drug companies, reliable information
on the placebo effect can’t come soon enough. To pass muster, a drug must outperform placebo. But a 2001 study of antidepressant
drug trials showed that while drug efficacy is rising, placebo rates are rising faster. It’s almost ironic; the factors behind
this are many and varied, but a significant contributor is our society’s knowledge of—and belief in—the power of medicines.
The pharmaceutical industry’s palpable success means that unless something radical happens, it could soon be, like the Red
Queen, running to stand still.

The other big opportunity for a paradigm shift is in the clinical scenario: should we ignore Hróbjartsson and Gøtzsche and
encourage doctors to keep lying to us about their treatments?

Health-care providers may not like the idea that the bold future of medicine lies in more exploitation of the healing power
of the imagination, but if doctors are serious about preserving your health—maybe even saving your life—they might have to
swallow this bitter pill. Not because placebo is a magic bullet, but for precisely the opposite reason. For all the marvels
of the placebo effect, perhaps the most important thing is to recognize its limits. The placebo effect will not cure cancer.
It does not slow the onset of Alzheimer’s or Parkinson’s. It does not make a malfunctioning kidney function again. It does
not protect against malaria. Patients are already flocking to “complementary” therapists who unwittingly embrace placebos.
The same patients are probably unaware that their family doctor could quite intentionally—some would say “cynically”—embrace
these same treatments too, where appropriate.

It might be a disaster if they don’t. The danger comes when the complement part of complementary disappears, and patients
visit practitioners offering only “alternative” treatments. If the patient’s condition is simply not placebo responsive—even
if many of the symptoms are—that could be life-threatening. Get the placebo out in the open, find a way to make it an acknowledged
tool in the doctor’s armory, and we could save lives by keeping patients within the fold of efficacious, rational medicine.
Just as long as we admit that, for the moment at least, it’s not quite as rational as we’d like.

And that brings us to our last subject. It is, to many minds, not qualified to stand alongside these others. However, we have
just raised questions about the placebo effect and the clinical trial, and these both have a bearing on the claims made for
science’s least favorite anomaly: homeopathy.

13

HOMEOPATHY

It’s patently absurd, so why won’t it go away?

An insightful mind once remarked that historians labor under a delusion: they think they are describing the past, when in fact
they are explaining the present. It must be doubly true of the historians of science. Time and again, going through these
anomalies, we have had to dig into history in order to understand what is happening in contemporary science, and where its
future might lie. With our final anomaly, it turns out that the insight is particularly powerful.

Homeopathy, invented in the late 1700s, is now more popular than ever. According to the World Health Organization, it now
forms an integral part of the national health-care systems of a huge swath of countries including Germany, the United Kingdom,
India, Pakistan, Sri Lanka, and Mexico. At London’s Royal Homeopathic Hospital, part of the United Kingdom’s national health
service, the staff numbers a staggering six thousand. Forty percent of French physicians use homeopathy, as do 40 percent
of Dutch, 37 percent of British, and 20 percent of German physicians. In 1999 a survey revealed that 6 million Americans had
used homeopathic treatments in the previous twelve months. The big question is, why? An assessment of homeopathy using the criteria of known scientific phenomena says it simply cannot work; no wonder Sir John Forbes, the physician to Queen Victoria’s household, called it “an outrage to human reason.”

Although there are several different approaches, homeopathy generally involves first finding a cure by the principle of similars, which says that the remedy should be of a substance known to create the very symptoms the patient is already suffering. Then that remedy is diluted in water or alcohol to the point where the solution handed to the patient contains no molecules of the original remedy. Nonetheless, it has been “potentized” by repeated shaking or banging with each dilution—a process known as succussion. In fact, homeopaths say, this ultradilute solution is more potent in curing ailments than the original undiluted substance.

It sounds like a ridiculous idea and, to most scientific minds, it is. The statistics of dilution make it plain why. A typical
homeopathic dilution is done in ratios of one part of the substance to ninety-nine parts alcohol or water (depending on whether
the substance is soluble in water). This process is repeated—a dilution of one part of the original solution to ninety-nine
parts water or alcohol—again and again. It’s quite normal to do this thirty times—this is called a 30C dilution.
That means, if you started by dissolving a tiny amount of your remedy in around fifteen drops of water, you would end up with
the original substance diluted in a volume of water fifty times bigger than the Earth. The big scientific problem with this
is that when the homeopathic pharmacist sells you a few milliliters of this remedy, the math of chemistry tells you there
is virtually no chance that it contains a single molecule of the original substance.

If you know the weight of a sample of some chemical—let’s say carbon—the basics of high school chemistry tell you how many
atoms you have in your sample. A gram of carbon, for instance, contains 5 × 10²² atoms. That sounds like a lot—and it is:
it’s 5 followed by twenty-two zeroes. In a 30C homeopathic dilution, however, there’s not a lot left; if you take fifteen
drops of liquid, you’ll have no more than one ten-millionth of an atom. And since you can’t split the carbon atom up (at least,
not this way), it’s safe to say you’ve got no carbon in there. In standard practice, medicinal effects come through interaction
with the body’s biochemistry, which means you need molecules of the remedy to be present in the body. With homeopathy, there’s
nothing. By any laws known to science, the remedy cannot interact with the biochemistry of your body in any meaningful way.

Samuel Hahnemann, the founding father of homeopathy, knew this, though; it’s not about chemistry, he said, but about the “energy”
of the remedy being passed into the water. Since this “energy” is not known to science, the obvious conclusion is that if
a homeopathic remedy has an effect, it can be no better than placebo.

The first scientific counter to this point of view came from the laboratory of French immunologist Jacques Benveniste. In
1988 Benveniste convinced the journal Nature to publish the details of an experiment that showed water was permanently altered by molecules that had once been dissolved in it. The publication was on the condition that a rerun of the experiments be carried out in independent laboratories. That was done, in Marseille, Milan, Toronto, and Tel Aviv. After publication (with disclaimers), Nature requested that the experiments be done again, this time in the presence (and under the intense scrutiny) of three independent witnesses. Nature’s then-editor John Maddox, the magician and professional skeptic James Randi, and Walter Stewart, a chemist and an expert
on scientific fraud, spent a week in Benveniste’s Paris lab. The full tale is an extraordinary one; the short version is simply
that the visitors discovered how Benveniste had been duped by his assistant, who was cherry-picking data to support her belief
in homeopathic medicine.

Nature published a critique of the original paper. Benveniste fought back, citing a McCarthy-like witch hunt, but his goose was cooked. The following year, his employer, the French National Institute of Health, criticized him for credulousness, cavalier reporting of his results, and abuse of his scientific authority. Two years after the Nature fiasco began, Benveniste was sacked.

That, essentially, was that—until Madeleine Ennis got involved. Ennis, a professor of immunology at Queen’s University Belfast,
says she was a hard-nosed skeptic of homeopathy and the Benveniste work. When she expressed this in the face of a published
homeopathic trial, a manufacturer of homeopathic remedies asked her to join a team that would make another attempt to replicate
that result. She agreed, expecting to add to the evidence against homeopathy. After the end of the trial, she declared herself
“incredibly surprised” by the result. Quoted in the Guardian, she said, “Despite my reservations against the science of homeopathy, the results compel me to suspend my disbelief and
to start searching for a rational explanation for our findings.”

The trial, which was essentially a replication of Benveniste’s experiment, took place in four different laboratories in Italy,
Belgium, France, and Holland. Ennis’s skepticism wasn’t the only safeguard. The homeopathic solutions (and the controls) were
prepared by three independent laboratories that made no other contribution to the trial. Inside those solutions were—or rather,
had been—molecules of histamine.

Anyone who suffers from hay fever knows the power of histamine: it’s an immune system response that produces hives, pain, itching,
swelling, constriction of breathing, runny nose, and streaming eyes. All that, from some tiny molecules that form a small
part of your bloodstream. Every drop of blood contains somewhere in the vicinity of 15,000 white blood cells; around 150 of
those cells are known as basophils, and inside these basophils, contained in tiny granules, is the histamine.

Histamine has a strong effect on its basophil containers. After they release the histamine, its presence in their environment
stops them from releasing any more. This effect was central to Ennis’s experiment.

The labs that prepared the ultradilute histamine solutions sent test tubes of water and test tubes of dilute histamine to
the labs carrying out the experiment. The histamine dilution was at the kind of level homeopaths routinely use, where there
would have been no molecules of the substance in the vials. There was no way to tell which was water and which was the homeopathic
solution. In the experiment, the researchers stained basophil granules blue, then put these colored granules into the test
tubes, along with a substance called anti-immunoglobulin E, or aIgE. The aIgE causes a de-granulation reaction, in which the color disappears and the granules release histamine.

In water, this is exactly what happened. But when the researchers put the colored granules and the algE into the ultradilute
histamine solution, the de-granulation didn’t happen. The “ghost” presence of histamine in the homeopathic solution was enough
to stop the process in its tracks.

The results were statistically significant at three of the centers. The fourth center saw an effect in the same direction: the histamine solution did suppress de-granulation more than the pure water, but the difference was not large enough to reach statistical significance.

Ennis was not satisfied by the results; there could have been bias in identifying which basophils still had their blue color
because the researchers did it by eye. So she demanded they make a different measurement, one that could be automated. That
way, a believer among them would not be able to skew the results—even unconsciously. She had the basophils “tagged” with an
antibody that would make them glow if their histamine secretion was being suppressed. A light-sensitive probe then did the
counting. The result was the same.

The record of the experiment, published in Inflammation Research, concluded that “histamine solutions, both at pharmacological concentrations and diluted out of existence, lead to statistically
significant inhibition of basophil activation by anti-immunoglobulin E.”

Not that Ennis quite puts her own results beyond question. It was, she admits, a small study, and no one has yet replicated
its findings. In one famous attempt, a team of scientists failed to replicate Ennis’s experiment for a BBC Horizon program. Ennis appeared on the show, but she later distanced herself from the experiment, saying there was a series of flaws
in the protocol. A study by Adrian Guggisberg and colleagues at the University of Bern also failed to find any effects from
homeopathic histamine dilutions. The Swiss team’s analysis of protocols and results, published in Complementary Therapies in Medicine in 2005, found that small variations in the experimental setup could lead to significantly different outcomes; there were
all kinds of things that could affect the experiment, such as the temperature at which the basophils were prepared, and how
long in advance the homeopathic solutions were prepared.

Homeopaths will certainly cry “aha” at one of the Bern study’s main observations: the results “might depend on inter-individual
differences of blood donors,” according to the paper’s conclusions. The idea that homeopathy works on a case-by-case basis,
that a remedy will produce healing effects in some people and not in others, has been the homeopath’s primary excuse when
confronted with negative results in clinical trials of homeopathic remedies. Almost every time a homeopathic medicine fails
to register an effect, a representative of homeopathy will respond by saying homeopathic prescription is a complex process;
symptoms have to be considered in the light of all other aspects of the personality and physiology, and the right remedy for
an ailment will be dependent on a large number of factors. Ask a homeopath to prescribe for an ear infection, say, and she’ll
ask, Which ear? Since the body isn’t symmetrical—the liver and the heart, for example, lie away from the center line and,
unlike the kidneys, have no mirror organ—ailments affecting one side of the body will have a different nature from ailments
affecting the other. Even if your two ears do look the same.

To a scientific mind, that just comes across as untestable waffle. Which is why, in the end, almost every scientific mind says homeopathy can’t work. Even when that scientific mind acknowledges that evidence to the contrary does seem to exist.

In his book Placebo, Dylan Evans attributes any success of homeopathy to the placebo effect. However, he also admits that a 1997 meta-analysis published in the Lancet shows it is, on average, significantly more effective than a placebo. How does Evans square this circle? By saying that “it
would be foolish indeed to cast aside the whole of physics, chemistry and biology—supported, as they are, by millions of experiments
and observations—just because a single study yields a result that conflicts with their principles.” The University of Maryland
skeptic Robert L. Park uses the same argument. “If the infinite-dilution concept held up, it would force a re-examination
of the very foundations of science,” he says.

Is this true? If ultradilute solutions can have effects on biology, will this send science back to the drawing board? No.
Science works; millions of experiments and observations can be explained using scientific principles. None of those results
are changed if homeopathy turns out to be right. Why? Because none of those millions of experiments and observations has told
us everything we would like to know about the microscopic properties of water.

We know very little about liquids. Solids are easy; for decades it has been possible to probe the structure of solids using techniques
such as X-ray diffraction. That is how Francis Crick, James Watson, and Rosalind Franklin worked out the structure of DNA;
they bounced X-rays off the crystal and interpreted the resulting regular X-ray pattern to reveal its regular arrangement
of atoms. The key word here, however, is regular. Liquids aren’t regular, and we have no way of probing an irregular microscopic structure.

Chemists assume that in the absence of external influences, the structure is likely to be similar all through a liquid; the
chemical bonds should surely arrange themselves so there’s minimum stress in the setup. But what happens at fluctuating temperatures?
Or if there are regions of the liquid under high pressure? Or in electromagnetic fields? Can water in a jug exist in fairly
neat order in some regions and clumped messily in others? Does it interact with the molecules in the glass walls of the jug?
We don’t know.
