
Where figures are given, they generally use the most dramatic and uninformative way of expressing the benefits: the ‘relative risk reduction’ is given, the same statistical form that journalists prefer – for example, ‘a 30 per cent reduction in deaths from breast cancer’ – rather than a more informative figure like the ‘number needed to screen’ – say, ‘two lives saved for every thousand women scanned’. Sometimes the leaflets even contain borderline porkies, like this one from Ontario: ‘There has been a 26 per cent increase in breast cancer cases in the last ten years,’ it said, in scary and misleading tones. But that increase was roughly the level of over-diagnosis you would expect from the screening programme itself, which had been operating over those same ten years.
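
To see how the two framings relate, here is a rough sketch of the arithmetic that converts a ‘relative risk reduction’ into a ‘number needed to screen’. The baseline death rate is an assumption chosen purely for illustration, not a figure from any real screening programme.

```python
# Illustrative only: the baseline is an assumed figure, not real screening-programme data.
baseline_deaths_per_1000 = 6.7        # assumed breast-cancer deaths per 1,000 unscreened women
relative_risk_reduction = 0.30        # the headline '30 per cent reduction in deaths'

deaths_with_screening = baseline_deaths_per_1000 * (1 - relative_risk_reduction)
lives_saved_per_1000 = baseline_deaths_per_1000 - deaths_with_screening

# 'Number needed to screen': how many women must be screened to prevent one death.
number_needed_to_screen = 1000 / lives_saved_per_1000

print(f"Lives saved per 1,000 women screened:   {lives_saved_per_1000:.1f}")      # about 2
print(f"Number needed to screen per life saved: {number_needed_to_screen:.0f}")   # roughly 500
```

Under that assumed baseline, the same benefit can be announced as ‘a 30 per cent reduction in deaths’ or as ‘about two lives saved per thousand women screened’; only the second tells you how big the effect actually is.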

These problems with clear information raise interesting questions around informed consent, although seductive letters do increase uptake, and so save lives. It’s tricky: on the one hand, you end up sounding like a redneck who doesn’t trust the gub’mint, because screening programmes are often valuable. On the other hand, you want people to be allowed to make informed choices.

And the amazing thing is, in at least one large survey of five hundred people, even when presented with the harsh realities of the tests, people made what many would still think are the right decisions. Thirty-eight per cent had experienced at least one false-positive screening test; more than 40 per cent of these individuals described the experience as ‘very scary’, or ‘the scariest time of my life’. But looking back, 98 per cent were glad they were screened. Most wanted to know about cancer, regardless of the implications. Two thirds said they would be tested for cancer even if nothing could be done. Chin up.

How Do You Know?

Guardian, 4 June 2011

Mobile phones ‘possibly’ cause brain cancer, according to a report this week from the IARC (International Agency for Research on Cancer), part of the WHO. This report has triggered over 3,000 news articles around the world. Like you, I’m not interested in marginal changes to small lifestyle risks for their own sake; but I am interested in the methodological issues they throw up.

First, transparency: science isn’t about authoritative utterances from men in white coats, it’s about showing your working. What does this report say? How do its authors reason around contradictory data? Nobody can answer those questions, because the report isn’t available. Nobody you see writing confidently about it has read it. There is only a press release. Nobody at the IARC even replied to my emails requesting more information.

This isn’t just irritating. Phones are a potential risk exposure where people can make a personal choice. People want information. It’s in the news right now. The word ‘possibly’ informs nobody. How can we put flesh on that with the research that is already published, and what are the limits of the research?

The crudest data you could look at is the overall rate of different brain cancers. This hasn’t changed much over time, despite an increase in mobile-phone use, but it’s a crude measure, affected by lots of different things.

Ideally, we’d look at individuals, to see if greater mobile use is correlated with brain cancer, but that can be tricky. These tumours are rare – about ten cases in every 100,000 people each year – and that affects how you research them.

For common things, such as heart disease, you can take a few thousand people and measure factors you think are relevant – smoking, diet, some blood tests – then wait a few years until they get the disease. This is a ‘prospective cohort study’, but that approach is much less useful for studying rare outcomes, like brain tumours, because you won’t get enough cases appearing in your study group to spot an association with your potential cause.

For rare diseases, you do a ‘retrospective case-control study’: gather lots of cases; get a control group of people who don’t have the rare disease but are otherwise similar; then, finally, see if your cases are more or less likely to report being exposed to mobile phones.

This sounds fine, but such studies are vulnerable to the frailties of memory. If someone has a tumour on the left of their head, say, and you ask, ‘Which side did you mostly use your phone on ten years ago?’, they might think, God, yes, that’s a good point, and unconsciously be more likely to inaccurately remember ‘the left’. In one study on the relationship between mobile-phone use and brain tumours, ten people with brain cancer (but no controls) reported phone usage figures that worked out overall as more than twelve hours a day. This might reflect people misremembering the distant past.

Then there are other problems, such as time course: it’s possible that mobile phones might cause brain cancer, but through exposure over thirty years, while we’ve only got data for ten or twenty years, because these devices haven’t been in widespread use for long. If this is the case, then the future risk may be unknowable right now (although, to be fair, other exposures that are now known to cause a peak in problems after several decades, such as asbestos, do still show measurable effects even just ten years after exposure). And then, of course, phones change over time: twenty years ago the devices had more powerful transmitters, for example. So we might get a false alarm, or false reassurance, by measuring the impact of irrelevant technology.

But lastly, as so often, there’s the issue of a large increase in a small baseline risk. The absolute worst-case scenario, from the Interphone study, is this: it found that phone use overall was associated with fewer tumours, which is odd; but very, very high phone use was associated with a 40 per cent increase in tumours. If everyone used their phones that much – an extreme assumption – and the apparent relationship is a genuine one, then this would still only take you from ten brain tumour cases in 100,000 people to fourteen cases in 100,000 people.

That’s what ‘possible’ looks like: the risk itself is much less interesting than the science behind it.

Anecdotes Are Great, If They Really Illustrate the Data

Guardian, 29 July 2011

On Channel 4 News, scientists have found a new treatment for Duchenne’s muscular dystrophy. ‘A study in the Lancet today shows a drug injected weekly for three months appears to have reduced the symptoms,’ they say. ‘While it’s not a cure, it does appear to reduce the symptoms.’

Unfortunately, the study shows no such thing. The gene for making a muscle protein called dystrophin is damaged in patients with DMD. The Lancet paper shows that a new treatment led to some restoration of dystrophin production in some children in a small, unblinded study.

That’s not the same as symptoms improving. But Channel 4 reiterates its case, with the mother of two participants in the study. ‘I think for Jack … it maintained his mobility … with Tom, there’s definitely significant changes … more energy, he’s less fatigued.’

Where did these positive anecdotes come from? Disappointingly, they come from the Great Ormond Street Hospital press release (which was tracked down online by evidence-based policy wonk Evan Harris). It summarises the dystrophin results accurately, but then, once more, it presents an anecdotal case study going way further: ‘Our whole family noticed a marked difference in their quality of life and mobility over that period. We feel it helped prolong Jack’s mobility and Tom has been considerably less fatigued.’

There are two issues here. Firstly, anecdotes are a great communication tool, but only when they accurately illustrate the data. The anecdotes here plainly go beyond that. Great Ormond Street denies that this is problematic (though it has changed its press release online). I strongly disagree (and this is not, of course, the first time an academic press release has been suboptimal).

But this story is also a reminder that we should always be cautious with ‘surrogate’ outcomes. The biological change measured was important, and good grounds for optimism, because it shows the treatment is doing what it should do in the body. But things that work in theory do not always work in practice, and while a measurable biological indicator is a hint that something is working, such outcomes can often be misleading.

Examples are easy to find, and from some of the biggest diseases in medicine. The ALLHAT trial was a vast scientific project, comparing various blood-pressure and lipid-lowering drugs against each other. One part compared 9,000 patients on doxazosin against 15,000 on chlorthalidone. Both drugs were known to lower blood pressure, to pretty much the same extent, and so people assumed they would also be fairly similar in their impact on real-world outcomes that matter, like strokes and heart attacks.

But patients on doxazosin turned out to have a higher risk of stroke, and cardiovascular problems, than patients on chlorthalidone – even though both lowered blood pressure – to such an extent that the trial had to be stopped early. Blood pressure, in this case, was not a reliable surrogate outcome for assessing the drugs’ benefits on real-world outcomes.

This is not an isolated example. A blood test called HbA1c is often used to monitor progress in diabetes, because it gives an indicator of blood-glucose levels over the preceding few weeks. Many drugs, such as rosiglitazone, have been licensed on the grounds that they reduce your HbA1c level. But this, again, is just a surrogate outcome: what we really care about in diabetes are real-world outcomes like heart attacks and death. And when these were finally measured, it turned out that rosiglitazone – while lowering HbA1c levels very well – also, unfortunately, massively increased your risk of heart attack. (The drug has now been suspended from the market.)

We might all wish otherwise, but blood tests are a mixed bag. Positive improvements on surrogate biological outcomes that can be measured in a laboratory might give us strong hints on whether something works, but the proof, ultimately, is whether we can show an impact on patients’ pain, suffering, disability and death. I hope this new DMD treatment does turn out to be effective: but that’s not an excuse for overclaiming, and even for the most well-established surrogate measures and drugs, laboratory endpoints have often turned out to be very misleading. People writing press releases, and shepherding misleading patient anecdotes into our living rooms, might want to bear that in mind.

Six weeks after the piece above was published, the Great Ormond Street in-house magazine RoundAbout published a response from Professor Andrew Copp, Director of the Institute of Child Health. It seems to me that he has missed the point entirely – both on surrogate outcomes, and on the issue of widely reported anecdotal claims that went way beyond the actual data – but his words are reproduced in full below, so that you can decide for yourself.

The end of July saw the publication in The Lancet of an important paper by Sebahattin Cirak, Francesco Muntoni and colleagues in the Dubowitz Neuromuscular Centre at the ICH/Great Ormond Street Hospital (GOSH). Together with collaborators, the team provides evidence that a new technique called ‘exon skipping’ may be used in future to treat Duchenne muscular dystrophy (DMD).
DMD is a progressive, severely disabling neuromuscular disease that affects one in every 3,500 boys and leads to premature death. The cause is an alteration in the gene for dystrophin, a vital link protein in muscle. Without dystrophin, muscles become inflamed and degenerate, leading to the handicap seen in DMD. Patients with Becker muscular dystrophy also have dystrophin mutations, but their disease is much milder because the dystrophin protein, although shorter than normal, still functions quite well. By introducing short stretches of artificial DNA, it is possible to ‘skip’ over the damaged DNA in DMD cells and cause a shorter but otherwise functional protein to be made.
Previous studies showed this approach to work when the artificial DNA was injected directly into patients’ muscles. The Lancet study asked whether the therapy would also work by intravenous injection, a crucial step as it would not be feasible to inject every muscle individually in clinical practice. Of the 19 boys who took part, seven showed an increase in dystrophin protein in their muscle biopsies. Importantly, there were few adverse effects, suggesting the treatment might be tolerated long term as would be necessary in DMD.
The study has been welcomed with great optimism in many parts of the media. However, I was disappointed to see an article about the research appearing in the ‘Bad Science’ column of the Guardian, written by Ben Goldacre. While he does not criticise the research in the Lancet paper, Goldacre accuses Channel 4 and GOSH of misleading the public about the extent of the advance for patients. The GOSH press release included comments by a parent who felt her two boys had shown improvement in mobility and less fatigue after treatment. Although this was only two sentences within a much longer and scientifically accurate press release, the sentences were seized upon as an opportunity for bad publicity for GOSH.
Goldacre spoils his argument by claiming the improvement in dystrophin protein level to be ‘theoretical’. He says: ‘things that work in theory do not always work in practice’. As a scientifically trained journalist, it is sad to see him confusing theory with an advance that is manifestly practical – the missing protein actually returned in the muscles! Nevertheless, it is a reminder that there are those in the media who like to cast doubt on the work of GOSH, and we must continue to be careful when conveying what we do, in order not to be misunderstood.