I Think You'll Find It's a Bit More Complicated Than That
Ben Goldacre
As so often, this is about transparency, which is ultimately the only source of authority in science: we want the methods and results of scientific research to be formally presented, and accessible to all, so that we can see what was done, and what was found. If a government report on anything relies substantially on unpublished and inaccessible research, then we are rightly concerned. In fact, just two weeks ago this column discussed how the key piece of evidence presented by the Home Office to justify retaining DNA from innocent people who have only ever been arrested and then released was an incompetently presented piece of unpublished, incomplete research.
Systematic reviews give you transparency, because from the very outset everyone is honest and open about what kinds of studies they will and won’t include. When organisations like the Soil Association greet the publication of a systematic review with accusations of cherry-picking, they’re not just wrong, they’re undermining the public’s understanding of one of the most important and simple new ideas from the past three decades of science.
That annoys me, so let me share a prejudice. Ultimately, when people talk about the health benefits of organic food, they’re not really talking about the health benefits of organic food. Wealthy society’s big love for organic farming is about something very different: a bundle of legitimate concerns about unchecked capitalism in our food supply, battery farming, corruptible regulators, and reckless destruction of the environment, where the producers’ costs do not reflect the true full costs of their activities to society, to name just a few. Every one of these problems deserves our full, individual attention.
But magic spells will not work. We cannot eradicate deceit from the pharmaceutical industry by buying homeopathic sugar pills; and we cannot solve the problems of unchecked capitalism in industrial food production by giving money to the £2 billion organic food industry in exchange for the occasional posh carrot.
As Far as I Understand Thinktanks …
Guardian, 7 June 2008
There has been a frightening decline in the quality of maths in reports complaining about the frightening decline in the quality of maths in Britain. ‘The Value of Mathematics’, by thinktank Reform, has received large quantities of flattering media coverage this week from The Times, the Telegraph, and even scored a second puff in the Guardian from Professor Marcus du Sautoy, Oxford’s Professor of the Public Understanding of Science. Their argument is simple: there is less maths around; people think it’s cool to be bad at sums; we suffer economically; these are bad things.
Here is a key, early, factual claim from Reform’s report: ‘About 40 per cent of mathematics graduates enter financial services.’ This, they say, is a good thing. Do we believe the number? The report references it to the front page of a website called prospects.ac.uk, which is a pretty big website. Chasing through the pages there, you will find ‘What Do Graduates Do?’, and then the maths page. There were 4,070 maths graduates in their sampling frame for the year 2006. Only 2,010 of those, however, are in UK employment (1.5 per cent are working abroad, and the rest are studying for a higher degree, or a teaching qualification, or are unemployed, or unavailable for employment, and so on).
Of those 2,010 – not 4,070 – 37.9 per cent are working as ‘Business and Financial Professionals and Associate Professionals’. So if we use maths: 2,010 × 0.379 = 761.79, and that divided by 4,070 gives us 0.1871, but let’s round up like the angry maths profs did. About 20 per cent of maths graduates enter financial services. Not 40 per cent. For a group of people complaining about the substitution of woolly modern notions like ‘relevance’ and ‘applied maths’ in place of high-end mathematical techniques, these maths profs aren’t very good at arithmetic.
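The arithmetic above can be checked in a few lines of Python. The figures come from the prospects.ac.uk data cited in the report; the variable names are mine:

```python
# Figures from the 'What Do Graduates Do?' data for 2006.
graduates = 4070            # maths graduates in the sampling frame
in_uk_employment = 2010     # of those, in UK employment
share_in_finance = 0.379    # working as business/financial (associate) professionals

in_finance = in_uk_employment * share_in_finance
print(round(in_finance, 2))               # 761.79
print(round(in_finance / graduates, 4))   # 0.1872 -- about 20%, not 40%
```

The report's 40 per cent comes from quietly using the smaller denominator (those in UK employment) while talking as if it applied to all graduates.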
They’re also not very good at basic applied maths. For example, they argue that if we simply had more people around with maths knowledge, then there would automatically be more (and more lucrative) jobs requiring that knowledge, which our new maths graduates would instantly take up. I’m not an economist, but I’m not sure labour markets work like that: there are lots of men in the north of England who are very good at mining, and nobody in a hurry to open new pits.
Unfortunately, in any case, even this aspect of the report seems to be marred by simple errors in applied arithmetic. The thinktank is worried that the loss of A-level mathematicians has resulted in lost earnings for the economy. If the number of maths candidates had remained constant, they say, there would have been an additional 430,700 over the period 1989 to 2007. In the adjacent table they say 430,031, but that’s the least of our worries. They go on to reason like this: ‘Each of these students would have earned an additional £3,080 per year due to the market premium on A-level mathematics, equating to £136,000 over their lifetime. The total gain to the economy over the period would have been over £9 billion.’
We’ll put aside the fact that the BBC said ‘A maths A-level puts on average an extra £10,000 a year on a salary [not £3,080], says Reform,’ because I can’t get £9 billion for that period with those numbers (I get £12 billion, assuming a linear decline). Even if I could, Reform are making assumptions that are hard to blindly accept: in particular, will the extra earnings for people with maths A-level really still hold, if more people have maths A-level? We could go on.
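How sensitive is that headline figure to unstated assumptions? Here is a toy model, not Reform's: I assume 430,700 missing candidates spread across eighteen cohorts (1990–2007), each earning the £3,080 annual premium from their cohort year until 2007, and compare two profiles for how the shortfall might have accumulated:

```python
# Toy model of the 'lost earnings' claim. Cohort structure and workforce
# assumptions are mine, for illustration only.
premium = 3080
total_missing = 430_700
cohorts = 18

# Scenario A: the shortfall appears in full immediately and stays constant.
per_year = total_missing / cohorts
person_years_constant = per_year * sum(cohorts - i for i in range(cohorts))

# Scenario B: the shortfall grows linearly from near zero to its maximum.
weights = [i + 1 for i in range(cohorts)]   # deficit grows each year
scale = total_missing / sum(weights)
person_years_linear = sum(scale * w * (cohorts - i)
                          for i, w in enumerate(weights))

print(round(person_years_constant * premium / 1e9, 1))  # ~12.6 (in £bn)
print(round(person_years_linear * premium / 1e9, 1))    # ~8.8 (in £bn)
```

Depending on the profile assumed, the total lands anywhere from roughly £9 billion to over £12 billion, which is precisely the problem: the headline number depends on assumptions the report never states.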
I’m happy to agree that maths is economically useful, that some people might want to avoid difficult school subjects, and that humanities graduates who think maths is uncool are bores. What I would like is someone who can be bothered to sit down and reinforce my prejudices without perpetrating crass errors of over-interpretation and getting the basic arithmetic wrong. I’ve never fully seen the point of them, but until now, I assumed that’s what thinktanks and Professors of the Public Understanding of Science are there for.
Meaningful Debates Need Clear Information
Guardian, 27 October 2007
Where do all those numbers in the newspapers come from? The Commons Committee on Science and Technology is taking evidence on ‘scientific developments relating to the Abortion Act 1967’.
Scientific and medical expert bodies giving evidence say that survival in births below the twenty-fourth week of pregnancy has not significantly improved since the 1990s, when it was only 10–20 per cent. But one expert, a Professor of Neonatal Medicine, says survival at twenty-two and twenty-three weeks has improved. In fact, he says survival rates in this group can be phenomenally high: 42 per cent of children born at twenty-three weeks at some top specialist centres. He has been quoted widely: in the Independent and the Telegraph, on Channel 4 and Newsnight, by Tory MPs, and so on. The figure has a life of its own.
In the media, you get one expert saying one thing, and another saying something else. Who do you believe? The devil is in the detail. One option is to examine the messenger. John Wyatt is a member of the Christian Medical Fellowship. He didn’t declare that when he went to give evidence. You don’t have to. He did declare it when asked.
Prof Wyatt has relevant research experience, but there were half a dozen other medics without any relevant background who submitted evidence (or their view of it) to the committee who, when asked if they had anything to declare, did mention membership of Christian or evangelical groups with an established position on abortion. I don’t care for an argument that rests on competing ideologies, so let’s look at Prof Wyatt’s evidence: because it has been hugely reported, and it goes against the evidence from a huge study called Epicure.
Epicure contains all of the data for every premature birth in England over the course of a year: one snapshot was taken in 1995, and one in 2006. Overall, it shows a modest improvement in survival for births at twenty-four weeks, but no significant improvement in the 10–20 per cent rate for births at twenty-two and twenty-three weeks.
For the next bit, you need to remember one simple piece of primary school maths. In the figure 3/20, 3 is the numerator and 20 is the denominator. If you have three survivors for every twenty births, then 15 per cent survive. For Epicure, the numerator is survival to discharge from hospital, and the denominator is all births where there is a sign of life, carefully defined.
There are two ways you could get a higher survival percentage. One would be a genuine increase in the number of babies surviving, an increase in the numerator: eight out of twenty live births survive, 40 per cent. But you could also see an increase in the survival percentage by changing the denominator. Let’s say, instead of counting as your denominator ‘all births where there is a sign of life in the delivery room’, you counted ‘all babies admitted to neonatal intensive care’. Now, that’s a different kettle of fish altogether. To be admitted to neonatal ICU, the doctors have to think you’ve got a chance. Often you have to be transferred from another hospital in an ambulance, and for that you really do have to be more well. Therefore, if your denominator is ‘neonatal ICU admissions’, your survival rates will be higher, but you are not comparing like with like. That may partly explain Prof Wyatt’s figure for a very high survival rate in twenty-three-week babies. But it’s not clear.
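The denominator effect is easy to see with hypothetical numbers (these are illustrative, not Epicure's or UCL's):

```python
# Hypothetical cohort of extremely premature births (illustrative only).
births_with_signs_of_life = 40   # the Epicure-style denominator
admitted_to_nicu = 19            # only the babies thought to have a chance
survivors = 8                    # the same survivors either way

rate_all_live_births = survivors / births_with_signs_of_life
rate_nicu_admissions = survivors / admitted_to_nicu

print(f"{rate_all_live_births:.0%}")   # 20%
print(f"{rate_nicu_admissions:.0%}")   # 42%
```

Same babies, same survivors; only the denominator changed, and the survival rate has more than doubled.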
First, in his written evidence he said that the data was from a ‘prospectively defined’ study (where the researchers say in advance what they plan to collect). Then he was asked in the committee, when giving his oral evidence: ‘What was the denominator for that? Was that … 42 per cent survival at twenty-three weeks of all babies showing signs of life in the delivery room, or was it a proportion of those admitted to neonatal intensive care directly or by transfer?’ Prof Wyatt replied: ‘The denominator was all babies born alive in the labour ward in the hospital at UCL [University College London].’ This later turned out not to be true.
Then he was asked to send the reference for the claim. He did so. It was merely an abstract for an academic conference presentation three years ago. It did not contain the figures he was quoting. He then said he had done the raw figures on a spreadsheet, especially for the committee – bespoke, if you will – and sent them in. They are entered into the record as a memo, on 18 October 2007. They show new, different, but broadly comparable figures: 50 per cent survive at twenty-two weeks, then down to 46 per cent at twenty-three weeks, then up to 82 per cent at twenty-four weeks, then down again to 77 per cent at twenty-five weeks. (That bouncing around is because the raw numbers are so small that there is a lot of random noise.)
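How small do the numbers have to be before percentages bounce around like that? A quick standard-error calculation shows it (the cohort sizes below are assumed for illustration, not taken from Prof Wyatt's data):

```python
import math

# With n births per gestational week and a true survival rate p, the
# observed percentage wobbles by roughly +/- 2 standard errors.
def wobble(p, n):
    return 2 * math.sqrt(p * (1 - p) / n)

# A dozen births per week at a single hospital (assumed cohort size):
print(round(100 * wobble(0.5, 12)))    # 29 percentage points either way
# A national dataset like Epicure, with hundreds of births per week:
print(round(100 * wobble(0.5, 500)))   # 4 percentage points
```

With a handful of babies in each gestational-age group, swings of tens of percentage points between adjacent weeks are exactly what chance alone produces.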
And the denominator? Prof Wyatt is clear: ‘I have provided the numbers and percentage of infants born alive at University College London Hospitals who survived to one year of age.’ The committee asked for clarification of this. Finally, on 23 October, another memo arrived from Prof Wyatt, entered into the record where all can read it. For the widely quoted 42 per cent survival rate at twenty-three weeks, Prof Wyatt admitted that the denominator was, in fact, all babies admitted to the neonatal intensive care unit. But in his new special analysis, giving this new ‘46 per cent survive at twenty-three weeks’ figure, the figures in the previous paragraph, he claimed the denominator was ‘all live births’. Has he undone a prospectively designed study, and retrospectively redesigned it? Or is this now a completely different source of data to the original reference?
I cannot blame Prof Wyatt for this, but his figure has taken on a life of its own. There may have been yet another mistake here, about the denominator. I don’t know. I’m quite prepared to believe that UCL may have unusually good results. But science is about clarity and transparency, especially for public policy. You need to be very clear on things like: what do you define as a ‘live birth’, how do you decide on what gestational age was, and so on. Even if this data stands up eventually, right now it is non-peer-reviewed, unpublished, utterly chaotic, changeable, personal communication of data, from 1996 to 2000, with no clear source, and no information about how it was collected or analysed. That would be fine if it hadn’t suddenly become central to the debate on abortion.
Guardian, 3 November 2007
Parliamentary select committees are among the few places where you can see politicians sitting down and doing the kind of thing you’d actually want them to do, like thinking carefully about policy. This week the Science and Technology Committee delivered its report on scientific developments relating to the Abortion Act. This is, entirely for free, a fantastic piece of popular science writing on epidemiology. In particular, it’s a masterclass on spotting dodgy statistics, which is exactly what the committee received from anti-abortion activists.
Here is one example, on the question of abortion and breast cancer risk, in which a parliamentary document explains the importance of choosing the correct control group.
Dr Richards told us that ‘if you compare women who keep their pregnancy with those who have an induced abortion, those who have an induced abortion are more likely to get breast cancer later on’. This is the comparative group that Dr Brind favours and the result is expected, since carrying a first pregnancy to birth is protective against breast cancer. However, if you look at the rates of cancer between women who have had an abortion and those who have not had children, the effect disappears.
This is the bread and butter of science; a thing to behold. They give similarly rigorous and transparent treatment to the foetal pain people, the neonatal survival figures, and more.
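The committee's point about control groups can be sketched with made-up rates (the numbers below are purely illustrative, not from any study):

```python
# Hypothetical lifetime breast-cancer rates (illustrative only).
rate_carried_first_pregnancy = 0.08   # a first birth is protective
rate_no_children = 0.10
rate_had_abortion = 0.10              # same as other women without a birth

# Wrong comparison: against women who carried the pregnancy to term.
print(rate_had_abortion > rate_carried_first_pregnancy)   # True: looks risky
# Right comparison: against women who have not had children.
print(rate_had_abortion > rate_no_children)               # False: effect gone
```

The apparent risk in the first comparison is entirely an artefact of the protective effect of a first birth, not of abortion.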
Two Conservative MPs on the committee who favour tighter restrictions on abortion were unhappy with this report. They have issued their own minority report, published as an appendix to the main one. Does this differ in approach, or moral values? No, it differs in something much more simple: the quality of the science, the selectivity of the quoting, and the quality of the referencing.