I Think You'll Find It's a Bit More Complicated Than That

So the future is bright. And if you’re one of the teachers who stopped a child’s essay from being published because it dared to challenge your colleagues for promoting the ludicrousness of Brain Gym, then really: shame on you.

Existential Angst About the Bigger Picture

Guardian, 21 May 2011

Here’s no surprise: beliefs which we imagine to be rational are bound up in all kinds of other stuff. Political stances, for example, correlate with various personality features. One major review in 2003 looked at thirty-eight different studies, containing data on 20,000 participants, and found that overall, political conservatism was associated with things like death anxiety, fear of threat and loss, intolerance of uncertainty, a lack of openness to experience, and a need for order, structure and closure.

Beliefs can also be modified by their immediate context. One study from 2004, for example, found that when you make people think about death (‘Please briefly describe the emotions that the thought of your own death arouses in you’) they are more likely to endorse an essay discussing how brilliant George W. Bush was in his response to 9/11.

A new study looks at intelligent design, the more superficially palatable form of creationism, promoted by various religious groups, which claims that life on earth is too complex to have arisen through evolution and natural selection. Intelligent design implies a reassuring universe, with a supernatural creator, and it turns out that if you make people think about death, they’re less likely to approve of a Richard Dawkins essay, and more likely to rate intelligent design highly.

So that’s settled: existential angst drives us into the hands of religion. Rather excellently, the effect was partially reversed when people also read a Carl Sagan essay about how great it is to find meaning in the universe for yourself using science. It’s perfect. I love this stuff. I love social science research that reinforces my prejudices. Everybody does.

But that’s where I start to fall down. If I like these results, then lots of other people will like them too, whether it’s the academic psychologists doing the research, the statisticians they collaborate with, the academic journal editors and reviewers who decide whether or not the paper gets an easy ride into print, the press officers who decide whether or not to shepherd its findings towards the public, or even, finally, the bloggers and journalists who write about it. At every step, there is room for fun results to get through, and for unwelcome results to fall off the radar.

This isn’t a criticism of any individual study. Rather, it’s the angst-inducing context that surrounds every piece of academic research that you read: a paper can be perfect, brilliantly well-conducted, and yet there’s no way of knowing how many negative findings go missing. For all we know, we’re just seeing the lucky times the coin landed heads up.

The scale of the academic universe is dizzying, after all. Our most recent estimate is that there are over 24,000 academic journals in existence, that 1.3 million academic papers are published every year, and that over 50 million papers have been published since scholarship began.

And for every one of these 50 million papers there will be unknowable quantities of blind alleys, abandoned experiments, conference presentations, work in progress seminars, and more. Look at the vast number of undergraduate and masters dissertations that had an interesting finding, and got turned into finished academic papers; and then think about the even vaster number that didn’t.

In medicine, where the stakes are tangible, systems have grown up to try to cope with this problem: trials are supposed to be registered before they begin, so we can notice the results that get left unpublished. But the systems are imperfect, and pre-registration is very rarely done, even in medical research, for anything other than trials.

We are living in the age of information, and vast tracts of data are being generated around the world, on every continent and on every question. A £200 laptop will let you run endless statistical analyses. The most interesting questions aren’t around individual nuggets of data, but rather how we can corral it to create an information architecture which serves up the whole picture.

The Glorious Mess of Real Scientific Results

Guardian, 6 November 2010

Popular science is often triumphalist, presenting research as a set of completed answers, when in reality much of what gets published makes a glorious, necessary mess.

Here is an example. Solomon Asch’s legendary studies from the 1950s on conformity are among my favourite experiments of all time. Some people in a room are asked to judge the length of a line; all but one are stooges, and they unanimously assert what is obviously an incorrect answer. The one true, unsuspecting experimental subject conforms to the majority view, despite knowing that it’s incorrect, about a third of the time.

This is a chilling result that feels just right, and over the past half-century researchers have replicated the study over a hundred times in seventeen countries, allowing hints of patterns to be spotted in the results. One analysis of US studies found that conformity has declined since the 1950s. Another found that ‘collectivist’ countries tend to show higher levels of conformity than individualist ones.

This month the International Journal of Psychology published a new variant. Instead of one real subject in a room full of stranger stooges, they used polarising glasses – the same technology used to present a different image to the left and right eye for 3D films – to show participants different images on the same screen, at the same time, in the same room. This meant that friends could disagree, legitimately, and so exert social pressure, but without faking it.

The results were problematic. Overall, people did sometimes conform to peer pressure, giving incorrect answers. But when the results were broken down by sex, women conformed about a third of the time, while men did not. This poses a problem: why were the results of this study different from those of the original?

It could be that the subjects were different. The Asch experiments were only conducted in men, and they did conform. Perhaps modern Japanese undergraduates are different from 1950s US undergraduates (although cultural and generational differences have not previously been shown to be so large that they abolish the conformity effect completely).

It could be that the original task, where subjects had to judge the length of a line, was slightly different. But if anything, the task in the new experiment was harder than in the original, because the polarising glasses required that extra visual noise be added in; and if judgements were trickier, and therefore closer calls, then you might expect that conformity would increase, rather than decrease.

Or it could be that the relationships were different. Perhaps conforming effects are less pronounced among people who know each other compared to an experiment with a room full of stranger stooges: perhaps you feel more comfortable disagreeing with friends. This would be an important answer, if true, because when we extrapolate from the lab to the everyday, we’re probably more interested in conformity effects among acquaintances, because that’s what happens in a real community.

Maybe these questions will be resolved with a new experiment – you could probably design one yourself that would discriminate between the different possible explanations – but that will depend on whether someone is interested enough, and whether they can get the money and the time. Perhaps the paper will sink like a stone, and be ignored or overlooked, as sometimes happens with uncomfortable data.

But what you should know is this: alongside the triumphalism, and the answers, in reality, grey and conflicting results like these run deep in the research literature. They’re not an aberration, or a disappointment; in fact they are arguably the glorious norm, in the noise of over 20,000 academic journals, publishing well over a million articles every year. Alongside the giants, and the clean easy answers, challenging and ambiguous findings like these are what science is really made of.

Nullius in Verba

Not in the Guardian, 26 June 2010

Here is some pedantry: I worry about data being published in newspapers rather than academic journals, even when I agree with its conclusions. Much like Bruce Forsyth, the Royal Society has a catchphrase: Nullius in verba, or ‘On the word of nobody’. Science isn’t about assertions about what is right, handed down from authority figures. It’s about clear descriptions of studies, and the results that came from them, followed by an explanation of why they support or refute a given idea.

Last week the Guardian ran a major series of articles on the mortality rates after planned abdominal aortic aneurysm repair in different hospitals. Like many previously published academic studies on the same question, they discovered that hospitals which perform the operation less frequently have poorer outcomes. I think this is a valid finding.

The Guardian pieces aimed to provide new information, in that they did not use the Hospital Episodes Statistics, which have been used for much previous work on the topic (and on the NHS Choices website, where they are used to rate hospitals for the public). Instead they approached each hospital with a Freedom of Information Act request, asking the surgeons themselves for the figures on how many operations they performed, and how many people died.

Many straightforward academic papers are built out of this kind of investigative journalism work, from early epidemiology research into occupational hazards, through to the famous recent study hunting down all the missing trials of SSRI antidepressants that companies had hidden away. It’s not clear whether this FOI data will be more reliable than the Hospital Episodes numbers – ‘Discuss the strengths and weaknesses of the HES dataset’ is a standard public health exam question – and reliability will probably vary from hospital to hospital. One unit, for example, reported a single death after ninety-five emergency AAA operations in its FOI response, when on average about one in three people in the UK die during this procedure, which suggests to me that there may be problems in the data. But there’s no doubt that this was a useful thing to do, and there’s no doubt that hospitals should be helpful and share this information.

So what’s the problem? It’s not the trivial errors in the piece, although they were there. The main Guardian article says there are ten hospitals with over 10 per cent mortality, but in the data there are only seven. It says twenty-three hospitals do over fifty operations a year, but looking at the data there are only twenty-one.

But here’s what I think is interesting. This analysis was published in the Guardian, not an academic journal. Alongside the articles, the Guardian published its data, and as a long-standing campaigner for open access to data, I think this is exemplary. I downloaded it, as the Guardian webpage invited, did a quick scatter plot, and a few other things. I couldn’t see the pattern of greater mortality in hospitals that did the procedure infrequently. It wasn’t barn-door obvious. Others had the same problem: I received a trickle of emails from readers who also couldn’t find the claimed patterns (including a Professor of Stats, if that matters to you), and John Appleby, Chief Economist on Health Policy at the King’s Fund, posted on Guardian Comment is Free saying that he couldn’t find the pattern either.
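The quick check described above is easy to reproduce on any table of operation volumes and deaths. Here is a minimal sketch in Python; the figures are invented purely for illustration (the real numbers were in the Guardian’s published spreadsheet), and in place of a scatter plot it computes a hand-rolled Pearson correlation between volume and mortality rate, so it needs no plotting library.

```python
# Invented figures for illustration only: (operations per year, deaths).
# The real Guardian dataset had one row per hospital.
hospitals = [
    (12, 3),
    (20, 4),
    (35, 4),
    (60, 5),
    (90, 6),
]

def mortality_rate(ops, deaths):
    """Deaths as a fraction of operations performed."""
    return deaths / ops

rates = [mortality_rate(ops, deaths) for ops, deaths in hospitals]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand
    to keep the sketch dependency-free."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson([ops for ops, _ in hospitals], rates)
# In these made-up figures the correlation is negative:
# higher-volume units show lower mortality.
print(f"volume vs mortality rate: r = {r:.2f}")
```

A scatter plot of volume against mortality rate would show the same thing visually; the point is only that, with the data published, anyone can run this kind of sanity check in minutes.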

The journalists were also unable to tell me how to find the pattern. They referred me instead to Peter Holt, an academic surgeon who’d analysed the data for them. Eventually I was able to piece together a rough picture of what had been done, and after a few days, more details were posted online. It was a pretty complicated analysis, with safety plots and forest plots. I think I buy it as fair.

So why does it matter, if the conclusion is probably valid? Because science is not a black box. There is a reason why people generally publish results in academic journals instead of newspapers, and it’s got little to do with ‘peer review’ and a lot to do with detail about methods, which tell us how you know if something is true. It’s worrying if a new data analysis is published only in a newspaper, because the details of how the conclusions were reached are inaccessible. This is especially true if the analysis is so complicated that the journalists themselves did not know about it, and could not explain it, and this transparency is especially important if you’re seeking to influence policy. The information needs to be somewhere.

Open data – people posting their data freely for all to re-analyse – is the big hip new zeitgeist, and a vitally important new idea. But I was surprised to find that the thing I’ve advocated for wasn’t enough: open data is sometimes no use, unless we also have open methods.

Is It OK to Ignore Results from People You Don’t Trust?

Guardian, 6 March 2010

If the media were actuarial about drawing our attention to the causes of avoidable death, your newspapers would be filled with diarrhoea, Aids and cigarettes every day. In reality we know this is an absurd idea. For those interested in the scale of our fascination with rarity, one piece of research looked at a three-month period in 2002 and found that 8,571 people had to die from smoking to generate one story on the subject from the BBC, while there were three stories for every death from vCJD.
