I asked Ninov about the allegations that he had created spurious data at GSI before he came to Berkeley. Ninov denied it, and suggested that the Germans’ statements were made in response to pressure from Ken Gregorich, who needed something to bolster the credibility of his own allegations. Ninov criticised the Germans for cutting off communications with him, and was particularly bitter that he was not invited to participate in choosing a name for element 111, which he had helped to discover. (The final choice was roentgenium.)

The persecution continued after he was ousted from Berkeley, Ninov said. An anonymous caller, who he suspected was Augusto Macchiavelli or one of his colleagues, telephoned the administration at the University of the Pacific and complained about his presence there. As a result, Ninov said, his appointment there was not renewed and he had to seek work at a different institution. He would not tell me where he was currently working, because he feared further persecution of the same kind. He did tell me that he had put the Berkeley episode behind him, however. ‘I’ve just moved on with my life and left the little animals to do what they want,’ he said.

 

 

Although Ninov’s Berkeley colleagues experienced direct injury to their reputations as a result of being associated with an apparently fraudulent study, their expressed reactions have been more of puzzlement than anger. ‘He had nothing to gain,’ said Loveland. ‘He was a very successful scientist.’ Quite a few people have brought up possible medical explanations – perhaps something related to a head injury that Ninov suffered years ago. ‘There were a lot of discussions on whether there was an issue of painkillers involved, or issues of mental illness,’ said Loveland. And he recounted something that seemed to strengthen such ideas in his own mind. ‘I remember sitting out on the patio of the Cyclotron and asking him, “Hey, Victor, what’s going on?” He spun off some comments to me about conspiracies, and I thought, Oh dear, we have a real problem here, when people start talking about conspiracy theories and so forth.’

Both Ninov and his wife (in a brief conversation I had with her) strongly rejected such ideas. (‘Painkillers?’ retorted Ninov. ‘Aspirin, maybe.’) Both of them interpreted these speculations as indications of malevolence on the part of the people who expressed them. To me, they seemed more like attempts on the part of Ninov’s onetime colleagues to construct explanations that removed some of the blame from a person they had once liked and admired.

One idea that many writers have stressed when considering scientific fraud is that the perpetrator fabricates data to prove an idea that he or she is already convinced is correct: this way, the perpetrator is unlikely to be found out, because future genuine findings will confirm the fraudulent ones. The spurious data in the GSI studies could be taken as support for this idea. The first apparently fraudulent decay chain cropped up only after one genuine chain had been detected. Thus Ninov, presuming that he was the perpetrator, might have thought that he was merely hurrying the process of discovery along a little by adding a spurious chain. Once that fraud passed muster, perhaps committing further frauds became easier – maybe even addictive.

One thing Ninov told me in passing stuck in my mind. ‘I was never interested in nuclear chemistry,’ he said. At first, I thought that this was just a bitter comment on a career that had culminated in such ignominy. Then I wondered. Ninov clearly has an exceptionally brilliant and restless mind. Could his real passion have been not to discover things about the natural world, but to pursue ideas, to solve intellectual challenges – the harder the better? And then I recalled that, as far as the superheavy elements were concerned, there was nothing to discover in the natural world. Those elements don’t exist; they have to be created in the image of an idea about how matter should behave. So did Ninov, for all his labours on the detectors and the data-analysis software, see the challenge more in terms of philosophy than science? We may never know, but Walter Loveland holds out hope for an answer. ‘It would be interesting at some point in my life to sit down with Victor over a beer and talk candidly for a while,’ he said. ‘I don’t know what I would hear.’

Several weeks after my interview with Ninov, I received a message that shone an intriguing new light on the story. In the interview I had asked Ninov to suggest a scientist I might talk to who would support his point of view. He mentioned his former graduate advisor at GSI, Peter Armbruster, himself a renowned element hunter.

But Armbruster did not support Ninov’s point of view. In an email that he sent me in January 2007, he agreed with his German colleagues that Ninov fabricated two decay chains while he was at GSI. ‘I certainly feel deceived, and I can in no way justify what he has done,’ Armbruster wrote.

Perhaps more telling from a psychological perspective was this detail: Armbruster told me that Ninov’s first fabricated decay chain occurred at 11:17 a.m. on November 11, 1995. November 11 marks the beginning of the south German carnival season known as Fasching, when people like to play all kinds of pranks. In Germany, the number 11 (elf) is known as die närrische Zahl – the fool’s number. For this reason, the exact beginning of Fasching is am elften elften elf Uhr elf, which is to say ‘at 11:11 a.m. on November 11’. Thus Ninov’s first spurious decay chain occurred just six minutes after the official opening of Germany’s practical-joke season.

‘I suppose this chain… was composed by Victor Ninov as a joke for Fasching,’ wrote Armbruster. ‘Victor must have been very surprised that it was accepted and published. This success certainly encouraged Victor to go on, playing games with the group.’ Of course, there is no independent evidence that this was Ninov’s motivation, and Ninov himself did not reply to an emailed request for a comment on Armbruster’s theory.

 

 

A footnote: in 2006 the Dubna group, assisted by scientists from the Lawrence Livermore National Laboratory (a separate institution from the Lawrence Berkeley Lab), announced that they had succeeded in creating three atoms of element 118. They did this by a completely different reaction from the one attempted at Berkeley – a hot fusion reaction between calcium and californium – so the results said nothing about the validity of Smolanczuk’s theory.
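
For readers who like to check the bookkeeping, the reaction can be written in standard nuclear notation. This is a sketch supplied as background rather than taken from the account above: the three-neutron evaporation channel shown is the one the Dubna group reported, and X stands in for the then-unnamed element 118.

$$
% Proton numbers simply add: 20 (Ca) + 98 (Cf) = 118.
{}^{48}_{20}\mathrm{Ca} \;+\; {}^{249}_{98}\mathrm{Cf} \;\longrightarrow\; {}^{297}_{118}\mathrm{X}^{*} \;\longrightarrow\; {}^{294}_{118}\mathrm{X} \;+\; 3\,{}^{1}_{0}n
$$

The atomic numbers add straightforwardly – 20 protons from the calcium beam plus 98 from the californium target give the 118 that defines the new element – while the excited compound nucleus sheds three neutrons.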

Both Gregorich and Loveland expressed some caution about the reported finding; they mentioned that most of the Russians’ reported discoveries still await independent verification. But if the finding is verified, the Dubna group will get to name the new element. They may name it oganessium after their leader, Yuri Oganessian. They may name it after some historical Russian physicist – cherenkovium or zeldovium or even sakharovium. But they are unlikely to pick the name that seemed like a front-runner in 1999 – ninovium.

 

 

EPILOGUE

 

 

 

 

There but for the grace of God go I. That is my own reaction to the stories just recounted, and I think most scientists would share it. There are so many opportunities for science to go wrong that scientists who reach the end of their careers without stumbling on one of them can count themselves not just smart or circumspect or morally superior, but also fortunate.

Of course, I picked dramatic or memorable examples of scientific failure for this book. They’re not typical of how science can go wrong, because mostly it does so in more mundane ways. Just as the spectacular successes of science are mere islands in a sea of worthy journeywork, so the spectacular failures are outnumbered by those that are slightly regrettable, modestly burdensome or just partially incorrect. It is the destiny of most scientists to be neither canonised nor vilified by the judgment of history, but to be forgotten.

Similarly, most scientific accidents don’t cause dozens of deaths, as the anthrax release at Sverdlovsk did. Most don’t kill anyone, and, of those that do, most kill just one person – the very person whose mistake caused the accident. One example: in August of 1996, Dartmouth College chemistry professor Karen Wetterhahn spilled a drop or two of dimethylmercury on her gloved hand. The drops penetrated both the glove and her skin, condemning her to a slow, painful death from mercury poisoning.

The example of scientific fraud recounted in this book – Victor Ninov’s alleged fabrication of data supporting the discovery of a new chemical element – is also an extreme, unrepresentative case. Yes, there have been other cases that rival or outdo his: the fraudulent claim by South Korea’s Hwang Woo-Suk to have created human stem cells by cloning is the most dramatic recent example. But most fraud consists of slight prettying-up or cherry-picking of data, omission of references to prior work in order to make one’s own work seem more original, self-plagiarism and the like.

Indeed, fraud merges imperceptibly into acceptable scientific practice. It’s common, for example, for scientists to write up accounts of their research in which the sequence of experiments does not correspond to historical reality, or to introduce a paper with a hypothesis that wasn’t actually formulated until after some of the experimental results came in. This is often thought to be justified: it aids comprehension to present the study as a logical sequence of ideas and experiments. But such deception causes harm if it leaves the reader thinking that a result was predicted by a hypothesis, or that a hypothesis was stimulated by prior results, when neither was the case. It can make a scientist’s conclusions appear better supported than the evidence actually warrants.

It would be an interesting exercise to go back in the scientific literature – say, 20 years or so – and pick a random selection of 100 papers and ask, ‘Were they right in their main findings and conclusions, and were they as original as their authors claimed?’ I don’t know what fraction of them would have significant faults, but it would probably be substantial and certainly much higher than most non-scientists would believe.

Most likely, those pieces of erroneous research would not have gone wrong in any memorable way – no conscious fraud, no switched labels, no blatant plagiarism – nor, in all likelihood, would they have had any dire consequences. They probably resulted from countless trivial errors and omissions – the use of reagents whose specificity was less than expected, the selection of human subjects who were not fully representative of the group being investigated, the use of inappropriate statistical tests or a lack of familiarity with the prior literature. Probably, in the ensuing decades, no one ever took the trouble to point out that the studies were wrong or to ask why; many scientific papers are not cited even once by other scientists, after all. The rising tide of scientific progress simply erases them from collective consciousness.

Still, science does sometimes go wrong in ways that are truly dramatic – accidents or drug trials in which people are injured or killed, erroneous claims that grab media attention and that take years to set right. And sometimes scientists themselves are appalled by the uses to which their discoveries are put by others. Take foetal ultrasound monitoring, a technique pioneered by the Scottish gynaecologist and anti-abortion campaigner Ian Donald. ‘My own personal fears are that my researches into early intrauterine life may yet be misused towards its more accurate destruction,’ wrote Donald in 1972. A decade or so later, ultrasound was being used to facilitate the abortion of millions of female foetuses in the third world.

Can anything be done that might cause science to go wrong less often? Should anything be done, even? These are thorny questions that are probably best left to professional ethicists or administrators or philosophers of science, but here are a few thoughts.

For a start, it’s worth pointing out that it may take years or decades for the ill-effects of scientific discoveries and inventions to become evident. Take a field of applied science that I haven’t covered in this book – industrial chemistry. In 1901, a German chemist, Wilhelm Normann, developed a process for turning vegetable oils into solid fats by hydrogenation. At the time, this invention seemed like an unalloyed benefit to humanity: it provided the means to produce inexpensive edible fats that resisted spoilage. It took more than half a century, and millions of premature deaths from heart disease, before the harmful effects of these fats on human health became apparent. In 1928, General Motors chemist Thomas Midgley, Jr, invented chlorofluorocarbon refrigerants – Freons. Again, the invention seemed to offer nothing but benefit to humanity, and it took decades before the downside – the destructive effect of these chemicals on the Earth’s protective ozone layer – was understood.

Looking back, it’s hard to see how any programme of regulatory oversight could have anticipated these dire consequences, given the lack of relevant knowledge at the time. In addition, there may never be agreement on the net benefit or harm of a discovery – it may depend on one’s views about abortion, for example, in the case of Donald’s invention. Thus, preventing the long-term ill-effects of scientific inventions and discoveries is about as hard as predicting the future of civilisation, and it is probably pointless to try.

Certainly, there can and should be oversight of science, especially in its applications. In medical research, for example, there are the Institutional Review Boards and national regulatory bodies that do their best to see that research using human subjects is conducted ethically and safely. IRBs came up in several chapters of this book. I mentioned how their absence in the 1930s permitted unethical research such as Mary Tudor’s stuttering study to go forward. I described how Robert Iacono circumvented IRBs and all other regulatory oversight by taking his patient to China for experimental surgery for Parkinson’s disease. I also recounted how Jesse Gelsinger died needlessly in a clinical trial that was overseen by a whole web of IRBs and government agencies.
