Pharmageddon

Author: David Healy

The second is that in the case of company trials, the association that is marketed will have been picked out in a boardroom rather than at the bedside. One of the most dramatic examples of what this can mean comes from the SSRIs, where the effects of these drugs on sexual functioning are so clear that controlled trials would be merely a formality. In contrast, hundreds of patients are needed to show that a new drug has a marginal antidepressant effect. Yet the marketers know that with a relentless focus on one set of figures and repetitions of the mantra of statistical significance they can hypnotize clinicians into thinking these drugs act primarily on mood with side effects on sexual functioning when in fact just the opposite would be the more accurate characterization. Because it has become so hard to argue against clinical trials of this nature, there is now almost no one at the séance likely to sing out and break the hypnotic spell.

A cautionary tale involving reserpine may bring home how far we have traveled in the last half century. In the early 1950s, medical journals were full of reports from senior medical figures claiming the drug worked wonderfully to lower blood pressure; what was more, patients on it reported feeling better than well.
41

Reserpine was also a tranquilizer and this led Michael Shepherd, another of Bradford Hill's protégés, in 1954 to undertake the first randomized controlled trial in psychiatry, in this case comparing reserpine to placebo in a group of anxious depressives.
42
While reserpine was no penicillin, some patients were clearly more relaxed and less anxious while on it, so it was something more than snake oil. Shepherd's trial results were published in the Lancet, a leading journal; nevertheless, his article had almost no impact. The message sank without trace, he thought, because medicine at the time was dominated not by clinical trials but by physicians who believed the evidence of their own eyes or got their information from clinical articles describing cases in detail ("anecdotes," as they would now be called).
43

Ironically, the two articles preceding Shepherd's in the same issue of the Lancet reported hypertensive patients becoming suicidal on reserpine.
44
Reserpine can induce akathisia, a state of intense inner restlessness and mental turmoil that can lead to suicide. The case reports of this new hazard were so compelling, the occurrence of the problem so rare without exposure to a drug, and the onset of the problem subsequent to starting the drug plus its resolution once the treatment was stopped so clear that clinical trials were not needed to make it obvious what was happening. On the basis of just such detailed descriptions, instead of becoming an antidepressant, reserpine became a drug that was said to cause depression and trigger suicides. But the key point is this—even though superficially contradictory, there is no reason to think that either the case reports or the controlled trial findings were wrong. It is not so extraordinary for a drug to suit many people but not necessarily suit all.

Fast forward thirty-five years to 1990. A series of trials had shown Prozac, although less effective than older antidepressants, had modest effects in anxious depressives, much as reserpine had. On the basis of this evidence that it “worked,” the drug began its rise to blockbuster status. A series of compelling reports of patients becoming suicidal on treatment began to emerge, however.
45
These were widely dismissed as case reports, mere anecdotes. The company purported to reanalyze its clinical trials and claimed that data from over three thousand patients showed no signal for increased suicide risk on Prozac. In fact the risk of suicidal acts doubled on Prozac, but because the increase was not statistically significant it was ignored. Even if Prozac had reduced suicide and suicidal-act rates overall, it could still benefit many while posing problems to some. But the climate had so shifted that the fuss generated by the Prozac case reports instead added impetus to the swing of the pendulum away from clinical reports and in favor of controlled trials.
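The statistical point here can be made concrete with a small sketch. The numbers below are invented for illustration, not the actual trial figures: with rare events such as suicidal acts, a doubled rate can easily fall short of statistical significance, and so be "ignored."

```python
# Hypothetical illustration (invented numbers, not the Prozac data):
# a doubling of a rare event rate that fails a significance test.
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    probability of seeing >= a events in the drug arm under the null."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)
    return p

# Drug arm: 12 suicidal acts among 2,000 patients (0.6%).
# Placebo arm: 3 among 1,000 (0.3%) -- the rate has doubled on the drug.
p = fisher_one_sided(12, 2000 - 12, 3, 1000 - 3)
print(f"one-sided p = {p:.2f}")  # comfortably above the 0.05 threshold
```

The doubling is real in the data, yet the test cannot distinguish it from chance at this sample size, which is exactly how a genuine hazard can be waved away as "not statistically significant."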

But as we saw in the analysis of antidepressants, in addition to the 40 percent of patients who responded to placebo, a further 50 percent (five out of ten) did not respond to treatment at all. By publishing only controlled trials, and not the convincing reports of hazards for treatments like the antidepressants, journals privilege the experience of the one specific drug responder in ten over the nine-fold larger pool of those who in one way or another do not benefit specifically from the drug. Partly because of selective publication practices, partly because of clever trial design, only about one out of every hundred drug trials published in major journals today is likely to do what trials do best, namely debunk therapeutic claims. The other ninety-nine are pitched as rosily positive endorsements of the benefits of statins or mood stabilizers, treatments for asthma or blood pressure, or whatever illness is being marketed as part of the campaign to sell a blockbuster.
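The "nine-fold" arithmetic above, using the text's round figures, works out as follows:

```python
# Round numbers from the argument above: of every 10 treated patients,
# ~4 would have responded to placebo and ~5 do not respond at all,
# leaving 1 specific drug responder against 9 non-beneficiaries.
patients = 10
placebo_responders = 4   # ~40% respond to placebo
non_responders = 5       # ~50% do not respond even on the drug
specific_responders = patients - placebo_responders - non_responders
others = placebo_responders + non_responders
print(specific_responders, others)  # 1 specific responder vs 9 others
```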

The publishing of company trials in preference to carefully described clinical cases, allied to the selective publication of only some trials of a drug, and interpretations of the data that are just plain wrong amounts to a new anecdotalism. The effect on clinical practice has been dramatic. Where once clinicians were slow to use new drugs if they already had effective treatments, and when they did use the new drug, if their patients had a problem, they stopped the treatment and described what had happened, we now have clinicians trained to pay heed only to controlled trials—clinicians who, on the basis of evidence that is much less generalizable than they think, have rapidly taken up a series of newer but less effective treatments.

The development of randomized controlled trials in the 1950s is now widely acclaimed as at least as significant for the development of medicine as any of the breakthrough drugs of the period. If controlled trials functioned to save patients from unnecessary interventions, it would be fair to say they had contributed to better medical care. They sometimes fill this role, but modern clinicians, in thrall to the selective trials proffered up by the pharmaceutical companies, and their embodiment in guidelines, are increasingly oblivious to what is happening to the patients in front of them, increasingly unable to trust the evidence of their own eyes.

We have come to the outcome that Alfred Worcester feared, but not through the emphasis on diagnosis and tests that so concerned him. It has been controlled trials, an invention designed to restrict the use of unnecessary treatments and tests, and one he would likely have fully approved of, that has been medicine's undoing.

This company subversion of the meaning of controlled trials does not happen because of company malfeasance. It happens because we, our doctors, the governments and hospital services that employ our physicians, and the companies themselves all want treatments to work. It is this conspiracy of goodwill that leads to the problems outlined here.
46
But in addition to this, uniquely in science, pharmaceutical companies are able to leave studies unpublished or to cherry-pick the bits of the data that suit them, maneuvers that compound the biases just outlined.

Two decades after introducing the randomized controlled trial, having spent years waiting for the pendulum to swing from the personal experience of physicians to some consideration of evidence on a large scale, Austin Bradford Hill suggested that if such trials ever became the only method of assessing treatments, not only would the pendulum have swung too far, it would have come off its hook.
47
We are fast approaching that point.

4

Doctoring the Data

By 1965, the flood tide of innovative compounds ranging from the early antibiotics to the first antipsychotics that had transformed medicine in the 1950s appeared to be ebbing. Desperate to continue with business as usual, the pharmaceutical industry had to decide if it made business sense to allow its researchers to pursue scientific innovations in quite the ad hoc way that had worked so well for the previous two decades. This was the question the major drug companies put to a new breed of specialists, management consultants, who were called in to help them reorganize their operations with a view to maintaining the success of previous decades. The answers these consultants provided have shaped not only industry but also the practice of medicine ever since.

In the preceding decades, scientists working within pharmaceutical companies took the same approach to research that scientists based in universities did: they conducted wide-ranging, blue-skies research out of which new compounds might serendipitously fall and for which there might initially be no obvious niche—as had once been the case for a host of drug innovations that later became huge money makers, including oral contraceptives, the thiazide antihypertensives, the blood-sugar-lowering tolbutamide, chlorpromazine and subsequent antipsychotics, and imipramine and later antidepressants. But under changed conditions, and with the coming of the consultants, the mission changed to one in which clinical targets were to be specified by marketing departments and pursued in five-year programs. If that meant discarding intriguing but unplanned leads, so be it.

Where once pharmaceutical companies had been prospectors for drugs, more like oil exploration companies, they now changed character. Their share prices had soared but these were now dependent on the recommendations of analysts who scrutinized the company's drug pipeline and business plans. Accordingly companies had to do business in a different way. Even though the best way to find new drugs is to watch closely for drug side effects in people who are taking them, just as simply drilling oil wells is still the best way to find oil, this avenue of drug development was cut off.

Fatefully, in tandem with these corporate changes a second wave of drug development had come to fruition. The original, serendipitous discoveries of the 1940s had not only offered stunning new treatments but also greatly advanced our understanding of biology. Out of this new understanding came a further group of compounds, like James Black's beta-blockers for hypertension and H-2 antagonists for ulcers, as well as Arvid Carlsson's selective serotonin reuptake inhibiting antidepressants (SSRIs). This second wave initially gave hope to those who like business plans—it appeared that drug development could be made rational and predictable in a manner that might fit into a business model. But since the 1970s, this new tide has also gone out. The number of new drugs registered yearly and the number of breakthrough compounds has dropped dramatically, leading companies to hunt for new solutions, one of which has been to outsource drug development to start-up companies.

While the changes in drug development programs that began in the 1960s have been enormous, the key reorganization came at the marketing and clinical-trial end of company operations, changes that transformed pharmaceutical outfits into companies that market drugs rather than companies that manufacture drugs for the market.
1
As it happened, these corporate changes coincided with three other developments that were to have far more profound effects on the drug industry than any management consultant in the 1960s would likely have supposed. It was one thing to reorganize pharmaceutical companies, it was quite another to end up with almost complete control of therapeutics.

The first of these developments was a decline in US government funding for clinical research beginning in the 1960s. If industry was already funding many studies in order to get approval for drugs, why not let them carry an even larger share of the burden? So the thinking went. For well-done randomized controlled trials, it shouldn't make much difference where the funding came from.

The second development in both the United States and Europe was a huge expansion in university training and in the health services industry. This led to a rapidly growing number of medical academics. Where before the pharmaceutical industry had been beholden to a handful of magisterial figures, any of whom could make or break a new drug, now companies could shop around among eager young academics willing to manage a trial in return for the kudos of being a principal investigator and the fees that came with the exercise. These new academics, more vulnerable in their careers and more open to the seduction of being made into opinion leaders, could be expected to accommodate themselves to company interests in a way that their older counterparts might not. Besides, there was little these academics had to do other than be notional investigators—companies could now supply them readymade versions of clinical trial protocols that the previous generation of academics had developed during the 1950s.

The third development was the increasing dispersion of the locus of clinical trials. Where in the 1950s and early 1960s trials had often been conducted in a single university or hospital, by the late 1960s they were typically multicentered, and by the 1970s they had become multinational. This did not happen because the drugs and the testing were better. Quite the contrary: the more obviously effective the drug, the smaller the trial needed to demonstrate that it works, although large trials might still be needed to reveal possible hazards of treatment. The newer drugs were in fact less effective and required larger trials to show benefits that could be considered statistically significant. The increase in the number of participants in typical trials and the geographic proliferation of sites had far-reaching implications: principal investigators could no longer know all the patients, could no longer supervise all the raters, and might no longer be able to speak authoritatively to side effects that may only have been witnessed at sites other than their own. Where before investigators had the data for the whole study, now they typically had a record only of the data from their own site and just a summary of the rest. Companies or their proxies increasingly did the shuttling between sites, and it made sense to lodge the data somewhere central, such as company headquarters. When investigators requested full access to the data, the answer was no: the information was proprietary. Without accessible data, these trials had the appearance of science but were no longer science.
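The claim that less effective drugs require larger trials follows from standard sample-size arithmetic. A minimal sketch (not from the book) using the usual normal-approximation formula shows that halving the standardized effect size roughly quadruples the patients required per arm:

```python
# Standard normal-approximation sample-size formula for comparing two
# groups: n per arm = 2 * ((z_alpha + z_beta) / effect_size)^2.
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a given
    standardized effect size at the given significance and power."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)  # ~1.96 for two-sided alpha = 0.05
    zb = z.inv_cdf(power)          # ~0.84 for 80% power
    return ceil(2 * ((za + zb) / effect_size) ** 2)

print(n_per_arm(0.8))  # large, penicillin-like effect: a few dozen per arm
print(n_per_arm(0.2))  # small effect: hundreds of patients per arm
```

The inverse-square dependence on effect size is why a marginal drug forces trials into multiple centers and countries, with all the loss of investigator oversight described above.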

These three developments changed the interface between medicine and industry. Where clinical trials were once a scientific exercise aimed at weeding out ineffective treatments, they became in industry's hands a means to sell treatments of little benefit. The key to this has been— and still is—industry control of the data from “their” trials, which is then selectively described in articles designed to establish market niches by selling diseases the new drug happens to address or to counteract adverse publicity about treatment hazards.

THE APPEARANCES OF SCIENCE

As part of an effort to reengineer their drug development and marketing strategies in the 1970s, the major pharmaceutical companies began to outsource not only their clinical trial divisions but their medical writing divisions as well. The job of running clinical trials went to a new set of companies—contract research organizations (CROs). The early CROs included Quintiles (set up in 1982), Parexel (in business since 1984), and Covance (started in 1987), with a growing number of companies such as Scirex and Target Research coming onstream in the 1990s. By 2010, the clinical trial business was worth $30 billion and CROs were running more than two-thirds of the clinical trials undertaken by industry.
2

In the 1950s and 1960s, controlled trials involving drugs were either funded by independent agencies such as the National Institutes of Health or involved drug companies handing over supplies of new compounds to medical academics who devised the clinical trials to test out the new remedy. These professors and their colleagues personally oversaw the administration to patients of placebos outwardly indistinguishable from the trial drugs, alongside both older and newer compounds. They interviewed the patients and completed the rating scales themselves or personally taught members of their team how to do so. When the data were finally assembled, professors would analyze them and later store the data in their filing cabinets for consultation should questions arise. While some academics availed themselves of the new opportunities to supplement their income, in many instances the investigators did the work without charge—the new drugs were unquestionably moving medical science forward, which for some was payment enough.

When a lead investigator wrote a paper for a scientific journal representing what the trial had shown, the resulting article reflected a judgment on the new compound based on familiarity with other compounds in the field. When it came to presenting the findings at an academic meeting, the professor was there to answer audience questions about potential hazards of the new compound or other issues not covered in the article or presentation. At meetings in the 1950s and 1960s, entire symposia were dedicated to the hazards of new treatments, where now it is as difficult to find mention of a treatment's hazards at major academic meetings as it would be to find snow in the Sahara.

Few clinicians or others outside industry noticed any change during the transition from the 1960s way of doing things to the 1980s way. The trials under CRO auspices had all the appearances of previous clinical research. But from the 1980s onward, these trials increasingly diverged from previous clinical research practices.

The CROs competed among themselves for clinical trial business on the basis not only of price but also of ensuring rapid access to patients and timely completion of competent study reports. Clinicians who enrolled suitable research subjects were paid per patient, and paid more for patients who completed a course of treatment. There was no one to keep an eye on whether those recruited and deemed to have a particular disorder actually had the disorder, however. In addition, patients were increasingly recruited by advertisement rather than from clinical care. And an increasing number of patients reported in these trials didn't exist. In 1996, as one indicator of this trend, Richard Borison and Bruce Diamond from the Medical College of Georgia were jailed for the conduct of their clinical trial business, which recruited nonexistent patients to trials of many of the antidepressants and antipsychotics now on the market.
3

Where once clinical trial protocols had to be approved by a hospital or university ethics committee—an institutional review board—now they may be subject only to the CRO's privatized review system for company studies. And where university or hospital review boards typically commented on the science in addition to the ethics of a study, often forcing researchers to improve their designs, privatized committees may simply nod company protocols through. As the clinical trial business grew and competition for patients increased, CROs initially moved trials out of university and hospital settings and contacted physicians in general practice to get study subjects. As getting patients in the United States and Europe became more difficult, even from primary care physicians, CROs began moving trials first to the former Eastern European bloc during the late 1990s and subsequently to Asia and Africa. Regardless of location, though, it's still likely to be a Western academic's name that appears as the notional principal investigator on the trial protocol or subsequent articles.

But the key difference in the shift toward privatization of trial research lies in what happens with the data. Where once a professor might have analyzed the data from a trial, now personnel from the CRO collect the records from each center and assemble them back at base. The events in a patient's medical records or the results of investigations are then coded under agreed-upon headings from dictionaries listing side effects. These items are then cumulated in tabular form. Such tables are the closest that any of the academic investigators are likely to get to the raw data, except what they themselves collected. If later asked at an academic meeting what happened to patients on the drug, this is what they will cite. In this way the blind lead the blind—giving a whole new meaning to the idea of a double-blind study.

For instance, academics presenting the results of clinical trials of Paxil for depression or anxiety in children wrote about or showed slides containing rates of emotional lability on Paxil and placebo. The presentations were relatively glib, and there is no record of any member of any audience being concerned by higher rates of emotional lability on Paxil, probably because none of those involved realized that emotional lability is a coding term that covers suicidality; in fact, had they had the raw records in front of them clinicians would have realized there was a statistically significant increase in rates of suicidal acts following commencement of treatment with Paxil.

Other decisions about how the data are presented can also drastically affect how a drug's effects are perceived. Suppose, for instance, that a patient first experiences nausea, then several other side effects, and finally suicidality, at which point the person drops out of treatment. If nausea is listed first rather than the suicidal act that actually triggered the withdrawal, this patient will likely be counted as a dropout for nausea rather than for suicidality. Unless there is someone at the heart of the system who understands the issues, the default in these new trial procedures conceals rather than reveals problems.

But there is no independent academic at the helm any more. The first semijudicial setting in which these issues were tested involved Richard Eastell and Aubrey Blumsohn of Sheffield University in England, in a hearing in 2009. Eastell and Blumsohn had been senior investigators on a study of the effects of Procter & Gamble's Actonel for osteoporosis. Behind the veneer of professional ethics lies another world, as Blumsohn found out when Procter & Gamble set about ghostwriting articles for him. A senior company figure wrote to him “to introduce you to one of The Alliance's external medical writers, Mary Royer. I've had the great privilege of working closely with Mary on a number of manuscripts, which The Alliance has recently published. Mary is based in New York and is very familiar with both the risedronate (Actonel) data and our key messages, in addition to being well clued up on competitor and general osteoporosis publications.”
4
Blumsohn found himself faced with articles written up under his name containing tables of “data.” He asked to see the raw data, but access was refused. Using subterfuge, he eventually managed to get hold of the data and found that the tables had been cropped to leave out a great deal of data; when this information was included the claims being made for the drug did not hold up.
5
He withdrew from authorship, a move that ultimately cost him his job at Sheffield.
