By John Abramson
Each of the constituents of this complex system felt threatened by the limitation on medical expenditures. Though I have no proof, I strongly suspect that the parties that had the most to lose financially—the drug, medical equipment, and hospital industries; and the specialty care doctors—played the biggest role in fanning the flames of public disgruntlement. When public opinion turned so strongly against the measures necessary to control health care costs, the insurance companies had no choice but to loosen their management of care.
Yearly increases in health insurance premiums once again started to balloon out of control, rising steadily from a low of 2 percent in 1996 to 13.9 percent in 2003.
Ironically, the move into managed care created a historic opportunity for the medical industry. The cost-containment potential of HMOs and managed care plans was, initially, a serious threat to drug companies and medical device manufacturers. But the broader coverage offered by the new plans turned out to have the most profound unintended consequences: Instead of containing health care costs, HMOs and managed care plans facilitated the almost unrestrained increases in health care spending that followed. The captains of the drug and other medical industries certainly hadn’t planned this, but they knew how to take advantage of opportunity when it came knocking on their door. After a brief period of clear skies, dark clouds could be seen gathering on the horizon.
Comparisons both within the United States and between countries show that access to comprehensive, family-oriented primary care is the distinguishing characteristic of health care systems that are both effective at producing good health and efficient at controlling costs. Nonetheless, American medicine has become heavily dominated by specialty care over the past 40 years. In 1965 there were as many primary care doctors as specialists in the United States. Since then, the ratio of primary care doctors to the U.S. population has remained about the same, while the ratio of specialists has more than doubled.
Most health policy experts recommend that between 42 percent and 50 percent of doctors in the United States should be primary care doctors. Instead, 31 percent of doctors in the United States practice primary care and 69 percent are specialists. To correct this imbalance, the Council on Graduate Medical Education (a body established by Congress to advise the U.S. Department of Health and Human Services on the supply and distribution of doctors) recommended training at least 50 percent of physicians as primary care doctors. In 1998, this goal was not being met: only 36 percent of U.S. medical students that year reported that primary care was their first choice of specialty. And to show how quickly the medical environment is changing, just four years later interest in primary care among U.S. medical students had plummeted by 40 percent, so that only about one out of five students (21.5 percent) identified primary care as his or her first choice.
A number of factors turn medical students away from careers in primary care. The intellectual culture within the academic medical centers where students are trained is dominated by specialists, whose ideals of “good” and “real” medicine are very different from the kinds of challenges faced by primary care doctors. A survey of medical students showed that only three out of 1000 thought that good students were encouraged to go into primary care fields. Most doctors are in their late twenties or early thirties when they finish their training. They finish with an average debt of over $100,000, at just about the time they want to start a family and get on with their lives. The starting salary for many specialties is more than twice that of primary care doctors. And to make this choice even more difficult, the boundary between professional responsibilities and personal time is often more blurred in primary care than in other specialties.
Nobody can blame these young doctors for not choosing primary care—it takes a tremendous amount of commitment and idealism to choose a career that is not supported by role models in training, carries less prestige among peers, intrudes more into one’s personal life, and pays far less than most other specialties. A bright and concerned Harvard Medical School student lamented to me that he really wanted to become a pediatrician and take care of children in a community-based practice, but his enormous debt was forcing him into a more lucrative subspecialty. The same story is heard over and over.
In addition to the growing imbalance between primary care doctors and specialists, the ever-present threat of malpractice litigation is also increasing the cost of American medical care. This threat may provide some protection to patients and allow recourse for substandard care, but the justice meted out is inconsistent. In a New York Times op-ed piece, Philip K. Howard, author of The Collapse of the Common Good: How America’s Lawsuit Culture Undermines Our Freedom, commented that most of the doctors who do commit malpractice are not sued, and most of the lawsuits brought against doctors are about situations in which malpractice was not committed. Nonetheless, the current medical malpractice system consistently distorts our medical care. Doctors are aware of the risk of a malpractice suit lurking in every patient visit.
Three-fifths of doctors in the United States admit that they do more diagnostic testing than is necessary because of the threat of litigation. And why not? The risk of ordering an extra test is nil, but the threat of a lawsuit because of a test not ordered is ever present, even when the likelihood of serious disease is very low and reasonable professional judgment would say the test was not necessary. These extra tests can and often do set off a cascade effect, requiring even more tests to follow up on abnormal results, many of which then turn out to be normal. With the specter of malpractice looming, doctors feel justified in ordering almost any test, including tests in which they have a financial interest.
The rising cost of malpractice insurance is causing a rebellion among doctors forced to pay the price for our litigious culture (and a few bad doctors) regardless of their own track record and commitment to quality care. Some, caught between the ever-present fears of litigation and the mounting costs of insurance, are shielding their assets and practicing without insurance, while others are leaving the practice of medicine altogether.
At the same time that all of this is happening, the medical information available to doctors (and to their patients) is increasingly dominated by commercial interests. The skies are darkening.
Within the FDA, the doctors, scientists, and statisticians are dedicated to making sure the data about drugs and medical devices presented by manufacturers justify their claims of safety and efficacy. But the FDA is understaffed, underfunded, and under pressure, according to its own employees. Even worse, the FDA has fallen under the influence of the drug and medical-device industries, so much so that it was labeled “a servant of industry” by Dr. Richard Horton, the editor of the British journal The Lancet.
The FDA used to be famous for moving at a glacial bureaucratic pace. In 1980, the General Accounting Office of Congress reported that the FDA was inadequately staffed to keep up with its workload. In 1988, political action by AIDS activists drew attention to the very real need for quicker access to potentially lifesaving drugs. The ensuing political crisis resulted in the 1992 passage of the Prescription Drug User Fee Act, otherwise known as PDUFA. The drug companies agreed to pay a $300,000 fee for each new drug application; in return, the FDA’s Center for Drug Evaluation and Research promised to adhere to a speedier timetable for the new drug approval process. According to a 2002 GAO report, a little more than half the cost of reviewing new drug applications was funded by user fees from the drug industry.
New-drug approval certainly became quicker. With PDUFA funds, the FDA was able to increase the staff at the Center for Drug Evaluation and Research, or CDER, from 1300 to 2300, all assigned to expedite new-drug applications for patented (not generic) drugs. In the four years following the enactment of PDUFA, the median length of time the FDA took to decide on priority new-drug applications dropped from 20 months down to six months. At the same time, the average number of new drugs approved doubled.
Funding by drug companies may have seemed like a good idea for the cash-strapped FDA, but what about protecting the consumer from the drug companies’ influence? How unbiased can CDER be when half its budget comes from the drug companies themselves? An anonymous survey done by Public Citizen in 1998 revealed that FDA review officers felt that standards had declined as pressure to approve new drugs increased. The FDA medical officers who responded to the survey identified 27 new drugs approved within the previous three years that they felt should not have been. A similar report on CDER by the inspector general of the U.S. Department of Health and Human Services, published in March 2003, found that 58 percent of the medical officers said that the six months allotted for review of priority drugs was not adequate, and that one-third of respondents did not feel comfortable expressing their differing opinions. In the FDA’s own Consumer Magazine, Dr. Janet Woodcock, director of CDER since 1994, wrote that tight deadlines for drug approval were creating “a sweatshop environment that’s causing high staffing turnover.”
The most dangerous consequence of these changes was that the proportion of drugs approved by the FDA but later withdrawn from the market for safety reasons increased from 1.6 percent of drugs approved between 1993 and 1996 to 5.3 percent of those approved between 1997 and 2000. Seven drugs that had been approved by the FDA after 1993 were withdrawn from the market because of serious health risks. The Los Angeles Times reported that these drugs were suspected of causing more than 1000 deaths (though the number of deaths could actually be much higher because reporting of adverse drug events to the FDA is voluntary). Even though none of these seven drugs was lifesaving, according to the Los Angeles Times, “the FDA approved each of those drugs while disregarding danger signs or blunt warnings from its own specialists.” All told, 22 million Americans, one out of every 10 adults, had taken one of the drugs that were withdrawn from the market between 1997 and 2000.
The blood sugar–lowering diabetes drug Rezulin is one of the drugs that were approved in haste by the FDA and later withdrawn, but much too late for many Americans. The details of the story were first presented in 2000 in a Pulitzer Prize–winning series of investigative reports by David Willman of the Los Angeles Times. Remarkably, as quickly as medical news travels, this story had no “legs” and went largely unheeded. Three years later, David Willman wrote a similar story showing that the same problems were still there.
Dr. Richard Eastman was the director of the NIH division in charge of diabetes research and oversaw the $150 million Diabetes Prevention Program study. This large study was designed to determine whether diabetes could be prevented in people at high risk (overweight and with mildly elevated blood sugar levels) by drugs or by lifestyle interventions. In June 1996, Dr. Eastman announced that Rezulin had been selected as one of the two diabetes drugs to be included in the study, a real victory for Warner-Lambert, the manufacturer of Rezulin.
Also in 1996, Warner-Lambert submitted Rezulin to the FDA for approval, and it became the first diabetes drug to be given an accelerated review. The medical officer evaluating the new drug application, Dr. John L. Gueriguian, was a 19-year veteran of the FDA. His review recommended that Rezulin not be approved: the drug appeared to offer no significant advantage over other diabetes drugs already on the market, and it had a worrisome tendency to cause inflammation of the liver. Warner-Lambert executives “complained about Gueriguian to the higher-ups at the FDA.” Dr. Gueriguian was then removed from the approval process for this drug. When the Advisory Committee met to decide on the approval of Rezulin, they were not informed of Dr. Gueriguian’s concerns about liver toxicity. The FDA approved Rezulin in February 1997, and brisk sales soon earned it “blockbuster” status.
However, reports of fatal liver toxicity due to Rezulin soon started to appear. Notwithstanding reports of deaths in the United States as well as in Japan, and the withdrawal of the drug from the market in the United Kingdom because of liver toxicity in December 1997, Dr. Eastman and his colleagues decided to continue treating volunteers in the Diabetes Prevention Program study with Rezulin. Only after Audrey LaRue Jones, a 55-year-old high school teacher, died of liver failure in May 1998 did Rezulin stop being given to the volunteers in the study. Warner-Lambert maintained that Rezulin was not responsible for the liver failure that led to her death.
Despite the mounting reports of liver problems in the United States, Rezulin was not withdrawn from the U.S. market until March 2000. By that time, $1.8 billion worth of the drug had been sold. The Los Angeles Times reported that, all told, Rezulin was suspected in 391 deaths and linked to 400 cases of liver failure. Looking back on his experience, Dr. Gueriguian told the Los Angeles Times, “Either you play games or you’re going to be put off limits . . . a pariah.”
Another FDA medical officer and former supporter of Rezulin, Dr. Robert I. Misbin, was threatened with dismissal by the FDA. His offense? He had given members of Congress a copy of a letter from himself and other physician colleagues at the FDA expressing concern about the FDA’s failure to withdraw Rezulin from the market after the FDA had linked it to 63 deaths due to liver failure.
Dr. Janet B. McGill, an endocrinologist who had participated in Warner-Lambert’s early studies of Rezulin, told the Los Angeles Times that Warner-Lambert “clearly places profits before the lives of patients with diabetes.”