The Rise and Fall of Modern Medicine
James Le Fanu
The success of technology in so many fields of medicine encouraged doctors to believe there must be a technical solution to every problem; that, for example, foetal monitoring during labour would prevent death or damage to the baby. The argument was as follows: the shift from home to hospital deliveries
had coincided with a decline in both maternal and infant mortality rates, from which one might quite naturally infer that, thanks to medical intervention, childbirth was becoming ever safer for both mother and baby. Nonetheless, babies still died during labour (approximately 3,000 a year in the United States) while several times that number (approximately 15,000) were born with severe forms of brain damage, such as cerebral palsy. Such misfortunes, it was legitimate to presume, arose because the foetus was deprived of oxygen during the stress of labour, so further medical intervention to determine when it was 'distressed' might act as a red-alert system, prompting an emergency Caesarean to avert disaster. 'Since the stress of labour is clearly capable of causing foetal death, it seems not unreasonable to assume that labour may also be a factor in producing brain damage,' observed two protagonists of this view, obstetricians Edward Quilligan and Richard Paul of the University of Southern California in 1974.[7]
The inference was indeed 'not unreasonable', and appeared to be supported, they pointed out, by crude experiments on monkey foetuses which, while still within the womb, were deprived of oxygen by separating the placenta from the side of the mother's uterus. Following birth, they were killed and their brains examined, apparently revealing a particular pattern of damage 'identical to that seen in human subjects who are afflicted with cerebral palsy'.[8]
Two technological developments in the late 1960s would, it was hoped, by improving on the traditional methods of assessing 'foetal distress', alert the obstetrician to the possibility the foetus was being deprived of oxygen and thus prevent the catastrophe of cerebral palsy. The first was a monitor strapped to the mother's abdomen to give a continuous read-out of the heart rate of the foetus, providing objective evidence of rapid 'accelerations' or 'decelerations' that can occur when the foetus is in trouble. Secondly, soon after labour had begun and just as the baby was starting its descent down the birth canal, a needle was placed in its scalp, through which small quantities of blood could be removed and its acidity measured, a useful warning sign that the baby was being deprived of oxygen and thus vulnerable to brain damage. Clearly the initial costs of purchasing the necessary equipment and training the nursing staff would be considerable – estimated at around $100 million for the United States – but, argued Quilligan and Paul, this would be offset by financial savings – estimated at $2 billion – in the long-term care of brain-damaged children if their numbers were to be halved by foetal monitoring technology.[9]
Throughout the 1970s, obstetricians, convinced by these compelling arguments, introduced foetal monitoring on a wide scale, only to elicit a strong backlash from the 'natural childbirth' movement representing the interests of pregnant women. The problem was that no matter how plausible the arguments might be in its favour, foetal monitoring has a seriously adverse impact on many women's experience of labour. The mother's mobility has to be severely restricted for the monitor readings to be reliable, requiring her to lie flat on her back for long periods. Meanwhile she might have one arm connected up to an intravenous drip, while a cuff is strapped to the other to keep an eye on her blood pressure. She is in effect immobilised. Such irksome restraint imposed by foetal monitoring is also unphysiological and, by denying the mother the opportunity to move around freely and adopt different positions, prolongs labour unnecessarily.
And so to the crucial question, did it work? Yes, claimed Quilligan and Paul, markedly reducing complications during labour, albeit at the cost of a considerable increase in the numbers of births by Caesarean section, as the monitor tended
to be 'oversensitive', producing readings suggesting the baby was in distress when it was not.[10]
The more that time passed, the less convincing these results seemed to be. Foetal monitoring was not quite the exact science its protagonists had claimed, failing (it emerged) to detect 84 per cent of the babies who suffered some degree of oxygen deprivation during birth, while 'conversely most of the infants who were thought to be in foetal distress were vigorous'. By the early 1980s the British Medical Journal, in marked contrast to its enthusiastic endorsement of the aspirations of foetal monitoring a decade earlier, had become disillusioned by its many technical difficulties. 'The foetal heart rate pattern correlates poorly with the acid-base balance (the acidity of the blood obtained through the scalp needle) . . . foetal outcome depends not only on the correct interpretation of data but also on appropriate action by the staff in the obstetric unit.'[11]
The vogue for foetal monitoring would, like other medical fashions, probably have slowly withered away, were it not for the intervention of the lawyers. The drawback of foetal monitoring, which was not well appreciated when it was first claimed to prevent 'adverse outcomes' such as cerebral palsy, is that when children are born so affected it is 'not unreasonable' for the parents to assume negligence on the part of the obstetrician for failing to act on the evidence of an 'abnormal' heart reading (and in court virtually any reading, in the hands of a hostile expert witness, could be shown to be 'abnormal', undermining the original claims that it provided an objective assessment of the child's progress).
In Britain between 1983 and 1990 the number of cases where such negligence was alleged tripled, as did the scale of the financial compensation paid out, an average of £700,000 per case. Litigation against obstetricians, who constitute only
2.5 per cent of medical practitioners, now accounts for 30 per cent of the legal costs and damages sustained by the profession.[12]
This is clearly a most invidious situation. The birth of each and every 'less than perfect' child can, with the help of a clever lawyer, be blamed on the negligence of the obstetrician in charge. Their only defence is to deny the rationale upon which foetal monitoring had originally been conceived, that oxygen deprivation at birth is a common and preventable cause of brain damage – which it is not. While the maternal and foetal mortality rates have fallen continuously from the 1950s onwards, the number of cases of cerebral palsy has remained virtually unchanged. This can only mean that the majority of cases – probably 90 per cent – of cerebral palsy cannot result from events occurring during childbirth, but must be caused by some abnormality of the development of the brain much earlier in pregnancy. The whole episode had been 'a catastrophic misunderstanding', according to one obstetric journal, where the expectation that foetal monitoring could prevent brain damage in children was based on 'false analogy and assumptions'. Obstetricians had 'shot themselves in the foot'.[13]
The most curious aspect of this saga is that right from the beginning dispassionate observers had warned obstetricians of the 'false assumptions' behind foetal monitoring, and indeed these should have been clear to obstetricians themselves. They would have known from their personal experience that not all babies subsequently shown to have cerebral palsy had experienced particularly difficult or complicated labours; but the profession was seduced into thinking otherwise by the promise of the power of technology to provide solutions.[14]
The third and most significant type of misuse of technology is the use of life-sustaining technologies to prolong the process of dying. The principles of intensive care pioneered by Dr Bjorn Ibsen in the Copenhagen polio epidemic of 1952, to keep children alive long enough for the strength of their respiratory muscles to recover, may save thousands of lives a year, but they had also, by the mid-1970s, become diverted into a means of prolonging – at enormous cost – the pain and misery of terminal illness. Thus a United Press Agency bulletin describing General Franco's final illness in 1975 reported:
At least four mechanical devices are being used in the battle for General Franco's survival. A defibrillator attached to his chest shocks his heart back to normal when it slows or fades; a pump-like device helps push his blood through his body when it weakens; a respirator helps him breathe and a kidney machine cleans his blood. At various times in his 25-day crisis General Franco has had tubes down his windpipe to provide air, down his nose to provide nourishment, in his abdomen to drain accumulated fluids, and in his digestive tract to relieve gastric pressure. The effort in itself is remarkable considering he has had three major heart attacks. He has undergone emergency surgery twice, once to patch a ruptured artery to save him from bleeding to death, the second time to remove most of an ulcerated and bleeding stomach for the same reason. He has taken some four gallons of blood transfusion. His lungs are congested . . . his kidneys are giving out and his liver is weak. Paralysis periodically affects his intestines . . . he suffers occasional rectal bleeding. Blood clots
have formed and spread in his left thigh. Mucus accumulates uncontrollably in his mouth.[15]
General Franco, being an important man, might have been expected to have received preferential treatment, but this account of his dying days is little different from that of thousands of patients who have had the misfortune to spend their last moments on a modern-day intensive-care unit, where, as one organ system fails after another, its function must be taken over by some technological means in the increasingly unlikely anticipation of eventual recovery. This is a costly business. By 1976 one-half of medical expenditure in the United States was incurred in the last sixty days of a patient's life. 'The furore over the high economic costs of dying parallels concern over its high emotional cost,' observed Muriel Gillick of the Hebrew Rehabilitation Center for the Aged in Boston, commenting on a report in the New York Times that showed 'a significant segment of the public believes that doctors cruelly and needlessly prolong the lives of the dying [for reasons] of avarice and a passion for technology, which leads them to use procedures to excess, unmindful of the suffering they may inflict on patients'.[16]
The fault was certainly not all on the side of the doctors, who, pressurised by relatives or fearful of subsequently being charged with negligence, felt they had little alternative other than to demonstrate that 'no stone had been left unturned'. Paralleling the Church's last rites, medicine too now had its last rite – the compulsory period on the ventilator without which a patient was not allowed to die in hospital. Thus an analysis of the outcome in almost 150 patients severely ill with cancer who had been admitted to the intensive-care unit of one hospital in southern Florida over a two-year period found that more than three-quarters of those who had survived to go home had died within three months.[17]
Such misuse of intensive-care facilities is a telling sign of the degree to which medical technology has spiralled out of control. There was nothing that could be done about it. By 1995, twenty years after General Franco's grisly demise, expenditure on intensive care in the United States had escalated to $62 billion (equivalent to 1 per cent of the nation's GNP), one-third of which – $20 billion – was being spent on what had euphemistically come to be known as PIC, or potentially ineffective care. The consequences for those on the receiving end of PIC, who were 'hopelessly entrapped by machinery more sophisticated than the ethics governing its use', are poignantly illustrated by the parental description of the six months spent by a premature baby, Andrew, in a paediatric intensive-care unit:
The long list of Andrew's afflictions, almost all of which were iatrogenic [doctor-induced], reveals how disastrous his hospitalisation was. He was 'saved' by the respirator to endure countless episodes of bradycardia (slowing of the heart), countless suctionings, tube insertions and blood samplings and blood transfusions; 'saved' to develop numerous infections, demineralised and fractured bones, and seizures of the brain. He was in effect 'saved' by the respirator to die five long painful expensive months later of the respirator's side-effects . . . By the time he was allowed to die, the technology being used to 'salvage' him had produced not so much a human life as a grotesque caricature of a human life, a 'person' with a stunted deteriorating brain and scarcely an undamaged vital organ in his body, who existed only as an extension of a machine. This is the image left to us for the rest of our lives of our son, Andrew.[18]
The portrayal of the three forms of 'inappropriate' use of technology may seem unnecessarily bleak, but it is merely the mirror image of the transforming power of the technological innovation so essential to the post-war therapeutic revolution. The culprit is not technology itself, but the intellectual and emotional immaturity of the medical profession, which seemed unable to exert the necessary self-control over its new-found powers.
'The clinical scientist as an endangered species', as identified by James Wyngaarden in his presidential address to the Association of American Physicians in 1979,[1]
is the third and last indication of the End of the Age of Optimism. The number of doctors awarded traineeships for postdoctoral research by the National Institutes of Health, Dr Wyngaarden had noted, had declined by half over the previous decade, with the obvious implication that doctors qualifying in the 1970s were less enthusiastic about research than earlier generations. This he attributed, at least in part, to 'the seductive lure of the high incomes that now derive from procedure-based specialty medicine'. And what does that mean? Specialists like gastroenterologists and cardiologists had, as described in the previous chapter, acquired unique skills or 'procedures' such as endoscopy or cardiac catheterisation, for which they were able to charge a lot of money in private practice, or, as Wyngaarden described it: 'A high proportion of young doctors who in the past have been willing to delay economic gratification and indulge a curiosity
in research now exhibit the “young physician-Porsche syndrome”.' There may be an element of truth in the allegation that this generation of doctors, the first to have been untouched by the scientific idealism of the post-war years, may have preferred to graze in the lucrative fields of private practice, deploying their newly acquired skills in endoscopy and catheterisation rather than pursuing the intellectual excitement of the research laboratory. There is, though, another much more important reason why young doctors found research a less attractive option: the revolution of clinical science as initiated by Sir Thomas Lewis and carried on by John McMichael and his contemporaries had become exhausted.