Hypothyroidism is a common condition – some estimate that 80 percent of Americans suffer some degree of this abnormality. In hypothyroidism, the gland in the neck that produces thyroid hormone fails to produce enough to run the body’s systems. Thyroid hormones are like the fuel for your metabolic furnace. Inside every cell in the body are little mitochondria, the power generators of the cell. Thyroid hormones control the activity of the mitochondria the way a thermostat controls your furnace. Too much hormone and your body temperature is set higher, your metabolism goes up, and you can experience a variety of problems, such as heart fluttering, anxiety, and unwanted weight loss. “Low thyroid” means your thyroid doesn’t produce enough hormone, and the opposite happens: your body shuts down the furnace (the mitochondria), so your temperature goes down and your metabolic processes slow. The result is always feeling cold – cold hands and feet – loss of eyebrows and eyelashes, weight gain, depression and slow thinking, sluggishness, thyroid nodules, an enlarged thyroid gland, or diminished heart function. To work well, the thyroid needs several things: stimulation from TSH, or “thyroid-stimulating hormone”; iodine, the basic building block of the hormone; and selenium, for the enzyme system that makes the hormone active. Hypothyroidism happens when any or all of these are deficient.

The American diet is woefully deficient in iodine. Iodine has many good properties, including an anticancer effect, but mostly it is necessary to produce thyroid hormone. The Midwest, where I live, has been known as the “goiter belt” because our diet traditionally had little seafood and, thus, little iodine. Iodine used to be added to food as a stabilizer, but unfortunately, after World War II, bromine was substituted for iodine. This is bad for several reasons. First, bromine is not iodine, so it doesn’t help make thyroid hormone. Second, bromine inhibits the uptake of iodine from the gut, thereby lowering your ability to absorb the iodine from your food. And finally, bromine is taken into cells, where it mimics iodine and again inhibits its function. (In the next section, on wellness, I emphasize a wheat-free diet as a formula for avoiding doctors, but if you do eat wheat, avoid the “brominated” variety. You will see some flour advertised as “never bleached, never brominated.” Iodized salt gives you very little of the daily requirement. In chapter 11, I discuss how to supplement your iodine correctly.)

Then there are genetic factors, disease, medications, bad diet, and other things that produce hypothyroidism, making it one of the most common disorders – if not the most common – that we see (or ignore) in medicine. The classic test for hypothyroidism is checking the level of thyroid-stimulating hormone (TSH), which goes up as the thyroid fails: the body makes more and more TSH to try to flog the thyroid gland into doing its job. Traditionally, the normal range of TSH has been based on an “average” level in the population and is now given as 0.4 to 2.5 (the upper limit recently lowered from 5.5). But basic science shows that your metabolism – and specifically your cardiac output, the ability of your heart muscle to pump blood – is not optimal until your TSH is under 1.0. I never thought of myself as hypothyroid, but after hearing a lecture on this subject, I tested my own TSH and found it to be 1.9. Under “standard” medical care, this would not be treated. But using the new scientific understanding, I started myself on low-dose thyroid (I had already been taking an iodine supplement). Amazingly, within months my hands and feet warmed up, I regrew my eyelashes – which I had thought were sparse simply due to age – and my eyebrows filled out. I lost about five pounds and had more exercise tolerance.

But here is the catch. Medical boards are prosecuting doctors for using such knowledge and experience in treating their patients. A general-surgeon friend of mine gave up surgery practice later in life to do “anti-aging medicine.” Having heard of the actions of medical boards, he proactively approached the Medical Board of California and asked about its approach to treating hypothyroidism. Specifically, he asked whether it was true that overtreatment was the most common reason for sanctioning doctors. They confirmed that it was. Then he asked what their criteria for sanctioning were. The police officer on the board – not a physician, the police officer! – responded, “a TSH under 2.5.” My surgeon friend decided to practice in another state, but you see the point. It is not science driving how you are treated by your doctor, but fear of – dare I say, totalitarian – medical boards.

Few things in life are as powerful as peer pressure. Physicians – like football players, stockbrokers, and many others – tend to slap each other on the back (at least figuratively) and aspire to be in the “in crowd.” They reinforce mainstream beliefs at professional meetings and in publications while ignoring the unpopular guys – even though the ideas of the unpopular guys may ultimately prove correct. Famous examples include the ridicule heaped on the proposals that stomach ulcers come from bacteria, that viruses can cause cancer, and that germs cause disease – all of which were proven true. Publication in medical journals, while supposedly peer-reviewed without knowledge of the author’s identity, tends to favor those with connections to the reviewers, or at least papers reinforcing the reviewers’ views. I once tried to publish the results of a new surgical technique, which I had used successfully in more than seventy patients, only to be told in the written denial, “Everyone knows you can’t do that.” (The technique is now in fairly widespread use.)

Physicians and researchers holding contrarian views may be ostracized, criticized, and outright humiliated. Take, for example, Warren S. Warren, who was rudely roasted at Princeton and whose funding was threatened. Ultimately, his finding of anomalous MRI interactions was proven correct, resulting in improved MRI technology – but not before he faced mockery from his peers.

Recently, Andrew Wakefield, a formerly university-based British gastroenterologist, published a case report series concerning possible side effects of the MMR vaccine. As a result, he has had his reputation impugned, his medical license revoked, and his book censored from publication in Britain. Of course, a case report is only supposed to describe a clinician’s observations, thereby giving others a chance to either confirm the findings or refute them. But Wakefield has been charged with “falsification of data” (a charge he has reliably refuted in court), intent to defraud, and malpractice. Why? Because he made an observation outside the groupthink belief that all vaccines are safe in all children. In this case, the groupthink is reinforced by government self-protection and big pharmaceutical company money. (It goes without much saying that government research funding is not generally given to the minority opinion, so again, the same ideas are reinforced.) Whatever the truth is, history tends to uphold the beliefs of those whose writings were censored, not the agencies doing the censoring. In cases such as Dr. Wakefield’s, the abusive treatment of the physician – simply for reporting his observations – has had a chilling effect on those who might come forward with supportive data. Once again, physicians are afraid to advance real knowledge if it does not conform to the accepted norm. And you, the patient/customer, are given yesterday’s medical information.

Adding insult to injury is the creeping odium of consensus in science – the notion that truth is discovered by majority vote among investigators, not by careful application of testing and the scientific method. As Michael Crichton – a physician as well as an author – said in a 2003 speech at Caltech:

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus. There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.1

“Best practice” is the new idea for improving medicine through standardization. It is essentially consensus applied to medicine. University clinicians decide on the best way to treat something; this is then written down in algorithmic form and disseminated to all doctors. For example, we are told that it is best practice to give antibiotics within forty-five minutes of the surgical incision. So all over the country, hospitals attempt to comply by giving the antibiotic within forty-five minutes of the start of every operation. Predictably, what was first sold as a suggestion is now becoming law, enforced by government and insurance third-party payers: fail to follow best practice and we will fail to pay you. So now Medicare penalizes hospitals if the antibiotic is given forty-six minutes before cut time, instead of the maximum of forty-five.

Unfortunately, such clinical dogma ignores the fact that people are individuals with individualized problems. While the algorithmic approach may apply 90 percent of the time and may be a useful learning tool or reference point, the good physician needs to be able to vary treatment when his patient’s problem varies from the norm. In orthopaedics, for example, we are told to “anticoagulate all hip fracture patients” (give blood thinners) because, statistically, patients with hip fractures are at risk of dangerous blood clotting in their legs. But if the patient’s fracture is fixed in a minimally invasive way within hours of the trauma and the patient is mobilized the same day, does he or she really need Lovenox or Coumadin, with their attendant risks? Do we thin our blood with chemicals every time we go to sleep? Of course not, and treating these patients with blood thinners increases their risk of bleeding and hemorrhagic stroke while not really changing their risk of clinically important blood clots. In other words, it adds risk without benefit – a classic formula for bad medical care. Uniformity of thought leads to mediocrity of science and inappropriateness of care.

Evidence-based medicine (EBM) – the latest government/university brainchild – only makes this problem worse. It sounds good. Evidence. What’s not to like? But EBM is an upside-down approach to medical progress. In the past, clinicians faced with a new or unusual medical problem were allowed to think. They were able to offer treatment they thought might be effective as long as the treatment would “first do no harm.” Patient safety always came first. They based their treatment decisions not only on the literature but also on their understanding of basic science, their clinical experience, and their judgment. With EBM, on the other hand, we doctors are prohibited from offering treatment unless we can show, preferably with “high-powered” long-term studies, that the treatment is effective. In other words, I might think this new treatment will work for your unique problem, but I have to prove through long-term studies (often taking decades) that my idea actually works before I can use it on you, the sick patient. Of course, you will be dead or crippled before this can be done.

A clinician with good common sense and good ideas cannot act without the paper-trail backup of some published study. In spite of the acknowledged inadequacies (and occasional outright falsification) of the medical literature, all emphasis is placed on these studies, and no credit is given to clinical skill. This has led to incredible statistical gymnastics being applied to collections of studies, generating meta-analysis papers that resemble numerology more than clinical medicine. And of course, anything that is not a double-blind study is questioned. A double-blind study is one in which two groups of people are studied: one group is given the real drug, the other a placebo with no clinical effect (a sugar tablet, for example). It is “double-blind” because neither the investigators nor the patients know ahead of time who is getting the real drug and who the fake. Now, this makes sense for some things, such as pain medications or diabetic drugs, but not for everything we do in medicine. But when government gets a new hammer, everything starts looking like a nail, and government bureaucrats seldom have the scientific understanding to curdle milk, let alone to judge the effectiveness of complex reconstructive surgical procedures.
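
For readers who want to see the mechanics, here is a minimal sketch – in Python, with entirely hypothetical participant IDs and kit labels – of how the concealed random assignment behind a double-blind trial might be organized. It is only an illustration of the idea, not any particular trial’s protocol.

```python
import random

def assign_double_blind(participant_ids, seed=None):
    """Randomly assign each participant to 'drug' or 'placebo'.

    Returns (blinded_labels, sealed_key). The blinded labels are all that
    the investigators and the patients ever see; the sealed key, mapping
    label -> arm, is held by a third party until the trial is unblinded.
    Participant IDs and kit labels here are purely hypothetical.
    """
    rng = random.Random(seed)
    blinded_labels = {}
    sealed_key = {}
    for i, pid in enumerate(participant_ids, start=1):
        label = f"KIT-{i:04d}"                 # neutral kit label; reveals nothing
        arm = rng.choice(["drug", "placebo"])  # the hidden assignment
        blinded_labels[pid] = label
        sealed_key[label] = arm
    return blinded_labels, sealed_key

# Ten hypothetical participants
labels, key = assign_double_blind([f"P{i:02d}" for i in range(1, 11)], seed=42)
print(labels["P01"])   # investigators record outcomes against 'KIT-0001'
# The sealed key is opened only at analysis time: key['KIT-0001'] reveals the arm.
```

The point of the sketch is simply that no one handling the patient can infer the treatment arm from anything they see; only the sealed key, opened at analysis, connects outcomes to treatment.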

Recently, some British wags published a parody of this approach, entitled “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomized controlled trials.”2 Their point was that we use parachutes although no powerful double-blind study exists to prove that they work. They recommended at the end that the academicians raising such a ruckus about EBM be the first to volunteer as testers to see whether parachutes really make a difference. Of course, if such a study were done, the EBM-promoter “volunteers” would be pushed (I doubt they would jump) out of a perfectly good aircraft, and neither they nor the study investigators could know ahead of time whether their parachute bags actually contained parachutes or whether they were “placebos.” As the authors point out, tongue in British cheek but quite convincingly, EBM really does not apply to everything. Some things, such as appendectomies and using a parachute, are just common sense. When considering evidence-based medicine, I am reminded of James Thurber’s line: “You might as well fall flat on your face as to lean too far over backwards.”

Finally, how do we learn new things? It was said of Sir Isaac Newton that, when at Cambridge, he had learned all the science there was to know at the time. Today, it is difficult to stay abreast of even a small portion of the available knowledge. And we are particularly ill equipped in medicine to make the best use of the knowledge at hand, since we approach medical learning much like the processional caterpillar. The processional caterpillar is so named because of its habit of following a leader. No one knows how the leader is chosen, but before slithering to or from feeding grounds, the unchosen caterpillars form a line behind the leader. If, however, such caterpillars are placed on the rim of a bucket, the leader will eventually catch up to the end of the line, conclude he has been replaced, and start to follow the caterpillar in front of him – until they are all going around and around the bucket rim, following each other over the same ground again and again. What a metaphor for medical education! Residents learn from outdated texts, outdated staff, and each other; then they teach interns and medical students, who in turn become residents; and the knowledge is passed around like the caterpillars on the bucket rim. In 1976, while the biochemists were teaching us that one baby aspirin was optimal, generations of senior residents were teaching interns to prescribe two full aspirins – the lesson they had in turn been taught by their senior residents – and it would take years before level-one studies appeared to countermand that dictum.
