Saving My Knees: How I Proved My Doctors Wrong and Beat Chronic Knee Pain
Author: Richard Bedard
The first X-ray image dates all the way back to 1895; the first MRI scan of a human subject didn’t occur until 1977. Three years later, the first commercial MRI machine was manufactured. A couple of decades after that, the technology became widely used in clinical studies of cartilage and osteoarthritic joints.
Further refinements in magnetic-imaging technology may reveal surprises in how knee cartilage changes over time. At least two studies using MRIs have already spotted one puzzling trend: the worst cartilage defects stand the best chance of getting better!
In one study, published in 2006, Australian researchers tracked changes in the knees of eighty-four healthy people. Various locations in the joints were graded using the standard zero-to-four scale. The subjects started out with a total of nineteen spots of really bad cartilage that scored either a “three” or “four” (meaning that the tissue was at least half worn away, or even gone altogether).
Amazingly, over a two-year period, these sites recovered the best. More than half of them improved—one went from bare bone to full thickness. About a fifth stayed the same, and about a quarter got worse.
That blew me away. In fact, there was a lot of information I uncovered that really buoyed my hopes. Cartilage could become stronger. It could also thicken over time. And if it was completely worn away at a certain location, it could even regrow, and often did.
Later, it occurred to me that I shouldn’t have been so astonished. Why shouldn’t cartilage be able to heal and adapt? The rest of the body does. I play guitar, so the skin on my fingertips thickens into calluses to withstand the pressure of the thin metal strings. Astronauts in outer space rapidly lose bone density from living in a weightless environment.
With my newly gained knowledge about how knees work and how cartilage can improve, I felt more confident about charting a path to recovery. At last it was time to devise my own experiment to get better.
It wouldn’t be easy to develop a plan to heal my knees. For one, I would have to make a realistic appraisal of my condition and think deeply about what I reasonably could and couldn’t achieve. Also I’d have to find some suitable exercise to put in motion a pair of knees that didn’t like doing much of anything.
But first, there was one nagging, unresolved matter to confront: after reading a mound of scientific studies, did I really know what I thought I did?
All along, I had been wary about the inclination, especially in a field as fertile as knee research, to cherry-pick the studies I happened to agree with. It seemed intellectually dishonest. If one researcher says bad knees benefit from exercise and another says they don’t, what do you have at the end of the day?
I had already reviewed a Swedish study that claimed exercise helps bad knees. That left the contrary position to consider: that exercise either was harmful or didn’t matter either way. Any well-conducted investigation that reached either conclusion would send me spinning back to square one. I would have to re-examine a lot of my assumptions before creating a program to get better.
I found the perfect foil in a study whose impressive size gave its conclusions more heft. That’s because too-small sample sizes can skew results, as any good statistician knows. For instance, it’s premature to run full-page magazine ads claiming that a drug cures sixty-seven percent of subjects based on a trial consisting of three people, two of whom got better. Maybe in a second set of three, no one will benefit.
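To make the small-sample problem concrete, here is a minimal simulation sketch, using only assumed numbers: a drug that truly helps half of all patients, trials of three subjects each, and ten thousand repetitions of that tiny trial.

```python
# A tiny simulation, built on assumed numbers, of how three-person trials can mislead.
import random

random.seed(42)

TRUE_CURE_RATE = 0.5   # assume the drug truly helps half of all patients
TRIAL_SIZE = 3         # three subjects, as in the magazine-ad example
NUM_TRIALS = 10_000    # repeat the tiny trial many times

observed_rates = []
for _ in range(NUM_TRIALS):
    cured = sum(random.random() < TRUE_CURE_RATE for _ in range(TRIAL_SIZE))
    observed_rates.append(cured / TRIAL_SIZE)

# How often would a three-person trial report "67 percent cured" or better?
# And how often would it report that nobody benefited at all?
share_two_thirds = sum(r >= 2 / 3 for r in observed_rates) / NUM_TRIALS
share_nobody = sum(r == 0 for r in observed_rates) / NUM_TRIALS

print(f"Trials reporting at least 67% cured: {share_two_thirds:.0%}")
print(f"Trials reporting 0% cured:           {share_nobody:.0%}")
```

With those assumptions, roughly half of the tiny trials report a cure rate of sixty-seven percent or better, and about one in eight report that nobody improved, even though the drug’s true benefit never changed.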
The Framingham Study, which looked at the relationship between activity and knee osteoarthritis, didn’t have a problem at all with sample size. It boasted 1,279 subjects, far more than any other study I’d looked at. That alone ensured its results would receive widespread notice. A major finding: “Walking for exercise and other recreational activities in older persons without knee OA (osteoarthritis) do not affect these individuals’ risk of developing OA, even if they are overweight.”
In other words, older Americans, whether skinny or fat, don’t have to worry that those tennis matches or weekend strolls along the beach boost their risk of developing arthritis in their knees. The Framingham Study attracted plenty of attention; I heard it mentioned while listening to a Hong Kong radio station one day. A doctor, whose syndicated health segment aired daily, cited the research to reassure listeners that exercise was safe for their joints.
After examining the article by the Framingham researchers, published in 2007 in Arthritis Care and Research, I viewed their conclusion differently. Sure, you could put a positive spin on the investigation, but you could just as easily highlight the negative. Namely, if exercise doesn’t affect the risk one way or the other, that means physical activity isn’t bad, but isn’t good either. Indeed, researchers said their evidence suggests that “exercise does not protect” against the onset of osteoarthritis.
That struck me as peculiar. Cartilage loss contributes to the progression of the disease. Exercise should help buffer the tissue against breaking down by making it stronger and more elastic. It became clear I needed to delve into the methods of the Framingham Study to understand how the researchers found what they did.
Their methodology seemed straightforward enough. The subjects, who were older and largely overweight, had their knees X-rayed. At some point, they filled out a survey about their level and intensity of activity. Then about nine years after an initial exam, their knees were X-rayed again to scan for signs of osteoarthritis. They also answered questions to determine if they had common symptoms of the disease, such as stiff joints.
After collecting the data, the researchers spun off the results into several complex-looking tables with numerous categories. You could trace along columns and rows to locate, for example, the number of heavy people who walked at least six miles a week who wound up with symptoms of knee osteoarthritis. Or you could find the number of thinner people who did sweat-inducing exercise less than three times weekly and whose X-rays indicated they developed the disease. There was a lot of information to wade through.
The longer I stared at the figures, the more puzzled I grew. In places they resembled a bunch of one-way signs that a tipped-over truck had dumped in the middle of the highway. They pointed this way and that. Take, for instance, the subjects’ level of activity, as compared with their peers’. One table suggested that being active was good for heavy people: it may protect them from symptoms of osteoarthritis. At the same time the results paradoxically suggested that being inactive was good for them too: it may prevent joint-space narrowing, a hallmark of the disease.
While the conclusions indicated by the numbers never quite achieved statistical significance, there was enough oddness to inspire me to look deeper. The Framingham Study, after all, didn’t really discover any relationships at all, except for the obvious one: that heavier people got knee osteoarthritis more often. That makes sense if no relationships exist in the first place. But I began pursuing an alternative explanation: that the study had serious limitations.
The researchers did concede a shortcoming in their choice of a measuring tool. Their investigation began in 1993, more than a decade earlier. They used an X-ray machine to assess the knee joints instead of the much more revealing and precise MRI scanner. That meant they couldn’t see the state of the soft tissues.
A bigger problem seemed evident in the study’s methodology. The subjects filled out a one-time survey about how active they were: Did they walk for exercise at all? If so, less than six miles a week? More than that? Did they work up a sweat through intense physical activity? If so, less than three times a week? More than that?
Their answers determined which categories they landed in. And that’s where they remained, no matter what else happened between X-rays. If their exams were six months apart, or a year, or even two years, it wouldn’t seem like a big deal. But on average they weren’t examined a second time until nine years later.
So let’s say Bill discloses on his survey that he breaks a sweat at least three times a week because he recently started running. Suppose he abandons the sport after only six months. For the other eight and a half years between checkups, he mostly lies around watching ESPN and eating donuts. Then the X-ray finds osteoarthritis in his knees.
In the Framingham Study, Bill’s case would be used to support the thesis that breaking a sweat at least three times a week may contribute to developing arthritis in the knee. That’s simply because the researchers sampled his activities at an unrepresentative point during that long time span.
Of course the counterargument is that, deep enough into adulthood, athletes tend to remain athletes, and non-athletes tend to remain non-athletes. That’s a fair point. But even so, lives change a lot over nine years. People marry. They divorce. They move to new homes. Their schedules change: the new job perhaps leaves less time for the daily walk. Their bodies change too: they may substitute easy hiking (no sweat) for running (lots of sweat).
It wouldn’t be surprising if most of those Framingham Study subjects actually skipped through several different categories between their original and final X-rays. Over the nine years before my chronic knee pain began, I would have hopscotched through at least three. During that time I had periods of inactivity, mild activity, and lots of activity.
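To get a rough feel for how much a one-time survey could mislead, here is a toy sketch built entirely on made-up assumptions rather than the Framingham data: each simulated subject simply lands in a random activity level each year, and the label from a year-one survey is compared against what the person mostly did over the nine years.

```python
# A toy sketch with made-up activity patterns (not the Framingham data) showing how
# a one-time survey can mislabel people whose habits change over nine years.
import random

random.seed(7)

LEVELS = ["inactive", "mildly active", "very active"]
NUM_SUBJECTS = 10_000   # assumed number of simulated subjects
YEARS = 9               # span between the two X-ray exams

mislabelled = 0
for _ in range(NUM_SUBJECTS):
    # Assume, purely for illustration, that each subject lands in a random
    # activity level in each year of the nine-year span.
    history = [random.choice(LEVELS) for _ in range(YEARS)]
    survey_label = history[0]                            # what the one-time survey captures
    usual_level = max(set(history), key=history.count)   # the level they spent the most years in
    if survey_label != usual_level:
        mislabelled += 1

print(f"Simulated subjects whose survey label differs from their usual level: "
      f"{mislabelled / NUM_SUBJECTS:.0%}")
```

Real habits are stickier than this coin-flip version, of course, but the sketch shows how quickly a single snapshot can drift away from what a subject’s knees actually did over nine years once habits change at all.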
Surveying variable habits only once during a nine-year study seems problematic. Still, an even more fundamental issue overshadows that, I realized. The inescapable truth is, it’s almost impossible to design a good long-term study involving people that answers the question, “Does exercise help prevent knee osteoarthritis?”
A well-designed and well-executed investigation would allow a clean line to be drawn between cause and effect. In the real world, where human knees are attached to living, breathing people, that’s hard to do. To see why, start by considering a fictional experiment that successfully proves a certain (unexpected) cause and effect.
The experiment involves two groups of four rats in identical cages. They’re all young, healthy, and bright-eyed. The thesis: a brand-new reddish-colored food called “Randy’s Rat Vittles” promotes long, healthy lives. The rats in cage one are fed a standard brown rat chow. The rats in cage two receive Randy’s Rat Vittles.
The rats in the first cage gobble up their grub and ten seconds later are sniffing about normally. The rats in the second cage chomp down the red food and ten seconds later keel over dead.
Various scenarios could explain the sudden die-off in cage two. Perhaps all four rats had hidden heart defects and, in one of those strange coincidences, happened to expire at once. The most likely explanation though hits us with the force of a mallet landing squarely on the crown of the head: there’s something bad in Randy’s Rat Vittles that killed them.
Reasonable people don’t dispute what occurred or why. That’s because this experiment features two critical things: a well-controlled environment and clear, easy-to-measure results.
When the entire second set of rats dies, that’s a clear outcome. Death is the ultimate adverse reaction to a drug or food. Also the rodents lived exactly ten seconds after eating the rat vittles. That’s an easy-to-measure and significant shortening, not lengthening, of their lives.
What’s more, the well-controlled environment allows the key variable to be isolated. Both groups of rats occupy identical cages. Further, the little drama unfolds so quickly that the rats in the second cage don’t have time to disappear into different habitats, mingle with others, munch on other things. The only significant variable is what they just ate.
This hypothetical experiment produces a slam-dunk conclusion, that Randy’s Rat Vittles radically cut short the lives of rodents that consumed it. (The product would be more fittingly marketed as Randy’s Rat Poison.) Researchers would love to design a human study on exercise and knees that could provide such indisputable clarity. But this turns out to be fiendishly tricky.
For starters, measuring outcomes isn’t as simple. To determine how exercise affects knees, ideally a researcher should examine the joint and tissues firsthand. Salter’s team did so for its continuous passive motion study that used rabbits. The published report included wonderful images of the animals’ knee cartilage. The tissue was dried, sliced, stained, and inspected under a microscope. This yielded valuable information about the quality of the cartilage that filled in the holes that had been drilled.
The rabbits didn’t mind: they were dead. They all got an intravenous overdose of pentobarbital. Of course, in human trials, subjects wouldn’t have to be killed; surgery could be performed instead to inspect joints and harvest samples. Still, that carries its own dangers and crosses ethical boundaries, so it isn’t generally done.
That leaves medical imaging technology to do the measuring. Devices such as MRI scanners will continue to improve. They will supply increasingly accurate and comprehensive details—though, to be honest, studying a computer-derived picture of a thing is never as good as studying the thing itself.
What if this weren’t an issue? What if changes in knee joints could be measured perfectly? A bigger headache awaits the determined researcher: it’s practically impossible to ensure a well-controlled environment for long-term experiments with people.
Human subjects aren’t like Salter’s rabbits, who either received the same kind of cast or moved about in the same kind of cage or were hooked up to identical continuous passive motion machines. They enroll in a study, then are promptly set free in the world, to do basically whatever they want to.
In the course of going about their lives, you can bet on one thing: they’ll use those knee joints. A lot. In a lot of different ways too. Some of them will end up walking fifty yards to get to work, others a quarter of a mile (or even a mile, as I did in Hong Kong). Their jobs will affect how much movement their knees get, from data entry clerks (inactive) to warehouse laborers (active). So will their leisure activities, from roaming the mall to reading magazines on the backyard hammock.
A large portion of their movement doesn’t qualify as “exercise” though, creating a rather ticklish problem. As a researcher, how do you successfully isolate the variable of “exercise” when it’s such a relatively small input into the equation of how we use our knees? Someone who exercises by walking six miles a week spends less than two percent of his waking hours on “exercise.” Shouldn’t it matter what his knee joints are doing the other ninety-eight percent of the time?
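That two percent figure is easy to check with back-of-the-envelope arithmetic, assuming a walking pace of three miles per hour and sixteen waking hours a day:

```python
# Back-of-the-envelope check of the "less than two percent" figure, using assumed
# values for walking pace and waking hours.
WALK_MILES_PER_WEEK = 6
WALK_SPEED_MPH = 3          # assume a typical walking pace of three miles per hour
WAKING_HOURS_PER_DAY = 16   # assume eight hours of sleep a night

exercise_hours_per_week = WALK_MILES_PER_WEEK / WALK_SPEED_MPH   # 2 hours of walking
waking_hours_per_week = WAKING_HOURS_PER_DAY * 7                 # 112 waking hours

share = exercise_hours_per_week / waking_hours_per_week
print(f"Share of waking time spent on exercise: {share:.1%}")    # about 1.8%
```

Under those assumptions, the six weekly miles of walking work out to roughly 1.8 percent of waking hours.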
Just look at two hypothetical people, Mary and Ted. Mary reports doing no walking for exercise and appears inactive. However, she typically averages 7,000 steps daily as a young mother, chasing her toddler about, running errands, visiting friends. Ted walks a mile a day to exercise and thus seems active. Yet he works a desk job and averages only 6,000 steps a day. Is Ted more active just because he “exercises”?
Suppose researchers could overcome this obstacle and capture the exact number of steps that subjects take every day. Even that may not be enough detail. One footfall doesn’t necessarily transmit the same force into knee joints as another. The impact of sixty steps can differ widely, depending on whether they occur during a leisurely amble to the mailbox or an ill-advised sprint to catch a public transit bus that’s about to pull away.