We began to hear some encouraging stories, however. In London, during a knee replacement by an orthopedic surgeon who was one of our toughest critics, the checklist brought the team to recognize, before incision and the point of no return, that the knee prosthesis on hand was the wrong size for the patient—and that the right size was not available in the hospital. The surgeon became an instant checklist proponent.
In India, we learned, the checklist led the surgery department to recognize a fundamental flaw in its system of care. Usual procedure was to infuse the presurgery antibiotic into patients in the preoperative waiting area before wheeling them in. But the checklist brought the clinicians to realize that frequent delays in the operating schedule meant the antibiotic had usually worn off hours before incision. So the hospital staff shifted their routine in line with the checklist and waited to give the antibiotic until patients were in the operating room.
In Seattle, a friend who had joined the surgical staff at the University of Washington Medical Center told me how easily the checklist had fit into her operating room’s routine. But was it helping them catch errors, I asked?
“No question,” she said. They’d caught problems with antibiotics, equipment, overlooked medical issues. But more than that, she thought going through the checklist helped the staff respond better when they ran into trouble later—like bleeding or technical difficulties during the operation. “We just work better together as a team,” she said.
The stories gave me hope.
In October 2008, the results came in. I had two research fellows, both of them residents in general surgery, working on the project with me. Alex Haynes had taken more than a year away from surgical training to run the eight-city pilot study and compile the data. Tom Weiser had spent two years managing development of the WHO checklist program, and he'd been in charge of double-checking the numbers. A retired cardiac surgeon, William Berry, was the triple check on everything they did. Late one afternoon, they all came in to see me.
“You’ve got to see this,” Alex said.
He laid a sheaf of statistical printouts in front of me and walked me through the tables. The final results showed that the rate of major complications for surgical patients in all eight hospitals fell by 36 percent after introduction of the checklist. Deaths fell 47 percent. The results had far outstripped what we’d dared to hope for, and all were statistically highly significant. Infections fell by almost half. The number of patients having to return to the operating room after their original operations because of bleeding or other technical problems fell by one-fourth. Overall, in this group of nearly 4,000 patients, 435 would have been expected to develop serious complications based on our earlier observation data. But instead just 277 did. Using the checklist had spared more than 150 people from harm—and 27 of them from death.
You might think that I’d have danced a jig on my desk, that I’d have gone running through the operating room hallways yelling, “It worked! It worked!” But this is not what I did. Instead, I became very, very nervous. I started poking through the pile of data looking for mistakes, for problems, for anything that might upend the results.
Suppose, I said, this improvement wasn't due to the checklist. Maybe, just by happenstance, the teams had done fewer emergency cases and other risky operations in the second half of the study, and that's why their results looked better. Alex went back and ran the numbers again. Nope, it turned out. The teams had actually done slightly more emergency cases in the checklist period than before. And the mix of types of operations—obstetric, thoracic, orthopedic, abdominal—was unchanged.
Suppose this was just a Hawthorne effect, that is to say, a byproduct of being observed in a study rather than proof of the checklist’s power. In about 20 percent of the operations, after all, a researcher had been physically present in the operating room collecting information. Maybe the observer’s presence was what had improved care. The research team pointed out, however, that the observers had been in the operating rooms from the very beginning of the project, and the results had not leaped upward until the checklist was introduced. Moreover, we’d tracked which operations had an observer and which ones hadn’t. And when Alex rechecked the data, the results proved no different—the improvements were equally dramatic for observed and unobserved operations.
Okay, maybe the checklist made a difference in some places, but perhaps only in the poor sites. No, that didn't turn out to be the case either. The baseline rate of surgical complications was indeed lower in the four hospitals in high-income countries, but introducing the checklist had produced a one-third decrease in major complications for the patients in those hospitals, as well—also a highly significant reduction.
The team took me through the results for each of the eight hospitals, one by one. In every site, introduction of the checklist had been accompanied by a substantial reduction in complications. In seven out of eight, it was a double-digit percentage drop.
This thing was real.
In January 2009, the New England Journal of Medicine published our study as a rapid-release article. Even before then, word began to leak out as we distributed the findings to our pilot sites. Hospitals in Washington State learned of Seattle's results and began trying the checklist themselves. Pretty soon they'd formed a coalition with the state's insurers, Boeing, and the governor to systematically introduce the checklist across the state and track detailed data. In Great Britain, Lord Darzi, the chairman of surgery at St. Mary's Hospital, had meanwhile been made a minister of health. When he and the country's top designate to WHO, Sir Liam Donaldson (who had also pushed for the surgery project in the first place), saw the study results, they launched a campaign to implement the checklist nationwide.
The reaction of surgeons was more mixed. Even if using the checklist didn't take the time many feared—indeed, in several hospitals teams reported that it saved them time—some objected that the study had not clearly established how the checklist was producing such dramatic results. This was true. In our eight hospitals, we saw improvements in administering antibiotics to reduce infections, in use of oxygen monitoring during operations, in making sure teams had the right patient and right procedure before making an incision. But these particular improvements could not explain why unrelated complications like bleeding fell, for example. We surmised that improved communication was the key. Spot surveys of random staff members coming out of surgery after the checklist was in effect did indeed report a significant increase in the level of communication. There was also a notable correlation between teamwork scores and results for patients—the greater the improvement in teamwork, the greater the drop in complications.
Perhaps the most revealing information, however, was simply what the staff told us. More than 250 staff members—surgeons, anesthesiologists, nurses, and others—filled out an anonymous survey after three months of using the checklist. In the beginning, most had been skeptical. But by the end, 80 percent reported that the checklist was easy to use, did not take a long time to complete, and had improved the safety of care. And 78 percent actually observed the checklist to have prevented an error in the operating room.
Nonetheless, some skepticism persisted. After all, 20 percent did not find it easy to use, thought it took too long, and felt it had not improved the safety of care.
Then we asked the staff one more question. “If you were having an operation,” we asked, “would you want the checklist to be used?”
A full 93 percent said yes.
We have an opportunity before us, not just in medicine but in virtually any endeavor. Even the most expert among us can gain from searching out the patterns of mistakes and failures and putting a few checks in place. But will we do it? Are we ready to grab onto the idea? It is far from clear.
Take the safe surgery checklist. If someone discovered a new drug that could cut down surgical complications with anything remotely like the effectiveness of the checklist, we would have television ads with minor celebrities extolling its virtues. Detail men would offer free lunches to get doctors to make it part of their practice. Government programs would research it. Competitors would jump in to make newer and better versions. If the checklist were a medical device, we would have surgeons clamoring for it, lining up at display booths at surgical conferences to give it a try, hounding their hospital administrators to get one for them—because, damn it, doesn't providing good care matter to those pencil pushers?
That’s what happened when surgical robots came out—drool-inducing twenty-second-century $1.7 million remote-controlled machines designed to help surgeons do laparoscopic surgery with more maneuverability inside patients’ bodies and fewer complications. The robots increased surgical costs massively and have so far improved results only modestly for a few operations, compared with standard laparoscopy. Nonetheless, hospitals in the United States and abroad have spent billions of dollars on them.
But meanwhile, the checklist? Well, it hasn’t been ignored. Since the results of the WHO safe surgery checklist were made public, more than a dozen countries—including Australia, Brazil, Canada, Costa Rica, Ecuador, France, Ireland, Jordan, New Zealand, the Philippines, Spain, and the United Kingdom—have publicly committed to implementing versions of it in hospitals nationwide. Some are taking the additional step of tracking results, which is crucial for ensuring the checklist is being put in place successfully. In the United States, hospital associations in twenty states have pledged to do the same. By the end of 2009, about 10 percent of American hospitals had either adopted the checklist or taken steps to implement it, and worldwide more than two thousand hospitals had.
This is all encouraging. Nonetheless, we doctors remain a long way from actually embracing the idea. The checklist has arrived in our operating rooms mostly from the outside in and from the top down. It has come from finger-wagging health officials, who are regarded by surgeons as more or less the enemy, or from jug-eared hospital safety officers, who are about as beloved as the playground safety patrol. Sometimes it is the chief of surgery who brings it in, which means we complain under our breath rather than raise a holy tirade. But it is regarded as an irritation, as interference on our terrain. This is my patient. This is my operating room. And the way I carry out an operation is my business and my responsibility. So who do these people think they are, telling me what to do?
Now, if surgeons end up using the checklist anyway, what is the big deal if we do so without joy in our souls? We’re doing it. That’s what matters, right?
Not necessarily. Just ticking boxes is not the ultimate goal here. Embracing a culture of teamwork and discipline is. And if we recognize the opportunity, the two-minute WHO checklist is just a start. It is a single, broad-brush device intended to catch a few problems common to all operations, and we surgeons could build on it to do even more. We could adopt, for example, specialized checklists for hip replacement procedures, pancreatic operations, aortic aneurysm repairs, examining each of our major procedures for their most common avoidable glitches and incorporating checks to help us steer clear of them. We could even devise emergency checklists, like aviation has, for nonroutine situations—such as the cardiac arrest my friend John described in which the doctors forgot that an overdose of potassium could be a cause.
Beyond the operating room, moreover, there are hundreds, perhaps thousands, of things doctors do that are as dangerous and prone to error as surgery. Take, for instance, the treatment of heart attacks, strokes, drug overdoses, pneumonias, kidney failures, seizures. And consider the many other situations that are only seemingly simpler and less dire—the evaluation of a patient with a headache, for example, a funny chest pain, a lung nodule, a breast lump. All involve risk, uncertainty, and complexity—and therefore steps that are worth committing to a checklist and testing in routine care. Good checklists could become as important for doctors and nurses as good stethoscopes (which, unlike checklists, have never been proved to make a difference in patient care). The hard question—still unanswered—is whether medical culture can seize the opportunity.
Tom Wolfe's The Right Stuff tells the story of our first astronauts and charts the demise of the maverick, Chuck Yeager test-pilot culture of the 1950s. It was a culture defined by how unbelievably dangerous the job was. Test pilots strapped themselves into machines of barely controlled power and complexity, and a quarter of them were killed on the job. The pilots had to have focus, daring, wits, and an ability to improvise—the right stuff. But as knowledge of how to control the risks of flying accumulated—as checklists and flight simulators became more prevalent and sophisticated—the danger diminished, values of safety and conscientiousness prevailed, and the rock star status of the test pilots was gone.
Something like this is going on in medicine. We have the means to make some of the most complex and dangerous work we do—in surgery, emergency care, ICU medicine, and beyond—more effective than we ever thought possible. But the prospect pushes against the traditional culture of medicine, with its central belief that in situations of high risk and complexity what you want is a kind of expert audacity—the right stuff, again. Checklists and standard operating procedures feel like exactly the opposite, and that’s what rankles many people.