
Improving quality and safety in a particular area of health care typically involves a complex project lasting at least a few months, with input from different staff members (and increasingly, patients and their representatives, too) [5]. The leaders of the project help everyone involved set a goal and work towards it. The fortunes of the project are typically mixed—some things go well, other things not so well, and the initiative is typically written up (if at all) as a story.

For several years now, BMJ and BMJ Quality & Safety have distinguished research papers (presented as IMRAD—Introduction, Methods, Results and Discussion) from quality improvement reports (presented as COMPASEN—Context, Outline of problem, Measures, Process, Analysis, Strategy for change, Effects of change, and Next steps). In making this distinction, research might be defined as systematic and focused enquiry seeking truths that are transferable beyond the setting in which they were generated, while quality improvement might be defined as real-time, real-world work undertaken by teams who deliver services.

You might have spotted that there is a large grey zone between these two activities. Some of this grey zone is quality improvement research—that is, applied research aimed at building the evidence base on how we should go about quality improvement studies. Quality improvement research embraces a broad range of methods, including most of the ones described in the other chapters. In particular, the mixed method case study incorporates both quantitative data (e.g. measures of the prevalence of a particular condition or problem) and qualitative data (e.g. a careful analysis of the themes raised in complaint letters, or participant observation of staff going about their duties), all written up in an over-arching story about what was done, why, when, by whom and what the consequences were. If the paper is true quality improvement research, it should include a conclusion that offers transferable lessons for other teams in other settings [6, 7].

Incidentally, whilst the story (‘anecdote’) is rightly seen as a weak study design when, say, evaluating the efficacy of a drug, the story format (‘organisational case study’) has unique advantages when the task is to pull together a great deal of complex data and make sense of it, as is the case when an organisation sets out to improve its performance [8].

As you can probably imagine, critical appraisal of quality improvement research is a particularly challenging area. Unlike in randomised trials, there are no hard and fast rules on what the ‘best’ approach to a quality improvement initiative should be, and a great deal of subjective judgement may be needed about the methods used and the significance of the findings. But as with all critical appraisal, the more papers you read and appraise, the better you will get.

In preparing the list of questions in the next section, I have drawn heavily on the SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines, which are the equivalent of Consolidated Standards of Reporting Trials (CONSORT), Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and so on for quality improvement studies [9]. I was peripherally involved in the development of these guidelines, and I can confirm that they went through multiple iterations and struggles before appearing in print. This is because of the inherent challenges of producing structured checklists for appraising complex, multifaceted studies. To quote from the paper by the SQUIRE development group (p. 670):

Unlike conceptually neat and procedurally unambiguous interventions, such as drugs, tests, and procedures, that directly affect the biology of disease and are the objects of study in most clinical research, improvement is essentially a social process. Improvement is an applied science rather than an academic discipline; its immediate purpose is to change human performance rather than generate new, generalizable knowledge, and it is driven primarily by experiential learning. Like other social processes, improvement is inherently context-dependent. […] Although traditional experimental and quasiexperimental methods are important for learning whether improvement interventions change behavior, they do not provide appropriate and effective methods for addressing the crucial pragmatic…questions [such as] What is it about the mechanism of a particular intervention that works, for whom does it work, and under what circumstances?

With these caveats in mind, let's see how far we can get with a checklist of questions to help make sense of quality improvement studies.

Ten questions to ask about a paper describing a quality improvement initiative

After I developed the following questions, I applied them to two recently published quality improvement studies, both of which I thought had some positive features but which might have scored even higher if the SQUIRE guidelines had been published when they were being written up. You might like to track down the papers and follow the examples. One is a study by Verdú et al. [10] from Spain, who wanted to improve the management of deep venous thrombosis (DVT) in hospital patients; and the other is a study by May et al. [11] from the USA, who sought to use academic detailing (which Wikipedia defines as ‘non commercially based educational outreach’, see section ‘“Evidence” and marketing’) to improve evidence-based management of chronic illness in a primary care setting.

Question One: What was the context?
‘Context’ is the local detail of the real-world setting in which the work happened. Most obviously, one of our example studies happened in Spain, the other in the USA. One was in secondary care and the other in primary care. We will not be able to understand how these different initiatives unfolded without some background on the country, the health care system and (at a more local level) the particular historical, cultural, economic and micro-political aspects of our ‘case’.
It is helpful, for example, not only to know that May et al.'s academic detailing study was targeted at private general practitioners (GPs) in the USA but also to read their brief description of the particular part of Kentucky where the doctors practised: ‘This area has a regional metropolitan demography reflecting a considerable proportion of middle America (…population 260,512, median household income US $39,813, 19% non-White, 13% below the poverty line, one city, five rural communities and five historically black rural hamlets)’ [11]. So this was an area—‘middle America’—which, overall, was neither especially affluent nor especially deprived, which included both urban and rural areas, and which was ethnically mixed but not dramatically so.
Question Two: What was the aim of the study?
It goes without saying that the aim of a quality improvement study is to improve quality! Perhaps the best way of framing this question is ‘What was the problem for which the quality improvement initiative was seen as a solution?’
In Verdú et al.'s [10] DVT example, the authors are quite upfront that the aim of their quality improvement initiative was to save money! More specifically, they sought to reduce the time patients spent in hospital (‘length of stay’). In the academic detailing example, a ‘rep’ [UK terminology] or ‘detailer’ [US terminology] visited doctors to provide unbiased education and, in particular, to provide evidence-based guidelines for the management of diabetes (first visit) and chronic pain (second visit). The aim was to see whether the academic detailing model, which had been shown as long ago as 1983 to improve practice in research trials [11], could be made to work in the messier and less predictable environment of real-world middle America.
Question Three: What was the mechanism by which the authors hoped to improve quality?
This HOW question is all-important. Look back to section ‘Ten questions to ask about a paper describing a complex intervention’ on complex interventions, when I asked (Question Four) ‘What was the theoretical mechanism of action of the intervention?’. This is effectively the same question, although quality improvement initiatives typically have fuzzy boundaries and you should not necessarily expect to identify a clear ‘core’ to the intervention.
In the DVT care pathway example, the logic behind the initiative was that if they developed an integrated care pathway incorporating all the relevant evidence-based tests and treatments in the right order, stipulating who was responsible for each step, and excluding anything for which there was evidence of no benefit, staff would follow it. In consequence, the patient would spend less time in hospital and have fewer unnecessary procedures. Furthermore, sharpening up the pathway would, they hoped, also reduce adverse events (such as haemorrhage).
In the academic detailing example, the ‘mechanism’ for changing doctors' prescribing behaviour was the principles of interpersonal influence and persuasion on which the pharmaceutical industry has built its marketing strategy (and which I spent much of Chapter 6 warning you about). Personally supplying the guidelines and talking the doctors through them would, it was hoped, increase the chance that they would be followed.
Question Four: Was the intended quality improvement initiative evidence-based?
Some measures aimed at improving quality seem like a good idea in theory but actually don't work in practice. Perhaps the best example of this is mergers—that is, joining two small health care organisations (e.g. hospitals) with the aim of achieving efficiency savings, economies of scale, and so on. Fulop's [12] team demonstrated that not only do such savings rarely materialise but merged organisations often encounter new, unanticipated problems. In this example, there is not merely no evidence of benefit but evidence that the initiative might cause harm!
In the DVT example, there is a systematic review demonstrating that overall, in the research setting, developing and implementing integrated care pathways (also known as critical care pathways) can reduce costs and length of stay [13]. Similarly, systematic reviews have confirmed the efficacy of academic detailing in research trials [14]. In both of our examples, then, the ‘can it work?’ question had been answered and the authors were asking a more specific and contextualised question: ‘does it work here, with these people and this particular set of constraints and contingencies?’ [15].
Question Five: How did the authors measure success, and was this reasonable?
At a recent conference, I wandered around a poster exhibition in which groups of evidence-based medicine enthusiasts were presenting their attempts to improve the quality of a service. I was impressed by some, but disheartened to find that, not uncommonly, the authors had not formally measured the success of their initiative at all—or even defined what ‘success’ would look like!
Our two case examples did better. Verdú et al. evaluated their DVT study in terms of six outcomes: length of hospital stay, cost of the hospital care, and what they called care indicators (the proportion of patients whose care actually followed the pathway; the proportion whose length of stay was actually reduced in line with the pathway's recommendations; the rate of adverse events; and the level of patient satisfaction). Taken together, these gave a fair indication of whether the quality improvement initiative was a success. However, it was not perfect—for example, the satisfaction questionnaire would not have shaped up well against the criteria for a good questionnaire study in Chapter 13.
In the academic detailing example, a good measure of the success of the initiative would surely have been the extent to which the doctors followed the guidelines or (even better) the impact on patients' health and well-being. But these downstream, patient-relevant outcome measures were not used. Instead, the authors' definition of ‘success’ was much more modest: they simply wanted their evidence-based detailers to get a regular foot in the door of the private GPs. To that end, their outcome measures included the proportion of doctors in the area who agreed to be visited at all; the duration of the visit (being shown the door after 45 s would be a ‘failed’ visit); whether the doctor agreed to be seen on a second or subsequent occasion; and if so, whether he or she could readily locate the guidelines supplied at the first visit.
It could be argued that these measures are the equivalent of the ‘surrogate endpoints’ I discussed in section ‘surrogate endpoints’. But given the real-world context (a target group of geographically and professionally isolated private practitioners steeped in pharmaceutical industry advertising, for whom evidence-based practice was not traditionally part of their core business), a ‘foot in the door’ is a lot better than nothing. Nevertheless, when appraising the paper, we should be clear about the authors' modest definition of success and interpret the conclusions accordingly.
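For readers who want to see how ‘care indicators’ of this kind are put together, the sketch below shows, in Python, how proportions and rates might be tabulated from a set of patient records. It is purely illustrative and is not taken from either published study: every field name, target value and figure in it is invented, and the real studies will have used their own definitions and data.

```python
# Illustrative only: tabulating hypothetical 'care indicator' style measures
# from a toy set of patient records. All fields and values are invented.

records = [
    # followed the care pathway?, length of stay (days), adverse event?, satisfaction (1-5)
    {"followed_pathway": True,  "length_of_stay": 5, "adverse_event": False, "satisfaction": 4},
    {"followed_pathway": True,  "length_of_stay": 4, "adverse_event": False, "satisfaction": 5},
    {"followed_pathway": False, "length_of_stay": 9, "adverse_event": True,  "satisfaction": 3},
    {"followed_pathway": True,  "length_of_stay": 6, "adverse_event": False, "satisfaction": 4},
]

TARGET_STAY_DAYS = 7  # assumed pathway target for the example; not from the study

n = len(records)
pathway_adherence = sum(r["followed_pathway"] for r in records) / n      # proportion following pathway
stay_within_target = sum(r["length_of_stay"] <= TARGET_STAY_DAYS for r in records) / n
adverse_event_rate = sum(r["adverse_event"] for r in records) / n
mean_satisfaction = sum(r["satisfaction"] for r in records) / n

print(f"Pathway adherence:       {pathway_adherence:.0%}")
print(f"Stay within target:      {stay_within_target:.0%}")
print(f"Adverse event rate:      {adverse_event_rate:.0%}")
print(f"Mean satisfaction (1-5): {mean_satisfaction:.1f}")
```

The point of the sketch is simply that each ‘indicator’ is a proportion or average over the same set of patients, so the reader of a paper should check how the denominator was defined and whether any patients were left out of it.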
Question Six: How much detail was given about the change process, and what insights can be gleaned from this?
The devil of a change effort is often in the nitty-gritty detail. In the DVT care pathway example, the methods section was fairly short and left me hungry for more. Although I liked many aspects of the paper, I was irritated by this briefest of descriptions of what was actually done to develop the pathway: ‘After the design of the clinical pathway, we started the study…’. But who designed the pathway, and how? Experts in evidence-based practice—or people working at the front line of care? Ideally, it would have been both, but we don't know. Were just the doctors involved—or were nurses, pharmacists, patients and others (such as the hospital's director of finance) included in the process? Were there arguments about the evidence—or did everyone agree on what was needed? The more information about process we can find in the paper, the more we can interpret both positive and negative findings.