
References

1. Altman DG. The scandal of poor medical research. BMJ 1994;308(6924):283.

2. Altman DG. Poor-quality medical research. JAMA 2002;287(21):2765–7.

3. Godlee F, Jefferson T, Callaham M, et al. Peer review in health sciences. London: BMJ Books, 2003.

4. Popper KR. The logic of scientific discovery. Abingdon, UK: Psychology Press, 2002.

5. ISIS-2 Collaborative Group. Randomised trial of intravenous streptokinase, aspirin, both, or neither among 17187 cases of suspected acute myocardial infarction: ISIS-2. Lancet 1988;ii:349–60.

6. Lee A, Joynt GM, Ho AM, et al. Tips for teachers of evidence-based medicine: making sense of decision analysis using a decision tree. Journal of General Internal Medicine 2009;24(5):642–8.

7. Drummond MF, Sculpher MJ, Torrance GW. Methods for the economic evaluation of health care programs. Oxford: Oxford University Press, 2005.

8. Fletcher W. Rice and beriberi: preliminary report of an experiment conducted at the Kuala Lumpur Lunatic Asylum. Lancet 1907;1:1776.

9. Sterne JA, Egger M, Smith GD. Systematic reviews in health care: investigating and dealing with publication and other biases in meta-analysis. BMJ 2001;323(7304):101.

10. Cuff A. Sources of Bias in Clinical Trials. 2013. http://applyingcriticality.wordpress.com/2013/06/19/sources-of-bias-in-clinical-trials/ (accessed 26 June 2013).

11. Kaptchuk TJ. The double-blind, randomized, placebo-controlled trial: gold standard or golden calf? Journal of Clinical Epidemiology 2001;54(6):541–9.

12. Berwick D. Broadening the view of evidence-based medicine. Quality and Safety in Health Care 2005;14(5):315–6.

13. McCormack J, Greenhalgh T. Seeing what you want to see in randomised controlled trials: versions and perversions of UKPDS data. United Kingdom Prospective Diabetes Study. BMJ 2000;320(7251):1720–3.

14. Eldridge S. Pragmatic trials in primary health care: what, when and how? Family Practice 2010;27(6):591–2. doi:10.1093/fampra/cmq099.

15. Doll R, Hill AB. Mortality in relation to smoking: ten years' observations of British doctors. BMJ 1964;1(5395):1399.

16. Doll R, Peto R. Mortality in relation to smoking: 20 years' observations on male British doctors. BMJ 1976;2(6051):1525.

17. Doll R, Peto R, Wheatley K, et al. Mortality in relation to smoking: 40 years' observations on male British doctors. BMJ 1994;309(6959):901–11.

18. Doll R, Peto R, Boreham J, et al. Mortality in relation to smoking: 50 years' observations on male British doctors. BMJ 2004;328(7455):1519.

19. Guillebaud J, MacGregor A. The pill and other forms of hormonal contraception. USA: Oxford University Press, 2009.

20. McBride WG. Thalidomide and congenital abnormalities. Lancet 1961;2:1358.

21. Soares-Weiser K, Paul M, Brezis M, et al. Evidence based case report: antibiotic treatment for spontaneous bacterial peritonitis. BMJ 2002;324(7329):100–2.

22. Owens DK, Lohr KN, Atkins D, et al. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions. Agency for Healthcare Research and Quality and the Effective Health-Care Program. Journal of Clinical Epidemiology 2010;63(5):513–23. doi:10.1016/j.jclinepi.2009.03.009.

23. Howick J, Chalmers I, Glasziou P, et al. The 2011 Oxford CEBM levels of evidence (introductory document). Oxford: Oxford Centre for Evidence-Based Medicine, 2011.

24. Slowther A, Boynton P, Shaw S. Research governance: ethical issues. Journal of the Royal Society of Medicine 2006;99(2):65–72.

25. Shaw S, Boynton PM, Greenhalgh T. Research governance: where did it come from, what does it mean? Journal of the Royal Society of Medicine 2005;98(11):496–502.

26. Shaw S, Barrett G. Research governance: regulating risk and reducing harm? Journal of the Royal Society of Medicine 2006;99(1):14–9.

27. Warlow C. Over-regulation of clinical research: a threat to public health. Clinical Medicine 2005;5(1):33–8.

28. Snooks H, Hutchings H, Seagrove A, et al. Bureaucracy stifles medical research in Britain: a tale of three trials. BMC Medical Research Methodology 2012;12(1):122.

Chapter 4

Assessing methodological quality

As I argued in the section ‘The science of “trashing” papers’, a paper will sink or swim on the strength of its methods section. This chapter considers five essential questions that should form the basis of your decision to ‘bin’ it outright (because of fatal methodological flaws), interpret its findings cautiously (because the methods were less than robust) or trust it completely (because you can't fault the methods at all). These five questions are considered in turn: was the study original? Whom is it about? Was it well designed? Was systematic bias avoided (i.e. was the study adequately ‘controlled’)? And was it large enough, and continued for long enough, to make the results credible?

Was the study original?

There is, in theory, no point in testing a scientific hypothesis that someone else has already proved one way or the other. But in real life, science is seldom so cut and dried. Only a tiny proportion of medical research breaks entirely new ground, and an equally tiny proportion repeats exactly the steps of previous workers. The majority of research studies will tell us (if they are methodologically sound) that a particular hypothesis is slightly more or less likely to be correct than it was before we added our piece to the wider jigsaw. Hence, it may be perfectly valid to do a study that is, on the face of it, ‘unoriginal’. Indeed, the whole science of meta-analysis depends on there being more than one study in the literature that has addressed the same question in pretty much the same way.

The practical question to ask about a new piece of research, then, is not ‘has anyone ever conducted a similar study before?’ but ‘does this new research add to the literature in any way?’ Some of the ways in which it might do so are listed here.

 
  • Is this study bigger, continued for longer, or otherwise more substantial than the previous one(s)?
  • Are the methods of this study any more rigorous (in particular, does it address any specific methodological criticisms of previous studies)?
  • Will the numerical results of this study add significantly to a meta-analysis of previous studies? (See the sketch after this list.)
  • Is the population studied different in any way (e.g. has the study looked at different ethnic groups, ages or gender than have previous studies)?
  • Is the clinical issue addressed of sufficient importance, and does there exist sufficient doubt in the minds of the public or key decision-makers, to make new evidence ‘politically’ desirable even when it is not strictly scientifically necessary?
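
To make the meta-analysis point above concrete, here is a minimal sketch, in Python, of fixed-effect (inverse-variance) pooling; the effect sizes and standard errors are invented purely for illustration and stand in for log odds ratios from hypothetical trials.

# Minimal sketch of fixed-effect (inverse-variance) pooling, showing how
# one extra study shifts a pooled estimate. All numbers are invented.

def pooled_estimate(effects, std_errors):
    """Return the inverse-variance weighted mean and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical earlier trials (log odds ratios and standard errors)
effects, std_errors = [-0.30, -0.10, -0.25], [0.15, 0.20, 0.18]
print(pooled_estimate(effects, std_errors))

# A fourth, larger trial (smaller standard error) carries the most weight
# and pulls the pooled estimate towards its own result.
effects.append(-0.05)
std_errors.append(0.08)
print(pooled_estimate(effects, std_errors))

Because each study is weighted by the inverse of its variance, a single large but otherwise ‘unoriginal’ trial can shift the pooled estimate appreciably, which is precisely the sense in which such a study still adds to the literature.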

Whom is the study about?

One of the first papers that ever caught my eye was entitled ‘But will it help my patients with myocardial infarction?’ [1]. I don't remember the details of the article, but it opened my eyes to the fact that research on someone else's patients may not have a take-home message for my own practice. This is not mere xenophobia. The main reasons why the participants (Sir Iain Chalmers has argued forcefully against calling them ‘patients’) [2] in a clinical trial or survey might differ from patients in ‘real life’ are listed here.

a. They were more, or less, ill than the patients you see.
b. They were from a different ethnic group, or lived a different lifestyle, from your own patients.
c. They received more (or different) attention during the study than you could ever hope to give your patients.
d. Unlike most real-life patients, they had nothing wrong with them apart from the condition being studied.
e. None of them smoked, drank alcohol or were taking the contraceptive pill.

Hence, before swallowing the results of any paper whole, here are some questions that you should ask yourself.

1. How were the participants recruited? If you wanted to do a questionnaire survey of the views of users of the hospital casualty department, you could recruit respondents by putting an ad in the local newspaper. However, this method would be a good example of recruitment bias, because the sample you obtain would be skewed in favour of users who were highly motivated to answer your questions and liked to read newspapers. You would do better to issue a questionnaire to every user (or to a one in ten sample of users) who turned up on a particular day; a minimal sketch of such a systematic sample follows this list.
2. Who was included in the study? In the past, clinical trials routinely excluded people with coexisting illness, those who did not speak English, those taking certain other medication and people who could not read the consent form. This approach may be experimentally clean, but because clinical trial results will be used to guide practice in relation to wider patient groups, it is actually scientifically flawed. The results of pharmacokinetic studies of new drugs in 23-year-old healthy male volunteers will clearly not be applicable to the average elderly female! This issue, which has been a bugbear of some doctors and scientists for decades, has more recently been taken up by the patients themselves, most notably in the plea from patient support groups for a broadening of inclusion criteria in trials of anti-AIDS drugs [3].
3. Who was excluded from the study? For example, a randomised controlled trial may be restricted to patients with moderate or severe forms of a disease such as heart failure, a policy that could lead to false conclusions about the treatment of mild heart failure. This has important practical implications when clinical trials performed on hospital outpatients are used to dictate ‘best practice’ in primary care, where the spectrum of disease is generally milder.
4. Were the participants studied in real-life circumstances? For example, were they admitted to hospital purely for observation? Did they receive lengthy and detailed explanations of the potential benefits of the intervention? Were they given the telephone number of a key research worker? Did the company that funded the research provide new equipment that would not be available to the ordinary clinician? These factors would not invalidate the study, but they may cast doubts on the applicability of its findings to your own practice.
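
As promised in point 1 above, here is a minimal sketch, in Python, of the ‘one in ten’ systematic sample; the attendance list is hypothetical and stands in for everyone who used the department on the chosen day.

import random

# Hypothetical attendance list: every user of the casualty department
# on the chosen day, in order of arrival.
attenders = [f"user_{i:03d}" for i in range(1, 251)]

# Take every tenth attender, starting from a random point within the
# first interval so the same arrival positions are not always chosen.
interval = 10
start = random.randrange(interval)
sample = attenders[start::interval]

print(len(sample), sample[:3])

Because inclusion now depends only on arrival order rather than on a user's motivation to respond to a newspaper ad, this design removes the self-selection that drives recruitment bias.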

Was the design of the study sensible?

Although the terminology of research trial design can be forbidding, much of what is grandly termed critical appraisal is plain common sense. Personally, I assess the basic design of a clinical trial via two questions.

What specific intervention or other manoeuvre was being considered, and what was it being compared with? This is one of the most fundamental questions in appraising any paper. It is tempting to take published statements at face value, but remember that authors frequently misrepresent (usually subconsciously rather than deliberately) what they actually did, and overestimate its originality and potential importance. In the examples in Table 4.1, I have used hypothetical statements so as not to cause offence, but they are all based on similar mistakes seen in print.