
Editorials

Unbiased, relevant, and reliable assessments in health care

BMJ 1998;317:1167 (Published 31 October 1998) doi: https://doi.org/10.1136/bmj.317.7167.1167

Important progress during the past century, but plenty of scope for doing better

  Iain Chalmers, Director
  UK Cochrane Centre, Oxford OX2 7LG

    Causal inferences about the effects of treatments must always depend on best judgments. Because the lives and wellbeing of patients will be influenced for better or worse by the validity of these judgments, however, it is important to be explicit about the logic as well as the empirical evidence on which the judgments are based. This issue of the BMJ is about one important aspect of that logic—the attempt to control bias through randomisation.

    There is a growing acceptance that it is logical to try to control biases of various kinds when assessing the effects of treatments. Efforts by clinicians to control biases stretch back for at least three centuries,1 but only during the past 100 years have these become widespread. In particular, as we approach the end of the 20th century, there are now hundreds of thousands of reports of studies in which efforts have been made to control selection biases, the aim here being to distinguish differences attributable to treatments from differences that reflect the characteristics (known and unknown) of the people who have received treatment.

    These studies are known as randomised trials because eligible patients are allocated at random to one of two or more alternative forms of care. This is their sole defining characteristic.2 Other measures sometimes used to control biases—for example, the use of placebos to minimise observer biases—are neither specific to nor necessary features of randomised trials.
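
    As an illustration of this defining step, simple (unrestricted) randomisation amounts to no more than an independent coin toss for each eligible patient. The short Python sketch below is offered only as an illustration; the arm labels and the number of patients are hypothetical, and in practice the random mechanism might be random number tables or central telephone randomisation rather than a computer program.

        # A minimal sketch of simple (unrestricted) randomisation: each eligible
        # patient is allocated independently, with equal probability, to one of
        # two or more alternative forms of care. The arm labels and the number
        # of patients are hypothetical.
        import random

        ARMS = ["new treatment", "usual care"]   # hypothetical forms of care
        N_PATIENTS = 20                          # hypothetical number entered

        allocation = [random.choice(ARMS) for _ in range(N_PATIENTS)]
        for patient, arm in enumerate(allocation, start=1):
            print(f"patient {patient:2d} -> {arm}")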

    Consensus is growing that the results of randomised trials provide the most secure basis for valid causal inferences about the effects of treatments.3 Not everyone subscribes to this view,4 however, and there are certainly aspects of the design and interpretation of randomised trials which continue to present real challenges.5 6 The results of randomised trials usually differ from those of studies in which the comparison groups have been assembled in other ways.7 Although the most likely explanation for these differences would seem to be uncontrolled biases, other explanations cannot be ruled out.8

    Two studies stand out in the history of efforts to control selection biases in clinical research. In 1898 a Danish physician, Johannes Fibiger, allocated patients with diphtheria to comparison groups on the basis of the day they were admitted to hospital. He gave anti-diphtheria serum to patients admitted on alternate days and compared their progress with that of those admitted on other days. Fibiger's report is remarkable not only because it shows that he was conscious of the need to control selection biases but also because he described his methods and analyses so clearly.9 10

    Whether the basis for allocating patients in an unselected series to comparison groups is alternation or random numbers, failure to adhere strictly to the allocation schedule may result in bias.11 12 Fifty years ago yesterday, the BMJ carried the report of another landmark study in the history of efforts to control selection biases—the UK Medical Research Council's randomised trial of streptomycin for pulmonary tuberculosis.13 15 The report is especially important because it describes in detail the precautions taken by the researchers to conceal the allocation schedule from those entering patients into the trial.13
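
    The point about concealment can be made concrete with a small sketch, again in Python and again with hypothetical arm labels, trial size, and patient identifiers: the schedule is drawn up in advance by someone remote from recruitment, and each assignment is revealed only once a patient has been irreversibly entered. With alternation, by contrast, anyone who knows the last assignment can predict the next one and may, consciously or not, steer particular patients towards or away from entry.

        # A minimal sketch of allocation concealment: the full random schedule is
        # prepared before recruitment begins, and each assignment is disclosed
        # only when a patient is irreversibly entered. Arm labels, trial size,
        # and patient identifiers are hypothetical.
        import random


        def make_schedule(n, arms=("streptomycin", "bed rest"), seed=None):
            """Prepare the whole allocation list in advance, away from recruiters."""
            rng = random.Random(seed)
            return [rng.choice(arms) for _ in range(n)]


        class ConcealedSchedule:
            """Reveals the next assignment only at the moment of entry."""

            def __init__(self, schedule):
                self._schedule = list(schedule)
                self._position = 0

            def enter_patient(self, patient_id):
                arm = self._schedule[self._position]   # unknown until this call
                self._position += 1
                return patient_id, arm


        trial = ConcealedSchedule(make_schedule(6))
        print(trial.enter_patient("patient-001"))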

    Randomised trials conducted over the past half century have helped to bring about a situation in which health care has been credited with three of the seven years of increased life expectancy over that time and an average of five additional years of partial or complete relief from the poor quality of life associated with chronic disease.16 But we should not be complacent. Systematic reviews of some of the hundreds of thousands of reports of trials published since 1948 are beginning to make painfully clear that, in most of these studies, inadequate steps were taken to control biases, many questions and outcomes of interest to patients were ignored,17 and insufficient numbers of participants were studied to yield reliable estimates of treatment effects.18 In brief, a massive amount of research effort, the goodwill of hundreds of thousands of patients, and millions of pounds have been wasted.
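
    The point about insufficient numbers can be illustrated with a rough, conventional sample size calculation; the event rates, significance level, and power in the sketch below are hypothetical. Even a moderate treatment effect on a common outcome typically requires several hundred patients in each arm to be estimated reliably, far more than many published trials recruited.

        # A rough sketch of a conventional sample size calculation for comparing
        # two proportions, using the usual normal-approximation formula. The
        # event rates, significance level, and power are hypothetical.
        from statistics import NormalDist


        def n_per_arm(p1, p2, alpha=0.05, power=0.80):
            """Approximate number of patients per arm needed to detect p1 versus p2."""
            z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
            z_beta = NormalDist().inv_cdf(power)
            variance = p1 * (1 - p1) + p2 * (1 - p2)
            return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2


        # Detecting a fall in an event rate from 30% to 20% at 5% significance
        # and 80% power needs roughly 290 patients in each arm.
        print(round(n_per_arm(0.30, 0.20)))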

    Several developments could help to ensure that efforts over the next 50 years will be more effective in yielding unbiased, relevant, and reliable assessments of the effects of health care. Information derived from systematic reviews of past research19 and from registers of continuing trials20 will help to show where new trials are needed and how best to maximise the quality and relevance of the new information sought. Some of this information is likely to be in the form of qualitative data, and this implies the need for greater cooperation among clinical and social scientists in designing and running trials.21

    Electronic publication will offer opportunities for improving the quality of research and of research reports22 through open peer review of protocols and reduction of publication bias, and by providing a mechanism through which the results of new studies can be set properly within the context of other relevant studies.23 Improvements in the infrastructure needed to support trials24 should mean that clinicians and patients faced with uncertainties about the relative merits of treatment options will more often be able to participate in the research needed to resolve these uncertainties.

    The greatest potential for improving research may lie in greater public involvement. Partly because of perverse incentives to pursue particular research projects,25 26 researchers often seem to design trials to address questions that are of no interest to patients. Greater public involvement could help to reduce this mismatch and ensure that trials are designed to address questions that patients see as relevant. More generally, it will be important to assess whether the public understands and endorses the efforts being made to control biases in assessing the effects of health care.27 So far, the research community has made very little effort to involve the public in discussions about this. All in all, there is plenty of scope for building on the undoubted progress made during the past century.

    Acknowledgments

    I thank Doug Altman, Mike Bracken, Ray Garry, Peter Gøtzsche, Andrew Herxheimer, Tony Hope, Muir Gray, and Ann Oakley for helpful comments on an earlier draft of this article.

    References
