
Information In Practice

Evaluating information technology in health care: barriers and challenges

BMJ 1998;316:1959 doi: https://doi.org/10.1136/bmj.316.7149.1959 (Published 27 June 1998)
Heather Heathfield, senior lecturer (H.Heathfield@doc.mmu.ac.uk),a
David Pitty, director of clinical IT,b
Rudolph Hanka, directorc

a Department of Computing and Mathematics, Manchester Metropolitan University, Manchester
b Healthcare Informatics Team, Royal Brompton Hospital, London
c Medical Informatics Unit, Institute of Public Health, University of Cambridge, Cambridge

Correspondence to: Dr Heathfield, Medical Informatics Unit, Institute of Public Health, University of Cambridge, Cambridge CB2 2SR

Accepted 28 April 1998

There is a strong push for clinical leadership in the development and procurement of information technology in health care.1 The lack of clinical input to date has been cited as a major factor in the failure of information technology in health services2 and has prompted many clinicians to become involved in such endeavours. Furthermore, various clinical decision support systems are available (such as Prodigy3 and Capsule4), the merits of which clinicians are expected to judge.

It is essential that clinicians understand evaluation issues so that they can assess the strengths and weaknesses of evaluation studies, interpret their results meaningfully, and contribute to the design and implementation of studies that provide them with useful information.

Summary points

Clinicians are becoming increasingly involved in the development and procurement of information technology in health care, yet evaluation studies have provided little useful information to assist them

Evaluations by means of randomised controlled trials have not yet provided any major indication of improved patient outcomes or cost effectiveness, are difficult to generalise, and do not provide the scope or detail necessary to inform decision making

Clinical information systems are a different kind of intervention from drugs, and techniques used to evaluate drugs (particularly randomised controlled trials) are not always appropriate

The challenge for clinical informatics is to develop multi-perspective evaluations that integrate quantitative and qualitative methods

Evaluation is not just for accountability but to improve our understanding of the role of information technology in health care and our ability to deliver systems that offer a wide range of clinical and economic benefits

The evaluation dilemma

Decision makers may be swayed by the general presumption that technology is of benefit to health care and should be wholeheartedly embraced. This view is supported by assertions such as that general practitioner computing is seen “as an integral part of the NHS IT strategy,”5 the US Institute of Medicine's statement that computing is “an essential technology for healthcare,”6 and the increasingly high levels of spending on healthcare information technology. On the other hand, decision makers may support the argument that procurement of information technology should be based on the demonstration, in randomised controlled trials, of economic benefits or positive effects on patient outcomes.7-12

Whichever view one takes, evidence is scarce. Large scale pilot initiatives such as the NHS electronic patient record project have yielded only anecdotal evidence, with little or no credence given to the results of external evaluation (“We now know how to do it and it is achievable in the NHS”13). Results from economic analyses and randomised controlled trials of healthcare systems are emerging, but these studies cover only a small fraction of the total number of healthcare applications developed and address a limited number of questions, and most show no benefits to patient outcomes (D L Hunt et al, Proceedings of the 5th Cochrane Colloquium, Amsterdam, October 1997).14

Those who base their judgment on the failure of randomised controlled trials to show improved outcomes may cause important projects to be prematurely abandoned and funding to be discontinued. In contrast, those who heed the proponents of healthcare information technology and base their decisions on unsubstantiated reports of projects, written without external verification, may waste precious NHS resources through the inappropriate and uninformed application of information technology. This is likely to result in repeated failure without retrospective insight, and so does nothing to further the science of system development and deployment. The problem is compounded by the fact that negative results are seen as unacceptable and do not generally become public, so that lessons that could inform future developments are lost.

Problems with inappropriate evaluations

Evaluation can be viewed as having a severe negative impact on the progress of clinical information technology because, in our opinion, many evaluation studies ask inappropriate questions, apply unsuitable methods, and incorrectly interpret results. The evaluation questions most often asked concern economic benefits and clinical outcomes, despite the lack of strong evidence for either and the recognised difficulty of applying results in other contexts.15 The misplaced notion that clinical information technology is comparable to a drug, and should be evaluated as one, has led to the idea that the randomised controlled trial is the optimal method of investigation.16 While a major deterrent to the use of randomised controlled trials has been their cost, they are also vulnerable with respect to external validity: trial results may not be relevant to the full range of subjects (that is, specific implementations of a healthcare application) or to typical uses of a system in day to day practice, and they are likely to cover only a small proportion of the wide range of potential healthcare applications. Furthermore, negative results from such trials cannot help us understand the effects of clinical systems or build better ones in the future.

New directions in evaluation

New perspectives on evaluation are emerging in the domain of health care. Most important is the recognition that randomised controlled trials cannot address all issues of evaluation and that a range of approaches is desirable (Heathfield et al, Proceedings of HC96, Harrogate, 1996).17 As pointed out by McManus, “Can we imagine how randomised controlled trials would ensure the quality and safety of modern air travel …? Whenever aeroplane manufacturers wanted to change a design feature … they would make a new batch of planes, half with the feature and half without, taking care not to let the pilot know which features were present.”18 Others have sought to find surrogate process measures that may be used instead of “prohibitive” outcome measures, thus making randomised controlled trials more cost effective.19

Likewise, workers in clinical informatics have questioned the usefulness of conducting randomised controlled trials on clinical systems. The demonstration of quantifiable benefits in a randomised controlled trial does not necessarily mean that end users will accept a system into their working practices. Research shows that users' satisfaction with information technology correlates more strongly with their perceptions of a system's effect on their productivity than with its effect on quality of care.20-22

These insights have highlighted the need to examine professional and organisational factors in system evaluation and have led to the concept of multi-perspective, multi-method evaluations, which seek to address a number of issues with multiple methods and with evaluators from different backgrounds working together to produce an integrated evaluation. This is coupled with an awareness of the importance of qualitative methods in system evaluation.23-26 The NHS electronic patient record project is an example of a large, multi-perspective evaluation, which includes social scientists, health economists, computer scientists, health service managers, and psychologists and uses a wide range of different methods. However, the problems of conducting large scale evaluations of this type show the need for careful planning in such studies.27

Challenges for evaluating information technology in health care

Clinical systems are embedded in social contexts comprising different people, institutions, providers, and settings. While it is important that we search for causal mechanisms that lead to clinical outcomes, investigating and, possibly, classifying these contexts is essential. This will help us to understand and predict the behaviour of systems and will provide important knowledge to inform further developments. Such research will be facilitated by refocusing attention from debates about specific methods towards issues of multi-method evaluation and the integration of methods and results.

Conclusions

The arguments for performing multi-method evaluations must be acknowledged and taken forward within the clinical informatics community. Information technology is not a drug and should not be evaluated as such. We should look to the wider field of evaluation disciplines, in which many of the issues now facing clinical informatics have already been addressed.

The current political context in which healthcare applications are evaluated emphasises economic gains rather than quality of life. Thus, the role of evaluation has been to justify past expenditure to taxpayers, managers, and others, and evaluation becomes a way of trying to rebuild lost public trust. This is short sighted. Evaluation is not just for accountability, but for development and knowledge building, in order to improve our understanding of the role of information technology in health care and our ability to deliver high quality systems that offer a wide range of clinical and economic benefits.

Acknowledgments

Funding: None.

Conflict of interest: None.

References
