
Primary Care

Attitudes to the public release of comparative information on the quality of general practice care: qualitative study

BMJ 2002; 325 doi: https://doi.org/10.1136/bmj.325.7375.1278 (Published 30 November 2002) Cite this as: BMJ 2002;325:1278
  1. Martin N Marshall, professor of general practice (martin.marshall@man.ac.uk)a
  2. Julia Hiscock, senior researcherb
  3. Bonnie Sibbald, professor of health services researcha

  a. National Primary Care Research and Development Centre, University of Manchester, Manchester M13 9PL
  b. National Centre for Social Research, London WC1V 0AX

  Correspondence to: M N Marshall
  • Accepted 27 August 2002

Abstract

Objectives: To examine the attitudes of service users, general practitioners, and clinical governance leads based in primary care trusts to the public dissemination of comparative reports on quality of care in general practice, to guide the policy and practice of public disclosure of information in primary care.

Design: Qualitative focus group study using mock quality report cards as prompts for discussion.

Setting: 12 focus groups held in an urban area in north west England and a semirural area in the south of England.

Participants: 35 service users, 24 general practitioners, and 18 clinical governance leads.

Results: There was general support for the principle of publishing comparative information, but all three stakeholder groups expressed concerns about the practical implications. Attitudes were strongly influenced by experience of comparative reports from other sectors—for example, school league tables. Service users distrusted what they saw as the political motivation driving the initiative, expressed a desire to “protect” their practices from political and managerial interference, and were uneasy about practices being encouraged to compete against each other. General practitioners focused on the unfairness of drawing comparisons from current data and the risks of “gaming” the results. Clinical governance leads thought that public disclosure would damage their developmental approach to implementing clinical governance. The initial negative response to the quality reports seemed to diminish on reflection.

Conclusions: Despite support for the principle of greater openness, the planned publication of information about quality of care in general practice is likely to face considerable opposition, not only from professional groups but also from the public. A greater understanding of the practical implications of public reporting is required before the potential benefits can be realised.

What is already known on this topic

Disclosure of information about quality of care in the NHS has been strongly influenced by the report card movement in the United States

This was based largely on hospital data, with no evidence to determine the attitudes of the British public to the publication of quality reports in general practice

What this study adds

The public and health professionals are in favour in principle of publishing information about quality in general practice but are concerned about the consequences for themselves, the practices, and the health system

People regard public disclosure as a political initiative and are more inclined to trust their own experience or that of friends and family than to trust comparative data

General practitioners perceive comparative reports as a burden, and clinical governance leads are concerned that the reports might damage their facilitative approach to improving quality

Introduction

The dissemination of reports comparing the quality of care provided by healthcare institutions and individual professionals represents an international trend and a central component of UK government plans for the reform of the NHS.1 These so called report cards are expected to improve the accountability of service providers, stimulate improvements in quality, and encourage service users and purchasers to access high quality providers.2 Alongside these potential benefits are well recognised risks: a tendency for organisations to concentrate their efforts on the reported outcomes, a preoccupation with brief reporting cycles at the expense of long term strategic planning, and the potential for misrepresenting or even falsifying data. 3 4

If the benefits of producing and disseminating comparative quality reports are to outweigh the risks, then the report cards will need to be adopted and used by some or all of the key stakeholders—health professionals, managers, and service users. Current evidence—most of which is derived from evaluating hospital report cards in the United States—shows that provider organisations are sensitive and responsive to report cards, whereas individual doctors tend to dislike and ignore them.5 Most American consumers tend not to value or make use of comparative data, although there is some interest from relatively young and well educated members of the public, which seems to be poorly sustained.5-8

As in the United States, most published information in the United Kingdom has reported on the performance of hospitals, 9 10 and the periodic release of comparative information about mortality in patients having cardiac surgery, postoperative complications, outpatient waiting times, and hospital cleanliness is becoming accepted practice. Report cards on primary care are both an inevitable next step and an explicit government policy. 1 11 Compared with hospital report cards, comparative reports on general practice services present some unique challenges (see box). However, current UK government policy is noticeably influenced by what has happened in the United States, and evidence to guide the policy and practice of reporting on primary care in the United Kingdom is lacking.

Differences in public reporting

Hospital sector
  • Reports on services used by small proportion of population

  • Hospitals have a high profile in their locality and in the NHS

  • Routine data of reasonable quality readily available for reporting

  • Distinct outcomes of care measurable, common, and immediate

General practice sector
  • Reports on services used by most of the population

  • General practices have a lower profile in their locality and in the NHS

  • Little routine data available and data are of questionable quality

  • Outcomes of care often less amenable to measurement

We examined the attitudes of the key stakeholders—service users, general practitioners, and quality improvement clinical managers based in primary care trusts—to the public dissemination of comparative information on general practice performance and compared this with evidence from the United States. We use these findings to make recommendations to guide future initiatives on reporting.

Methods

Because we aimed to explore a new issue, we chose to use focus groups to encourage interaction and exchange of ideas between participants.12 We conducted a total of 12 groups: four of service users, four of general practitioners, and four of clinical managers based in primary care trusts, the so called clinical governance leads. For each stakeholder group, half of the groups were held in the north west of England, centred on a high density urban area, and half were held in a rural or semirural locality on the south coast of England.

The participants were selected using a purposeful sampling frame reflecting a broad range of personal, geographical, and organisational characteristics (table). The service users were recruited by a specialist agency using a household survey, and the general practitioners and clinical governance leads were recruited from databases held by the local primary care trusts. The focus groups were led by an experienced moderator (JH, a social scientist who was previously unknown to the participants) and guided by a semistructured schedule derived from current knowledge about public disclosure. The participants were informed that the study aimed to describe and understand what they thought about the public dissemination of comparative information about the quality of general practice services. Initial discussion was broad, exploring general views about the provision of comparative information in non-health sectors as well as health sectors. Following this general discussion, a mock report card was presented to the participants to stimulate and focus discussion (see bmj.com). This report card was based on the relevant literature and on examples of report cards used in the United States2 and comprised data for eight fictitious general practices across a range of quality criteria in three categories—patients' experience, clinical care processes, and practice organisation. The participants were encouraged to criticise and adapt the content and presentation of the data.

Characteristics of participants in focus groups. Values are numbers of participants


The focus groups were held between February and May 2001 in local hotels, each meeting lasting around 2 hours. The service users were given a nominal fee for their participation, and the expenses of the general practitioners and clinical governance leads were reimbursed. The results of the earlier groups were fed into the later groups, and three of the early groups were reconvened to encourage the participants to reflect on and to develop their own views about the issues discussed. The discussion was audiotaped, fully transcribed, and analysed using a computer assisted method (“framework”) that facilitates both thematic analysis and case by case analysis and tracks both individual and group comments.13 The key topics and issues were identified by repeated reading of the transcripts, and the emerging themes were explored and developed in an iterative fashion by the research team. The trustworthiness of the analysis was assessed by triangulation within and between participants and groups and by exploring any differences in data interpretation between the researchers.

Results

Although the study explored a new issue, particularly for the service users, all groups engaged readily with the topic, and the discussion was lively and often heated. All of the participants were familiar with the concept of comparative performance reports and made frequent references to school examination league tables and hospital league tables. Experiences of using these reports influenced the participants' response to the public release of information on general practice.

Four major themes emerged from the data: a difference between the initial reaction and the considered response to the report cards, the usefulness of the data to the key stakeholders, immediate concerns about the principle and practice of report cards, and the wider implications of disseminating comparative information.

Initial versus considered response

The initial reaction both to the idea of performance reports in general practice and to the mock report cards was strongly negative. The dominant feeling, expressed particularly strongly by the service users and general practitioners, was that such reports were unnecessary, unfair, and unwanted. Many of the service users failed to engage with the principle behind comparative reports, doubting that there was important variation in the quality of care provided by different practices. Those who did accept this thought that it was the result of factors outside the control of the practices themselves.

In contrast, analysis of the reconvened groups and the developing views within each group showed that the initial negative response changed over time and that the considered response from all three groups, particularly the service users, was more positive. It seemed that the initial response was based on concerns about the practical problems and consequences of disclosure:

I've got nothing against it in principle. It's purely the practical outcome, the practical consequences of it. The way the press will use it. The way the government will use it. All to fulfil their personal agenda … They will use the information as suits them best and the welfare of the health service will not matter one iota. (General practitioner, male, large semirural practice, 20 years' experience)

The considered response, however, was based on matters of principle—that data on performance are important and useful to service providers, that if information is known then it is only right that it should be in the public domain, and that if it is made public then it is both inevitable and useful for it to be presented in a way that allows meaningful comparisons between organisations.

Usefulness of data to service users

Most of the service users dismissed the idea of using report cards to select the “best” practices. For some of them this represented a preference for geographical convenience, for some a perception that they were not encouraged to exercise choice, and for others a view that they did not want to behave in a consumerist fashion as far as health care was concerned. As one participant stated:

You don't change doctors like you change cars. (Service user, male, 41-60 years, rural area)

In general, however, the unwillingness to exercise choice related to the level of confidence that they had in the comparative information. Even if the data suggested that their own practice or doctor was substandard, they placed greater trust in their own experience or that of friends and family:

If I saw my own doctor being slagged off in the Good Doc Guide, I'd still go to him because personally he suits me and I've got faith in him, because I would know from my own personal experience. (Service user, female, over 60 years, rural area)

The data were given credence by service users in only two situations: when the results confirmed established views about performance and when informal sources of information were absent, such as when patients moved into a new area.

A minority of service users thought that they would want to act on the information if their general practitioner was shown to be performing badly. Some stated that they would quietly change practice without making a fuss. Those who had been registered with their general practitioner for a long period said that they would want to address the issue with the doctor personally. However, they did not doubt that their general practitioner would have an acceptable explanation for the results, and they would rate this more highly than the data. Some of the general practitioner participants stated that they would resent the time required to justify their reported performance to their patients.

Immediate concerns about principles and practice of public reporting

The immediate concerns about report cards focused on the perception of a political motivation behind reporting, the issues of data quality, and the impact on professional morale and behaviour.

Cynical views were expressed by all of the stakeholder groups, particularly the general practitioners, about the politicians' desire to exert control over doctors, to get them to focus on the narrow areas of practice in the reports, and to use the data to serve political ends:

These are measurable things, and it can go into their [the government's] manifesto. (General practitioner, male, medium sized urban practice, 21 years' experience)

I'm very sceptical of figures and things like that, percentages, they can make them do what they want. They can manipulate them, they can doctor anything, can't they? (Service user, male, 41-60 years, rural area, 25 years with same general practitioner)

I suspect that it is a way of undermining the status of doctors in the eyes of their patients. (General practitioner, male, large rural practice, 20 years' experience)

The service users expressed a strong desire to protect their general practitioners from this political interference. Many of the general practitioners and service users thought that report cards were an abrogation of responsibility on the part of government for the performance of the NHS, an attempt to shift the responsibility for performance from the government to the providers. Service users were particularly concerned that report cards would herald competition between practices:

You're trying to get them going against each other, aren't you? It's like competing, isn't it? (Service user, female, 18-40 years, urban area, 11 years with same general practitioner)

They did not think this desirable, and they were concerned that the “winners” would be those who were able to “play the game,” rather than those with genuine good performance. Several service users stated that they would prefer to belong to, and general practitioners stated that they would prefer to work in, practices in the middle of a league table, rather than those at the top; they were suspicious of high performers and assumed that they must be cheating in some way.

Concerns about data quality in general practice were expressed by the general practitioners and clinical governance leads. These included the lack of routinely available data, the questionable reliability and validity of what was available, doubts about the accuracy of what was reported, and the inevitable tendency to “game” the data. In addition, the general practitioners and clinical governance leads expressed doubts that the most important aspects of general practice were amenable to measurement and reporting:

Something that's measurable may not be worth measuring, and maybe you can't measure the things that are worth measuring. What damage do you do by releasing information just because you can measure it? (Clinical governance lead, male, general practitioner background, rural area)

General practitioners and clinical governance leads in particular were concerned about the impact of public disclosure on the stress, morale, and job satisfaction of general practitioners. They saw report cards as another burden at a time of major stress for doctors. The clinical governance leads thought that preserving job satisfaction among general practitioners was important and that report cards would not make general practitioners work any harder:

I don't think that being publicly released or not publicly released is going to make much difference … I don't feel I want to be a good doctor because if I'm a bad doctor the newspapers are going to report me, or that someone else is going to have an opinion on me—I want to be a good doctor so that I feel my patients are getting reasonable care, and if I do something wrong I feel very bad about it. (Clinical governance lead, male, general practitioner background, rural area)

All three stakeholder groups expressed concern that general practitioners would distort their behaviour to improve their reported performance. Service users focused on the risk of general practitioners preferentially registering patients who made their figures look good, whereas the clinicians admitted that report cards might change clinical behaviour. When discussing the “gaming” of data, some general practitioners seemed to differentiate between requests for “ridiculous” information, which they would have no compunction about gaming, and “sensible” data, which they would take more seriously.

The clinical governance leads supported the use of comparative information for internal purposes. They did, however, express concern that the public release of the information would encourage a “name and shame” culture in general practice and that this would run counter to their developmental and supportive approach to implementing clinical governance:

We'll get cover-ups, we'll get further entrenched in our blame culture and away from the culture where we can say “actually, I made a complete cock-up of that.” We're trying to get to a stage where that can be discussed openly, but if we have to put all [this] stuff into the public domain, we won't. (Clinical governance lead, male, general practitioner background, urban area)

Wider implications of comparative reports

All three stakeholder groups considered the wider implications for the NHS of comparative performance reports. Even though most service users doubted that they would change practice themselves on the basis of the information, they expressed concern that others would do so and that this would result in the “good” practices being swamped, to the detriment of those who were already registered with the practice. Both service users and general practitioners feared that performance reports would exacerbate inequalities because better educated and more articulate patients would use the information to select high performing practices, whereas the less educated and more vulnerable patients would be left with “ghetto” practices.

Several general practitioners and some service users expressed concern about the impact of the publication of comparative information on the relationship between patients and their doctors. They were worried that the data might undermine patients' confidence and lead them to question past diagnoses and treatments. Some service users feared that they might be put under increased pressure to comply with advice relating to measured performance.

Discussion

A major policy commitment is to produce and disseminate comparative quality reports in the NHS. We found that although all of the key stakeholder groups—service users, general practitioners, and clinical governance leads—share this commitment in principle, there are considerable concerns about the practical processes and consequences of implementing this initiative in general practice. This opposition represents more than mere indifference: antagonism and mistrust came across from most of the stakeholder groups.

It is unclear whether this opposition will be sustained or whether it is just a question of time before all stakeholder groups engage in the process. It is perhaps inappropriate to expect members of the public in the United Kingdom, so long deprived of information about the performance of the health service, to suddenly behave like rational consumers, weighing up the costs and benefits, making judgments about relative performance, and refusing to access apparently poor practices. 14 15 It is possible that the better informed and more empowered citizens of the future will make greater demands for information. However, a substantial number of people might always view objective data as less relevant and less meaningful than informal sources of information. Some authorities suggest that the rational model of decision making, on which the economic expectations of report cards are based, is fundamentally flawed and that alternative models that recognise the complex beliefs and experiences of individual patients are more useful.16 Nevertheless, there is evidence that both public and professional views of comparative reports become more positive over time.6 This might in part explain the apparent contradiction in this study between the participants' strong negative views and their expectation that others would make use of the information.

Public attitudes to the dissemination of comparative information about performance have received little attention in the United Kingdom. The only example that we could find was an evaluation of the Scottish hospital outcomes reporting initiative.17 In this study local health councils, which were used as a proxy for public opinion, showed little interest in the data. In the United States the public are more positive about the provision of information, although they seem to make little practical use of it.5 6 18 This has been explained by the quality and timeliness of the information provided, although this study suggests that there might be more fundamental explanations relating to the relationship between government, the public, and professionals.19 One of the differences between public reporting in the United States and current initiatives in the United Kingdom is the source of the reports. Early reporting systems in the United States, led by the federal government, engendered a similarly adverse reaction from the key stakeholders,20 whereas more recent initiatives representing coalitions of interest groups have been better received.21 It is therefore possible that non-governmental initiatives in the United Kingdom, such as the reports produced by the Dr Foster group10 and the planned release of comparative data by the Office for Health Care Information of the Commission for Health Improvement, might be seen in a more positive light than initiatives led by the Department of Health.

The willingness of both the professional and the lay participants to consider the wider implications of comparative reports could be interpreted as showing a high level of responsibility for the health service and an unwillingness to destabilise the system by refusing to access organisations that are apparently performing poorly. If this is the case then report cards in the United Kingdom are being introduced in a different context from that in the United States, where the public are viewed as demanding consumers of a service industry.22 This interpretation implies that current expectations of report cards in the United Kingdom should focus more on their potential to improve the accountability and quality of the service and less on consumer empowerment.

Our findings are limited by the chosen methodology and should be interpreted within the context of the current environment in the NHS. The extent to which the wider population holds the views expressed by the participants is unknown. Much discussion on health policy in the United Kingdom is influenced by what is happening in the United States and predicated on the assumption that a consumerist approach to health care will drive quality improvement. However, we found little support for this view among service users themselves. In addition, an examination of attitudes to a future initiative inevitably requires a degree of speculation, and it is possible that the attitudes would have been different if the views were based on real experiences of using report cards. However, preliminary work of this kind is important in identifying the barriers to change, and its omission before the introduction of reporting systems in other countries and before hospital reports in the United Kingdom has led to important problems.23

We found that the implementation of public reporting in general practice will be fraught with challenges. The findings should not, and will not, derail an initiative that has the potential to improve accountability and stimulate improvements in quality. However, the technical barriers, the antipathy of the general public, the impact on professional morale, and the opportunity costs of focusing on public reporting at the expense of other health service reforms should not be discounted. It is important that policy makers, managers, and health professionals understand these barriers, recognise the limitations of directly transferring the experience of public reporting in the United States, and ensure that the implementation of public reporting in the United Kingdom is guided by relevant evidence.

Acknowledgments

Contributors: MM devised the study in conjunction with BS. JH organised and facilitated the focus groups with assistance from MM for four of the groups. JH led the data analysis, with support from MM and BS. MM wrote the first draft of this paper and all authors contributed to subsequent drafts. MM will act as guarantor for the paper.

Footnotes

  • Funding This study was funded by the UK Department of Health through core support for the National Primary Care Research and Development Centre, University of Manchester. The views expressed in the paper represent those of the authors and not necessarily those of the funding body.

  • Conflict of interest None declared.

  • The mock report card appears on bmj.com

References