
Research Methods & Reporting

Designing and undertaking randomised implementation trials: guide for researchers

BMJ 2021; 372 doi: https://doi.org/10.1136/bmj.m3721 (Published 18 January 2021) Cite this as: BMJ 2021;372:m3721
  1. Luke Wolfenden, associate professor1 2,
  2. Robbie Foy, professor3,
  3. Justin Presseau, associate professor4 5,
  4. Jeremy M Grimshaw, senior scientist and professor4 6,
  5. Noah M Ivers, associate professor7 8 9 10,
  6. Byron J Powell, assistant professor11,
  7. Monica Taljaard, senior scientist4 5,
  8. John Wiggers, professor1 2,
  9. Rachel Sutherland, programme manager1 2,
  10. Nicole Nathan, programme manager2,
  11. Christopher M Williams, research fellow1 2 12,
  12. Melanie Kingsland, programme manager1 2,
  13. Andrew Milat, director of evidence and evaluation12,
  14. Rebecca K Hodder, research fellow1 2,
  15. Sze Lin Yoong, associate professor13
  1. School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, NSW, Australia
  2. Hunter New England Population Health, Locked Bag 10, Wallsend, NSW 2287, Australia
  3. Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
  4. Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
  5. School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
  6. Department of Medicine, University of Ottawa, Ottawa, ON, Canada
  7. Women’s College Research Institute, Women’s College Hospital, Toronto, ON, Canada
  8. Institute for Health Systems Solutions and Virtual Care, Women’s College Hospital, Toronto, ON, Canada
  9. Department of Family Medicine and Community Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
  10. Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
  11. Brown School and School of Medicine, Washington University in St Louis, St Louis, MO, USA
  12. School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
  13. Swinburne University of Technology, School of Health Sciences, Faculty of Health, Arts and Design, Hawthorn, VIC, Australia
  Correspondence to: L Wolfenden Luke.Wolfenden@hnehealth.nsw.gov.au
  • Accepted 10 August 2020

Implementation science is the study of methods to promote the systematic uptake of evidence based interventions into practice and policy to improve health. Despite the need for high quality evidence from implementation research, randomised trials of implementation strategies often have serious limitations. These limitations include high risks of bias, limited use of theory, a lack of standard terminology to describe implementation strategies, narrowly focused implementation outcomes, and poor reporting. This paper aims to improve the evidence base in implementation science by providing guidance on the development, conduct, and reporting of randomised trials of implementation strategies. Established randomised trial methods from seminal texts and recent developments in implementation science were consolidated by an international group of researchers, health policy makers, and practitioners. This article provides guidance on the key components of randomised trials of implementation strategies, including articulation of trial aims, trial recruitment and retention strategies, randomised design selection, use of implementation science theory and frameworks, measures, sample size calculations, ethical review, and trial reporting. It also focuses on topics requiring special consideration or adaptation for implementation trials. We propose this guide as a resource for researchers, healthcare and public health policy makers or practitioners, research funders, and journal editors with the goal of advancing rigorous conduct and reporting of randomised trials of implementation strategies.

Investments in health research are not fully realised because of delayed and variable uptake of effective interventions by health systems and professionals.123 Implementation science seeks to resolve this problem by generating evidence to facilitate the use and integration of evidence based interventions into health policy and practice.4 Just as well conducted randomised clinical trials can provide robust estimates of the effects of medical and surgical treatments, well conducted randomised trials of implementation strategies (which we refer to as implementation trials) can provide robust assessments of the effects of implementation strategies, such as audit and feedback, training, or reminders, on measures of the uptake and integration of evidence based interventions in healthcare and public health practice.5

Although randomised trials are central to evidence based medicine6 and are a common evaluation design in the field of implementation science,7 concerns have been raised about the quality of implementation trials. Criticisms include high risks of bias, limited use of theory, a lack of standardised terminology to describe implementation strategies, limited measures, and poor reporting.7891011 Progress in the field, however, has been rapid with recent advances in implementation science theory, concepts, terminology, measures, and reporting standards to resolve many of these limitations.121314

This article draws on recent developments in implementation science and established randomised trial methods to provide a best practice guide for improving the development, conduct, and reporting of randomised implementation trials. This guidance was authored by an international interdisciplinary group with expertise spanning implementation science, health services research, behavioural science, public health, trial methods, biostatistics, and health policy and practice. It discusses the application of randomised trial methods in the context of large scale trials of implementation strategies, focusing on aspects that might be unique to implementation studies. Table 1 defines key implementation terms used in the guide.

Table 1

Definitions of key terms in implementation science


Summary points

  • Criticisms of current implementation trials include risks of bias, limited use of theory, lack of standardised terminology to describe implementation strategies, limited measures, and poor reporting

  • This article consolidates recent methodological developments in implementation science with established guidance from seminal texts of randomised trial methods to provide best practice guidance to improve the development and conduct of randomised implementation trials

  • Consideration of such guidance will improve the quality and use of randomised implementation trials for healthcare and public health improvement

Recommendations for the development, conduct, and reporting of randomised implementation trials

When is an implementation trial warranted?

Implementation trials generate scientific knowledge to improve the uptake of evidence based interventions in practice. Researchers should consider several factors when deciding whether a trial of implementation strategies is needed,19 primarily the following:

  • A healthcare or public health intervention that is supported by evidence as effective (ideally by a systematic review of trials);

  • A known evidence-practice gap—that is, verification that the evidence based intervention is not routinely implemented in practice;1920 and

  • Equipoise regarding the effects of an implementation strategy.

The need for a trial and the trial methods used should also be guided by the needs, values, and input of end users and other stakeholder groups. A range of guidance documents is available to identify appropriate groups to engage and to support meaningful research co-design across all phases of trial design, conduct, and dissemination.212223 Key features of successful co-design include clearly articulated roles and responsibilities, provision of research training for end users, clear communication pathways, and frequent interactions between researchers and end users.24

Statement of the implementation trial aim

Randomised implementation trials should have precisely stated aims, defining the population, intervention, comparison, and outcome under investigation. They should also distinguish clearly between the aims of the implementation strategy and the therapeutic intent of the targeted evidence based intervention.12 For example: “The study aimed to assess the effectiveness of audit and feedback (implementation strategy), relative to usual practice (implementation comparison) for improving clinician (implementation population) provision (implementation outcome, and target of the implementation strategy) of nicotine replacement therapy (clinical intervention) to inpatients of a cardiac ward to support smoking cessation (therapeutic intent of the clinical intervention).”

Randomised implementation trials can assess the effect of a given strategy on implementation outcomes alone, or assess both the effectiveness of the intervention on clinical or population health therapeutic outcomes and the effect of the implementation strategy on implementation outcomes.25 Trials with a dual focus are known as effectiveness-implementation hybrid trials (table 2). Type I effectiveness-implementation hybrid designs aim to evaluate the effects of an evidence based intervention and describe or better understand the context for implementation, but do not test an implementation strategy.25 Type II and III hybrid trials test implementation strategies on implementation outcomes.25 Although hybrid designs are suggested to be an efficient means of accumulating evidence to inform implementation, the contribution of type I and II trials to this end could be limited. This can occur when design considerations that preserve the robust assessment of clinical effectiveness questions are prioritised over those needed to assess the effect of an implementation strategy on implementation outcomes.

Table 2

Typical characteristics of conventional clinical or public health trials, effectiveness-implementation hybrid trials, and implementation trials. Adapted from Curran et al, 2012, with permission25


As an example, a type II hybrid trial could express dual aims as follows: “The primary aims of the study were to: i) assess the effectiveness of audit and feedback (implementation strategy), relative to usual practice (implementation comparison) for improving clinician (implementation population) provision (implementation outcome, and target of the implementation strategy) of nicotine replacement therapy (clinical intervention); and ii) to assess the effectiveness of nicotine replacement therapy (clinical intervention), relative to usual care, in improving smoking cessation (therapeutic outcome and therapeutic intent of the clinical intervention) among cardiac inpatients (therapeutic population).”

Recruitment and retention

Implementation trials usually recruit and randomise staff or organisations rather than individual patients. Intervention effects on clinical practice are often assessed using routinely collected, anonymised data. Therefore, implementation trials can be conducted at relatively low cost, with potentially more complete trial data than those from clinical trials that require intensive recruitment and follow-up of patients.2627 Nonetheless, effective recruitment and retention approaches are needed to ensure that all participant groups (patients, clinicians, health services) are broadly representative of the populations to which the findings are intended to generalise. Minimising barriers to participation is therefore critical to maximise external validity. Consent procedures that allow participants to opt out could be appropriate in some circumstances and can result in high levels of participation,28 recruitment of more typical participant groups, and more generalisable effects.293031 Opt out consent was recently used, for example, in a randomised trial of mail-outs and phone calls to improve adherence to secondary preventive treatment after myocardial infarction that used administrative data for outcome assessment.32

For research using active consent procedures, recruitment and retention strategies recommended for patients in clinical trials (such as dedicated recruitment coordinators and reminders for non-responders) also apply to the recruitment of patient groups in implementation trials. Researchers can also leverage the networks of relevant professional associations or governing health authorities,3334 and engage potential trial sites in the design of the study and its recruitment and retention strategies, to minimise the potential burden of participation, ensure acceptability, and facilitate the recruitment of health organisations and clinicians. Because implementation trials aim to promote evidence based practice, they could be more attractive to clinicians and organisations than other types of research, particularly when stepped wedge or delayed control group designs are used, as all sites receive implementation support as part of, or immediately following, follow-up data collection.

Underlying trial philosophy: pragmatic and explanatory trials

Explanatory trials use methods that prioritise internal validity and are undertaken under more ideal research conditions.35 Pragmatic trials emphasise external validity, using methods more closely aligned with real world contexts.35 Explanatory trials ask whether an intervention (or implementation strategy) “can” work; implementation trials are inherently pragmatic because they usually focus on whether an intervention (or implementation strategy) “does” work when delivered in routine clinical or public health contexts.35 As such, the effect sizes of interventions tested in pragmatic trials are typically smaller than those reported in explanatory trials.3637

The pragmatic explanatory continuum indicator summary tool (PRECIS-2) describes the methodological characteristics of explanatory and pragmatic trials and can help researchers undertaking implementation trials to make design decisions consistent with the intended purpose and pragmatic nature of implementation trials.38 The tool requires users to consider trial eligibility criteria, recruitment methods, setting, the expertise and resources required for intervention implementation, the degree of flexibility in the implementation of and adherence to the intervention, follow-up procedures, the selection of relevant primary outcome measures, and analysis. Furthermore, pragmatic trials might require departures from conventional safety and integrity monitoring processes, which have been largely designed for explanatory studies. Simon et al offer guidance on adaptations that could be appropriate across each of the key participant safety and trial integrity obligations.39

Research trial design considerations

Non-randomised study designs are often used in implementation research on the basis that they might be more appropriate or feasible than a randomised controlled trial. However, these designs could report misleading estimates of effect even when experimental groups appear similar on important prognostic factors, and when such factors are considered in analyses.40 Randomised trials have also been suggested to be unnecessary in instances when extreme effects are anticipated, for example, when relative risks are less than 0.25 or greater than 4.41 However, such effect sizes are rarely reported in implementation trials. Because the process of random assignment of an adequate number of units can effectively eliminate the risk of confounding, randomised trials provide the most robust evidence of the effects of implementation strategies. Further, with improving access to and opportunities to use existing routinely collected data, such as registries and electronic medical records, such designs are increasingly feasible.4142

Nonetheless, randomised trials require interventions that can feasibly be assigned at random. The impact of national level legislative or regulatory changes on professional practice, for example, is unlikely to be amenable to evaluation using randomised designs. Complex, adaptive, systems based strategies, and those developed using complexity theory, have been tested in randomised implementation trials,4344 but there are many challenges to doing so, particularly for interventions in open systems without clearly defined boundaries.45 Randomised trials of such strategies may include mixed method research approaches, in-depth case studies, and ethnographic narratives to better understand system interconnectedness, interactions, and impact.45 The development of evaluation methods for these types of interventions has been identified as a priority, and such methods are beginning to emerge.4647

A variety of randomised trial designs can be used in implementation trials (table 3). Researchers undertaking implementation trials should be aware of the relative merits of different randomised designs to inform appropriate design selection.5556 A thorough description of the limitations (and strengths) of randomised trial designs is provided elsewhere and summarised in supplementary file 1.5557 Here, we discuss considerations for the level of randomisation and describe randomised trial designs that can be applied to assess the effects of implementation strategies.

Table 3

Description and key considerations of randomised designs for assessing the effects of implementation interventions


Level of randomisation

In an individually randomised trial, individual participants (that is, patients)55 are randomised to one of two or more parallel groups, and outcomes (eg, clinical effectiveness) are measured at the same level as the unit of randomisation (the patient). Such trials are relatively uncommon in implementation research given that interventions often operate at multiple levels and involve changes to health systems. Most implementation trials using random assignment therefore use cluster randomised designs (also called group randomised designs).7 In these designs, clusters such as hospitals or clinicians are randomised to receive support to implement an evidence based intervention (an implementation strategy) or a comparison condition, and implementation outcome data can be collected from multiple individuals (that is, patients) within each cluster.55 Such outcome data are usually correlated, and this clustering must be accounted for in the design and analysis to obtain valid statistical inferences.58
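For reference, the degree of clustering is conventionally summarised by the intraclass correlation coefficient. The expression below is the standard general definition (not a formula specific to this guide); any value above zero indicates that an analysis ignoring clustering will understate uncertainty:

```latex
\rho = \frac{\sigma^2_{\text{between clusters}}}{\sigma^2_{\text{between clusters}} + \sigma^2_{\text{within clusters}}}
```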

Many levels of clustering are possible in implementation trials: for example, patients can be clustered within clinicians, who could themselves be clustered within a hospital, and hospitals could be clustered within a healthcare organisation. The unit of randomisation should be carefully chosen to reflect the trial aims, and should consider trade-offs between randomising at a higher level to prevent contamination versus randomising at a lower level to increase the number of units available for randomisation. Contamination can occur even in cluster randomised designs, for example when individual clinicians within a hospital are allocated to implementation training and support and then pass on such implementation resources or knowledge to clinicians in the same hospital allocated to a control condition. In such cases, randomising at the level of the hospital or organisation rather than the clinician can help mitigate this risk. On the other hand, if the contamination is not substantial, randomising at a lower level might be preferable from a statistical efficiency perspective.59 The higher the level of randomisation, however, the fewer units (eg, clinics, hospitals) are available to be randomised.

Parallel, two arm, randomised trial

Parallel, two arm, randomised implementation trials compare the effects of an implementation strategy with those of a control or alternative implementation strategy. Conduct of two arm trials is useful when the effects of one implementation strategy are primarily of interest. These trials are more feasible than multi-arm trials and are the most common randomised design used to assess the effects of implementation strategies.6061

Multi-arm randomised trials

Multi-arm randomised trials provide information about the comparative effects of multiple implementation approaches. They represent a more efficient method of testing the effects of implementation strategies than performing sequential two arm trials.49 For example, including three arms in a randomised implementation trial could enable the comparison of two implementation strategies with each other as well as with a comparison condition. In randomised factorial designs, participants (or clusters) are randomised to groups comprising combinations of the experimental conditions. Researchers interested in testing the effects of implementation strategy A as well as those of implementation strategy B within the same trial, for example, might randomise participants into four groups: A alone, B alone, both A and B, and neither A nor B.55 Such designs enable exploration of interactions between strategies, and of the effects of implementation strategies separately and in combination. Fractional factorial randomised trials include larger numbers of strategies but allocate participants to selected (rather than all) strategy combinations, eliminating comparisons that are of no interest to reduce the sample size requirements of the trial.6263

Stepped wedge randomised trials

When an intervention must, for practical, logistical, or organisational reasons, be rolled out to all units in a health system, a stepped wedge design might be useful. In stepped wedge randomised trials,5764 all units such as hospitals (clusters) are first recruited, then randomised to receive the implementation intervention sequentially over time at regular intervals (or steps), until all units have been exposed to the intervention.6566 Trial outcome data are collected at regular intervals throughout the trial, with each unit providing data for both experimental and control conditions (periods). Under some circumstances, the design might require fewer units to participate than parallel arm, cluster randomised trials, particularly when the intraclass correlation is high and cluster period sizes are large. Stepped wedge trials require repeated assessment of outcomes across the trial periods, making these designs most suited to outcomes that can be assessed using routinely collected data. Such designs are increasingly being used in health services and implementation research, although they are vulnerable to increased risks of bias and other complexities that could make them less attractive than parallel arm designs.646567
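To make the step structure concrete, the sketch below generates a hypothetical cluster-by-period exposure schedule (0 = control period, 1 = implementation strategy period) for 12 hospitals randomised to cross over in four steps. The numbers and function name are illustrative only, not drawn from the article:

```python
import random

def stepped_wedge_schedule(n_clusters=12, n_steps=4, seed=42):
    """Build a cluster-by-period exposure matrix for a stepped wedge design.

    Periods = n_steps + 1 (one baseline period with no cluster exposed);
    at each step an equal batch of clusters crosses from control (0) to the
    implementation strategy (1) and remains exposed thereafter.
    """
    rng = random.Random(seed)
    clusters = list(range(n_clusters))
    rng.shuffle(clusters)                 # random order of crossover
    batch = n_clusters // n_steps
    n_periods = n_steps + 1
    schedule = {c: [0] * n_periods for c in clusters}
    for step in range(1, n_periods):
        for c in clusters[: step * batch]:  # clusters exposed from this period on
            schedule[c][step] = 1
    return schedule

if __name__ == "__main__":
    for hospital, exposure in sorted(stepped_wedge_schedule().items()):
        print(f"hospital {hospital:2d}: {exposure}")   # eg [0, 0, 1, 1, 1]
```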

Sequential trial designs

Sequential multiple assignment randomised trials (SMART) are a type of adaptive design used to inform the development of adaptive implementation strategies (or interventions).5368 In an adaptive implementation strategy, the dose, type, or delivery of strategies is modified across several stages based on prespecified decision rules, providing individualised approaches to better meet the specific needs and evolving status of participants. With this design, participants are randomised to different implementation strategy options at each stage.68 For example, clinicians who do not improve implementation of an intervention following the provision of an initial package of implementation strategies could subsequently receive different or more intensive implementation support than clinicians who do improve implementation. The design allows researchers to assess the effect of adaptive approaches and to isolate the effects of specific strategy modifications. Such designs involve complex statistical considerations.
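A minimal sketch of what a prespecified decision rule might look like in a SMART implementation trial. The strategy names, fidelity measure, and response threshold are hypothetical, chosen only to illustrate the logic of tailoring and re-randomisation at the second stage:

```python
import random

def second_stage_strategy(fidelity_after_stage1, responder_threshold=0.70, rng=random):
    """Assign a second-stage implementation strategy based on stage 1 response.

    Hypothetical rule: clinicians meeting the fidelity threshold are re-randomised
    to continue or step down support; non-responders are re-randomised to an
    augmented or an alternative strategy.
    """
    if fidelity_after_stage1 >= responder_threshold:
        return rng.choice(["continue initial support", "step down to reminders only"])
    return rng.choice(["add academic detailing", "switch to practice facilitation"])

# Example: a clinician delivering the intervention in 55% of eligible consultations
print(second_stage_strategy(0.55))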

Hybrid trials

Hybrid trials can use any type of randomised trial design. However, because they focus on assessing the effects of implementation strategies on both clinical effectiveness and implementation outcomes, design modification might be needed (table 3).25 Design modifications may often be required because clinical effectiveness outcomes are usually assessed at an individual level, while implementation outcomes could be assessed at a provider or organisational level. This duality of purpose of hybrid trials can result in research designs in which the assessment of outcomes at one level is nested within a design determined by an outcome at another level. For example, a randomised trial of the introduction of a school nutrition policy might require 100 schools to participate to detect meaningful change in school level policy implementation (implementation outcome), but need to assess students in only a nested random sample of 20 participating schools to identify meaningful improvements in child dietary intake (effectiveness outcome).

Reducing bias in randomised implementation trials

Researchers should be aware that randomised trials are prone to threats to internal validity and should seek to avoid major risks of bias.56 As implementation trials often include multiple outcomes assessed at different levels (organisation, clinician, patient), research design characteristics and risks of bias need consideration at each level. For cluster trials, baseline comparability of groups at both the cluster and individual levels can be difficult to achieve if only a small number of clusters such as hospitals are available for randomisation.6970

In many cluster implementation trials, study sites (clusters) such as clinics might be randomised and allocated before individual (that is, patient level) recruitment. If those identifying and recruiting participants (or the potential participants themselves) are not blinded to allocation, differential recruitment and study participation can occur (selection bias).71 Selection bias is a common problem in clustered designs.72 In the UK BEAM trial, for example, primary care practices were recruited and randomised.73 Clinicians at primary care practices allocated to the experimental arm then received training in guideline based management of back pain, after which patient recruitment commenced. Practice nurses recruited twice as many patients in primary care practices allocated to receive training as in those allocated to usual care, and the characteristics of patients differed between groups. Gatekeepers can also withdraw their health site (cluster) from a trial once informed of group allocation but before individual participant level recruitment.71 Such circumstances can be particularly challenging for intention-to-treat approaches to analyses of trial outcomes, because little is known about the characteristics of those individuals who would have participated in that cluster.74 Selection bias can best be avoided by allocating units after consent and baseline data collection.

In clinical trials, a lack of blinding of participants and personnel delivering an intervention could increase the risk of bias,55 because knowledge of assignment to an intervention might lead to contamination, protocol deviations, or co-intervention. However, blinding of participants and personnel is often inappropriate (and not possible) in implementation trials because they seek to assess the effect of an implementation strategy in individuals or organisations aware of the care given. A range of other strategies could reduce the risks of such biases, including the use of clustered designs,75 simply asking clinicians or patients not to share information, holding trial intervention or implementation strategy sessions that are spatially or temporally separate, and systems to avoid transfer of patients between clinicians.76 The effectiveness of these strategies, however, is unclear. If adequately assessed, statistical approaches can also be used to adjust for contamination in analyses.777879 The Cochrane risk-of-bias tool (version 2)56 for randomised trials provides a comprehensive description of potential risks of bias for various randomised designs and strategies to help identify and reduce such risks.

Models, theories, and frameworks

The lack of explicit descriptions of the mechanisms by which implementation strategies are hypothesised to exert their effects is suggested to reduce the ability to judge the generalisability of trial findings across settings and contexts, to limit understanding of implementation processes, and to slow the cumulative progression of the field.80818283 As such, implementation trials should include an explicit programme theory,81 or a logic model that details the rationale and assumptions about the mechanisms linking implementation strategy (and intervention),84 processes, and inputs to trial outcomes. A programme theory can be developed using informal theory—that is, understanding of the problem and its determinants gained through experience or tacit knowledge by the developers of the intervention. However, we recommend that the use of informal theory be coupled with formal behavioural or implementation theories or frameworks (table 4).85 Although a range of theories and frameworks exist, few are supported empirically,93 and some are known to be of little use in predicting or explaining behaviour.94 Determinant frameworks can be particularly useful in implementation strategy development because they consolidate several behavioural theories and identify a comprehensive range of multilevel factors that are theoretically (or empirically) linked with implementation outcomes. In addition to the extent to which a theory or framework is empirically supported, criteria including usability, testability, familiarity, and applicability should be considered when comparing and selecting a model, theory, or framework.95

Table 4

Description of models, theories, and frameworks used in implementation strategy design. Adapted from Nilsen, 201585


Several useful resources are available to support the application of formal theory in the development of broader programme models and specific implementation strategies.96 French et al propose a four step process for such development (table 5).97 Other systematic methods for developing implementation strategies also exist,99100 which typically involve four common steps: barrier identification, linking barriers to implementation strategy component selection, use of theory, and user engagement.99 Importantly, the development of programme theory and implementation strategies requires a thorough understanding of the problem, its determinants, and the context in which implementation needs to occur, and so should involve considerable end user engagement and formative evaluation.100

Table 5

Suggested steps for the development of a theory informed implementation strategy. Adapted from French et al, 201297


Measures

Trial outcome measures

The selection of outcome measures should be linked directly to the trial's primary and secondary aims and enable the robust quantification of an effect. Proctor and colleagues proposed a taxonomy of eight conceptually distinct implementation outcomes, namely acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability.101 From a trial design perspective, the collective labelling of such measures as “outcomes” is a misnomer that has created some confusion,102 because many of these measures do not lend themselves to the reporting of an effect size. For example, measures of the acceptability of an intervention (or implementation strategy) can only be reported in the trial group receiving it, precluding between group comparisons. Many of these measures might be better aligned to the assessment of implementation processes and other factors influencing implementation.42102

Most implementation trials primarily focus on measuring the extent to which an implementation strategy achieved implementation of the targeted evidence based intervention (eg, a guideline), using measures of professional practice improvement, changes in processes of care, adherence to clinical standards, or the amount or quality of programme or intervention delivery.7 As measures of such outcomes are often unique to the intervention being implemented and its context, generic standard measures are unlikely to be available. Instead, researchers might identify or develop measures that assess their specific implementation outcome and context, for example, using data collected as part of environmental observations, routinely collected administrative records, or questionnaires. The limitations of each of these approaches need to be considered,103 but as trial outcomes, such measures should be robust and sensitive to change. Multiple outcome measures should also be used in trials to provide a more comprehensive appraisal of the effects of an implementation strategy, acknowledging how these measures are related to each other and the inherent limitations of single measures of implementation.42103 For trials focused on assessment of individual patient level outcomes, clinical outcomes should be sufficiently proximal and arise exclusively (or mostly) from the improvements in clinical practice targeted by the implementation strategy.104 For example, in a study to improve survival from heart attack, researchers noted that even if perfect compliance with care standards in a hospital could be achieved, the anticipated changes in cardiac mortality (or survival) would be too small to feasibly detect in a trial.105

Process evaluation

Process evaluation provides important depth to the interpretation of trial outcomes. Qualitative and mixed method approaches can provide insights to better understand how and why implementation might improve (or not) following the application of an implementation strategy, and the key contextual factors that might influence it. Several publications, including a white paper by the Qualitative Research in Implementation Science (QualRIS) group (an expert group convened by the National Institutes of Health), provide guidance for the use of qualitative methods in implementation science, including discussion of design, data collection, and analytical methods as well as recent developments in the field.106107 While several approaches to undertaking process evaluations have been suggested,108109110111 here we offer guidance consistent with the United Kingdom’s Medical Research Council, which suggests that process evaluations include assessment of implementation processes, mechanisms of impact, and contextual factors that shape outcomes.112

Implementation processes

Implementation processes are specific policies, practices, and strategies that are used to establish and support an intervention.101 Table 6 provides a range of measures proposed by Proctor et al101 that might be useful for exploring implementation processes. Such measures, for example, could be used to describe characteristics of the evidence based intervention or of the implementation strategy (table 6). The psychometric properties of a range of existing tools that assess these constructs have recently been reported.113114 Additionally, because evidence based interventions are often adapted by end users (such as clinicians) in the process of their implementation, documenting, recording, and reporting adaptations has been suggested to be important for understanding the effects of efforts to implement evidence based interventions.12 A framework by Stirman et al provides more detailed guidance on how to do so.115 The use of qualitative inquiry has also been recommended by QualRIS to assess adaptation and other implementation processes, while ethnography has been suggested to be well suited to assessing implementation microprocesses at the level of individual interactions.107

Table 6

Implementation measures used to establish and support evidence based interventions. Adapted from Proctor et al, 2011, with permission101


Implementation mechanisms

The mechanism by which an implementation strategy exerts its effects is important to understand in order to identify how these effects might be replicated and improved.112 To develop such an understanding, specific analytical methods can be applied to assess causal assumptions of the pathways specified by the programme theory.116117118119 Such mechanistic evaluations require clear specification of implementation strategies, links between strategy and mechanism, identification of outcomes, and (if relevant) articulation of effect modifiers.119 Some classic theories, implementation theories, and determinant frameworks have existing measures of factors theoretically linked to implementation outcomes. Several reviews of such measures have been published,120 of which the most comprehensive is the Instrument Review Project, funded by the National Institutes of Health.13 Reviews, however, suggest that implementation mechanisms are rarely tested in trials of implementation strategies,121122 and where testing has occurred, it is often undertaken inappropriately. To best understand the multilevel nature and interdependence of factors that might influence implementation, sophisticated quantitative and qualitative methods are required.123124 Lewis and colleagues suggest that common quantitative approaches to mediation testing in implementation trials are suboptimal, and that the product of coefficients approach might be preferable given its capacity to examine single level and multilevel mediation and maximise power.122 Further, qualitative approaches have been suggested to be particularly useful in the absence of established quantitative measures, and structured qualitative inquiry can help deepen an understanding of mechanistic processes.107122 Contemporary guidance on mechanistic evaluation, including how it is applied in implementation science, is provided in more detail elsewhere.122
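As a point of reference for the product of coefficients approach mentioned above, the standard single mediator formulation (a general statistical result, not a method prescribed by this guide) estimates two regressions, where X is the implementation strategy, M the hypothesised mechanism, and Y the implementation outcome:

```latex
M = i_1 + aX + e_1, \qquad Y = i_2 + c'X + bM + e_2
```

The indirect (mediated) effect is the product ab, the direct effect is c', and the total effect is c' + ab; the standard error of ab is commonly approximated by the Sobel formula, sqrt(a²σ_b² + b²σ_a²), although resampling approaches are often preferred. Multilevel extensions of this formulation are needed when strategies, mechanisms, and outcomes sit at different levels.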

Implementation contexts

Context refers to external factors that might act as barriers or facilitators to implementation, or influence the effects of an implementation strategy.12112 Descriptions of context, therefore, provide critical information regarding the external validity of trial findings and enable readers to assess the applicability of the findings to their own setting. Context measures can include measures of the social, political, or economic environment that might influence implementation.12 These include leadership, workforce capacity, readiness to change, and other organisational or patient characteristics.125 Some randomised implementation trials have also used systematic reviews of news archives and of the websites of relevant agencies to assess changes in government policy, guidelines, accreditation standards, or funded programmes that might influence implementation or confound trial outcomes.126127 Quantitative or qualitative measures of context can also be assessed analytically to examine their potential role in shaping implementation processes or outcomes in the context of the broader programme theory.42

Sample size calculation

Sample size calculations estimate the number of participants required to detect the hypothesised effect of an implementation strategy with acceptable power.128129 While sample size calculations for clinical effectiveness trials are based on treatment effects judged to be of sufficient magnitude to provide a clinical therapeutic benefit to a patient,129 sample size calculations for implementation trials need to consider a meaningful or worthwhile effect size for an implementation outcome from a population or system level perspective. Because implementation strategies typically seek to improve the implementation of existing evidence based interventions of known therapeutic benefit, any improvement in implementation may increase the number of patients or the community exposed to (and benefiting from) evidence based healthcare. Strategies that lead to small improvements in implementation might therefore be meaningful from a system perspective if they can be delivered easily, at low cost, and at a population level. Sample size calculations need to use the parameters required for the type of randomised design undertaken, and researchers should follow design specific advice to do so.130 Because implementation trials can have participants at multiple levels, sample size calculations are usually more complicated than those for clinical effectiveness trials, and might need to consider the relative contributions to power of increasing the numbers of participants at each level.
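As a worked illustration of these considerations for a parallel, two arm, cluster randomised implementation trial, the sketch below inflates a standard individually randomised sample size (comparison of two proportions, normal approximation) by the usual design effect, 1 + (m − 1) × ICC. All figures are hypothetical and design specific advice should still be followed for the chosen design:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm_individual(p0, p1, alpha=0.05, power=0.80):
    """Sample size per arm for comparing two proportions (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p0 + p1) / 2
    return ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
             + z_b * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2) / (p0 - p1) ** 2

def clusters_per_arm(p0, p1, cluster_size, icc, alpha=0.05, power=0.80):
    """Inflate by the design effect 1 + (m - 1) * ICC and convert to clusters."""
    n_ind = n_per_arm_individual(p0, p1, alpha, power)
    design_effect = 1 + (cluster_size - 1) * icc
    return ceil(n_ind * design_effect / cluster_size)

# Hypothetical example: guideline adherence expected to rise from 40% to 55%,
# 30 patients sampled per clinic, assumed ICC of 0.05
print(clusters_per_arm(0.40, 0.55, cluster_size=30, icc=0.05))   # clinics per arm
```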

Research ethics review

As implementation trials meet the definition of research (a systematic investigation designed to produce generalisable knowledge) and involve human research participants (which could include health professionals),131 ethical review by an institutional review board is required before trial commencement. Implementation trials can occur in the context of usual service improvement activities, which can complicate the nature of consent for research participation.132133 Because implementation trials often involve participants at multiple levels, research ethics review is also more complicated. Although no specific ethical statements exist pertaining to implementation trials,133 the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomised Trials covers such issues and has recently been applied to trials of knowledge translation interventions.134135 The statement provides guidance to help identify research participants (patients, clinicians, and managers), and lists requirements for organisational governance, assessing benefits and harms, and protecting vulnerable participants (table 7). A key consideration when submitting a protocol to a research ethics committee is identifying the human research participants in the trial.136 Research participants can be identified as any individuals whose interests might be affected as a result of study interventions or data collection procedures.136 In some implementation trials, patients might not be considered research participants (that is, they do not have any study interventions directed at them, nor do they have their identifiable data collected for the purposes of research). When patients are not research participants, their informed consent is not required.137 However, when employees such as clinicians are the recipients of an implementation strategy, are involved in data collection, or have identifiable data collected about them, their consent is required. Approval might also be required from gatekeepers, such as an organisational leader, for such research to be undertaken in their facility.

Table 7

Selected ethical issues included in the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomised Trials that are relevant to implementation trials. Adapted from Taljaard et al, 2013134


Reporting

The Standards for Reporting Implementation Studies (StaRI) guide has been designed specifically to facilitate better reporting of implementation trials and should be used in conjunction with the CONSORT reporting guideline (and extension) specific to the type of randomised trial design used.12 Efforts to test the effectiveness of implementation strategies have been hindered by a lack of conceptual clarity owing to inconsistent definitions and insufficient detail to enable replication.9 To resolve this, StaRI recommends the use of the Template for Intervention Description and Replication (TIDieR) checklist when describing the evidence based intervention that is the subject of implementation.12138 Similar recommendations have been proposed for standardising the description of implementation strategies,15 and implementation researchers should describe implementation strategies using an established taxonomy (eg, the Behaviour Change Technique or Expert Recommendations for Implementing Change taxonomies).915139140 Core and non-core components of the implementation strategy, based on the underlying programme theory, should also be identified and articulated.

Conclusion

High quality randomised trials have a key role in advancing implementation science by providing robust evidence on the effects of approaches to improve the uptake and integration of evidence based practice. With the emergence of more accepted concepts, terminology, processes, and reporting standards in the field, the opportunity to improve the development, conduct, and reporting of such trials is considerable.121314 This article summarises the latest guidance on best practice randomised trial and implementation science methods to fulfil this need for improvement. The development of guidance documents has proved a useful resource for improving the rigour of randomised controlled trials in healthcare and public health.141 This guide is also aimed at journal editors, reviewers, and funders of implementation research as a resource to improve the quality of the implementation science evidence base.

Footnotes

  • Contributors: The manuscript was the product of the collective contribution of a broad multidisciplinary team. All authors are experienced health services and public health researchers. Additionally, the author team includes those with expertise in implementation science (LW, RF, JP, JMG, NMI, BJP, SLY), behavioural science (JP, JW, RKH), randomised trial methods (JMG, JP, MT, NMI, RF, CMW), research ethics (MT, JMG), the application of theory (JP, BJP), biostatistics (MT), and research reporting (JMG, MT). The team also included a range of health policy makers and practitioners (RS, NN, JW, MK, AM, RKH). The guidance draws on this expertise, a range of seminal randomised trial methods texts, and recent developments in implementation science methods, conventions, and standards. All authors contributed to the planning of the manuscript, participated in meetings to develop content, and provided critical manuscript edits and comments on drafts. The drafting of the manuscript was led by LW. LW is the guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: No specific funding was received for this work. LW receives salary support from an Australian National Health and Medical Research Council (NHMRC) career development fellowship (grant APP1128348) and a Heart Foundation Future Leader Fellowship (grant 101175). NMI holds a Canada Research chair (tier 2) in implementation of evidence-based practice and a clinician scholar award from the Department of Family and Community Medicine, University of Toronto, Toronto, Canada. JMG holds a Canada Research chair in health knowledge transfer and uptake and a Canadian Institutes of Health Research Foundation grant (FDN 143269). BJP was supported by the United States National Institute of Mental Health (K01MH113806). CMW was supported by the NHMRC of Australia (APP1177226). RS was supported by an NHMRC TRIP fellowship (APP1150661). RKH was supported by an NHMRC early career research fellowship (APP1160419). SLY is supported by a Discovery Early Career Researcher Award grant from the Australian Research Council (DE170100382).

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Data sharing: No additional data available.

  • The lead author (LW) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

  • Patient and public involvement: Patients and the public were not involved during the process of this research.

  • Provenance and peer review: Not commissioned; externally peer reviewed.


This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

References