
CC BY-NC Open access
Research

Non-publication and delayed publication of randomized trials on vaccines: survey

BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.g3058 (Published 16 May 2014) Cite this as: BMJ 2014;348:g3058
  1. Lamberto Manzoli, associate professor12,
  2. Maria Elena Flacco, resident physician13,
  3. Maddalena D’Addario, resident physician45,
  4. Lorenzo Capasso, PhD student12,
  5. Corrado De Vito, assistant professor6,
  6. Carolina Marzuillo, assistant professor6,
  7. Paolo Villari, professor6,
  8. John P A Ioannidis, professor78
  1. 1Department of Medicine and Aging Sciences, University of Chieti, Via dei Vestini 5, 66013 Chieti, Italy
  2. 2CeSI Biotech, Via Colle dell’Ara, Chieti, Italy
  3. 3Local Health Unit of Pescara, Italy
  4. 4Division of Clinical Epidemiology and Biostatistics, Institute of Social and Preventive Medicine, University of Bern, Switzerland
  5. 5Division of International and Environmental Health, Institute of Social and Preventive Medicine, University of Bern, Switzerland
  6. 6Department of Public Health and Infectious Diseases, Sapienza University of Rome, Italy
  7. 7Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA
  8. 8Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA, USA
  1. Correspondence to: L Manzoli lmanzoli@post.harvard.edu
  • Accepted 24 April 2014

Abstract

Objective To evaluate the extent of non-publication or delayed publication of registered randomized trials on vaccines, and to investigate potential determinants of delay to publication.

Design Survey.

Data sources Trials registry websites, Scopus, PubMed, Google.

Study selection Randomized controlled trials evaluating the safety, efficacy, or immunogenicity of human papillomavirus (HPV), pandemic A/H1N1 2009 influenza, and meningococcal, pneumococcal, and rotavirus vaccines that were registered in ClinicalTrials.gov, Current Controlled Trials, the WHO International Clinical Trials Registry Platform, the Clinical Study Register, or the Indian, Australian-New Zealand, and Chinese trial registries in 2006-12. Electronic databases were searched up to February 2014 to identify published manuscripts containing trial results. These were reviewed and classified as positive, mixed, or negative. We also reviewed the results available in ClinicalTrials.gov.

Main outcome measures Publication status of trial results and time from completion to publication in peer reviewed journals.

Data synthesis Cox proportional hazards regression was used to evaluate potential predictors of publication delay.

Results We analysed 384 trials (85% sponsored by industry). Of the 355 completed trials (404 758 participants), 176 (151 379 participants) had been published in peer reviewed journals. Another 42 trials (62 765 participants) remained unpublished but reported results in ClinicalTrials.gov. The proportion of trials published 12, 24, 36, and 48 months after completion was 12%, 29%, 53%, and 73%, respectively. When results posted in ClinicalTrials.gov were also counted, results were available for 82% of the trials and 90% of the participants 48 months after study completion. Delay to publication did not differ between non-industry and industry sponsored trials, but non-industry sponsored trials were 4.42-fold more likely to report negative or mixed findings (P=0.008). Negative results were reported by only 2% of the published trials.

Conclusions Most vaccine trials are eventually published or have their results posted in ClinicalTrials.gov, but delays to publication of several years are common. Actions should focus on the timely dissemination of data from vaccine trials to the public.

Introduction

Randomized controlled trials are crucial in providing reliable and timely information about the effectiveness and safety of all healthcare interventions.1 In the case of emerging pandemics with changing or even new infectious agents, such as the pandemic A/H1N1 2009 influenza virus,2 the availability of information on potential vaccines becomes even more time sensitive.3 While a time lag in the dissemination of results may have adverse consequences for the practice of evidence based medicine and for public health in any disease, for epidemic diseases a delay in the publication of relevant randomized controlled trials may distort the evidence available for recommendations, allocation of resources, stockpiling of drugs and vaccines, and other public action.4 Even if trials do eventually get published years later, it may be too late, and the results may have less relevance because of rapid changes in pathogens or vaccines. Even when pathogens or vaccines have not changed and late published trials are still relevant, the losses from having relied on suboptimal evidence over several years can be substantial, given the wide use of many vaccines and the availability of many different formulations. The best evidence and complete data from randomized trials are needed to select the best vaccines and their best formulations for use in wide populations.

A growing body of evidence indicates that a substantial proportion of results from randomized trials remains unpublished or is published only after major delays.5 6 7 Although several studies have estimated the extent of incomplete or selective reporting in various specialties, to our knowledge only two have focused on vaccines.8 9 The first evaluated the completeness of reporting of 70 randomized controlled trials of two vaccines against the CONSORT 2010 checklist but did not assess delay to publication.9 In the other study, we evaluated the delay to publication of randomized trials of a single vaccine against H1N1 and found that most registered and completed trials had not been published in the peer reviewed literature within two years of the onset of the pandemic.8

To examine whether similar problems of non-publication and delayed publication affect randomized controlled trials on a wide variety of vaccines, we updated the previous survey of H1N1 trials and expanded the analysis to several other important vaccines, including human papillomavirus (HPV), rotavirus, pneumococcal, and meningococcal vaccines. We evaluated whether current concerns about non-publication should be extended to the vaccine literature. We also investigated potential determinants, including sponsorship, of non-publication or delayed publication.

Methods

Registered trials

We initially searched for randomized controlled trials that evaluated the efficacy (including immunogenicity) or safety in healthy humans of selected vaccines (HPV, H1N1, meningococcal, pneumococcal, and rotavirus) and had been registered in at least one of several clinical trial registries (US ClinicalTrials.gov, Current Controlled Trials, WHO International Clinical Trials Registry Platform, Clinical Study Register, and the Indian, Australian-New Zealand, and Chinese Clinical Trial Registries) between 1 January 2006 and 31 December 2012. Two investigators independently carried out the search using the search terms: “vaccine OR vaccines OR vaccination OR immunization”, and “pneumococcal” or “influenza” or “flu” or “meningococcal” or “meningococcus” or “rotavirus” or “HPV” or “papilloma virus” (all fields).

We did not include trials registered before 2006 because many of them were registered after the start of the study and some form of selection bias was to be expected (in some of these cases, trial registration even occurred retrospectively, after the decision to publish the results).10 Within the registries we excluded trials that had been withdrawn before the start of enrolment, non-randomized trials, and duplicate registry entries. We considered trials registered after 1 January 2006 to be eligible for this analysis regardless of whether the registration date preceded or followed the reported start date of the trial. Multivariable analyses were repeated excluding all trials that were registered after their start, or only those registered more than three months after the start date: given that all the main results were similar, the final analyses were based on all trials (details available from the authors).

We extracted information from the entries in the trial registry, including completion status, start date, sample size, sponsors, and whether results for the primary outcomes were available in the registry. For trials that were reported as completed but lacked a completion date (n=11), we extracted the expected duration of the study (or, if not available, the expected duration of follow-up) and conservatively added one or two years to that duration (if the expected duration was shorter or longer than one year, respectively). For the five trials that were still listed as not completed even after their results had been published, we imputed the completion date as three months before the publication date. One trial had some results posted on the registry website although it was reported as uncompleted and the expected completion date was December 2014: given that a primary publication was unlikely before formal completion, we conservatively classified this trial as uncompleted.

Six trials did not report a start date: in such instances we used the date of first enrolment if available and, if missing, the date of inclusion in the registry. For sample size we used the number of participants listed in the section “planned or actual enrollment” of the enrollment field.10
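As an illustration only, the date handling rules described above could be coded along the following lines. The variable names (completion_date, start_date, expected_duration_days, and so on) are hypothetical, and this sketch was not part of the original analysis.

```stata
* Illustrative sketch of the date rules described above; variable names are hypothetical.

* Completed trials with a missing completion date: start date plus expected duration,
* plus one year (duration <=1 year) or two years (duration >1 year).
gen completion_imp = completion_date
replace completion_imp = start_date + expected_duration_days + 365 ///
    if completed == 1 & missing(completion_date) & expected_duration_days <= 365
replace completion_imp = start_date + expected_duration_days + 730 ///
    if completed == 1 & missing(completion_date) & expected_duration_days > 365

* Trials still flagged as uncompleted although results were already published:
* impute completion as roughly three months (91 days) before the publication date.
replace completion_imp = publication_date - 91 ///
    if completed == 0 & !missing(publication_date)

* Missing start date: use the date of first enrolment if available, otherwise
* the date of inclusion in the registry.
gen start_imp = cond(!missing(start_date), start_date, ///
    cond(!missing(first_enrolment_date), first_enrolment_date, registration_date))
format completion_imp start_imp %td
```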

Published trials

We searched PubMed, Scopus, and regular Google by trial registration code, principal investigator, country, keywords, and title to identify registered trials that had been published. The publication records on trial registries were also reviewed when available. We classified published reports that could be found by simply typing the registry identification code into PubMed or Scopus as “easily retrievable.” We also checked whether the trial registry identification code was reported in the published papers. When the identification code was not available in a published report, we matched the trial registry entry to the report only if the country, sample size (within 2 standard deviations), sponsor, vaccine type, and main outcomes coincided and the dates were compatible.

We considered a trial to be published if one or more of the main outcomes appeared in a peer reviewed journal, either online or in print. For trials published online ahead of print or those with results published more than once, we always extracted the earliest publication date. The last search update was on 1 February 2014, and we censored completion or publication dates of non-completed or non-published trials at that date.

Twenty four trials were still unpublished six or more years from the completion date. We defined such trials as “long since unpublished.” To obtain additional information on these trials we emailed the contact person or institution listed in the registry, or if that was not possible we made a formal request through the institution’s website form.

Two investigators independently classified the published trial results as “positive” if the vaccine was efficacious or highly immunogenic, with no serious vaccine related adverse events; “mixed” if the results for the primary outcomes were positive but those for other important outcomes were not; and “negative” if the vaccine showed unequivocally low efficacy or immunogenicity or serious vaccine related adverse events. We classified trials aimed at determining the optimum dose as positive if at least one of the doses or formulations showed high efficacy or immunogenicity and none reported serious adverse events. We did not use a single cut-off point for efficacy or safety outcomes to assign a negative or positive label; instead we took into account the average literature values for each specific vaccine and outcome—that is, we considered a seroconversion rate of less than 50% for an influenza vaccine to be a negative value (further details are available from the authors). The inter-rater agreement between the two investigators was 87.6%. Disagreements were resolved through consensus, with the exception of four trials that required the opinion of a third investigator (LM). The results of four trials were published only as part of a pooled analysis in a large integrated database.11 In descriptive analyses we assigned the same judgment to all four trials.
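A minimal sketch of how the raw inter-rater agreement could be computed is shown below; rater1 and rater2 are hypothetical variables holding the two investigators' independent classifications, and this check is illustrative rather than part of the published analysis.

```stata
* Raw per-trial agreement between the two raters (hypothetical variables,
* coded 1=positive, 2=mixed, 3=negative).
gen byte agree = rater1 == rater2
summarize agree          // the mean of agree is the raw proportion of agreement
kap rater1 rater2        // Cohen's kappa, a chance-corrected alternative
```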

Statistical analyses

We evaluated the time from the start of a trial to its publication using Kaplan-Meier analysis, considering all registered trials. We also evaluated with the log-rank test whether the time to publication differed between sponsors, and then tested with Cox proportional hazards analysis whether the hazard of publication depended on the sponsor, sample size (log transformed), and type of vaccine, adjusting for the start date. We performed both univariate and multivariable analyses, the latter including all these covariates. In secondary analyses we evaluated the time from the start of a trial to its completion and the time from completion to publication. We used the Schoenfeld test to check the proportional hazards assumption for all models and plotted Nelson-Aalen cumulative hazard estimates. We selected covariates a priori; none of the other extracted trial or sample characteristics was significantly associated with time to publication when added to the final models.
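For readers who wish to reproduce a similar analysis, a minimal Stata sketch of the survival modelling is given below. It assumes hypothetical variable names (months_to_pub, published, industry_sponsor, sample_size, vaccine_type, start_year) and is not the authors' original code.

```stata
* Illustrative survival analysis of time to publication (hypothetical variable names).
* Unpublished trials are censored at the search date (1 February 2014).
stset months_to_pub, failure(published)

* Kaplan-Meier curves and log-rank test by sponsor type
sts graph, by(industry_sponsor)
sts test industry_sponsor

* Cox proportional hazards model: sponsor, log-transformed sample size,
* vaccine type, adjusted for the start year of the trial
gen log_n = ln(sample_size)
stcox industry_sponsor log_n i.vaccine_type start_year

* Proportional hazards assumption (Schoenfeld residuals) and
* Nelson-Aalen cumulative hazard plot
estat phtest, detail
sts graph, cumhaz by(industry_sponsor)
```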

The analyses predicting time from the start or completion of a trial to publication were repeated after including, among the published trials, those that were not published in the peer reviewed literature but had primary outcome results reported in ClinicalTrials.gov.12 For these analyses, the publication date for unpublished trials was the date reported after the wording “results first received”; for published trials it was the earlier of the journal publication date and the date on which the results were made available on ClinicalTrials.gov.
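As a small illustrative sketch (again with hypothetical variable names), the "earlier of the two dates" rule can be implemented as follows:

```stata
* Earliest public dissemination date: journal publication or ClinicalTrials.gov
* "results first received", whichever came first. min() ignores missing values,
* so unpublished trials simply keep the registry results date.
gen first_dissemination = min(publication_date, ctgov_results_date)
format first_dissemination %td
```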

Analyses were performed with Stata 11.1 (StataCorp, College Station, TX, 2011). P values are two tailed.

Results

Trial characteristics and publication of trials

Overall, 384 randomized controlled trials of vaccines against the pandemic A/H1N1 2009 influenza virus (n=83), HPV (n=60), rotavirus (n=117), pneumococcus (n=83), and meningococcus (n=41) were registered in 2006-12 (table 1). The figure shows the trial selection process. The two investigators agreed on inclusion for 95% (n=363) of the 384 registered trials, and the remaining discrepancies were resolved through consensus.

Fig 1

Flow diagram of trial selection process

Table 1

 Selected characteristics of sample of randomized controlled trials (RCTs), overall and by vaccine


Most of the registered trials (n=355, 92%) had been completed. After a median of 26.4 months from completion, 176 of the 355 completed trials (50%) had been published by February 2014. Another 42 (12%) of the 355 completed trials had primary outcome results available in ClinicalTrials.gov but were not published in a peer reviewed journal. The proportion of published trials varied by vaccine, ranging from 43% (rotavirus; n=15/35) to 64% (2009 H1N1; n=53/83); when results posted in the registry were also included, it ranged from 54% (meningococcal; n=58/108) to 72% (H1N1; n=60/83). According to registry or publication data, the 355 completed trials included 404 758 planned or actual participants. Of those, the 176 published trials included data on 151 379 participants (37.4%), and the 42 unpublished trials with results in ClinicalTrials.gov included data on 62 765 participants (15.5%). At 12, 24, 36, and 48 months from completion, the proportion of published trials in Kaplan-Meier analyses was 12% (n=40/337), 29% (n=84/292), 53% (n=138/260), and 73% (n=170/233), respectively (table 1). Of the 63 trials that were not published within 48 months of completion, only six (10%) were published later, but another 20 (32%) had results posted in ClinicalTrials.gov. When trials with results posted in ClinicalTrials.gov were included, results were estimated to be available at 48 months after completion for 82% of the trials (n=210/255) and 90% of the participants (n=208 233/232 299).

Eighteen trials were published in generalist journals with a high impact factor—Lancet (n=6), JAMA (n=5), New England Journal of Medicine (n=5), and BMJ (n=2), but most randomized controlled trials were published in specialized journals such as Vaccine (30%; n=52/176) or the Pediatric Infectious Disease Journal (19%; n=33/176). The median 2011 impact factor of the journals publishing papers was 3.78 (interquartile range 3.59-3.97, table 1). The trial registry identification code was reported in most of the papers (90%; n=159/176), and 75% of the published reports were easily retrievable through the trial registry number in Scopus or PubMed (n=132/176).

Most trials had been registered before the start date (81%; n=311/384) or within three months after it (13%; n=49/384); most had been registered in ClinicalTrials.gov (90%; n=347/384), were restricted to children (59%; n=227/384), reported positive results when published (90%; n=158/176), and were supported by industry (85%; n=326/384) (table 1).

Analysis of trial sponsors

Overall, the 58 non-industry sponsored trials enrolled only 9.7% of the total sample (n=59 141/607 076), and this proportion varied by vaccine, ranging from 2.1% (meningococcal; n=2444/118 813) to 39.2% (rotavirus; n=20 290/51 711) (table 1). Five companies funded 85% (n=276/326) of the industry sponsored trials: GlaxoSmithKline (n=140), Novartis (n=46), Pfizer/Wyeth (n=39), Sanofi-Aventis (n=26), and Merck (n=25). The publication rate of the completed trials varied widely by sponsor (table 2), ranging from 24% (six of the 25 randomized controlled trials sponsored by Merck) to 61% (GlaxoSmithKline; n=81/132). These differences were, however, not confirmed in multivariable analyses; the publication rate was 48% (n=22/46) for studies funded by non-profit institutions versus 50% (n=154/309) for industry sponsored trials. When trials with results in ClinicalTrials.gov were also counted, the proportion of industry and non-industry sponsored trials that were published or had results available increased to 63% (n=194/309) and 52% (n=24/46), respectively (table 2). At 48 months after completion, these proportions were 82% (n=187/227) and 82% (n=23/28), respectively.

Table 2

 Selected characteristics of sample by funding source. Values are percentages (numbers) unless stated otherwise


Trials not sponsored by industry were more likely than industry sponsored trials to report negative or mixed results (32% (7/22) v 7% (11/154), table 2). This difference remained significant when trials with results in ClinicalTrials.gov were also considered, and after adjustment for age class, sample size, starting year, and vaccine type in a logistic model (odds ratio 4.42, 95% confidence interval 1.45 to 13.5, P=0.008).
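A minimal sketch of such a logistic model in Stata is shown below; the variable names are hypothetical and the coding of the outcome (negative or mixed v positive) follows the classification described in the methods.

```stata
* Illustrative logistic model for negative or mixed findings (hypothetical names).
* Outcome: 1 if the trial reported negative or mixed results, 0 if positive.
gen byte neg_or_mixed = inlist(result_class, 2, 3)   // 2=mixed, 3=negative
gen log_n = ln(sample_size)
logistic neg_or_mixed non_industry i.age_class log_n start_year i.vaccine_type
* logistic reports odds ratios; the coefficient on non_industry corresponds to
* the 4.42-fold higher odds of negative or mixed findings reported in the text.
```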

Predictors of time from completion to publication, from start to completion, and from start to publication

Both univariate and multivariable (table 3) analyses showed no statistically significant difference in the time from completion to publication between trials sponsored by for profit and not for profit institutions, either excluding (table 3) or including (see supplementary table 1 on bmj.com) the trials with results reported in ClinicalTrials.gov. Multivariable analysis also confirmed that trials of the H1N1 vaccine were published faster after completion than all other trials (P<0.001; table 3, supplementary table 1 on bmj.com).

Table 3

 Potential predictors of time to publication


Multivariable analysis showed that time to completion was significantly longer for larger trials, for trials on vaccines other than H1N1, and for trials sponsored by not for profit organizations (table 3). Time from start to publication was longer for trials of vaccines other than H1N1; no significant difference was found by sample size or sponsor type (table 3, see supplementary table 1 on bmj.com).

Long since unpublished trials

Supplementary table 2 on bmj.com shows the main characteristics of the 24 trials that remained unpublished six or more years after completion. Most of these (14/24) tested meningococcal vaccines, only one was sponsored by a not for profit institution, and only five had results reported in ClinicalTrials.gov. The 19 trials with no results at all included 11 527 participants. Despite our attempts, none of the people or institutions that we contacted provided any additional information on these long since unpublished trials.

Discussion

Our empirical evaluation found that, after a median of 26 months from completion, about half of the registered randomized trials on five vaccines had been published, and no information was available in the peer reviewed literature for almost two thirds of the participants who had been randomized in these vaccine trials. When trials with results in ClinicalTrials.gov were included, the proportion of trials with available results increased to 61% (53% of the randomized population). However, the results of many trials were posted or published with major delay. At four years after completion, data were available in peer reviewed journals for approximately 80% of the trial participants, or 90% when the results posted in ClinicalTrials.gov were also considered.

Comparison with other studies

Numerous articles on the non-publication of trials across diverse specialties have been published in the past two decades.5 8 10 13 14 15 16 17 18 19 20 The main characteristics of 31 studies have been summarized elsewhere,5 and 10 analyses that were not included are briefly described in supplementary table 3 on bmj.com.8 10 13 14 15 16 17 18 19 20 The publication rate was lower than 60% in 30 of 41 surveys, and a rate higher than 80% was reported by only three studies, which were based on selected samples of randomized controlled trials funded by the US National Institutes of Health,21 funded by the UK National Institute for Health Research Health Technology Assessment programme,22 or approved by Johns Hopkins institutional review boards.23 The vaccine trials that we evaluated were mainly sponsored by industry. Compared with this literature, our estimate that results for approximately 80% of participants are published in journals within four years of completion seems favourable. Previous studies were mostly performed before posting of results in ClinicalTrials.gov was adopted on a substantial scale. Our data show that posting of results in the registry can improve the completeness of the available evidence and offer some reassurance that most trial results on vaccines do become available, but the wait may be long. However, timeliness and relevance of the evidence to current epidemic dynamics are important in this area of medicine, and old trials of some vaccines may be of little value, even if published.24

Problems with timeliness and relevance are highlighted most strongly by pandemics that come and go. In a previous analysis we found that, two years after the emergence of the influenza 2009 H1N1 pandemic, less than 30% of the 73 registered randomized trials of the potential vaccines had been published, representing 38% of the randomized sample size.7 The present update showed a substantial improvement: four years after trial completion, results for approximately 80% of the randomized participants had been published (90% when results in ClinicalTrials.gov were included). However, data published more than four years (or even one year) after a pandemic (and vaccine distribution) are already of little or no value. For the other four vaccines that we assessed, problems of timeliness may not be as acute as for H1N1, but the information is still time sensitive, as it affects guidelines, recommendations, and often annually renewed decisions on the use of these vaccines in public health, as well as the choice of the most effective and safest formulations when multiple formulations of the same vaccine are available.

Policy implications

There are several ethical, legal, economic, and scientific reasons why clinical trial results should be published.5 Both the US Federal Policy for the Protection of Human Subjects and the Declaration of Helsinki25 acknowledge that investigators and sponsors have an ethical obligation to study participants to publish trial results.26 27 Reporting of results has been mandatory for many trials in the United States since 2007.28 Unpublished trials produce no scientific or social benefit, and their expenses, often large, are wasted.13 14 29 30 We suggest that the rationale for publication of trial results should be extended to require not only publication but also timely publication or posting of these valuable results.

A randomized trials agenda in which only fragments of the data are available may lead to a biased literature.5 6 When numerous formulations of an intervention are developed, as is typical, it is not easy to extrapolate inferences from one formulation to another given the missing data.2 31 Also, a substantial literature in various specialties shows that trials with non-significant, less favourable, or even “negative” results are more likely to remain unpublished or to be published late compared with trials whose results are significant or in line with investigators’ hypotheses.7 14 16 17 19 21 23 32 33 34 35 36 37 Furthermore, half of the 16 studies on this topic documented a lower likelihood of publication for trials sponsored by industry.10 23 36 38 39 40 41 Vaccine trials are largely dominated by industry sponsorship; however, in our sample we found no evidence of a longer time from completion to publication for such trials. Only four published trials reported negative findings (five when trials with results in ClinicalTrials.gov were included), so no meaningful analysis of delay to publication according to trial findings could be attempted. However, we did observe a significantly lower proportion of negative and mixed findings in industry sponsored trials. The extremely low proportion of negative results also suggests that selective reporting biases favouring the publication of trials with positive results and positive analyses are possible, or even likely. When non-publication is considerable, published articles, as well as early reviews or meta-analyses that incorporate them, may be unreliable and overestimate the benefits of an intervention.7 42

It is certainly valuable that results for the primary outcomes of 42 of the 179 unpublished trials (23%) were posted in ClinicalTrials.gov (as were those of 77 of the 176 published trials). Posting results on registry websites does not negate the importance of peer reviewed publication,43 44 but our data show that ClinicalTrials.gov can serve a useful role in enhancing the completeness of available information.12

Some considerations about the search for published reports are needed. A previous survey20 found that a substantial proportion of randomized controlled trials (40-45%) are still published without a trial registration code, weakening researchers’ ability to identify multiple publications of the same trial, cross check the published report against the original design, and investigate selective reporting.34 Another study found that 16 of 35 (46%) vaccine trials published in 2006-11 did not report the registration code.9 In our analysis we “easily” retrieved (by simply typing the trial registration code into Scopus or PubMed or following a direct link from the ClinicalTrials.gov registry page) 132 of the 176 published reports. To find the other 44 publications we had to perform more laborious searches, using multiple combinations of title words and trial characteristics, for each of the remaining 223 completed trials. Notably, of these 44 reports, 17 simply did not report the registration code (despite journal guidelines), whereas 27 reported the registration code but it was not indexed, probably in most cases because the code appeared outside the formal text of the paper (for example, in the acknowledgments section). We thus not only support stricter adherence to item 23 of the CONSORT checklist (reporting of the trial registry name and number45) but also suggest that this information should be reported in the abstract so that it is indexed.

Strengths and limitations of this study

Some limitations should be acknowledged. Firstly, we may have missed some published reports. However, given the efforts made to identify publications, any papers that we missed are unlikely to be identified during routine or even systematic searches.10 Secondly, some additional trials may not have been registered. The publication rate of unregistered vaccine trials, if they exist, is unknown, but such trials are likely to be less influential in the current environment, where registration is widely accepted. One study found that 39% of randomized controlled trials published in a Medline sample were not registered in ClinicalTrials.gov.20 Thirdly, registry information, including sample size and time of completion, is inconsistently updated.5 7 46 We used some form of adjustment for the date of completion, and the estimates of both sample size and time to completion of unpublished studies must be considered approximations. However, when we repeated the multivariable analyses excluding the trials without a completion or start date (n=14), the results were similar. Fourthly, even when data are reported in registry entries, they may not be accurate, and primary outcomes or sample size might differ between registry entries (in their various versions) and published reports.47 We therefore examined the history of changes in the registry entries of a random sample of 30% of the 176 published trials (n=53): compared with the registry entry, the sample size differed substantially (by more than 10%) in seven published trials, and only one trial reported a different primary outcome. Such a small rate of variation in the primary outcome could be explained by the relatively high standardization of efficacy and immunogenicity outcomes in the vaccine field.

Finally, 75 trials in our sample (19%) were registered after the study start (n=49 within three months), which may have introduced some bias.6 However, we repeated all analyses excluding such trials with similar results (details available from the authors).

Conclusions

The amount of randomized evidence on vaccines that remains unpublished may be lower than in other medical specialties, but several trials had no results published or posted for many years after their completion. Actions are required to ensure timely public dissemination of trial data in published reports that can be easily linked to trial registration codes. Given that the findings of vaccine trials may require even prompter dissemination than those of other drug trials, a different publication model—including posting the main results on trial registries immediately, even before journal peer review—may be reasonable and should be encouraged. Also, since the approval processes for vaccines and other drugs differ, the publication of findings could be linked more closely to the regulatory review.

What is already known on this topic

  • For epidemic diseases, delayed publication or non-publication of randomized trial results may distort the evidence available for recommendations, resource allocation, drug and vaccine stockpiling, and other public actions

  • A growing body of evidence indicates that a substantial proportion of randomized controlled trial (RCT) results remains unpublished or is published with major delay; only two studies have focused on vaccines

  • One of these two studies did not assess delay to publication; in the other, delay to publication was evaluated for RCTs of a single vaccine within a relatively short time (two years) from the onset of a pandemic

What this study adds

  • Most vaccine trials get published eventually or have results posted in ClinicalTrials.gov, but delays of several years to publication are common

  • Delay to publication did not differ between non-industry and industry sponsored trials, but trials not sponsored by industry were more likely to report negative findings

Notes

Cite this as: BMJ 2014;348:g3058

Footnotes

  • Contributors: All authors participated in the design, analysis, and interpretation of the study. LM, PV, and JPAI were involved in all phases of the study. JPAI and LM led the statistical analysis and assisted MD, LC, MEF, and CD in data extraction. MD, MEF, and CD carried out the methodological quality assessment. LM, MEF, and JPAI wrote the manuscript. LM is the guarantor for all data.

  • Funding: This study was not funded.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organization for the submitted work; no financial relationships with any organization that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Ethical approval: Not required.

  • Data sharing: No additional data.

  • Transparency: The guarantor affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/.

References
