
Open access
Research Special Paper

Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis

BMJ 2024; 384 doi: https://doi.org/10.1136/bmj-2023-077192 (Published 31 January 2024) Cite this as: BMJ 2024;384:e077192

Linked Editorial

Use of generative artificial intelligence in medical research

  1. Conner Ganjavi, medical student123,
  2. Michael B Eppler, medical student123,
  3. Asli Pekcan, medical student123,
  4. Brett Biedermann, medical student123,
  5. Andre Abreu, professor of urology123,
  6. Gary S Collins, professor of medical statistics4,
  7. Inderbir S Gill, professor of urology123,
  8. Giovanni E Cacciamani, professor of urology and radiology123
  1. 1Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
  2. 2USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
  3. 3Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
  4. 4UK EQUATOR Centre, Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
  1. Correspondence to: G E Cacciamani Giovanni.cacciamani@med.usc.edu (or @Cacciamani_MD on Twitter)
  • Accepted 29 November 2023

Abstract

Objectives To determine the extent and content of academic publishers’ and scientific journals’ guidance for authors on the use of generative artificial intelligence (GAI).

Design Cross sectional, bibliometric study.

Setting Websites of academic publishers and scientific journals, screened on 19-20 May 2023, with the search updated on 8-9 October 2023.

Participants Top 100 largest academic publishers and top 100 highly ranked scientific journals, regardless of subject, language, or country of origin. Publishers were identified by the total number of journals in their portfolio, and journals were identified through the Scimago journal rank using the Hirsch index (H index) as an indicator of journal productivity and impact.

Main outcome measures The primary outcomes were the content of GAI guidelines listed on the websites of the top 100 academic publishers and scientific journals, and the consistency of guidance between the publishers and their affiliated journals.

Results Among the top 100 largest publishers, 24% provided guidance on the use of GAI, of which 15 (63%) were among the top 25 publishers. Among the top 100 highly ranked journals, 87% provided guidance on GAI. Of the publishers and journals with guidelines, the inclusion of GAI as an author was prohibited in 96% and 98%, respectively. Only one journal (1%) explicitly prohibited the use of GAI in the generation of a manuscript, and two (8%) publishers and 19 (22%) journals indicated that their guidelines exclusively applied to the writing process. When disclosing the use of GAI, 75% of publishers and 43% of journals included specific disclosure criteria. Where to disclose the use of GAI varied, including in the methods or acknowledgments, in the cover letter, or in a new section. Variability was also found in how to access GAI guidelines shared between journals and publishers. GAI guidelines in 12 journals directly conflicted with those developed by the publishers. The guidelines developed by top medical journals were broadly similar to those of academic journals.

Conclusions Guidelines by some top publishers and journals on the use of GAI by authors are lacking. Among those that provided guidelines, the allowable uses of GAI and how it should be disclosed varied substantially, with this heterogeneity persisting in some instances among affiliated publishers and journals. Lack of standardization places a burden on authors and could limit the effectiveness of the regulations. As GAI continues to grow in popularity, standardized guidelines to protect the integrity of scientific output are needed.

Introduction

In the past decade, advances in artificial intelligence (AI) have spurred the creation of many AI based tools for use in research.123 Generative AI (GAI) uses large language models to generate unique text or image based responses to user prompts, and it has gained popularity since the release of generative pretrained transformers (GPT)—namely, ChatGPT, launched by the AI research organization OpenAI on 30 November 2022.4 Within two months, ChatGPT had reached 100 million monthly users, at the time the fastest uptake of any technology in history.5 Other similar products are now being developed by major technology companies, such as Google with Bard and MedPalm and Microsoft with Bing Chat.678

The advent of this new technology has resulted in a major upsurge in interest from academia, accompanied by a pronounced acceleration in potential uses. To date, more than 650 research articles and editorials have discussed the applications and pitfalls of GAI, many of which use GAI itself within the research and writing process. Within the context of research and academic writing, studies frequently mention the ability of GAI to improve grammar and vocabulary,9 translate text into various languages,10 propose novel research ideas,9 synthesize large amounts of information,11 suggest statistical tests,12 write code and novel textual content,1012 and streamline the overall research process.13 Authors have been warned that GAI cannot be held accountable for its output, whose pitfalls include the risk of inaccuracy, bias, and plagiarism.1113

In December 2022, Nature published the first article discussing concerns about the use of ChatGPT and GAI in academic writing.14 Since then, journals and publishers have begun updating their editorial policies and instructions to authors to provide guidance on how to disclose the use of GAI in academic research. Science published an article in January 2023 stating its decision to prohibit the use of GAI to generate text, figures, images, or graphics in the writing process, and it views violation of the policy as constituting scientific misconduct.15 Other journals have allowed the use of GAI with restrictions and the requirement for full disclosure.16 The Committee on Publication Ethics (COPE), an organization composed of editors, publishers, universities, and research institutes that helps inform publication ethics across all academic disciplines,17 released a position statement on AI tools in research publications in February 2023,18 emphasizing that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work,” while also suggesting ways to disclose AI use and emphasizing that authors are accountable for the work produced by AI tools.18

Although the current COPE statement on AI has been promptly endorsed by journals (eg, Journal of the American Medical Association1920) and editorial associations (eg, World Association of Medical Editors (WAME)21), it does not provide a comprehensive and functional set of recommendations on key aspects to guide responsible GAI tool usage in scientific writing. Specifically, the statement fails to address certain potential pitfalls of these tools and does not offer a standard disclosure statement detailing specific elements to be included. This gap in standardization has led to a variety of bespoke guidance formulated by individual journals and publishers for dealing with AI usage in scientific publications.22

We examined the extent and nature of guidelines for authors pertaining to the use of GAI across the top 100 largest academic publishers and top 100 highly ranked scientific journals. Our objective was to identify shared characteristics, any methodological details on how guidelines were developed, and variations in the guidelines, with the goal of assessing commonalities and divergences in guidance on GAI in academic publishing.

Methods

Publisher selection and data acquisition

We utilized the list in Nishikawa-Pacher’s study, which identified and ranked the top 100 publishers by number of affiliated journals in their portfolio.23 The largest publisher on the list produced 3763 journals, and the smallest publisher produced 76 journals. In total, these 100 publishers are responsible for the publication of 28 060 journals. Nishikawa-Pacher’s study23 suggested that 30 of these publishers may be considered “predatory,” a term defined by Jeffrey Beall, creator of Beall’s list of potential predatory journals and publishers, to identify those that “publish counterfeit journals to exploit the open-access model in which the author pays. These predatory publishers are dishonest and lack transparency. They aim to dupe researchers, especially those inexperienced in scholarly communication.”24 Although the exact number of journals worldwide is unknown, one estimate suggests around 45 000,25 so the current study captured around two thirds of journals (28 060/45 000, about 62%) as represented by the top 100 largest academic publishers.

We manually searched the official website for each publisher for author guidance pertaining to AI tools broadly, including those based on GAI. We defined GAI guidelines as any guidelines mentioning the use of GPTs, large language models, or GAI. Initial data collection took place during 19-20 May 2023 (six months after the launch of ChatGPT) and a second updated search took place during 8-9 October 2023 to capture additional guidelines and changes in guidance over time. Data collection was completed within a 24 hour period to ensure an accurate snapshot of the available guidelines. We determined the variables of interest before data extraction. After training and piloting the data extraction form, two reviewers (AP and BB) independently collected the data. Discrepancies were resolved by a third reviewer (CG) under the supervision of the senior author (GEC). If a publisher’s website was in a non-English language, we translated the author guidelines into English using Google Machine Translate, as previously done.26 If a publisher failed to provide guidance on GAI, we evaluated at least three of its subsidiary journal websites for the existence of shared guidelines as a proxy for the publisher’s policy. Data extraction focused on determining the presence of author guidelines specifically referencing the use of GAI, as well as the date the guidance was released and whether the guidelines mentioned any validated reporting criteria for the use of GAI in scientific research.

Journal selection and data acquisition

On 4 May 2023, we selected the highest ranked 100 science journals by the H index from Scimago.org (https://www.scimagojr.com), as previously done.27 The highest ranked journal had an H index of 1331 and the 100th ranked journal an H index of 356. A journal’s H index is the largest number h such that h of its published articles have each been cited at least h times. Unlike the journal impact factor, which fluctuates often and was noticeably affected by the covid-19 pandemic,28 the H index is considered to be more stable over time.29 Thus we utilized the H index to best represent the top journals across scientific disciplines with the most sustained influence and leadership in the specialty.
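
As a concrete illustration of the metric (not part of the study’s methods), the following minimal Python sketch computes an H index from a list of per-article citation counts; the example journal and its counts are hypothetical.

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h articles each have at least h citations."""
    h = 0
    # Rank articles from most to least cited; h grows while the article at
    # rank r still has at least r citations.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal with five articles cited 10, 8, 5, 4, and 3 times:
# four articles have at least four citations each, so the H index is 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```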

The official website for each journal was manually searched for guidelines pertaining to AI tools as described above. The data collection took place within the same period and using the same methods as for the publishers. If a journal did not provide guidance on the reporting of GAI, we used the GAI guidelines provided by the journal’s publisher as a proxy only if the journal’s author guidelines or ethics page directly recommended viewing the publisher’s guidelines or linked to them. As with the publishers, data collection for the journals focused on determining the presence of author guidelines specifically referencing the use of GAI, as well as the date the guideline was released and whether it mentioned any validated criteria for the use of GAI in scientific research. A subanalysis was conducted on journals that listed “medicine” in their subject area according to the reporting of subject areas in Scimago, focusing only on the top 100 highly ranked journals. We also included journals in the multidisciplinary category that publish medical papers.

Data presentation

We used descriptive statistics to summarize the data, reporting frequencies and percentages for all categorical variables. Charts and tables are used when appropriate to help with the interpretability of the data.
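
As a brief, hypothetical sketch of this summarization step (the column name and records below are illustrative, not the study’s actual extraction data), frequencies and percentages for a categorical variable can be tabulated as follows:

```python
import pandas as pd

# Illustrative extraction records: one row per journal, with a hypothetical
# categorical field noting whether GAI guidelines were found.
records = pd.DataFrame({
    "has_gai_guidelines": ["yes", "yes", "no", "yes", "no", "yes"],
})

counts = records["has_gai_guidelines"].value_counts()
percentages = (counts / len(records) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percentages}))
```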

Patient and public involvement

As this project primarily focused on assessing journals’ and publishers’ guidelines as they relate to authors, we did not directly involve patients or the public in the completion of this study. We did ask members of the public to read our manuscript after submission.

Results

All of the AI guidelines identified referred to GAI based models or the generative ability of AI, rather than discussing the use of AI more broadly. Of the top 100 largest publishers, 24% had released guidance on GAI. Sixty three per cent (n=15) of the publishers with GAI guidelines were in the top 25 publishers. Additionally, 56% of the publishers cited membership of COPE. Of the 100 highest ranked journals, 87% had released GAI guidelines. Eighty two per cent of the journals cited membership of COPE. Several of the 100 highest ranked journals shared the same publishers; the most represented publishers included Springer Nature, with 19% of its journals in the top 100, followed by the American Chemical Society (10%) and Elsevier (7%).

Author guidance on GAI: Top 100 largest publishers

Twenty four (24%) of the publishers provided specific guidance on AI for authors (table 1 and fig 1). Ten (42%) of the publishers with GAI guidelines also provided a direct link to the COPE position statement on use of AI in research publications. Among the publishers with specific guidelines, 23 (96%) provided information on including GAI as an author, and all 23 explicitly stated that GAI may not be listed as an author. Two publishers (8%), Emerald and BioMedCentral, had a policy prohibiting the submission of AI generated images. Two (8%) publishers, Cambridge University Press and IEEE, indicated that their guidelines only applied to the writing process. Of specific GAI tools referenced, 15 (63%) publishers mentioned large language models and 13 (54%) mentioned ChatGPT. One publisher, Frontiers, mentioned other large language models and generative image models in addition to ChatGPT.

Table 1

Author guidelines on GAI in the 100 largest academic and science publishers

Fig 1

Types of recommendations and types of disclosures for generative AI recommended in author guidelines for top 100 largest academic publishers and top 100 highly ranked scientific journals. A subanalysis was performed of journals listed in the medicine and multidisciplinary subject area of Scimago. AI=artificial intelligence; COPE=Committee on Publication Ethics; GAI=generative artificial intelligence; GPT=generative pretrained transformer; LLMs=large language models

Guidelines for disclosure generally included a combination of whether to report, where in the manuscript to report, or what details to report. All 24 (100%) publishers with guidelines required disclosure in some form, whereas only 10 (42%) specifically used the term “disclose” to describe this process. When publishers provided recommendations on where in the manuscript to include the disclosure, the most common locations were the methods section (n=17, 71%), acknowledgements section (n=13, 54%), and cover letter (n=2, 8%). As to what to disclose, 18 (75%) of the publishers included guidelines on what details should be provided in the disclosure, such as the name, model, and version of the AI tool and the purpose for which AI was used. Only one publisher, Elsevier, provided a specific disclosure template to use and advised that it be included in a new, independent section of the manuscript. Finally, 17 (71%) of the publishers stated that authors were responsible and accountable for the output produced by AI tools. None of the proposed guidelines were listed as being developed using a formal guideline development process.30

Author guidance on GAI: 100 highest ranked journals

Of the 100 highest ranked journals, 87 provided specific guidelines on disclosure of GAI for authors (table 2 and fig 1). All 25 (100%) of the highest ranked journals in the first quarter published AI guidelines, whereas 21 (84%) of the 25 journals in the second quarter, 22 (88%) in the third, and 20 (80%) in the fourth endorsed guidelines. In addition to journal specific guidelines, nine (10%) journals also provided a direct link to COPE’s position statement on the use of AI in research publications. Of the 87 journals, only Science explicitly prohibited any use of GAI tools in the preparation of a manuscript. Other journals that explicitly prohibited GAI in some capacity included Lancet, which limited the use of GAI to improving the “readability and language of the work,” and Blood, which allowed graphical but not textual GAI outputs in submitted work. Eighty five (98%) of the journals had specific guidelines for including GAI as an author. All explicitly stated that AI should not be listed as an author. Nineteen (22%) journals indicated that their GAI guidelines only applied to the writing process. As regards specific GAI tools, 48 (55%) journals mentioned large language models and 44 (51%) explicitly mentioned ChatGPT. Four (5%) journals cited other GAI tools besides ChatGPT.

Table 2

Author guidelines on GAI in top 100 academic and science journals


Guidance for disclosure included a combination of whether, where, or what to disclose. Of the 87 journals with GAI guidelines, 86 (99%) required some type of “reporting,” “documenting,” or “noting,” with Science being the only journal that did not mention disclosure. Forty (46%) journals specifically used the term “disclose.” The journals provided guidance on where in the manuscript to include the disclosure, with the most common locations being the methods (n=56, 64%), acknowledgements (n=43, 49%), cover letter (n=17, 20%), or a new section (n=13, 15%). Thirty five (40%) journals provided recommendations on which details should be included in the disclosure. All 10 Elsevier journals provided a template for the disclosure and advised that it should be included in a new, separate section of the manuscript. Finally, 46 (53%) journals stated that authors were responsible and accountable for the output produced by GAI tools. None of the proposed guidelines was listed as being developed using any formal guideline development process.30

Author guidance on GAI: Medical journals among top 100 journals

Fifty one of the top 100 highest ranked journals could be classified as medical journals according to Scimago. Of the 51 medical journals, 44 (86%) had GAI guidelines for authors (table 3 and fig 1). Four (10%) journals provided a direct link to the COPE position statement on the use of AI in research publications. All (100%) of the journals had specific guidelines for including GAI as an author and explicitly stated that AI should not be listed as an author. Five (11%) journals indicated that the GAI guidelines only applied to the writing process. Of the GAI tools referenced, 23 (52%) journals mentioned large language models and 19 (43%) explicitly mentioned ChatGPT. Four (10%) journals cited other GAI tools besides ChatGPT.

Table 3

Author guidelines on GAI in medical journals within top 100 journals


The medical journals provided guidance on where in the manuscript to include the disclosure, the most common being the methods (n=31, 71%), acknowledgements (n=22, 50%), cover letter (n=9, 21%), or a new section (n=6, 14%). Nineteen (43%) medical journals provided recommendations on which details should be included in the disclosure. Finally, 26 (59%) of the journals stated that authors were responsible and accountable for the output produced by GAI tools.

Consistency of author guidelines

Overall, 58 of the 100 highest ranked journals reported guidelines or policies for disclosure of GAI use on the journal’s website, of which 12 (21%) linked to GAI guidelines on the publisher’s website. For 43 journals, guidelines were listed solely on the journal’s website (ie, the relevant publisher did not report GAI guidelines and the journals did not link to the publisher). Additionally, for three of 58 (5%) journals, the publisher also reported guidelines, but the journals did not link to the publisher’s website.

Of the remaining 42 journals that did not report GAI guidelines on the journal’s website, 25 (60%) linked to the publisher’s website. Nine (21%) journals did not link to the publisher’s website even though the publisher’s website listed AI guidelines. Finally, of the 15 journals that provided guidelines on the journal website and had publishers that reported GAI guidelines, two (13%) of the journals during the first search had guidelines that conflicted with those of the publisher. During the second search on 8 October 2023, 12 (80%) of these journals, including the Elsevier family journals, had guidelines that conflicted with those of the publisher.

Discussion

Information on the use of GAI varied substantially among the top 100 largest academic publishers and top 100 highest ranked scientific journals, with considerable heterogeneity and conflicting guidance. We found that less than a quarter of the publishers and almost 90% of the journals currently have guidelines in place. All the identified AI guidelines mentioned GAI models or discussed the generative ability of AI. Broader AI applications were not discussed, indicating that the journals and their publishers likely developed their own policy or author guidelines in response to the growing popularity of GAI. The information detailed in the guidance and the recommendations posted by each publisher or journal showed notable diversity. Based on the EQUATOR (enhancing the quality and transparency of health research) network’s registered guidelines under development and its list of published reporting guidelines, none of the current journal or publisher guidelines were developed using a formal Delphi consensus based process.

Publishers’ guidelines

Out of the top 100 largest academic publishers, only 24% reported guidelines for the use of GAI in research, and most of these were in the top quarter of publishers by journal count. Our analysis showed that the presence of GAI guidelines was independent of the type of publisher (table 1). Of the publishers that did have GAI guidelines, standardization was limited. Although most of the publishers cited their adherence to COPE guidelines, less than half provided links to the COPE position statement on use of AI.18 Of the publishers that did provide a link, individual guidance did not always align with the COPE statement, creating potential confusion for authors.

Despite substantial heterogeneity in publishers’ guidance, two major themes were identified. Firstly, publishers consistently prohibited GAI from being listed as an author, namely because GAI tools cannot take responsibility for the content they create, a standard principle of authorship and one consistent with COPE’s position.18 Secondly, publishers encouraged the disclosure of GAI use. The nature of this disclosure varied substantially across publishers’ guidelines, such as the appropriate place for it to be cited in the manuscript. Most publishers with GAI guidelines specified which details to include in the disclosure by requesting a variety of reporting criteria, such as the model’s name, version, source, description, and usage. Elsevier provided a standardized template that included the name of the GAI tool or service used and the reason for its use.

Furthermore, the types and uses of GAI tools to which the guidelines applied varied among the publishers. For instance, while some publishers’ guidance only pertained to “AI generated text,” others also encompassed the production of images and data analysis. Several of the guidelines provided vague examples of use, such as “scholarly contributions,” “content creation,” and “preparation of a manuscript,” introducing another element of confusion for authors. Additionally, disclosure criteria for spelling and grammar raise questions about whether tools that integrate GAI technology primarily for that purpose must be held to the same standards for disclosure. This question will need to be answered, as programs such as Grammarly31 and Microsoft Office32 are integrating GAI into their spelling and grammar tools.

Finally, not all of the publishers explicitly required authors to take accountability for the output produced by GAI. This may cause confusion about the responsibility and ownership of content generated by AI tools and should be accounted for by publishers.

During the five months between our two searches of publishers’ guidelines, the number of publishers that reported guidelines increased by 40%, showing the continued interest in development of guidelines. Furthermore, the newer guidelines showed an increased emphasis on image generation and specific disclosure criteria. Although Elsevier updated its guidelines to include a section on image generation, its top journals have yet to update their guidelines to include this information even while providing a link to the publisher; a further illustration of the need for standardized guidelines to ensure proper adherence and minimal confusion for authors.

Journals’ guidelines

Most of the 100 highest ranked journals provided guidelines on the use of GAI in scientific research. Many of these journals shared the same publishers and were produced by large publishing houses, which also have guidelines and policies for GAI. Similar themes to the publishers’ guidelines applied to the guidance for journals, with great variability across guidelines and little standardization. Compared with the publishers, a lower percentage (10%) of journals linked to the COPE position statement on AI.18 Of the journals that did have a link, journal specific guidance did not entirely align with the COPE position statement. Similar to the publishers’ guidelines, the two most consistent themes were that GAI should not be listed as an author and that disclosure of the use of GAI is required. Similarly, variability was found in where to disclose this information, what details to include, and in which format. Journals published by Elsevier provided consistent guidelines requiring the use of a standard disclosure template. Additionally, across the journals, guidelines on the uses of GAI were discordant, with some specifying a combination of the writing process, image generation, or data analysis and collection, and others specifying none. Similar to the publishers’ guidelines, several journals utilized more generalized terms to describe which components of submissions were bound by the GAI guidelines. Lastly, roughly half of the journals specified that authors were accountable for the output produced by GAI. Aside from the Elsevier journals, which implemented a uniform template for disclosure, other examples of journals with well crafted guidelines included BMJ, Physiological Reviews, and PLOS ONE. These journals detailed the circumstances for which disclosure of AI is required, provided thorough and specific criteria for disclosure, including where to cite the information, and emphasized that authors must take accountability for the work resulting from use of GAI tools.

During the five months between the two searches, the number of journals reporting guidelines increased by nearly 25%, primarily those in the lower half of the top 100 highest ranked journals. Similar to publishers’ guidelines, the more recent journal guidance discussed image generation and the role of GAI in the review process. A substantially smaller percentage of the medical journals restricted their guidelines exclusively to the writing process compared with the broader list of scientific journals.

Sources of heterogeneity

We identified sources of heterogeneity in GAI guidelines among the publishers and journals, including in the dissemination of GAI related guidelines. Although some journals not only presented their own GAI guidelines but also provided a direct link to the identical publishers’ guidelines, there were instances where journals issued guidance without providing such a link. Conversely, certain journals solely provided a link to their publisher’s guidelines. This discrepancy results in a lack of centralization of information on the use of GAI. Consequently, the responsibility falls onto authors to seek out and understand the available guidelines. This setup potentially allows authors to inadvertently misuse GAI tools in scientific writing owing to an incomplete understanding of the regulations imposed by journals or publishers.

In addition to a non-centralized location for information on GAI use, we also found several instances of competing recommendations and guidance. The guidelines of some journals, such as the Journal of the American College of Cardiology, contradicted what was cited by their publisher. These inconsistencies pose challenges as authors seek out the appropriate guidelines and must decide which standards to follow.

We also found heterogeneity in the types of words used. Guidelines frequently used terms such as “disclose,” “report,” “describe,” “acknowledge,” and “document” interchangeably when instructing authors on how to present the use of generative AI in their manuscripts. This can lead to confusion, as these words have discrete definitions—for example, a disclosure of a conflict of interest is not the same as an acknowledgement of a contributor in the context of scientific publishing.

Several journals and publishers did not stipulate that authors were accountable for GAI outputs. The COPE position statement on AI asserts that authors are “fully responsible” for their work, including any portion produced by AI. This is important because, as publishers such as Elsevier and SAGE have noted, GAI can produce inaccurate, biased, or misleading outputs.1113 GAI tools are known to “hallucinate” and fabricate unfounded information.1113 Additionally, utilizing GAI tools introduces the risk of plagiarism when text is duplicated from data sources.1233 Another element of complexity, and one acknowledged in the COPE position statement, is that AI tools are “non-legal entities.” AI tools cannot participate in matters of conflicts of interest, copyright, and license agreements. Therefore, they cannot qualify as authors or take responsibility for submitted work. In fact, the issue of copyright and ownership of outputs generated by large language models will likely raise many questions that require discussion and resolution among the relevant stakeholders. Large language models are trained on vast amounts of data, potentially with various regulations and restrictions on access and sourcing. Given these concerns and the rapid adoption of GAI, publishers and journals have responded quickly to develop guidelines on proper use. Journals and publishers have recommended disclosing GAI use in the acknowledgments section. However, since AI is not human, lacks agency, and is unaccountable, there is hesitation to mention GAI in an acknowledgments section alongside human collaborators.34

Heterogeneity, including the incongruence of GAI guidelines between journals and publishers, misalignment with the COPE position statement, and unclear terminology around the disclosure of GAI use, could create confusion for authors and reviewers when incorporating GAI tools into their research. A lack of clear and standardized recommendations along with frequent updates to guidelines places responsibility on authors to seek out “correct” guidance, while also undermining the authority of the guidance by hindering authors’ ability to follow it appropriately. Standardized recommendations would improve transparency and accountability surrounding GAI use in academia and scientific research. Although with time GAI guidelines could evolve to be discipline specific, during this early adoption phase authors and editors would benefit from a set of broadly encompassing, cross discipline, inclusive guidelines. A cross discipline, global initiative, CANGARU (ChatGPT, generative artificial intelligence, and natural large language models for accountable reporting and use guidelines), is ongoing and the results are awaited.22 Additionally, patients may become aware of the growing use of AI in scientific research, and standardized guidelines could increase their trust in the literature.

Limitations and future directions

This bibliometric analysis represents a snapshot at six months and 10 months after the rise in popularity of ChatGPT. Guidelines were developed rapidly and specifically in response to the use of GAI and must adapt to the introduction of other GAI tools. As a result, it is likely that GAI guidance will continue to evolve as our understanding of the technology improves and as greater emphasis is placed on creating policies for GAI use, as already seen in the frequent inclusion of GAI tools besides ChatGPT in guidelines developed more recently. As GAI models continue to develop, different guidelines may need to be implemented to regulate the scope of the technology at that time. Weaknesses of the current study include the limited number of publishers and journals examined. It is possible that other publishers or journals already have a higher standard of guidance on GAI use. Some publishers lacked policies on their websites, and the shared subsidiary journal guidelines that we used as proxies may not always accurately reflect a publisher’s own policy. Furthermore, scholarly societies associated with most of the top 100 highest ranked journals (see table 2) also play a role in the development of guidelines for GAI use. Evaluating the full effect of these societies would have been challenging given each society’s limited representation in the top 100 highest ranked journals, and as such a follow-up study evaluating their impact is needed. Another limitation is that this study was largely qualitative and therefore prone to authors’ subjectivity. To minimize this limitation, however, we used a structured system of multiple reviewers as well as supervision from coauthors at several levels.

Conclusion

Substantial heterogeneity was found in guidance on the use of GAI in academic research and scholarly writing. To our knowledge, none of the proposed recommendations were formulated through a structured, consensus based guideline development process. This scenario highlights an urgent need for the establishment of cohesive, cross disciplinary policies. Such guidance should be crafted in a structured manner, integrating the perspectives of all stakeholders. This approach is crucial to counteract the Tower of Babel phenomenon—that is, the confusion and lack of standardization that result from individual parties creating their own unique regulations.

What is already known on this topic

  • Since late 2022, generative artificial intelligence (GAI) tools, including ChatGPT, have been widely utilized in academic writing and research

  • Stakeholders in the publishing ecosystem, including members of publishing houses, journals, and regulatory agencies, are discussing ways of overseeing this new technology and ensuring its safe use

What this study adds

  • Many of the top 100 largest academic publishers and top 100 highly ranked scientific journals have developed guidelines for authors on the use of GAI tools

  • The guidelines showed substantial heterogeneity in when GAI can be used and in the specifics of how authors should disclose its use

  • This variability highlights the necessity of developing cohesive, cross disciplinary guidelines on GAI use

Ethics statements

Ethical approval

Not required.

Data availability statement

Full data are available from the corresponding author at Giovanni.cacciamani@med.usc.edu.

Footnotes

  • Contributors: CG, ME, and GEC contributed to the design and conduct of the study. CG, AP, and BB contributed to data collection and management. CG, ME, and GEC contributed to the analysis of the data. All authors contributed to the interpretation of the data and the preparation, review, and approval of the manuscript. GEC is the guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: No funding received.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: no support from any organization for the submitted work; no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • The guarantor (GEC) affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.

  • Dissemination to participants and related patient and public communities: Preliminary results of this study were discussed in a news feature in Nature (Nature 2023;622:234-36). After publication, the research findings will be disseminated through press releases, interviews potentially held with local and national media, social media posts on Twitter, and academic conferences.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

  • Generative artificial intelligence: ChatGPT 3.5 was used for the grammatical check of the introduction and discussion paragraphs. The authors validated the output and take full responsibility for the content.


This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

References