Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis
BMJ 2024; 384 doi: https://doi.org/10.1136/bmj-2023-077192 (Published 31 January 2024) Cite this as: BMJ 2024;384:e077192
Linked Editorial: Use of generative artificial intelligence in medical research
All rapid responses
Dear Editor
The integration of artificial intelligence (AI) into scientific research and academic writing is a transformative innovation comparable to the advent of search engines, and it calls for a fresh look at how generated content is defined and used. Historically, content retrieved through keyword queries in search engines served as a foundational layer for scholarly writing and scientific investigation. Viewed in this light, AI-generated information, while requiring disclosure, is not inherently unusable for academic purposes. Research papers have traditionally disclosed the search engines and databases used for literature review, establishing a precedent for transparency in the use of generative tools.
The advent of generative AI (GAI) technologies such as OpenAI's ChatGPT has highlighted AI's capacity to synthesize and generalize data across existing databases, offering a novel tool for research innovation. However, the use of AI in academic literature review faces hurdles, including access restrictions imposed by non-open-access databases and explicit prohibitions by certain publishers, such as the American Chemical Society (ACS), on literature searches conducted with AI tools. Despite these challenges, the potential benefits of integrating AI into research methodologies are significant, offering streamlined processes for data analysis and content creation.
A recent bibliometric analysis by Ganjavi et al. (2024) in The BMJ reveals that only 24% of the top 100 largest academic publishers provided guidance on the use of GAI, with 63% of these guidelines issued by the top 25 publishers. Furthermore, 87% of the top 100 highly ranked journals offered guidance on GAI use, illustrating a growing awareness within the scientific community of the need to establish norms around AI's role in research and publication [1]. This developing landscape underscores the necessity for clear guidelines and transparency in employing AI tools, paralleling the evolution seen in the use of search engines for research.
The Committee on Publication Ethics (COPE) has stated that AI tools cannot meet authorship requirements because they cannot take responsibility for submitted work [1]. This position highlights the ethical considerations inherent in the use of AI in academic writing and underscores the need for accountability and disclosure when employing these technologies. The variability in guidelines across publishers and journals further complicates the integration of AI into scientific writing, suggesting a need for standardized practices that maintain the integrity of scientific output while embracing the innovative potential of AI tools.
In conclusion, the integration of AI into scientific research and academic writing represents a significant shift in the generation and utilization of information, akin to the innovation brought about by search engines. While the use of AI-generated content requires further definition and disclosure, the parallels with traditional research methodologies suggest a foundation for its ethical and practical incorporation into scientific endeavors. As the scientific community navigates the challenges and opportunities presented by AI, the development of standardized guidelines and transparent practices will be crucial in harnessing AI's potential for innovation while ensuring the reliability and integrity of scientific research.
Reference:
1. Ganjavi, C., Eppler, M.B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G.S., Gill, I.S., & Cacciamani, G.E. (2024). Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. BMJ, 384, e077192. DOI: https://doi.org/10.1136/bmj-2023-077192
Competing interests: No competing interests
Re: Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis
Dear Editor
We thank Liu et al. for their comment [1] on our study, which reports the results of a bibliometric analysis examining the presence of authors' guidelines related to the use of generative artificial intelligence (GAI) in academia.
Our paper describes the initial step of a broader effort to develop uniform guidelines for GAI use in research and writing, highlighting the inconsistency in current recommendations across publishing entities, which may lead to confusion and misuse of GAI tools such as ChatGPT in academia [2].
In their comment [1], Liu et al. have rightly and promptly highlighted the need for common, shared guidelines to resolve this issue.
To this end, seven months ago, we established the "ChatGPT and Generative Artificial Intelligence Natural Large Language Models for Accountable Reporting and Use" (CANGARU) project [3-4]. This global, cross-disciplinary project seeks to develop a universally inclusive set of consensus guidelines for the academic use of generative AI, leveraging the Delphi Consensus model. It has garnered the participation of over 3,000 academics from diverse fields worldwide, making it, to our knowledge, one of the largest and most inclusive Delphi consensus efforts in academia. Our initiative stands out for its commitment to representation, welcoming contributors from all academic disciplines without bias towards gender, race and ethnicity, or geographic location.
The Steering Committee is constituted by the Editors-in-Chief and members from the Editorial Boards of preeminent academic journals, in conjunction with representatives from regulatory agencies, publishing entities, and experts dedicated to the formulation of artificial intelligence governance frameworks and guidelines.
Further details about this global, cross-disciplinary initiative can be found in our protocol paper [5], which outlines the project's scope and its response to the inconsistency in ethical guidelines for this technology that our BMJ article highlighted.
We are eager to share the results of the CANGARU initiative [3-5] and appreciate the support from academics and scientists around the world in this effort to protect the future of academia by ensuring the ethical and proper use of GAI going forward.
References
1. https://www.bmj.com/content/384/bmj-2023-077192/rapid-responses (accessed March 6, 2024)
2. Ganjavi, Conner, Michael B. Eppler, Asli Pekcan, Brett Biedermann, Andre Abreu, Gary S. Collins, Inderbir S. Gill, and Giovanni E. Cacciamani. "Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis." BMJ 384 (2024): e077192.
3. https://www.equator-network.org/library/reporting-guidelines-under-devel... (accessed March 6, 2024)
4. Cacciamani, Giovanni E., Inderbir S. Gill, and Gary S. Collins. "ChatGPT: standard reporting guidelines for responsible use." Nature 618, no. 7964 (2023): 238.
5. Cacciamani, Giovanni E., Michael B. Eppler, Conner Ganjavi, Asli Pekcan, Brett Biedermann, Gary S. Collins, and Inderbir S. Gill. "Development of the ChatGPT, generative artificial intelligence and natural large language models for accountable reporting and use (CANGARU) guidelines." arXiv preprint arXiv:2307.08974 (2023).
Competing interests: No competing interests