Original Article
Ethical guidelines for the use of generative artificial intelligence and artificial intelligence-assisted tools in scholarly publishing: a thematic analysis
Adéle da Veiga
Science Editing 2025;12(1):28-34.
DOI: https://doi.org/10.6087/kcse.352
Published online: February 5, 2025

School of Computing, College of Science, Engineering and Technology, University of South Africa, Johannesburg, South Africa

Correspondence to Adéle da Veiga dveiga@unisa.ac.za
Received: November 14, 2024; Accepted: December 21, 2024

Copyright © 2025 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Purpose
This analysis proposes guidelines for artificial intelligence (AI) research ethics in scientific publications, with the aim of informing publisher and academic institutional policies and guiding them toward a coherent and consistent approach to AI research ethics.
Methods
A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using ATLAS.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications.
Results
The analysis revealed inconsistencies among publishers’ policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools.
Conclusion
The article highlights a significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed that distinguish between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
Background
Global associations such as the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME) have published general ethical frameworks for publishing, along with position statements on academics’ use of artificial intelligence (AI) tools. COPE recognizes the use of AI in research but states that AI tools such as ChatGPT (OpenAI) and other large language models (LLMs) cannot be listed as authors, and that any use of AI tools as part of the research methodology must be disclosed transparently in the methods section of the manuscript. Although COPE emphasizes that researchers remain responsible for the content of their manuscripts, including sections derived from AI, it provides no further guidelines for the ethical use of AI tools in scientific research [1]. The AI policies that publishers post on their websites also differ from one another regarding the use of AI technology in research [2]. For example, Sage allows researchers to cite ChatGPT as a source for written text [3], Elsevier prohibits it [4], and others lack policies on AI authorship altogether [5]. Several publishers’ AI policies note that they will be updated and revised as this evolving area develops and requirements must be aligned with new developments. The approaches that academic institutions take toward AI use also vary [6], and guidelines for using AI tools in scientific research are lacking [7]. These inconsistencies, together with the general absence of guidelines for AI research ethics, reflect the current knowledge gap regarding the use of generative AI content in scientific research. In light of this gap, guidelines are needed for using AI tools in scientific research to write manuscripts for publication [5]. Academic researchers must understand the benefits and risks of AI technologies, as well as what is and is not recognized as part of the scientific research publication process.
Objectives
This paper proposes guidelines for AI research ethics in scientific research for publication. These guidelines can inform publisher and academic institutional policies to promote a coherent approach to AI research ethics.
Methods

Ethics statement
This study did not involve human subjects; however, per the research ethics policy of the University of South Africa, ethical clearance was obtained for conceptual studies (No. 5774).
Study design
This is a literature-based thematic analysis.
Study setting
A documentary research strategy was applied to review a sample of the top 10 journal publishers’ publication policies that addressed the use of AI in scholarly publications as of October 2024. Thematic analysis in ATLAS.ti (ATLAS.ti Scientific Software Development GmbH; https://atlasti.com/) was conducted to identify themes across the documents, and the themes were then consolidated into the proposed research ethics guidelines for the use of generative AI and AI-assisted tools in scholarly publications.
Data collection
The top 10 academic publishers in 2024 according to Scilit [8] were the following: Elsevier, Springer Nature, Wiley, MDPI, Taylor & Francis, Institute of Electrical and Electronics Engineers (IEEE), Oxford University Press (OUP), American Chemical Society (ACS), Wolters Kluwer Health, and Sage. The publishers’ policies, guidelines, or standards that address AI usage in research and publication were downloaded from their websites during October 2024 and included in the document analysis (Table 1) [3,4,9-20].
Data analysis
Thematic analysis was conducted using ATLAS.ti to identify the themes and subthemes that occurred across the documents.
Statistical analysis
The results were described without statistical analysis.
Results

The key themes, with extracts from the policy and guideline documents, are included in Suppl. 1. Academic publishers have published author and reviewer policies for the use of generative AI and AI-assisted tools in academic research or are in the process of developing them [21]. While all the academic publishers included in the sample are members of COPE, they still take different positions on implementing generative AI and AI-assisted tools as part of the research process for manuscript publication. This section summarizes the recognized and unrecognized uses of AI based on the top 10 academic journal publishers’ documents included in the scope of this study, illustrating the similarities and differences.
Eight of the publishers, as illustrated in Table 1, have an AI policy or cover AI requirements in their research ethics or author policies; three have published guidelines (MDPI, Wolters Kluwer, Wiley); and two have published AI principles applicable to the publisher’s operations (Wolters Kluwer, Wiley) (Suppl. 1). These academic publishers recognize the value that generative AI and AI-assisted tools can add to the efficiency and productivity of the research process, as long as the tools are used ethically and in accordance with the publisher’s AI usage policies. The publishers acknowledge in their policies that the AI field is developing, that they will continue to monitor it, and that, where required, the policies will be updated, adjusted, or refined.
Fig. 1 portrays the main themes and subthemes derived from the academic publishers’ documents. It consolidates publishers’ requirements and can serve as a guideline for the recognized and unrecognized uses of AI in scholarly publishing. The scores in the blocks indicate the number of publishers that addressed each aspect in their documents; the maximum score is 10, since only the top 10 academic journal publishers were included in the sample.
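Conceptually, these scores are simple tallies across coded documents. The following minimal Python sketch is offered purely as an illustration with hypothetical theme labels and codings (the study’s actual coding was performed in ATLAS.ti); it shows how a theme’s score can be computed as the number of publisher documents that address it.

```python
# A minimal sketch (illustrative, not ATLAS.ti output) of the scoring
# logic behind Fig. 1: each publisher document is coded with the themes
# it addresses, and a theme's score is the number of publishers whose
# documents address it (maximum 10, the number of publishers sampled).
from collections import Counter

# Hypothetical codings; the real codings come from the ATLAS.ti analysis.
coded_documents = {
    "Elsevier": {"author accountability", "human oversight", "transparency"},
    "Springer Nature": {"author accountability", "transparency"},
    "Sage": {"author accountability", "human oversight", "transparency"},
    # ... the remaining seven publishers would be coded the same way
}

# Tally how many publisher documents address each theme.
theme_scores = Counter(
    theme for themes in coded_documents.values() for theme in themes
)

for theme, score in theme_scores.most_common():
    print(f"{theme}: {score}")
```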
Recognized uses of AI in scholarly publishing

Author accountability and responsibility

Human authors are ultimately accountable for their work and must control, review, and edit AI-generated content to ensure it is free from errors, hallucinations (incomplete, incorrect, or false information), misleading information, or biases (Elsevier, Sage, Springer Nature, Wiley, Wolters Kluwer). Authors must ensure the entire manuscript meets standards for scientific rigor, accuracy, integrity, and validity (Elsevier, MDPI, Sage, Taylor & Francis). They remain responsible for originality, compliance with research ethics, and avoiding plagiarism or copyright infringement (Elsevier). References and sources must be checked to eliminate fabrications and hallucinations (Sage). Final approval of the manuscript is the author’s responsibility, and accountability for it remains with the human author (Springer Nature).

Human oversight

Rigorous human review and oversight are necessary (Elsevier, Sage, Taylor & Francis, Wiley) to ensure accuracy, integrity, and compliance with publisher policies, as well as to uphold ethics, quality, and research authenticity (Sage). Without proper verification, AI-generated content may incorporate third-party content without appropriate citation or permission, exposing authors to copyright infringement claims [5].
Recognized uses of AI-assisted tools
There is consensus that AI-assisted tools (e.g., Grammarly, Grammarly Inc.) can be applied to the grammar and editing of a manuscript to improve readability and language use, as well as for referencing (e.g., Mendeley, Elsevier; EndNote, Clarivate; or Zotero, Corporation for Digital Scholarship). Only IEEE, MDPI, Sage, and Wolters Kluwer require authors to disclose the use of AI-assisted tools to the editors. AI-assisted tools can also be applied for structure and formatting according to the journal style (Springer Nature, Sage).
Recognized uses of generative AI tools

Pre-writing process

Generative AI tools can be used in the pre-writing phase (Wiley) for idea generation, brainstorming, and identifying and classifying literature (Taylor & Francis). However, this must be accompanied by a rigorous review in which the human author remains responsible and accountable (Wiley).

Research methods

There is consensus that AI and AI-assisted tools can be used as part of the research design or methodology, such as for the analysis of findings or for coding. However, all publishers require transparent disclosure of such use in one of the following places:
  • A cover letter (MDPI, Sage, OUP)
  • The Acknowledgements section (ACS, IEEE, MDPI, OUP, Sage, Taylor & Francis, Wolters Kluwer, Wiley)
  • The Methods section of the manuscript (ACS, Elsevier, MDPI, OUP, Sage, Springer Nature, Taylor & Francis, Wiley)
  • A declaration after the references (Elsevier)
  • A disclosure template (Sage)

The disclosure must state which generative AI tool was used (name and version number), the purpose of its use, and how it was used; for example, a statement along the lines of “The author used [tool name, version] to [purpose] and reviewed and edited the output” would cover these elements.
Unrecognized use

Authorship

All of the publishers subscribe to the COPE principles, which require that an author be able to take responsibility for the manuscript content. The COPE position statement on AI explicitly prohibits AI from being listed as an author [22].
Generative AI tools such as ChatGPT do not fulfill authorship criteria, and all 10 publishers explicitly state that ChatGPT and other LLMs cannot be listed as an author or co-author of a manuscript.
The International Committee of Medical Journal Editors has published guidelines for authorship of manuscripts, according to which all four criteria must be met for someone to qualify as an author or co-author [23].
Generative AI tools can contribute to the research process, but they cannot take responsibility for the design, analysis, and interpretation of the content. There is consensus that human oversight and review are required for generated content and that only a human has the legal right to approve the final version of the manuscript, thereby ensuring the accuracy and integrity of the entire manuscript. AI technologies cannot take responsibility and accountability (Springer Nature, Wiley) for published research, as they have no legal standing, cannot hold or assign copyright (Wiley), and cannot manage licensing agreements (Taylor & Francis). Authors ultimately take accountability and responsibility for valid, original research and for the integrity of the research they produce and publish (Elsevier, MDPI, Sage, Springer Nature, Taylor & Francis, Wolters Kluwer, Wiley).
Restrictions

May not draw recommendations and conclusions

The core tasks of authors must be carried out by humans: reasoning, drawing scientific conclusions, and making recommendations remain the author’s responsibility (Elsevier, Wolters Kluwer).

May not create, modify, or manipulate original data and results

AI may not alter core research data and results (Sage, Wiley). The autonomous creation of content by AI is prohibited (Springer Nature).

May not cite generative AI text

Positions diverge here: Sage allows citing generative AI tools and provides a citation format, whereas Elsevier explicitly prohibits citing AI as an author, in alignment with COPE.

May not generate or change images

Elsevier, Springer Nature, and Taylor & Francis explicitly state that generative AI tools may not be used to create or alter images in manuscripts. ACS, IEEE, and Sage allow this usage with disclosure. MDPI, OUP, Wolters Kluwer, and Wiley require disclosure of any AI-generated content.
Concerns

Biases

Biases in training data and tool development can be replicated in generative AI outputs (Elsevier, Sage, Springer Nature, Taylor & Francis, Wiley), including racism, sexism, and biases against certain populations (Sage).

Quality and reliability

Generative AI tools can introduce hallucinations or misleading information and references (Elsevier, Sage, Springer Nature, Taylor & Francis); presenting such content is regarded as scientific misconduct. These tools may also exclude literature that is not freely available (Sage), leading to incomplete information.

Compliance

Generative AI tools can infringe on intellectual property rights if third-party content is used without proper citation (Sage), leading to plagiarism and infringement (Sage, Springer Nature). The rights of participants can be breached if confidentiality and privacy are not upheld (Taylor & Francis). AI cannot manage licensing agreements (Taylor & Francis) or assign copyright (Wiley).

AI detection tools

Publishers may use AI detection tools to identify inappropriate AI use. These tools can flag potential AI writing but are currently unable to identify fabricated references (Sage) resulting from hallucinations by AI tools.
Discussion

Key results
The study found inconsistencies among top academic publishers’ policies on AI use in scientific research and publications. Thematic analysis revealed that while AI-assisted tools are generally accepted for grammar and formatting, generative AI tools must be used with caution due to risks such as biases and hallucinations. Human authors are responsible for research integrity, and AI cannot be listed as an author.
Interpretation
The proposed guidelines in this paper (Fig. 1) distinguish between the use of AI-assisted tools, a recognized practice in manuscript preparation, and generative AI tools, which pose various challenges to research quality, integrity, and validity. While generative AI tools can be used as part of the research methodology, care must be taken to ensure that the human author still performs the key author tasks and that risks relating to bias, discrimination, quality, reliability, and legal requirements are mitigated. The recognized and unrecognized uses of AI, as summarized in Fig. 1, provide academic researchers with guidance for using generative AI and AI-assisted tools in research. However, the latest publisher guidelines must also be consulted, as this is a developing field in which publishers update their requirements as AI tools and their uses evolve. The guidelines for the use of generative AI from academic societies and international organizations such as COPE, WAME, and the Council of Science Editors are currently inconsistent [24]. Thus, common ethical guidelines or codes of conduct are needed so that the scientific community can follow a consistent approach to using AI tools that upholds research and publication ethics.
Limitations
The study analyzed only the top 10 publishers’ policies, limiting generalizability. As AI rapidly evolves, the proposed guidelines may quickly become outdated. The lack of empirical data and potential selection bias reduce applicability. Disciplinary, legal, and regional differences were not addressed. Furthermore, implementation challenges and ethical complexities were not deeply explored.
Conclusions
This paper identified inconsistencies in guidelines for using AI in scientific research and publications, emphasizing the need for unified ethical standards. Based on an analysis of policies from top academic publishers, it proposes guidelines that distinguish between the accepted use of AI-assisted tools and the cautious use of generative AI tools. The study underscores human authors’ responsibility for research integrity and advocates for a common best practice guideline to ensure consistency and uphold research ethics in the scientific community.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Funding

The author received no financial support for this article.

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Supplementary file is available from the Harvard Database at https://doi.org/10.7910/DVN/JGFYNU.
Suppl. 1. Top 10 journal publisher document extracts for comparison of key themes.
Fig. 1.
Main themes and subthemes for the guidelines on recognized and unrecognized uses of artificial intelligence (AI) in scholarly publishing. The number of journal publishers with each guideline is presented in parentheses. a) Indicates themes/subthemes for which extracts from the publisher documents are included in Suppl. 1.
Table 1.
Top 10 journal publisher documents included in the document review

Publisher: Title [Reference]
1. Elsevier: The use of generative AI and AI-assisted technologies in writing for Elsevier [9]; Generative AI policy for journals [4]
2. Springer Nature: AI policy [10]
3. Wiley: Best practice guidelines on research integrity and publishing ethics [11]; Wiley’s AI principles [12]
4. MDPI: Research and publication ethics [13]; MDPI’s updated guidelines on artificial intelligence and authorship [14]
5. Taylor & Francis: AI policy [15]
6. IEEE: Submission and peer review policies [16]
7. OUP: Ethics [17]
8. ACS Publications: Author guidelines [18]
9. Wolters Kluwer: Authors’ rights and use of AI tools [19]; AI principles [20]
10. Sage Publications: Using AI in peer review and publishing [3]

AI, artificial intelligence; IEEE, Institute of Electrical and Electronics Engineers; OUP, Oxford University Press; ACS, American Chemical Society.
