Editorial
Research and publication ethics with generative artificial intelligence-assisted tools
Cheol-Heui Yun1,2,3
Science Editing 2025;12(1):1-3.
DOI: https://doi.org/10.6087/kcse.362
Published online: February 18, 2025

1Department of Agricultural Biotechnology, Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul, Korea

2Center for Food and Bioconvergence, Interdisciplinary Programs in Agricultural Genomics, Seoul National University, Seoul, Korea

3Institute of Green Bio Science and Technology, Seoul National University, Pyeongchang, Korea

Correspondence to Cheol-Heui Yun cyun@snu.ac.kr
Received: February 11, 2025; Accepted: February 12, 2025

Copyright © 2025 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Two Nobel Prizes in 2024 were awarded for work related to artificial intelligence (AI). The Nobel Prize in Physics went to John Hopfield and Geoffrey Hinton for enabling machine learning with artificial neural networks, and Demis Hassabis and John Jumper were among the three recipients in Chemistry for protein-folding prediction using AlphaFold [1]. This is a significant milestone. Many people, myself included, recall that Time magazine featured a computer on its cover in 1983; four decades later, in 2023, an AI-based chatbot took its place. The 1983 issue on computers [2] included a photograph of a paper sculpture of a man sitting at a table, looking at a computer screen. The chatbot’s response printed on the 2023 cover, “to ensure that AI is developed and deployed in a responsible and ethical manner,” surprised many, as such chatbots respond coherently (although not always accurately) to virtually any question [3].
AI-assisted research output has potential benefits, including greater productivity, improved collaboration, and data-driven insights for a wide range of users. AI has had a significant impact on the academic sector by addressing informational and analytical challenges encountered by individuals, particularly research scientists. It can help researchers write articles more efficiently, overcome communication hurdles, process massive datasets quickly, and detect patterns and trends, enabling new discoveries and more data-driven research papers. It influences many, if not all, sectors of society, including academic publishing. Nonetheless, the application of AI in manuscript preparation, data analysis, and the peer review and editorial processes introduces ethical challenges for authors, reviewers, and editors. For instance, AI-generated research articles pose substantial risks to academic publications, including concerns about quality and authenticity, potential plagiarism and misconduct, overreliance on automation, and the difficulty of identifying AI use during the review process unless it is disclosed. While AI can summarize existing data, it cannot imitate human creativity, and its output risks producing poorly written articles that dilute the academic literature. Furthermore, heavy reliance on automation may limit original thought and critical engagement with study topics.
As AI-assisted tools proliferate, it is essential for editors and reviewers to reliably distinguish between papers generated by AI and those authored by humans. Irrespective of whether AI is used, the fundamental ethical requirements of integrity, transparency, and accountability remain essential. With growing awareness of, and concern about, bias, transparency, and accountability in AI-assisted tools, journals and publishers must establish and implement ethical policies to protect scientific integrity. In particular, they must carefully consider who may use such tools, and when and how they may be used, throughout the publication process.
Conflicts of interest among stakeholders, including authors, reviewers, and editors, must be explicitly disclosed with respect to the use of generative AI-assisted tools. AI systems present security challenges and are being integrated into a wide range of technological devices, creating risks for journals and publishers, such as enabling scams and phishing, data poisoning, and a lack of remedies for misuse [4]. Thus, AI-related research and publication technologies face ethical concerns including attribution, plagiarism prevention, trustworthiness, ethical education, and the integration of empathy [5]. It has been reported that generative AI-assisted tools may have been used in an estimated 1% to 5% of published manuscripts [6]. Key considerations for the use of AI in scholarly societies are as follows: (1) create, publish, and monitor adherence to the journal’s and publisher’s AI policies; and (2) collect and publish declarations of the journal’s, authors’, and reviewers’ use of AI [7]. In keeping with this trend, the EU Artificial Intelligence Act of 2024 classifies AI risks as follows: (1) unacceptable (real-time facial recognition, biometric identification); (2) high (education grading, robotic surgery, employment processes); (3) limited; and (4) minimal [8].
Authorship is one of the major ethical concerns when AI is used to produce articles and books. As AI tools become increasingly capable of generating text and synthesizing research, questions arise about who should be credited as an author. Traditionally, authorship is reserved for individuals who contribute intellectually and substantively to a research project. When an AI assists in drafting or revising a manuscript, however, it challenges these traditional notions. Should the AI be acknowledged, or should the human researcher who used the tool be solely credited? Clear guidelines are needed to ensure that authorship is assigned fairly and transparently. It is important to note that, at present, AI cannot be an author for obvious reasons; for instance, it cannot take responsibility for the work or be held legally accountable. Some journals and publishers allow authors to use generative AI and AI-assisted tools during the writing process prior to submission, but only to improve the language and readability of their paper, and only with appropriate disclosure.
On the other hand, the use of AI in academic writing raises concerns about plagiarism and intellectual property, as AI may duplicate or remix ideas from earlier work, potentially resulting in accidental plagiarism or copyright infringement. AI-generated content must be original and appropriately attributed. Research scientists, journals, and publishers must work together to establish ethical standards that ensure proper recognition and citation of sources and transparent disclosure of AI contributions. Generative AI tools, especially those trained on extensive datasets, may inadvertently reinforce biases inherent in the training data or be vulnerable to data poisoning. This can produce biased or unrepresentative research findings, particularly in fields such as social science and medicine. Researchers employing AI in their studies must carefully evaluate the accuracy and fairness of AI-generated results, ensuring that biases are recognized and addressed. Neglecting this could compromise the credibility of the research and distort scientific knowledge.
AI has the potential to assist authors and enhance the peer review process, yet it also raises questions about user transparency and accountability. Generative AI-assisted tools could help screen manuscripts for technical errors, plagiarism, and even clarity of writing, but human expertise remains essential for evaluating the quality and novelty of the research. AI should support human judgment in the review process, not replace it, lest opportunities to assess a study’s context and importance be missed. The use of generative AI and AI-assisted tools in peer review may infringe upon confidentiality and authorship rights, which has led many journals and publishers to prohibit it strictly; this extends to the peer review report itself, which may contain confidential information about the manuscript and its author(s). Where generative AI and AI-assisted tools are permitted, it is essential for publishers and journals to equip reviewers and editors with evaluation tools, preferably developed in-house but potentially including licensed options. Many small-scale journals, however, may find this unfeasible, escalating concerns about equity, diversity, and inclusion.
In conclusion, AI has great potential to improve academic publishing, but it presents challenging ethical issues. In an AI-driven research ecosystem, researchers, publishers, and institutions must work together to develop clear ethical principles that protect academic work, ensure transparency, and uphold truth and responsibility.

Conflict of Interest

Cheol-Heui Yun has served as the ethics editor of Science Editing since 2020, but had no role in the decision to publish this article. No other potential conflict of interest relevant to this article was reported.

Funding

This work was partially supported by a National Research Foundation of Korea (NRF) grant (No. NRF-2023J1A1A1A01093462).

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed.

