1Department of Neurology, Kangwon National University Hospital, Chuncheon, Korea
2Interdisciplinary Graduate Program in Medical Bigdata Convergence, Kangwon National University, Chuncheon, Korea
3Department of Pediatrics, Kangwon National University Hospital, Chuncheon, Korea
4Department of Pediatrics, Kangwon National University School of Medicine, Kangwon National University, Chuncheon, Korea
5Department of Neurology, Kangwon National University School of Medicine, Chuncheon, Korea
6Department of Agricultural Biotechnology, Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul, Korea
7Center for Food and Bioconvergence, and Interdisciplinary Programs in Agricultural Genomics, Seoul National University, Seoul, Korea
8Institutes of Green Bio Science and Technology, Seoul National University, Pyeongchang, Korea
Copyright © 2024 Korean Council of Science Editors
This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Conflict of Interest
Cheol-Heui Yun has served as the Ethics Editor of Science Editing since 2020 but had no role in the decision to publish this article. No other potential conflict of interest relevant to this article was reported.
Funding
This work was partially supported by a grant from the National Research Foundation of Korea (NRF; No. NRF-2023J1A1A1A01093462).
Data Availability
Data sharing is not applicable to this article as no new data were created or analyzed.
| Focus | Key point | Reference |
|---|---|---|
| Ethical behavior of machines | Proposed a paradigm of case-supported principle-based behavior. Emphasis on defining ethical guidelines for autonomous machines. | [29] |
| Ethical decisions by machines | Challenges faced by AI-equipped machines like autonomous cars. Minimal requirement to teach machines ethics based on traditional human choices. | [30] |
| AI governance and techniques | Proposed a taxonomy: (1) exploring ethical conundrums, (2) individual ethical decision frameworks, (3) collective ethical decision frameworks, and (4) ethics in human-AI interactions. Highlighted key techniques and future research directions. | [31] |
| AI ethics discourse | Emphasis on language and terminology in AI ethics through a keyword-based systematic mapping study. Importance of specific concepts and their definitions. | [32] |
| AI ethics in research | Quantitative analysis of ethics-related research in leading AI and robotics venues. Emphasis on the need for feasible ethical regulations in the face of rapid technological advancements. | [33,34] |
| AI and education | Importance of ethical training for students. Emphasis on proactive education for ethical decision-making. Assessment of AI’s impact on educational structures and processes. | [23,35] |
| Topic | Description |
|---|---|
| Authorship and AI tools: COPE position statement [40] | This position statement emphasizes the legal and ethical responsibilities that AI tools cannot fulfill and underscores the need for human authors to take full responsibility for the content produced by AI tools. |
| AI and authorship [41] | Levene’s study focuses on the limitations of AI tools in terms of reliability and truthfulness. It asserts that AI tools cannot meet the criteria for authorship and supports the need for human authors to be fully responsible for AI-generated content. |
| AI and fake papers [42] | This study discusses the use of AI in creating fake papers and highlights the need for improved means of detecting fraudulent research. It implies that human judgment, in addition to suitable software, is needed to overcome these challenges. |
| The challenge of AI chatbots for journal editors [43] | This guest editorial elaborates on the challenges that AI chatbots pose for journal editors, including issues with plagiarism detection. It suggests applying human judgment and suitable software to overcome these challenges. |
| Trustworthy AI for the future of publishing [44] | This COPE webinar offers a broader perspective on the ethical issues related to AI’s application in editorial publishing processes. It explores AI’s benefits in enhancing efficiency and accuracy, while also emphasizing key ethical concerns such as bias, fairness, accountability, and explainability. The webinar highlights the necessity of trustworthy AI in the publication process. |
| AI tool | Description |
|---|---|
| GPTZero (GPTZero Inc; https://gptzero.me/) | Offers clarity and transparency into the use of AI in the classroom; predicts whether a document was written by a large language model; provides AI-generated content detection in educational settings and assesses AI’s role in creating educational materials. |
| AI Text Classifier (OpenAI; https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text) | Specialized in distinguishing between human- and AI-written text, using a fine-tuned GPT model. |
| Academic AI Detector (PubGenius Inc; https://typeset.io/ai-detector) | Specifically designed to identify AI-generated academic texts; supports plagiarism detection and academic integrity checks. |
| Hive Moderation (Hive; https://hivemoderation.com/ai-generated-content-detection) | Offers real-time identification and origin tracing of AI-generated content; detects plagiarism, allowing institutions to enforce academic integrity; supports digital platforms in implementing site-wide bans on AI-generated media; and enables social platforms to create filters that identify and tag AI-generated content. |
| Copyleaks (Copyleaks Technologies Ltd; https://copyleaks.com/) | Scans the internet for potential plagiarism; available in multiple languages; supports academic integrity and copyright protection. |
| Writer’s AI Content Detector (Writer Inc; https://writer.com/ai-content-detector/) | Detects potentially AI-generated content and performs authenticity checks. |
| Crossplag AI Content Detector (Crossplag LLC; https://crossplag.com/ai-content-detector/) | Combines AI detection with plagiarism checking for comprehensive content analysis. |
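Several of the detectors listed above offer programmatic access, so screening can be scripted into an editorial workflow rather than run manually. The minimal sketch below, in Python, shows what such a call might look like. It assumes GPTZero’s v2 text-prediction endpoint, its `x-api-key` header, and the `completely_generated_prob` response field; these are assumptions based on the vendor’s published documentation and should be verified before use, and the API key is a placeholder.

```python
# Minimal sketch: querying an AI-content detector over HTTP.
# The endpoint, header name, and response field are assumptions based on
# GPTZero's documented v2 API; check the vendor's current documentation.
import requests

API_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder, not a real key


def check_manuscript_text(text: str) -> float:
    """Return the detector's estimated probability that `text` is AI-generated."""
    response = requests.post(
        API_URL,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    result = response.json()
    # Field name is an assumption based on the documented response schema.
    return result["documents"][0]["completely_generated_prob"]


if __name__ == "__main__":
    sample = "This manuscript section was drafted without AI assistance."
    print(f"Estimated AI-generated probability: {check_manuscript_text(sample):.2f}")
```

Consistent with the limitations noted above, any score returned by such a tool is best treated as advisory input to human editorial judgment rather than as a verdict on authorship.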
| Question | Answer |
|---|---|
| Why has Elsevier decided that AI and AI-assisted tools cannot be credited as an author on published work? | Elsevier believes that authorship responsibilities, such as integrity and accountability for a published work, can only be carried out by humans. AI lacks the ability to approve the final version of the work and ensure its originality. |
| Does this policy cover tools that are used to check grammar and spelling, and reference managers that enable authors to collect and organize references to scholarly articles? | No, the policy does not cover grammar or spelling checkers or reference managers such as Mendeley (Elsevier), EndNote (Clarivate), and Zotero (Corporation for Digital Scholarship). These tools can be used without disclosure. The policy applies specifically to AI tools, such as large language models, that can generate scientific works. |
| Does this policy refer to AI and AI-assisted tools that are used in the research process, for example to process data? | This policy is specific to AI tools used during the scientific writing process. AI tools used in research design or methods are allowed, and their use should be detailed in the Methods section of the work. |
| In which section of the manuscript should authors disclose the use of AI-assisted technologies, and where will this statement appear in the article if it is accepted for publication? | Authors should insert a statement at the end of their manuscript, above the references, to disclose the use of AI tools. The statement should specify the tool used and the reason for using it. |
| Can authors use AI-assisted tools to create or alter images that they publish in their work? | AI tools cannot be used to create or alter images in manuscripts, except when this is part of the research design or methods. Any AI-assisted creation or alteration of images must be clearly described in the manuscript. |
| How does Elsevier handle copyright if the authors credit an AI or AI-assisted tool in their article? | AI tools do not qualify for authorship, so they do not affect the copyright process. The authors transfer copyright to Elsevier or the society partner for subscription articles and retain copyright for open access articles, granting a license to Elsevier. |
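For illustration only, a disclosure of the kind described above might read: “During the preparation of this work the authors used [tool name] in order to [reason]. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.” The bracketed items are placeholders, and authors should follow the exact wording prescribed by the journal’s or publisher’s instructions.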
AI, artificial intelligence; COPE, Committee on Publication Ethics; GPT, generative pretrained transformer.