The Ongoing Development of AI-assisted Writing in ChatGPT’s Responses
- Since McCarthy et al. [1] first introduced the term “artificial intelligence (AI),” the field has seen numerous significant developments. It began with expert systems, progressed to the point where humans were defeated at chess and Go, and eventually led to the emergence of large language models (LLMs). The key feature of these LLMs is their ability to surpass previous achievements and potentially replace human input in various domains. In this editorial, after a brief introduction to the history of AI, I aim to provide an overview of AI-assisted writing and discuss the AI-assisted writing policies of popular journals. The majority of the content is derived from responses by ChatGPT (GPT-3.5, OpenAI).
The History of AI
- Here is an explanation from ChatGPT about its history, in response to the query, “Please explain ‘history of AI’” (Fig. S1):
- The history of artificial intelligence (AI) is a fascinating journey that spans several decades and has seen significant milestones in technology, research, and development. The idea of creating machines capable of human-like intelligence and decision-making can be traced back to ancient times, but the modern history of AI begins in the mid-20th century. Here’s a summarized overview of the key milestones in the history of AI:
- 1. Early Concepts (1940s–1950s):
- • The term “artificial intelligence” was first coined by computer scientist John McCarthy in 1956 during the Dartmouth Conference, which is considered the birth of AI as a field of study.
- • In the 1940s, early pioneers like Alan Turing and others developed the concept of a “universal machine” that could simulate any human-designed computing machine.
- • The development of the first programmable digital computers during World War II laid the groundwork for AI research.
- 2. The Dartmouth Conference (1956):
- • John McCarthy, along with other prominent researchers, organized the Dartmouth Conference, which marked the official beginning of AI research as a separate field.
- • Participants at the conference were optimistic about the potential for creating intelligent machines within a few years, which turned out to be overly ambitious.
- 3. The “AI Winter” (1960s–1970s):
- • Despite initial enthusiasm, progress in AI research faced significant challenges, and the high expectations from the Dartmouth Conference were not met.
- • Funding for AI research was reduced due to unfulfilled promises, leading to a period known as the “AI winter,” characterized by a decline in AI research activities.
- 4. Expert Systems and Knowledge Representation (1980s):
- • In the 1980s, AI research saw a revival with the development of expert systems, which used knowledge representation and inference rules to mimic human expertise in specific domains.
- • Expert systems found practical applications in various fields, such as medicine and finance.
- 5. Machine Learning and Neural Networks (1990s):
- • In the 1990s, machine learning algorithms gained prominence, allowing AI systems to learn from data and improve their performance over time.
- • Neural networks, which mimic the structure of the human brain, saw renewed interest and contributed to significant advances in AI.
- 6. AI in the 21st Century:
- • The 21st century witnessed a surge in AI research and applications, fueled by the availability of large datasets and computational power.
- • Breakthroughs in machine learning, particularly with deep learning techniques, led to remarkable progress in areas like computer vision, natural language processing, and robotics.
- • AI applications became ubiquitous in various industries, including healthcare, finance, transportation, and entertainment.
- 7. Ethical and Societal Challenges:
- • The rapid advancement of AI has raised ethical and societal concerns, such as bias in AI algorithms, job displacement, privacy issues, and the implications of autonomous systems.
What are the Issues with AI-assisted Writing?
- As highlighted in the final section of the discussion, LLMs raise numerous concerns. Below is the answer to the query, “What are the problems of ‘AI assisted writing?’” (Fig. S2):
- As of my last update in September 2021, AI-assisted writing had some challenges and problems associated with it. These issues might still be relevant in 2023, but it’s also possible that advancements and improvements have addressed some of these concerns. Here are some of the problems associated with AI-assisted writing:
- 1. Lack of originality: AI can generate text based on patterns learned from vast amounts of existing data. While this can be useful, it might also lead to content that lacks originality and creativity, potentially producing generic or repetitive results.
- 2. Contextual understanding: AI models may struggle to grasp the full context and meaning of a text, leading to misinterpretations or inaccurate responses. This can be especially problematic for complex or nuanced writing tasks.
- 3. Biases in data: AI models learn from the data they are trained on, and if that data contains biases, the AI-assisted writing might perpetuate those biases. This can lead to discriminatory or prejudiced content being generated.
- 4. Ethical concerns: There are ethical considerations related to using AI to produce content without proper attribution or acknowledgment. Plagiarism and copyright issues may arise if AI-generated content is not appropriately handled.
- 5. Difficulty in creativity and critical thinking: AI is excellent at replicating patterns and information from existing data, but it may struggle with true creativity and critical thinking, which are essential for producing high-quality, original content.
- 6. Inconsistency in quality: The quality of AI-generated writing can be inconsistent, and there may be a need for human intervention to ensure that the output meets the desired standards.
- 7. Complex writing tasks: While AI can excel in some writing tasks, such as generating straightforward articles or product descriptions, it may struggle with more complex forms of writing, such as poetry, creative storytelling, or technical manuals.
- 8. Over-reliance on AI: Over-reliance on AI-assisted writing can lead to a decline in human writing skills, as individuals might become dependent on AI tools rather than developing their language and communication abilities.
- 9. Legal concerns: There might be legal issues surrounding the ownership and copyright of content produced using AI, as well as potential liabilities arising from inaccuracies or errors in AI-generated content.
- 10. Data privacy and security: AI writing tools might require access to sensitive information, and there could be concerns regarding data privacy and security if the data is mishandled or misused.
- ChatGPT has identified all these points; they are entirely reasonable and are issues that all authors should consider.
What are the Policies Regarding AI-assisted Writing for Popular Journals?
- In light of the above points, I looked into how well-known journals are responding to ongoing advances in AI models. My next question was: “What are the policies on ‘AI assisted writing’ for popular journals?” (Fig. S3):
- As of my last update in September 2021, the specific policies on “AI-assisted writing” for popular journals may vary depending on the journal and its editorial guidelines. Since journals frequently update their policies and guidelines, it’s essential to refer to the individual journal’s website or contact their editorial office directly for the most up-to-date information.
- However, I can provide you with some general considerations and practices that might be relevant to AI-assisted writing in academic and scientific publishing:
- 1. Authorship and attribution: Journals typically require that authors take responsibility for the content they submit. If AI is used to generate parts of the manuscript, it should be appropriately acknowledged, and the authors should disclose the extent of AI assistance. This is crucial to ensure transparency and avoid potential issues related to plagiarism and intellectual property.
- 2. Originality and novelty: Journals often emphasize the importance of originality and novelty in submitted manuscripts. While AI can be a valuable tool to assist in writing, it should not compromise the novelty of the research or the authenticity of the findings presented in the manuscript.
- 3. Ethical considerations: Authors using AI-assisted writing should consider the ethical implications, including potential biases and implications for data privacy and security. Journals may require authors to address these ethical considerations in their manuscripts.
- 4. Peer review: AI-assisted manuscripts are subject to the same rigorous peer-review process as other submissions. Reviewers and editors will assess the quality, validity, and originality of the work, regardless of whether AI tools were used.
- 5. Data availability and code sharing: Some journals might require authors to make the AI-generated content, data, and code available to readers to promote transparency and reproducibility. This can be particularly relevant if AI algorithms play a significant role in the research.
- 6. Compliance with guidelines: Authors using AI should ensure that their AI-assisted writing complies with any relevant guidelines or regulations in their field, especially if the research involves sensitive data or specialized practices.
- 7. Language and readability: While AI can help with writing, it is essential that the manuscript adheres to the journal’s language requirements and maintains readability and clarity.
- Remember that each journal may have its own specific policies and guidelines on AI-assisted writing, so it’s crucial to check the journal’s “Instructions for Authors” or reach out to the editorial office for guidance when preparing and submitting a manuscript that involves AI-assisted writing.
- These answers appeared quite reasonable too. As advised, I searched for guidelines for each journal. Elsevier described authors’ responsibilities as follows [2]:
- Authors must disclose the use of generative AI and AI-assisted technologies in the writing process by adding a statement at the end of their manuscript in the core manuscript file, before the References list. The statement should be placed in a new section entitled ‘Declaration of Generative AI and Assisted technologies in the writing process’.
- Statement: During the preparation of this work the author(s) used [NAME TOOL/SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
- This declaration does not apply to the use of basic tools for checking grammar, spelling, references etc. If there is nothing to disclose, there is no need to add a statement.
- There are also guidelines for the editor, which involve finding the disclosure statement at the end of the received manuscript and following the journal’s protocol for editing AI-assisted writing [3]. I also found related information in Nature, which contained the following brief description [4]:
- Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. […] The fast-moving area of generative AI image creation has resulted in novel legal copyright and research integrity issues. As publishers, we strictly follow existing copyright law and best practices regarding publication ethics. While legal issues relating to AI-generated images and videos remain broadly unresolved, Springer Nature journals are unable to permit its use for publication.
The Evolving Issue of AI-assisted Writing
- Despite the limitations of the GPT-3.5 model, which is not the most recent version but a noncommercial one, its content is reasonable and currently applicable. While LLMs are introducing new challenges, one can hope that they will also bring their own solutions. As technology advances, so too do these AI-assisted writing capabilities. However, this progress also introduces more concepts that we need to be aware of than before. Therefore, writing an AI-assisted research paper could be more complex than writing one in the traditional manner. To shape the future of academic writing, efforts to establish general rules for AI-assisted writing are necessary, and a well-designed LLM can facilitate this.
Declaration of Generative AI and Assistive Technologies in the Writing Process
- The primary content of this article consists of responses from the GPT-3.5 model. The remaining portions were deliberately translated from Korean to English by GPT-3.5 and were then edited by human editors.
Notes
Conflict of Interest
Jaegyun Park has been the Editor-in-Chief of Science Editing since 2023.
Funding
The author received no financial support for this article.
Data Availability
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Supplementary Materials
Supplementary materials are available from https://doi.org/10.6087/kcse.318.
Fig. S1.
Answer of ChatGPT (ver. July 20, 2023; OpenAI) to the inquiry, “Please explain ‘history of AI’” (cited 2023 Jul 23).
kcse-318-Supplementary-S1.pdf
Fig. S2.
Answer of ChatGPT (ver. July 20, 2023; OpenAI) to the inquiry, “What are the problems of ‘AI assisted writing’?” (cited 2023 Jul 23).
kcse-318-Supplementary-S2.pdf
Fig. S3.
Answer of ChatGPT (ver. July 20, 2023; OpenAI) to the inquiry, “What are the policies on ‘AI assisted writing’ for popular journals?” (cited 2023 Jul 23).
kcse-318-Supplementary-S3.pdf