Training Material
Introduction to generative artificial intelligence tools for academic article writing
Joon Seo Lim
Science Editing 2026;13(1):58-62.
DOI: https://doi.org/10.6087/kcse.393
Published online: February 2, 2026

Clinical Research Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea

Correspondence to Joon Seo Lim joonseolim@amc.seoul.kr
• Received: January 20, 2026   • Accepted: January 27, 2026

Copyright © 2026 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract
Since the advent of ChatGPT, researchers have rapidly adopted generative artificial intelligence (AI) for academic work, with monthly use reported by 69.4% of natural science researchers and 51.2% of medical researchers. This educational article surveys AI tools for literature search and trend analysis, study-oriented article organization, and manuscript drafting and editing, while emphasizing that these tools complement—not replace—critical reading and standard database searches. For discovery and mapping, Research Rabbit and Connected Papers visualize related papers through citation links or content similarity, while Consensus summarizes the direction and strength of evidence addressing a focused research question. Elicit and SciSpace can extract methods and conclusions into structured tabular summaries to support scoping and gap identification, and STORM can generate knowledge maps for topic exploration; Liner offers research agents to support hypothesis generation and literature review. To extend reference-management workflows, the article proposes downloading relevant PDFs, uploading them to a large language model, extracting predefined fields (e.g., design, participants, interventions, outcomes, key statistics, limitations, and DOI) into a CSV file, and importing the output into a Notion database for tagging and tracking reading status. For writing support, SciSpace and Liner provide outline generation, citation assistance, and peer review style checks, whereas Paperpal, Wordvice.ai, and DeepL focus on grammar, paraphrasing, and translation, and Scite contextualizes citations by identifying whether they are supporting or contrasting. Key cautions include manual verification of AI outputs, awareness of English-language bias, avoidance of reliance on a single tool, and protection of manuscript confidentiality; authors must disclose AI use and remain accountable for accuracy. When used judiciously, these tools can streamline screening, summarization, and revision without eroding scholarly judgment.
Background
Since the advent of ChatGPT (OpenAI), the use of artificial intelligence (AI) tools among researchers has increased rapidly. According to a recent survey, 69.4% of researchers in the natural sciences and 51.2% of those in the medical field use generative AI for academic purposes at least once a month [1]. Large language models (LLMs) such as ChatGPT and Claude (Anthropic) are particularly popular for article writing. Modern AI tools extend beyond simple text generation and now offer advanced capabilities, including literature network analysis, identification of research gaps, and prediction of emerging research trends. These tools are evolving at such a rapid pace that new platforms and functionalities continue to appear on a monthly basis.
Objectives
This article aims to introduce AI tools that can be applied across multiple stages of the research process. The first topic addresses tools for literature search, understanding research trends, and topic selection. The second focuses on approaches to organizing articles in ways that complement traditional reference management software. The third covers AI tools that support manuscript writing.
Literature search and trend analysis
The fundamental approach to literature search and trend analysis involves entering keywords into databases such as Google Scholar or PubMed, reviewing titles to identify relevant articles, downloading and thoroughly reading key papers, and exploring additional articles cited within them. Although this method is time-consuming, it helps develop skills in information retrieval and critical evaluation and often leads to unexpected discoveries. Therefore, this approach remains an essential practice, and the AI tools introduced below are intended to complement, rather than replace, traditional search strategies. Nevertheless, because identifying other highly relevant articles can be challenging, a variety of AI-based tools have been developed to assist researchers in this process.
Research Rabbit
Research Rabbit (https://researchrabbitapp.com) visualizes relationships among articles by using arrows to indicate citation direction, thereby facilitating an intuitive understanding of how research evolves over time. By entering a DOI (digital object identifier) or article title, users can automatically generate a citation graph. Filters such as “Similar Work” and “Earlier/Later Work” allow exploration of connections based on topical similarity or publication chronology. The tool also links articles that are not directly cited, which are often overlooked by traditional search engines, providing a more comprehensive overview of research trends. In addition, Research Rabbit integrates with reference managers such as Zotero (Corporation for Digital Scholarship) and is freely available.
Connected Papers
Connected Papers (https://www.connectedpapers.com) generates visual graphs based on content similarity rather than citation relationships. Node size corresponds to citation count, while color represents publication year, making the tool particularly useful for visualizing research trends. Similar to Research Rabbit, entering an article title or DOI produces a map of approximately 40 related articles, which can be expanded using the “Prior Works” or “Derivative Works” options. It draws on databases such as PubMed and Semantic Scholar, enabling trend analysis beyond a user’s immediate specialty and supporting identification of potential research gaps. The free version allows up to five graphs per month, while unlimited graphs are available for US $3 per month.
Consensus
Consensus (https://consensus.app) uses AI to summarize consensus across scientific articles. When a researcher enters a specific research question, the system analyzes thousands of studies to generate conclusions, along with consensus metrics such as the number of studies assessed and the level of agreement, and provides links to source articles. It enables rapid identification of prevailing research trends and gaps and is particularly useful for meta-analyses and literature reviews. However, to verify the accuracy of AI-generated summaries, consultation of the original key articles remains essential. The free version limits summaries to 10 articles per response and allows up to 25 Pro searches and 3 Deep Searches per month. The Pro subscription, priced at US $10 per month, offers unlimited Pro searches and up to 15 Deep Searches per month.
Elicit
Elicit (https://elicit.com) is an AI-powered tool that searches academic article databases and generates structured summary tables by extracting and organizing methodologies and conclusions from relevant studies, thereby supporting topic selection and scoping. Queries produce automatically generated tabular results, and the “Extract” feature allows customization of table columns (e.g., study design). Elicit can analyze large numbers of articles simultaneously to identify patterns and trends and supports CSV export to facilitate efficient data management. Because it is based on Semantic Scholar (https://www.semanticscholar.org), which may be less familiar to some biomedical researchers, cross-checking results with PubMed is recommended. The free version limits summaries to four articles and data extraction to 20 articles per month. The Pro version (US $12 per month) increases the number of articles per summary to eight and allows data extraction from up to 600 articles per year.
OpenEvidence
Developed in partnership with NEJM, JAMA Network, the US National Comprehensive Cancer Network, and the Mayo Clinic Platform, OpenEvidence (https://www.openevidence.com) is a chatbot-based AI service specialized for medical inquiries and related tasks. It has also established partnerships with various academic societies, including the American College of Cardiology and the American Diabetes Association. OpenEvidence provides citations for every response, and hallucinations in the form of fabricated citations are rare. However, because the references it retrieves tend to be skewed toward high-impact journals and randomized clinical trials, it is particularly useful for medical consultation but may be somewhat insufficient for identifying the broader range of references required for manuscript preparation. Therefore, it is recommended that OpenEvidence be used in combination with other tools. Registration is limited to healthcare professionals, for whom the service is available free of charge.
SciSpace
SciSpace (https://scispace.com) offers a comprehensive suite of AI research tools, including an AI writer for manuscript assistance, Literature Review, Topic Finder, Paraphraser, Citation Generator, and Data Extractor. For trend analysis, the Literature Review feature summarizes the top five articles on a given topic and provides a table illustrating how an additional 100 articles address the research query. Most features are available in the free version, albeit with certain limitations, while a premium subscription priced at US $20 per month unlocks full access to all AI tools without usage restrictions.
STORM
STORM (https://storm.genie.stanford.edu), an open-source tool developed by Stanford University, generates Wikipedia-style knowledge maps from user-defined input topics, systematically organizing research trends and identifying potential gaps. It relies on Google Scholar to support comprehensive trend analysis and can assist researchers in topic selection. Although the tool is free to use, it may experience slow response times and occasional site instability.
Liner
Developed by a Korean company, Liner (https://liner.com) integrates web-based and academic article searches to provide answers and support hypothesis generation. It offers chat-based applications for basic information retrieval and trend analysis, as well as a variety of AI research agents similar to those available in SciSpace. These agents support hypothesis generation and evaluation, citation recommendations, literature reviews, research flow exploration, survey simulations, and peer review processes. New features are added on a frequent basis. The free version provides access to these core functions, while a subscription costing US $17.99 per month offers additional agent credits and up to 25 deep research sessions.
Precautions
First, researchers should always manually verify the accuracy of AI-generated information, particularly citation details and statistical results, by consulting original source materials. Second, because most tools are optimized for English-language publications, research published in other languages may be underrepresented. Third, reliance on a single tool should be avoided; instead, using multiple tools in complementary ways can produce a more comprehensive and balanced literature review. Finally, it is essential to remember that AI tools assist but do not replace a researcher’s judgment. Continued engagement with traditional literature searches through PubMed and Google Scholar is necessary to maintain critical appraisal skills.
Study-oriented article organization
Reference management programs such as EndNote (Clarivate), Mendeley (Elsevier), and Zotero automatically extract bibliographic information (e.g., title, journal, DOI) from PDF files. However, the author sought more than bibliographic organization alone and instead aimed to create a study-oriented summary system. This approach involves organizing research designs, drugs, and statistical methods used in relevant articles, along with their reported limitations and proposed future directions. Although some generative AI tools (e.g., SciSpace) offer related functionality, they may have limited flexibility or involve additional costs. Therefore, the author recommends an alternative method that combines ChatGPT with Notion (Notion Labs), a note-taking and document-organization application.
The method is straightforward: first, PDFs of relevant articles are downloaded and uploaded to ChatGPT for information extraction, after which the output is generated as a CSV file and pasted into a Notion database designed to manage research articles. An example prompt used with ChatGPT is as follows:
Extract the following information from the PDF files for database generation. I need the data presented in a horizontal tabular format so that I can easily paste it into my Notion database: article title; journal name; summarized conclusion; names of first author(s); names of corresponding author(s); main institution(s) where the study was conducted; study setting (single center, multicenter, multinational, etc.); study design (randomized controlled trial, cohort, case-control, in vivo, in vitro, etc.); patient information (number, median age, sex distribution, etc.); main drugs or procedures used; key statistical analyses (propensity score matching, Kaplan-Meier, etc.); primary outcomes; summarized limitations; future study directions; and DOI. Provide the final output as a CSV file.
This prompt can be modified as needed. After creating a Notion account, users can visit the author’s Notion article database in read-only format (https://charm-serpent-80c.notion.site/DB-1edae5f4278c805a988fe9207061b329). By clicking the “Duplicate” button in the top-right corner, the database can be copied into a personal Notion account, where it can then be freely edited.
Detailed instructions are as follows: PDFs are uploaded to ChatGPT (the free version is sufficient) together with the prompt described above, and ChatGPT returns the extracted fields as a CSV file. The file is opened in a spreadsheet application and the data are copied. In the Notion article database template, the row-creation button is clicked as many times as the number of articles to be added, generating new rows. The corresponding empty cells in these rows are then selected and the copied data pasted, after which the information is populated automatically.
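Because the extracted CSV is the point where any extraction error propagates into the database, it can help to sanity-check the file before pasting. The Python sketch below is illustrative only: the column headers are assumptions modeled on the prompt above (the actual headers depend on what ChatGPT emits), and it simply flags missing columns and empty DOI cells.

```python
import csv
import io

# Columns requested in the extraction prompt; these names are illustrative
# and should be edited to match the headers ChatGPT actually emits.
EXPECTED_COLUMNS = [
    "Article Title", "Journal Name", "Summarized Conclusion",
    "First Author(s)", "Corresponding Author(s)", "Main Institution(s)",
    "Study Setting", "Study Design", "Patient Information",
    "Main Drugs or Procedures", "Key Statistical Analyses",
    "Primary Outcomes", "Summarized Limitations",
    "Future Study Directions", "DOI",
]

def check_csv(text: str) -> list[dict]:
    """Parse the CSV returned by ChatGPT and flag problems before the
    rows are pasted into the Notion database."""
    reader = csv.DictReader(io.StringIO(text))
    missing = [c for c in EXPECTED_COLUMNS if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"CSV is missing expected columns: {missing}")
    rows = list(reader)
    for i, row in enumerate(rows, start=1):
        if not row["DOI"].strip():
            # An empty DOI usually means the field should be re-checked
            # against the original PDF.
            print(f"Row {i}: DOI is empty; verify against the original PDF")
    return rows
```

A check of this kind does not replace the manual verification recommended above; it only catches structural problems (missing columns, blank identifiers) before they reach the database.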
The database includes the columns “Status” and “Tags,” which are located to the left of the “Article Title” column. The “Status” column tracks reading progress (e.g., unread, reading, read, referenced), while the “Tags” column allows articles to be organized using keywords. Articles from different fields can be grouped according to shared tags. Although tags can be generated using ChatGPT or Notion’s built-in AI, it is recommended that researchers read the articles themselves and assign tags manually to promote deeper understanding. Excessive automation may reduce information retention. In addition, although hallucinations are uncommon when extracting and summarizing content from specific documents, the accuracy of summaries should always be verified.
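As a small illustration of how the “Tags” column can also be put to work outside Notion, the extracted rows can be grouped by keyword with a few lines of Python. The “Tags” and “Article Title” field names here are assumptions matching the database layout described above, with tags stored as comma-separated keywords.

```python
from collections import defaultdict

def group_by_tag(rows):
    """Group article titles by tag; each row's 'Tags' cell is assumed to
    hold comma-separated keywords (e.g., "immunotherapy, NSCLC")."""
    groups = defaultdict(list)
    for row in rows:
        for tag in row.get("Tags", "").split(","):
            tag = tag.strip()
            if tag:
                groups[tag].append(row["Article Title"])
    return dict(groups)
```

Grouping like this makes it easy to see, for example, which tags are thinly covered and may point to a gap in one's reading.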
This method enables efficient organization and in-depth study of a large number of articles, thereby facilitating appropriate and timely referencing when needed.
AI tools for manuscript writing
Generative AI tools for research article writing are proliferating rapidly. Beyond tools designed for literature search, these services support manuscript drafting, citation recommendation, and paraphrasing, including assistance aimed at reducing the risk of self-plagiarism. These agents should be used only to review the user’s own manuscripts: most journals prohibit reviewers from uploading manuscripts under review to external AI services, and some restrict sharing any unpublished work with third-party services. Before uploading a manuscript, researchers should confirm that it will not be used for model training. When uncertainty exists, AI use should be limited to English editing or literature searches.
SciSpace
SciSpace offers an AI writer that generates outlines and manuscript structures based on research topics, along with a citation generator that recommends appropriate references for the text. Its Agent Gallery includes a range of AI agents tailored to different research stages and academic fields. Most features are available in the free version with certain limitations, while a premium subscription priced at US $20 per month unlocks unrestricted access to all features.
Liner
Liner provides tools similar to those offered by SciSpace, including citation recommendations, but it also uniquely offers a peer review agent. This agent identifies logical inconsistencies or insufficient explanations within manuscripts, making it particularly valuable for manuscript refinement.
Paperpal, Wordvice.ai, and DeepL
Paperpal (https://paperpal.com) and Wordvice.ai (https://wordvice.ai) specialize in language-focused functions, including grammar checking, proofreading, paraphrasing, and translation. DeepL (https://deepl.com), launched around 2017 as a machine translation service, is fast and relatively accurate for academic use and now also offers DeepL Write for English editing. All three platforms provide limited free usage, while subscription plans costing approximately US $6 to $10 per month offer near-unlimited access.
Scite
Scite (https://scite.ai) is an AI tool specialized in citation analysis and is partnered with publishers such as PNAS, Wiley, and SAGE. It recommends citations, evaluates reference relevance, and identifies whether cited articles support or contradict specific claims or hypotheses. The platform offers a 7-day free trial, after which access costs approximately US $10 per month.
ChatGPT, Claude, and Gemini
If researchers are proficient in using versatile LLMs such as ChatGPT (https://chatgpt.com), Claude (https://claude.ai/), and Gemini (Google; https://gemini.google.com/), they can achieve results comparable to those produced by the specialized tools by employing appropriate prompts. Because LLMs are not directly connected to academic databases, hallucinations are common in citation recommendations. Nevertheless, they remain useful for manuscript drafting, English editing, and peer review. When using LLMs for writing, it is essential to provide sufficient research context; for example:
You are an experienced researcher and scientific writer in [FIELD, e.g., cardiology, molecular oncology, health services research]. I am working on a manuscript about [1–2 sentence summary of the study]. Study design: [randomized controlled trial, retrospective cohort, bench experiment, etc.]. Target journal: [JOURNAL NAME or ‘a Q1 clinical journal in [SUBFIELD]’]. Writing goals: [e.g., clear, concise, IMRAD structure, 3,000-word limit, suitable for clinicians]. When helping me, please adhere to the following rules: (1) Never invent data, methods or references; (2) Use only the information I provide about my study; (3) Ask clarification questions if something is ambiguous. Please acknowledge this setup, then briefly restate your understanding of my project and how you will assist.
For peer review of the research, an example prompt used by NEJM AI’s human+AI review system is as follows [2]:
You are serving as an expert reviewer for NEJM AI, an interdisciplinary journal dedicated to rigorously evaluating applications of AI in medicine. Please review the attached manuscript and provide a concise review, including the following sections.
1. Summary: Provide a brief description of the study in approximately 4–5 sentences, highlighting its clinical relevance, strengths and limitations, methodological rigor and novelty.
2. Major Comments: This section should include 5–10 numbered and prioritized limitations of the study, starting with the most significant. Where possible, provide constructive suggestions to the authors to address each limitation. It is not necessary to offer suggestions for every limitation.
3. Minor Comments
4. Overall Assessment: Write concisely and professionally, adopting the tone of a rigorous, yet constructive peer review. After completing the author review, provide brief comments to the NEJM AI editors, including a final recommendation. Encourage publication with Minor Revisions; Encourage Publication with Major Revisions; or Discourage Publication (not suitable for NEJM AI).
Regardless of which AI tools are used, the requirements for article submission remain unchanged. Authors must disclose which AI tools were used in the cover letter and manuscript, typically in the methods section or acknowledgments, and they bear full responsibility for the accuracy of the content. All materials should be carefully reviewed before submission. Researchers must continue to develop their own expertise to ensure that human judgment remains the gold standard throughout all stages of research. When these conditions are met, research workflows and outcomes can become noticeably more efficient and smoother than in the pre-AI era.

Conflict of interest

No potential conflict of interest relevant to this article was reported.

Funding

No financial support was received for this work.

Data availability

Data sharing is not applicable as no new data were created or analyzed in this article.

Supplementary material

No supplementary materials were provided for this article.

Citations
Citations to this article as recorded by Crossref:
• Sun Huh. How editors perceive the use of generative artificial intelligence in writing academic papers: a narrative review. Journal of the Korean Medical Association. 2026;69(2):111.
