This case study investigated changes in research articles from the Korea Research Institute of Bioscience and Biotechnology (KRIBB) during the COVID-19 pandemic in order to share information with stakeholders in the research and publishing communities. Data on research published from 2017 to 2024 were collected by searching Web of Science’s Science Citation Index Expanded (WoS-SCIE) for the number of indexed research articles and then extracting the publication dates of research articles from KRIBB’s paper management system. After the number of WoS-SCIE research articles was scaled relative to the corresponding number of KRIBB’s SCIE articles in 2017, we analyzed differences in the publication turnaround times of KRIBB’s research articles according to whether MDPI was involved. In both the WoS-SCIE and KRIBB data, the influence of MDPI clearly declined in 2023, a trend that continued into 2024. In general, KRIBB’s non-MDPI research articles were published more rapidly in high-frequency journals, in journals with low impact factors, and on COVID-19–related topics; however, this difference gradually diminished. In 2023, publication speed, which had decreased after COVID-19, reversed to an increase, and the gaps between different stages of publication narrowed. Whether this trend will continue remains uncertain. Collecting additional similar case studies could provide a more accurate understanding of the changes and trends in the article publishing industry during the COVID-19 period.
Purpose Retraction of published literature is an increasingly important mechanism for protecting the scholarly record in today’s accelerated publishing environment. Analyzing retracted articles offers unique insights into how research communities maintain academic integrity. Taiwan is a major contributor to global medical research and has sustained public and media interest in academic integrity. Yet, no comprehensive analysis of retractions involving Taiwan-affiliated authors has been conducted. This paper therefore aimed to systematically examine retractions in Taiwanese medical research.
Methods Data extracted from both PubMed and the Retraction Watch Database were analyzed to determine the number of retracted articles and their reasons for retraction.
Results In total, 181 retractions of medical research articles with at least one Taiwan-affiliated author were included in the analysis, with the number of retractions steadily increasing since the first retracted article was published in 1992. Taiwanese medical research has the 9th highest retraction rate among the top 21 countries in medical research publications (6.08 retractions per 10,000 publications). However, this rate is lower than those of other highly productive Asian countries, including China, Korea, Japan, and India. Fifty-eight (32.04%) of the retractions involved international collaboration, most commonly with authors affiliated with the United States and China. Over the past 33 years, the reasons for retraction have gradually shifted from plagiarism or data manipulation to compromised peer review systems, ethical issues, and authorship disputes.
Conclusion The results reveal that retractions in Taiwanese medical research are evolving and distinct from those in neighboring regions. This finding highlights the need to examine Taiwanese medical researchers’ perspectives on academic integrity and current publishing trends.
Purpose The peer review process is essential for maintaining the quality of scientific publications. However, identifying reviewers who possess the necessary expertise can be challenging. In Open Journal Systems (OJS), which is commonly used by journals, inviting reviewers is most effective when they are already registered in the system. This study seeks to improve the efficiency and accuracy of the reviewer selection process to ensure high-quality peer reviews.
Methods We introduced a process innovation that analyzes users within OJS and recommends potential reviewers with the relevant expertise for the manuscript under review. This study collected user data from OJS as potential reviewers and utilized information from the Scopus search application programming interface (API). We extracted authors’ data from the Scopus API to obtain their Scopus IDs, which were then used to scrape the publication data of potential reviewers. The system matched reviewers’ previous works against the title and abstract of the manuscript using term frequency–inverse document frequency (TF-IDF) and cosine similarity algorithms.
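The abstract describes, but does not include, the matching implementation. A minimal pure-Python sketch of the TF-IDF and cosine similarity step might look like the following; the reviewer names and profile texts are hypothetical, and a production system would more likely use a library such as scikit-learn:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (term -> weight) for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()                      # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    idf = {t: math.log(n / df[t]) + 1 for t in df}
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (tf[t] / len(tokens)) * idf[t] for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rank_reviewers(manuscript_text, reviewer_profiles):
    """Rank reviewers by similarity between the manuscript's title/abstract
    and each reviewer's past publications.

    reviewer_profiles: dict of reviewer name -> concatenated titles and
    abstracts of that reviewer's previous works.
    """
    names = list(reviewer_profiles)
    vectors = tfidf_vectors([manuscript_text] +
                            [reviewer_profiles[n] for n in names])
    manuscript_vec, reviewer_vecs = vectors[0], vectors[1:]
    scores = {n: cosine(manuscript_vec, v)
              for n, v in zip(names, reviewer_vecs)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In the workflow the study describes, each reviewer's profile would be assembled from the publication records retrieved via the Scopus API, and the top-ranked names would be surfaced to the editor as candidates.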
Results The system was evaluated by comparing its recommendations with the assessments made by the editorial team. This evaluation yielded precision, mean average precision, and mean reciprocal rank values of 0.47, 0.77, and 0.87, respectively.
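For readers who want to reproduce such an evaluation, the three reported metrics can be computed as follows. This is a generic sketch: the ranked lists and the editor-approved "relevant" sets are assumptions, and the cutoff k used in the study is not stated in the abstract.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations judged relevant."""
    return sum(1 for r in recommended[:k] if r in relevant) / k

def average_precision(recommended, relevant):
    """Mean of precision at each rank where a relevant item appears;
    averaged over manuscripts, this gives mean average precision (MAP)."""
    hits, total = 0, 0.0
    for rank, r in enumerate(recommended, start=1):
        if r in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def reciprocal_rank(recommended, relevant):
    """1/rank of the first relevant item; averaged over manuscripts,
    this gives mean reciprocal rank (MRR)."""
    for rank, r in enumerate(recommended, start=1):
        if r in relevant:
            return 1 / rank
    return 0.0
```

An MRR of 0.87, for instance, means that the first editor-approved reviewer typically appeared at or very near the top of the recommendation list.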
Conclusion The results demonstrate the system’s ability to provide relevant reviewer recommendations. The system offers significant benefits by helping editors identify suitable reviewer candidates for manuscript evaluation from the existing user database in OJS.
Purpose In recent years, the number of retractions in biomedical literature has increased. Analyses of retracted publications can provide important information on the characteristics of retractions and may help reduce this trend. This study aimed to systematically analyze the time, source, citations, and reasons for retraction of pediatric research papers.
Methods A systematic review of retracted articles related to pediatrics was performed in PubMed and Web of Science databases from their inception through December 31, 2023. Excluded from the review were articles unrelated to pediatric studies, conference proceedings, non-English articles, duplicates, and articles that could not be identified. The data extracted and analyzed included the title, publication year, retraction year, country, journal, impact factor, the party who raised the retraction, the reason for retraction, citation count, and the authors of the articles.
Results The interval between publication and retraction ranged from 0 to 45 years, and the number of retracted papers peaked in 2023. China and the United States had the most retractions, and China had the highest retraction rate, with the proportion of retractions from China increasing over time. Journals published by Hindawi had notably more retractions than other journals. The most frequent reasons were publication issues, errors, and fraud/fabrication.
Conclusion This study provides a comprehensive overview of retracted articles in pediatric research. Our findings suggest that it is important to scrutinize the process of research and publication, to identify and counter research misconduct, and to make the instructions, procedures, and outcomes of publication more transparent for researchers, publishers, and regulators.
While generative artificial intelligence (AI) technology has become increasingly competitive since OpenAI introduced ChatGPT, its widespread use poses significant ethical challenges in research. Excessive reliance on tools like ChatGPT may intensify ethical concerns in scholarly articles. Therefore, this article aims to provide a comprehensive narrative review of the ethical issues associated with using AI in academic writing and to inform researchers of current trends. Our methodology involved a detailed examination of literature on ChatGPT and related research trends. We conducted searches in major databases to identify additional relevant articles and cited literature, from which we collected and analyzed papers. We identified major issues from the literature, categorized into problems faced by authors using nonacademic AI platforms in writing and challenges related to the detection and acceptance of AI-generated content by reviewers and editors. We explored eight specific ethical problems highlighted by authors and reviewers and conducted a thorough review of five key topics in research ethics. Given that nonacademic AI platforms like ChatGPT often do not disclose their training data sources, there is a substantial risk of unattributed content and plagiarism. Therefore, researchers must verify the accuracy and authenticity of AI-generated content before incorporating it into their article, ensuring adherence to principles of research integrity and ethics, including avoidance of fabrication, falsification, and plagiarism.
Citations to this article as recorded by
Generative Artificial Intelligence Tools in Journal Article Preparation: A Preliminary Catalog of Ethical Considerations, Opportunities, and Pitfalls. Robin R. White. JDS Communications. 2025;[Epub]. CrossRef
Ethics For Responsible Data Research: Integrating Cybersecurity Perspectives In Digital Era. Sheetal Temara. SSRN Electronic Journal. 2025;[Epub]. CrossRef
How is ChatGPT acknowledged in academic publications? Kayvan Kousha. Scientometrics. 2024;129(12):7959. CrossRef
Appliances of Generative AI-Powered Language Tools in Academic Writing: A Scoping Review. Lilia Raitskaya, Elena Tikhonova. Journal of Language and Education. 2024;10(4):5. CrossRef
Purpose The evolving landscape of nursing research emphasizes inclusive representation. The International Committee of Medical Journal Editors (ICMJE) has established guidelines to ensure the fair representation of various demographic variables, including age, sex, and ethnicity. This study aimed to evaluate the adherence of nursing journals indexed in MEDLINE or PubMed Central to the ICMJE’s directives on gender equity, given that journals indexed in MEDLINE and PubMed Central typically adhere to the ICMJE’s guidelines.
Methods A descriptive literature review methodology was employed to analyze 160 nursing journals listed in two databases as of July 28, 2023. The website of each journal was searched, and the most recent original article from each was selected. These articles were then evaluated for their alignment with the ICMJE guidelines on gender equity. Descriptive statistics were applied to categorize and enumerate the cases.
Results Of the articles reviewed from 160 journals, 115 dealt with human populations. Of these, 93 required a description of gender equity. Within this subset, 83 articles distinguished between the genders of human subjects. Gender-based interpretations were provided in 15 articles, while another 68 did not offer an interpretation of differences by gender. Among the 10 articles that did not delineate gender, only two provided a rationale for this omission.
Conclusion Among recent articles published in the nursing journals indexed in MEDLINE and PubMed Central, only 16.1% presented clear gender analyses. These findings highlight the need for editors to strengthen their dedication to gender equity within their editorial policies.
Citations to this article as recorded by
Academic journal website from the user’s perspective. A. V. Silnichaya, D. I. Trushkov, A. Volkova, M. S. Konyaev. Science Editor and Publisher. 2024;9(1):2. CrossRef
Artificial intelligence (AI)-powered chatbots are rapidly supplanting human-derived scholarly work in the fast-paced digital age. This necessitates a re-evaluation of our traditional research and publication ethics, which is the focus of this article. We explore the ethical issues that arise when AI chatbots are employed in research and publication. We critically examine the attribution of academic work, strategies for preventing plagiarism, the trustworthiness of AI-generated content, and the integration of empathy into these systems. Current approaches to ethical education, in our opinion, fall short of appropriately addressing these problems. We propose comprehensive initiatives to tackle these emerging ethical concerns. This review also examines the limitations of current chatbot detectors, underscoring the necessity for more sophisticated technology to safeguard academic integrity. The incorporation of AI and chatbots into the research environment is set to transform the way we approach scholarly inquiries. However, our study emphasizes the importance of employing these tools ethically within research and academia. As we move forward, it is of the utmost importance to concentrate on creating robust, flexible strategies and establishing comprehensive regulations that effectively align these potential technological developments with stringent ethical standards. We believe that this is an essential measure to ensure that the advancement of AI chatbots significantly augments the value of scholarly research activities, including publications, rather than introducing potential ethical quandaries.
Citations to this article as recorded by
Generative AI, Research Ethics, and Higher Education Research: Insights from a Scientometric Analysis. Saba Mansoor Qadhi, Ahmed Alduais, Youmen Chaaban, Majeda Khraisheh. Information. 2024;15(6):325. CrossRef
Publication Ethics in the Era of Artificial Intelligence. Zafer Kocak. Journal of Korean Medical Science. 2024;[Epub]. CrossRef
Exploring the Impact of Artificial Intelligence on Research Ethics - A Systematic Review. Gabriel Andrade-Hidalgo, Pedro Mio-Cango, Orlando Iparraguirre-Villanueva. Journal of Academic Ethics. 2024;[Epub]. CrossRef
Technological advances have been an integral part of discussions related to journal publishing in recent years. This article presents Get Full Text Research (GetFTR), a discovery solution launched by five major publishers: the American Chemical Society, Elsevier, Springer Nature, Taylor & Francis Group, and Wiley. These founding publishers announced the development of this new solution in 2019, and its pilot service was launched just 4 months later. The GetFTR solution streamlines access not only to open access resources but also to subscription-based resources. The publishers have assured that this solution will be beneficial for all relevant stakeholders involved in the journal publication process, including publishers, researchers, integrators, and libraries. They highlighted that researchers will have the ability to access published articles with minimal effort or steps, benefitting from existing (single sign-on) access technologies, ideally accessing the article PDF with a single click. While GetFTR is free for integrators and researchers, publishers are required to pay an annual subscription fee. To lower the barrier for participation, GetFTR supports smaller publishers by offering them a discount based on the number of digital object identifiers (DOIs), as recorded in Crossref data. While this project appears promising, some initial concerns were raised, particularly regarding user data control, to which the project has responded by more closely engaging the librarian community and by providing further information on how GetFTR supports user privacy.
As a community, it is impossible to ignore the fact that sharing research and information related to research is a much broader proposition than sharing an article, book, or conference paper. In supporting an evolving scholarly record, making connections between research organizations, contributors, actions, and objects helps give a more complete picture of the scholarly record, which open infrastructure organizations like Crossref call the research nexus. Crossref is working to support this evolution and is thinking about the metadata it collects via its members and that it supplements and curates, to make it broader than the rigid structures traditionally provided by content types. Furthermore, because of Crossref’s commitment to the Principles of Open Scholarly Infrastructure (POSI), this network of information will be global and openly available for anyone in the community to access and reuse. The present article describes this vision in more detail, including why it is increasingly important to support the links between research and elements that contribute or are related to that research; how Crossref, its members, and the wider community can support it; and the work and planning Crossref is doing to make it easier to achieve this.
Citations to this article as recorded by
Journal of Educational Evaluation for Health Professions received the top-ranking Journal Impact Factor—9.3—in the category of Education, Scientific Disciplines in the 2023 Journal Citation Ranking by Clarivate. Sun Huh. Journal of Educational Evaluation for Health Professions. 2024;21:16. CrossRef
Purpose Given the impact of information technologies, the research environment for humanities scholars is transforming into digital scholarship (DS). This study presents a foundational investigation for developing DS research support services. It also proposes a plan for sustainable information services by examining the current status of DS in Korea, as well as the access, processing, implementation, dissemination, and preservation of interdisciplinary digital data.
Methods Qualitative interview data were collected from September 7 to 11, 2020. The interviews were conducted with scholars at the research director level who had participated in the DS research project in Korea. Data were coded using NVivo 14, and cross-analysis was performed among researchers to extract central nodes and derive service elements.
Results This study divided DS into five stages: research plan, research implementation, publishing results, dissemination of research results, and preservation and reuse. This paper also presents the library DS information services required for each stage. The characteristic features of the DS research cycle are the importance of collaboration, converting analog resources to data, data modeling and technical support for the analysis process, humanities data curation, drafting a research data management plan, and international collaboration.
Conclusion Libraries should develop services based on open science and data management plan policies. Examples include a DS project liaison service, data management, datafication, digital publication repositories, a digital preservation plan, and a web archiving service. Data sharing for humanities research resources made possible through international collaboration will contribute to the expansion of new digital culture research.
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching the literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review provides readers with guidance for reading, understanding, and evaluating these articles.
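The pooling step described above can be illustrated with a fixed-effect, inverse-variance sketch. This is a generic illustration, not code from the review: the 1.96 multiplier assumes a normal approximation for a 95% confidence interval, and a random-effects model would usually be preferred under substantial heterogeneity.

```python
import math

def fixed_effect_meta(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes
    (e.g., log odds ratios), with Cochran's Q and the I^2 statistic
    used to examine heterogeneity."""
    weights = [1.0 / v for v in variances]      # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # standard error of pooled effect
    ci95 = (pooled - 1.96 * se, pooled + 1.96 * se)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # percent heterogeneity
    return pooled, ci95, q, i2
```

A forest plot is essentially a visual listing of each (effect, variance) pair with the pooled estimate in the bottom row; I² values above roughly 50% are commonly read as substantial heterogeneity.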
The specialized literature abounds in recommendations about the most desirable technical ways of answering reviewers’ comments on a submitted manuscript. However, not all publications mention authors’ and/or reviewers’ feelings or reactions about what they may read or write in their respective reports, and even fewer publications tackle openly what may or may not be said in a set of answers to a reviewer’s comments. In answering reviewers’ comments, authors are often attentive to the technical or rational aspects of the task but might forget some of its relational aspects. In their answers, authors are expected to make every effort to abide by reviewers’ suggestions, including discussing major criticisms, editing the illustrations, or implementing minor corrections; abstain from questioning a reviewer’s competence or willingness to write a good review, including full and attentive reading and drafting useful comments; clearly separate their answers to each reviewer; avoid skipping, merging, or reordering reviewers’ comments; and, finally, specify the changes made. Authors are advised to call on facts, logic, and some diplomacy, but never on artifice, concealment, or flattery. Failing to do so erodes the trust between authors and reviewers, whereas integrity is expected and highly valued. The guiding principle should always be honesty.
Purpose This study explored the extent to which, and how, researchers at five Korean government research institutes that implement research data management practices share their research data, and it investigated the challenges they perceive regarding data sharing.
Methods The study collected survey data from 224 respondents by posting a link to a SurveyMonkey questionnaire on the homepage of each of the five research institutes from June 15 to 29, 2022. Descriptive statistical analyses were conducted.
Results Among 148 respondents with data sharing experience, the majority had shared some or most of their data. Restricted data sharing within a project was more common than sharing data with outside researchers on request or making data publicly available. Sharing data directly with researchers who asked was the most common method of data sharing, while sharing data via institutional repositories was the second most common method. The most frequently cited factors impeding data sharing included the time and effort required to organize data, concerns about copyright or ownership of data, lack of recognition and reward, and concerns about data containing sensitive information.
Conclusion Researchers need ongoing training and support on making decisions about access to data, which are nuanced rather than binary. Research institutes’ commitment to developing and maintaining institutional data repositories is also important to facilitate data sharing. To address barriers to data sharing, it is necessary to implement research data management services that help reduce effort and mitigate concerns about legal issues. Possible incentives for researchers who share data should also continue to be explored.
Citations to this article as recorded by
Korean scholarly journal editors’ and publishers’ attitudes towards journal data sharing policies and data papers (2023): a survey-based descriptive study. Hyun Jun Yi, Youngim Jung, Hyekyong Hwang, Sung-Nam Cho. Science Editing. 2023;10(2):141. CrossRef
Data sharing and data governance in sub-Saharan Africa: Perspectives from researchers and scientists engaged in data-intensive research. Siti M. Kabanda, Nezerith Cengiz, Kanshukan Rajaratnam, Bruce W. Watson, Qunita Brown, Tonya M. Esterhuizen, Keymanthri Moodley. South African Journal of Science. 2023;[Epub]. CrossRef
Identifying key factors and actions: Initial steps in the Open Science Policy Design and Implementation Process. Hanna Shmagun, Jangsup Shim, Jaesoo Kim, Kwang-Nam Choi, Charles Oppenheim. Journal of Information Science. 2023;[Epub]. CrossRef
Purpose This study aimed to examine two overarching issues: the current status of research and publication ethics training conducted in Korean academic organizations, and what needs to be done to reinforce such training.
Methods A 12-item questionnaire was tested in a pilot survey, and the main survey was then distributed to 2,487 academic organizations. A second survey, containing six additional questions, was dispatched to the same organizations. The results of each survey were analyzed using descriptive statistics, content analysis, and comparative analysis.
Results More than half of the academic organizations provided research and publication ethics training programs, with humanities and social sciences organizations providing more training than the others (χ2=11.190, df=2, P=0.004). Training was mostly held once a year, lasted less than an hour, and took the form of a lecture. No significant difference was found in training content among academic fields. The academic organizations preferred case-based discussion training methods and wanted expert instructors who could give tailored training with examples.
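For context on the reported χ2=11.190 (df=2, P=0.004), a Pearson chi-square statistic over a contingency table (here, academic field group by whether training is provided) can be computed as below. The survey's underlying table is not given in the abstract, so any counts used with this sketch are hypothetical.

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a
    two-dimensional contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df
```

With df=2, the comparison corresponds to training provision (yes/no) across three field groups; the P-value would then be read from the chi-square distribution with that df (e.g., via `scipy.stats.chi2.sf`).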
Conclusion A systematic training program that can develop ethics instructors tailored to specific academic fields and financial support from academic organizations can help scholarly editors resolve the apparent gap between the real and the ideal in ethics training, and ultimately to achieve the competency needed to train their own experts.
Citations to this article as recorded by
Influence of artificial intelligence and chatbots on research integrity and publication ethics. Payam Hosseinzadeh Kasani, Kee Hyun Cho, Jae-Won Jang, Cheol-Heui Yun. Science Editing. 2024;11(1):12. CrossRef
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
Citations to this article as recorded by
ChatGPT: More Than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective. Alejo José G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, Eduardo C. Garrido-Merchán. International Journal of Human–Computer Interaction. 2024;40(17):4853. CrossRef
The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. Bernd Carsten Stahl, Damian Eke. International Journal of Information Management. 2024;74:102700. CrossRef
ChatGPT in healthcare: A taxonomy and systematic review. Jianning Li, Amin Dada, Behrus Puladi, Jens Kleesiek, Jan Egger. Computer Methods and Programs in Biomedicine. 2024;245:108013. CrossRef
“Brave New World” or not?: A mixed-methods study of the relationship between second language writing learners’ perceptions of ChatGPT, behaviors of using ChatGPT, and writing proficiency. Li Dong. Current Psychology. 2024;43(21):19481. CrossRef
Evaluating the Influence of Artificial Intelligence on Scholarly Research: A Study Focused on Academics. Tosin Ekundayo, Zafarullah Khan, Sabiha Nuzhat, Tze Wei Liew. Human Behavior and Emerging Technologies. 2024;[Epub]. CrossRef
Interaction with Artificial Intelligence as a Potential of Foreign Language Teaching Program in Graduate School. T. V. Potemkina, Yu. A. Avdeeva, U. Yu. Ivanova. Vysshee Obrazovanie v Rossii = Higher Education in Russia. 2024;33(5):67. CrossRef
Did ChatGPT ask or agree to be a (co)author? ChatGPT authorship reflects the wider problem of inappropriate authorship practices. Bor Luen Tang. Science Editing. 2024;11(2):93. CrossRef
Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic. Sun Huh. Science Editing. 2023;10(1):1. CrossRef
ChatGPT: Systematic Review, Applications, and Agenda for Multidisciplinary Research. Harjit Singh, Avneet Singh. Journal of Chinese Economic and Business Studies. 2023;21(2):193. CrossRef
Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer. Casey Watters, Michal K. Lemanski. Frontiers in Big Data. 2023;[Epub]. CrossRef
ChatGPT, yabancı dil öğrencisinin güvenilir yapay zekâ sohbet arkadaşı mıdır? [Is ChatGPT a reliable artificial intelligence chat companion for foreign language learners?] Şule Çınar Yağcı, Tugba Aydın Yıldız. RumeliDE Dil ve Edebiyat Araştırmaları Dergisi. 2023;(37):1315. CrossRef