Development of a decision-support tool to quantify authorship contributions in clinical trial publications
Article information
Abstract
Purpose
This study aimed to develop a decision-support tool to quantitatively determine authorship in clinical trial publications.
Methods
The tool was developed in three phases: consolidating authorship recommendations from the Good Publication Practice (GPP) and International Committee of Medical Journal Editors (ICMJE) guidelines, identifying and scoring attributes on a 5-point Likert scale or a dichotomous scale, and soliciting feedback from editors and researchers.
Results
The authorship criteria stipulated by the ICMJE and GPP recommendations were categorized into two modules. Criterion 1 and the related GPP recommendations formed Module 1 (sub-criteria: contribution to design, data generation, and interpretation), while Module 2 was based on criteria 2 to 4 and the related GPP recommendations (sub-criteria: contribution to manuscript preparation and approval). The two modules with their relevant sub-criteria were then differentiated into attributes (n = 17 in Module 1, n = 12 in Module 2). An individual contributor can be scored for each sub-criterion by summing the related attribute values; the sum of the sub-criteria scores constitutes the module score (Module 1 score: 70 [contribution to conception or design of the study, 20; data acquisition, 7; data analysis, 27; interpretation of data, 16]; Module 2 score: 50 [content development, 27; content review, 18; accountability, 5]). The concept was implemented in Microsoft Excel with appropriate formulae and macros. A threshold of 50% for each sub-criterion and each module, together with an overall score of 65%, was predefined as qualifying for authorship.
Conclusion
This authorship decision-support tool would be helpful for clinical trial sponsors to assess and provide authorship to deserving contributors.
Introduction
Background/rationale: Disagreements between authors can arise during study planning, conduct, data analysis, manuscript writing, submission, and post-publication phases. The Committee on Publication Ethics (COPE) classifies these disagreements into disputes and misconduct [1]. The guidelines of the International Committee of Medical Journal Editors (ICMJE) and COPE, together with the Good Publication Practice (GPP) guidelines first proposed in 2003 by the International Society for Medical Publication Professionals, regulate research publications and help determine authorship credit [2]. The third version of GPP (GPP3) was published in 2015, and the fourth edition is expected to be released in 2022 [3]. These guidelines govern the publication of industry-funded clinical studies of marketed products, as well as company-initiated review articles and secondary articles.
Both GPP3 [3] and the ICMJE recommendations [4] outline criteria for authorship and are widely accepted. However, in multicenter clinical trials involving several investigators, pharmaceutical companies struggle to attribute appropriate credit to all contributors [5]. The biomedical industry has notably acknowledged the role of key opinion leaders [6], who are frequently listed among the authors of clinical trial publications, causing author inflation. In these situations, the guidelines cannot adequately resolve authorship issues, and the team involved must formulate its own strategy.
Objectives: The objective of this study was, therefore, to develop a quantitative decision-support tool that complies with recent ICMJE and GPP guidelines in order to help pharmaceutical companies accurately identify the deserving authors and their order of authorship in publications arising from clinical trials.
Methods
Ethics statement: The authors requested feedback and questions from editors and researchers during networking opportunities such as panel discussions, question-and-answer sessions, and off-the-stage meetings of two editorial conferences held in the United Arab Emirates [7,8] and the Philippines [9]. No sensitive personal information was acquired; therefore, neither institutional review board approval nor informed consent was required.
Study design: This study involved the development of a decision-support tool based on the experience of the authors and the literature data.
Setting: Two authors (SM and HI) initiated the study in January 2015. The tool was developed in 3 phases, which included reviewing the selected authorship guidelines and identifying and categorizing authorship criteria (Phase 1), ranking these elements on a Likert or dichotomous scale (Phase 2), and modifying the scale based on solicited feedback (Phase 3) (Fig. 1).
Phase 1. Consolidation of authorship recommendations from the GPP and ICMJE guidelines
The authors independently reviewed and abstracted all relevant elements of the authorship criteria mentioned in the GPP2 guidelines and the ICMJE recommendations (formerly the Uniform Requirements for Manuscripts). The authors then held a consensus meeting in which they discussed the findings, deliberated, identified the relevant clinical trial and publication development processes, and systematically documented them.
Phase 2. Ranking the attributes
The third author (PV) scrutinized all responses, identified keywords, removed redundancies, and segregated the responses into modules (with sub-criteria and attributes) according to their congruity with the successive steps of conducting a clinical trial and developing the resulting publications. Wherever possible, each attribute was ranked on a 5-point Likert scale based on its relative relevance and importance in determining authorship (1 = least important and 5 = highly important). If an attribute could not be ranked on the Likert scale, the responses were collected as ‘yes’ or ‘no’ (dichotomous scale) and converted to binary values (no = 0 and yes = 1).
These data were then incorporated into Microsoft Excel (Microsoft, Redmond, WA, USA), with filters, formulas, and drop-down menus. Using this Microsoft Excel sheet, an individual contributor could be scored for all attributes. The sum of the attribute scores of each module and sub-criterion could also be determined. Further, a threshold for the overall score and individual modules was proposed to decide the eligibility and order of authorship.
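To illustrate how such a spreadsheet rolls attribute scores up into sub-criterion and module totals, the logic can be sketched in a few lines of code. This is a minimal sketch, not the authors’ Excel implementation; the attribute names, scale assignments, and structure shown are hypothetical examples rather than the tool’s full attribute list.

```python
# Illustrative sketch of the roll-up from attribute scores to
# sub-criterion and module totals. All names below are hypothetical
# examples, not the tool's actual attribute list.

LIKERT = "likert"        # scored 1-5
DICHOTOMOUS = "binary"   # scored 0/1

# module -> sub-criterion -> list of (attribute, scale)
TOOL = {
    "Module 1": {
        "Study concept or design": [
            ("Critical review of the protocol", LIKERT),
            ("Writing the protocol", DICHOTOMOUS),
        ],
        "Data acquisition": [
            ("Implementing data collection", DICHOTOMOUS),
        ],
    },
    "Module 2": {
        "Content development": [
            ("Writing most of the initial draft", LIKERT),
        ],
        "Accountability": [
            ("Approving the final manuscript", DICHOTOMOUS),
        ],
    },
}

def score_contributor(responses):
    """Sum attribute scores into sub-criterion and module totals.

    responses maps attribute name -> numeric score (1-5 or 0/1);
    unanswered attributes count as 0.
    """
    totals = {}
    for module, subcriteria in TOOL.items():
        module_total = 0
        sub_totals = {}
        for sub, attributes in subcriteria.items():
            s = sum(responses.get(attr, 0) for attr, _scale in attributes)
            sub_totals[sub] = s
            module_total += s
        totals[module] = {"total": module_total, "sub": sub_totals}
    return totals
```

In the actual tool, the same summation is performed by Excel formulae, with drop-down menus constraining each attribute to its valid scale.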
Phase 3. Soliciting feedback and modifications
The prototype of this tool was presented at two conferences [7-9]. The authors solicited feedback and questions from editors and researchers during the networking opportunities.
Statistical methods: No statistical analysis was performed. Suggestions by researchers and editors were reflected in the tool development.
Results
Phase 1. Consolidation of authorship recommendations from the GPP and ICMJE guidelines
After review, discussion and deliberations, all authors agreed on the key elements (Table 1) from both guidelines. The authors also identified and documented clinical trial publication processes relevant to these elements.
Phase 2. Ranking the attributes
The ICMJE provides global criteria for authorship, and GPP deciphers these guidelines in the context of the conduct of clinical trials and developing related publications by clinical trial sponsors. Considering this, the third author (PV) scrutinized all responses, and the four criteria of authorship stipulated by ICMJE were categorized into two modules: criterion 1 in Module 1 (contribution to design, data generation, and interpretation); and criteria 2 to 4 in Module 2 (contribution to manuscript preparation and approval). Both modules were further divided into the relevant sub-criteria (Module 1: study concept or design, data acquisition, data analysis, data interpretation; Module 2: content development, content review, accountability) (Fig. 2). With the help of GPP, different attributes related to each of these sub-criteria were identified (overall 26 attributes).
All attributes were ranked on a Likert scale or a dichotomous scale (yes = 1 and no = 0). In Module 1, the following attributes were ranked on a Likert scale: critical review of the protocol, participation in scientific advisory boards/study meetings, planning and conduct of the study, contribution to statistical analysis, contribution to data cleaning in electronic data capture or tables/listings/figures reviews, and direction of the team to conclusions regarding the critical study results. Attributes such as writing the protocol/strategic direction, active involvement in implementing data collection and data management activities, statistical analysis of the data, and preparation of reports from the data analysis to help the team understand the conclusions were ranked on a dichotomous scale. Furthermore, in the category of data acquisition, the number of patients each investigator planned to complete; the number of patients screened, randomized, and completed; and the respective indices (randomized:screened, completed:planned, and completed:randomized) were also noted.
In Module 2, attributes such as writing most of the initial draft, providing an outline or strategic input for the manuscript, communicating with other contributors/medical writers during the drafting stage, participating in the content review on time, and critically reviewing the content were ranked on a Likert scale. The following attributes of the publication steering committee were ranked on a dichotomous scale: anticipating and communicating issues related to sponsor proprietary information and intellectual property; complying with the organizational publication policy, other ethical guidelines, and journal instructions (including agreeing to avoid premature release of study information and duplicate publication); disclosing any potential conflicts of interest and appropriately acknowledging support from any source (including funding); reading and approving the final version of the manuscript; agreeing to be responsible for all aspects of the study; ensuring that questions related to accuracy or integrity were appropriately investigated and resolved; and being able to identify coauthors (accountable for the integrity of their contributions). After ranking, certain attributes considered to have higher relative importance for authorship and authorship order were weighted by multiplying their scores by a factor between 2 and 5. The sum of all attribute scores yields a maximum of 120 (Module 1, 70; Module 2, 50).
It was assumed that, to qualify for authorship, every contributor should receive at least 50% of the maximum score of each module (Module 1, 35 of 70; Module 2, 25 of 50), while the overall score should be at least 78 of the maximum possible 120 (65%). The first author is the contributor who receives the highest score, followed by the others in descending order of their scores. However, the senior (last) author should be the one who scores the maximum for the concept or design of the study (Module 1.1), content review (Module 2.2), and accountability (Module 2.3). A similar approach can be taken to decide the sequence of authors if multiple authors have equal scores. The corresponding author is the contributor with the maximum score for content development (Module 2.1) and accountability (Module 2.3).
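The eligibility thresholds and ordering rule described above can be expressed as a short sketch. The contributor names and scores below are invented for illustration, and the tie-breaking rules based on the Module 1.1, 2.2, and 2.3 sub-scores are omitted for brevity.

```python
# Hypothetical sketch of the eligibility rules: at least 50% of each
# module's maximum (35/70 and 25/50) and at least 65% overall (78/120).
# Names and scores are illustrative only.

MODULE_MAX = {"Module 1": 70, "Module 2": 50}
OVERALL_MAX = 120  # 70 + 50

def qualifies(module_scores):
    """module_scores: dict such as {"Module 1": 40, "Module 2": 30}."""
    per_module_ok = all(
        module_scores[m] >= 0.5 * mx for m, mx in MODULE_MAX.items()
    )
    overall_ok = sum(module_scores.values()) >= 0.65 * OVERALL_MAX  # 78
    return per_module_ok and overall_ok

def author_order(contributors):
    """Sort qualifying contributors by total score, highest first.

    contributors: dict mapping name -> module_scores dict.
    Tie-breaking by sub-criterion scores is omitted here.
    """
    eligible = {n: s for n, s in contributors.items() if qualifies(s)}
    return sorted(eligible,
                  key=lambda n: sum(eligible[n].values()),
                  reverse=True)
```

For example, a contributor scoring 60 in Module 1 but only 20 in Module 2 would not qualify despite a high total, because the per-module floor enforces a substantial contribution to both manuscript-related and study-related work.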
Phase 3. Soliciting feedback and modifications
The tool was well received at both conferences [7-9]. Feedback included suggestions to update the tool to reflect revisions of the GPP and ICMJE guidelines. The audience consisted mostly of researchers, journal editors, and professional medical writers from around the world. Some editors showed interest in using this tool as an optional requirement for authors submitting publications arising from clinical trials. Upon further discussion, however, it was decided to increase the acceptability of the tool by disseminating it as a publication in a relevant medical journal. We incorporated this feedback and updated the tool to reflect the 2015 GPP3 and 2019 ICMJE guidelines.
Use of the authorship decision-support tool
The authorship decision-support tool was implemented in Microsoft Excel with appropriate formulae and macros. The user inputs each contributor’s name and scores him or her against each of the attributes using drop-down menus. Once the user completes Module 1, and before the manuscript drafting stage (Module 2) begins, the tool prompts the user to include all contributors who participated in the activities covered by Module 1. The tool automatically calculates each contributor’s total score, along with the scores for each module and sub-criterion. It also identifies the contributing and non-contributing authors, the senior author, and the corresponding author, and arranges the authors in order. The chair of the publication steering committee, or a designee, can input each contributor’s name in consultation with the contributor and score him or her for each of the relevant attributes (Fig. 3).
Discussion
Key results: The quantitative authorship decision-support tool presented in this article includes 26 attributes. We developed the tool to quantify authorship contributions in clinical trial publications by segregating the attributes into two modules and integrating them into a Microsoft Excel program.
Interpretation: It may not be fair to expect each author to be an ‘equal’ contributor. However, every author should contribute substantially and have a reasonable sense of accountability. Here, the significance of quantifying their role as contributing authors comes into play.
The ICMJE and GPP have provided a conceptual basis for authorship [3,4]. Still, with the ever-expanding field of clinical research publications and evolving transparency and ethical requirements, there are often deficiencies or limitations in putting these guidelines into practice.
After carefully studying these guidelines and the importance of several clinical trial processes in publication, the attributes were identified and ranked according to their prominence in determining authorship. For attributes that essentially reflect the views and perspectives of the respondent, a 5-point unidimensional psychometric scale (the Likert scale) was used [10]. In contrast, for attributes expected to have absolute responses, a 2-point dichotomous scale was used [11].
A substantial contribution to the criteria in both modules is essential to justify a claim of authorship. Therefore, we proposed a threshold of 50.0% for each module (Module 1, 35; Module 2, 25) and an overall score of 78 (65.0%) to be eligible for authorship. Using these thresholds would help eliminate the chance of granting authorship to every (non-substantial) contributor associated with the clinical trial. However, this is only a recommendation, and users can choose an appropriate threshold based on the unique procedural and scientific circumstances of a study.
Comparison with previous studies: Bhopal et al. [12] introduced a democratic method to score credits. They devised a list of 14 points and passed the onus of scoring each author to the other coauthors. The process is anonymized, and each individual is made to score the others, excluding himself or herself. The final authorship order is then agreed upon by the whole team.
The Authorship Order Score, proposed by Masud et al. [13], consists of 13 Likert-scale items based on four factors: conception, planning, execution, and writing. The author sequence is based on the final sum of the scores, ranging from 0 to 100. A simple percentage-based score, called the Author Contribution Index, only quantifies the contribution of each author relative to the others [14]. In another percentage-based system, called the Quantitative Uniform Authorship Declaration, there are four categories by which the percentages are calculated: conception and design, data collection, data analysis and conclusion, and manuscript preparation [15]. Warrender’s system provides scores rather than percentages based on four aspects (conception and design, data acquisition, analysis and interpretation, and manuscript preparation) [16]. Another unique matrix-based system uses four factors (ideas, work, writing, and stewardship). One should score each category, and the total sum should not be more than 1. This limit of 1 helps eliminate over-scoring and requires the user to provide a reasonable score, keeping in mind the true role of an author and giving a well-balanced score for each category [17].
Another formula-based scoring system, the Authorship Index (AuI), calculates the literary contribution of an author [18]; the corresponding and first authors receive a score of 1. The maximum score is 100, and an author’s position in the author list of each of his or her publications determines the final score. This seems reasonable because the ‘number’ of publications would no longer matter. However, this ‘sequence-determines-credit’ approach is not in itself an impartial measure, and the AuI does not define what constitutes a ‘good’ score. The Contributor Roles Taxonomy (CRediT) is a simple chart of 14 contributor roles [19]. However, it does not reflect the actual role or degree of contribution of each author relative to the others. The CRediT system is more of a self-declaration form that can be provided to journals when submitting an article.
Our quantitative authorship decision-support tool has certain advantages. First, we have attempted to identify specific processes in a clinical trial and its subsequent publications relevant to the different elements of these two guidelines. Further, the combined use of both Likert and dichotomous scales provided greater flexibility for the tool to accommodate attributes with different characteristics. This tool also has provisions to overcome issues arising when two or more authors receive an equal score.
Limitations: It can be argued that scoring attributes using a Likert scale is qualitative in a certain sense and prone to subjective decisions; however, there are no better alternatives to score these attributes at present. In addition, this tool still has not been systematically applied to any clinical trial publication process to check how it works.
Conclusion: As the concept of this authorship decision-support tool is implemented in the widely used Microsoft Excel application, it is intuitive and easy to use. The threshold percentage score for becoming a contributing author helps clinical trial sponsors reduce author inflation. It would also help medical journals ensure that authorship is properly attributed. We recommend that clinical trial sponsors and peers use this tool to determine authorship, and that medical journals encourage authors to submit the tool’s output along with their manuscripts. As a next step, we will systematically review relevant information from COPE, the Council of Science Editors, the World Association of Medical Journal Editors, other professional bodies, reputed publishing houses, and institutions to ensure that the attributes and criteria remain current and comprehensive. We plan to further develop this tool as a web-based system with a better user interface using HTML, CSS, Bootstrap, and CodeIgniter with a model-view-controller (MVC) architecture.
Notes
Conflict of Interest
Sam T. Mathew has been an editorial board member of Science Editing since 2014. He was not involved in the review process. Otherwise, no potential conflict of interest relevant to this article was reported. The opinions expressed in this article are the authors’ personal views and do not represent those of their affiliated organizations.
Funding
The authors received no financial support for this article.
Data Availability
All data generated or analyzed during this study are available from the corresponding author.
Acknowledgements
The authors thank the organizers of the Second International Congress on Medical Writing (March 2015), Ajman, UAE for the opportunity to present this concept as an oral presentation. The authors acknowledge the insightful suggestions from Mr. Tom Lang (USA), and Mr. Shaukat Ali Jawaid (Pakistan) during this conference. The authors also thank the organizers of the Asia Pacific Association of Medical Journal Editors Convention (August 2015), Manila, the Philippines for selecting this presentation for a panel discussion and the panelists, Dr. Trish Groves (UK), Prof. Dr. Aik Saw (Malaysia), Prof. Dr. Jose Lapena (Philippines), and Mr. Martin Delahunty (UK) for their feedback and suggestions.