Abstract
 Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. The limitations and problems of these metrics are pointed out. We should be cautious not to rely too heavily on such quantitative measures when we evaluate journals or researchers.

Keywords: Eigenfactor score; h-index; Impact factor; Journal metrics; SCImago Journal Rank; Source Normalized Impact per Paper
Introduction
 There exist a variety of metrics that are used to indicate the level and the influence of scholarly journals. Most of these metrics are obtained by analyzing the citation data of journal articles. Among them, the impact factor is the best-known and most influential index. This index is calculated by a very simple method, but it also has several problems. A number of other metrics have been proposed for the purpose of correcting these problems and providing more reliable estimates. In the present review, we introduce the definitions of several journal metrics and the methods used to calculate them, and briefly explain their characteristics and defects.
Impact Factor and Related Metrics
 The idea of impact factor was proposed by Eugene Garfield in 1955 [1]. The Science Citation Index (SCI) was created based on this idea in 1964 and a quantitative evaluation of scholarly journals was launched for the first time. This index is annually announced in the Journal Citation Reports (JCR), which is currently managed by Clarivate Analytics, and is widely used by academic communities. Many related indices are also announced in the JCR.
 Impact factor, 5-year impact factor, immediacy index, and impact factor without self-cites
 In a given year, the impact factor of a certain journal is defined as the average number of citations per paper received in that year by the items published in the journal in the two previous years. More specifically, its definition is given by
 Impact factor of the journal J in the year X = A/B,
 where A is the total number of citations in the year X received by all items published in the journal J in the years (X−1) and (X−2), and B is the total number of citable items published in the journal J in the years (X−1) and (X−2). Citable items include only papers and reviews and do not include errata, editorials, and abstracts. In the counting of A, however, citations to all items published in J are included.
 The 5-year impact factor in the year X is similar to the ordinary (2-year) impact factor, except that it is calculated using the citation data during the 5 years from the year (X−1) to the year (X−5). This index is useful in academic disciplines where the number of citations is small or where it takes some time for published results to be accepted by many researchers. On the other hand, the immediacy index is calculated similarly to the impact factor, using the total number of citations received in the year X by all items published in the same year X. If this index is large, it means that the papers published in that journal are cited rather quickly.
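The two ratios defined above can be illustrated with a short Python sketch. The journal and all the numbers in the example are hypothetical; the functions simply implement the A/B divisions described in the text.

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """2-year impact factor in year X: citations received in X by items
    published in (X-1) and (X-2), divided by the number of citable items
    published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

def immediacy_index(citations_to_current_year, items_current_year):
    """Citations received in year X by items published in the same year X,
    divided by the number of those items."""
    return citations_to_current_year / items_current_year

# Hypothetical journal: 300 citations in 2017 to its 2015-2016 items,
# which numbered 120; 30 citations in 2017 to its 90 items of 2017.
print(impact_factor(300, 120))   # 2.5
print(immediacy_index(30, 90))   # roughly 0.333
```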
 Journal self-citation refers to the case where a paper published in the journal J is cited in the same journal. In the JCR, the impact factor without self-cites, which is obtained after excluding journal self-citations, is also announced. If the difference between the impact factor and the impact factor without self-cites is significantly large for a certain journal, that journal is sometimes excluded from the JCR list.
 Cited half-life and citing half-life
 The cited half-life is calculated using the number of citations received in the year X by all items published in a certain journal in all years. For example, let us suppose that the journal J received 1,285 citations in 2017. In Table 1, we show the (hypothetical) number of citations and the cumulative percentage classified by the publication year of the cited items. We find that the cumulative percentage reaches 50% between 2009 and 2008. If we assume that papers were cited equally in every month and calculate the year when the cumulative percentage becomes 50% to the first digit after the decimal point, then we find that the cited half-life is 9.1 years. This index measures for how long the published contents continue to be cited. In a similar manner, one can calculate the citing half-life using the papers cited by the journal J.
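The interpolation just described can be reproduced with the data of Table 1. The function below is an illustrative sketch that follows the text's assumption of citations being spread evenly within each publication year; the data list is the hypothetical one from the table.

```python
def cited_half_life(citations_by_age):
    """citations_by_age[0] is the number of citations received in year X
    by items published in year X itself, citations_by_age[1] by items
    from (X-1), and so on. Returns the interpolated number of years
    needed to accumulate 50% of all citations."""
    total = sum(citations_by_age)
    cumulative = 0
    for age, c in enumerate(citations_by_age, start=1):
        if cumulative + c >= total / 2:
            # linear interpolation inside the year where 50% is crossed
            return age - 1 + (total / 2 - cumulative) / c
        cumulative += c
    return float(len(citations_by_age))

# Data of Table 1 (2017 back to 2008; 2007 and earlier lumped together)
citations = [23, 65, 147, 138, 58, 44, 51, 45, 68, 62, 584]
print(round(cited_half_life(citations), 1))  # 9.1
```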
 Median impact factor and aggregate impact factor
 There is a problem with the impact factor in that it shows rather large variations among academic disciplines. For that reason, the JCR classifies journals by subject category and provides several metrics representing each category. The median impact factor is the impact factor of the journal placed precisely in the middle when the journals in a certain category are arranged in the order of their impact factors. When the total number of journals in the category, N, is an odd number, it is the impact factor of the [1+(N−1)/2]-th journal. When N is even, it is the average of the impact factors of the (N/2)-th and [1+(N/2)]-th journals.
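The odd/even rule above can be sketched as a few lines of Python; the impact-factor values in the example are made up for illustration.

```python
def median_impact_factor(impact_factors):
    """Median impact factor of a subject category: the impact factor of
    the middle journal for odd N, the mean of the two middle journals
    for even N, after sorting by impact factor."""
    s = sorted(impact_factors, reverse=True)
    n = len(s)
    if n % 2 == 1:
        return s[(n - 1) // 2]                  # the [1+(N-1)/2]-th journal
    return (s[n // 2 - 1] + s[n // 2]) / 2      # (N/2)-th and [1+(N/2)]-th

print(median_impact_factor([4.1, 2.5, 1.8, 1.2, 0.9]))  # 1.8 (odd N)
print(median_impact_factor([4.1, 2.5, 1.8, 1.2]))       # 2.15 (even N)
```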
 The aggregate impact factor is obtained by dividing the number of citations received in the year X by the items published in all journals of a certain category in the years (X−1) and (X−2) by the total number of citable items published in those journals during the same two years. Since the distribution of impact factors is not symmetric but highly skewed, the aggregate impact factor tends to be substantially larger than the median impact factor, as can be seen in Table 2. The aggregate immediacy index, the aggregate cited half-life, and the aggregate citing half-life are also provided in the JCR.
 Problems of the impact factor and editorial ethics
 As we mentioned already, there is a problem with the impact factor in that it shows large variations among academic disciplines. In Table 2, we show the aggregate impact factor, the median impact factor, the aggregate cited half-life, and the average number of citations per paper for several subject categories listed in the JCR in 2011 and 2013. We notice a trend that the impact factors are usually larger in the disciplines where more papers are cited on average and where the cited half-life is shorter.
 The impact factor is the arithmetic mean of the number of citations received by the items published in a certain journal. However, it is well-known that the distribution of the number of citations within a given journal is highly skewed: a small fraction of highly cited papers pulls the mean upward. As a result, the impact factor tends to overestimate the importance of individual papers; most papers are cited substantially less than the journal impact factor suggests. Therefore it is not accurate to judge the quality of an individual paper or researcher based on the journal impact factor.
 As competition among scholarly journals becomes stronger, some journal editors adopt policies to manipulate the journal impact factor deliberately. One ethically troubling practice is to induce authors to add journal self-citations. Publishing more review papers than necessary and deliberately publishing papers with a higher chance of citation at the beginning of a year are similar practices. Such behavior arises because too much importance is attached to the impact factor, and it distorts the metric unfairly.
Eigenfactor Score and Article Influence Score
 The Eigenfactor score and the article influence score were developed by Bergstrom et al. [2] to overcome the defects of the impact factor and have been provided by the JCR since 2007. The concept of the Eigenfactor is based on the theory of complex networks. For its calculation, one uses a method similar to the PageRank algorithm, which was proposed by Brin and Page [3] and has been used in the Google search engine. In order to calculate the Eigenfactor score, we first define a database consisting of N journals and construct an N×N matrix H, the ij component of which is given by
 H_{ij} = Z_{ij} / Σ_k Z_{kj},

 where Z_{ij} represents the number of citations in the journal j in the year X received by the items published in the journal i during the 5 years from the year (X−5) to the year (X−1). Since journal self-citations are excluded in the calculation of the Eigenfactor score, all diagonal elements of the matrix Z are zero. Next we define a vector, a, called the article vector. The ith component of this vector, a_{i}, is obtained by dividing the total number of papers published in the journal i during the 5 years from the year (X−5) to the year (X−1) by the total number of papers published in the whole database during the same period. In this kind of calculation, one needs to take special care of the dangling nodes and the dangling clusters. An example of a dangling node is the case where a certain journal j does not cite any of the journals in the database, but its papers are cited by other journals. Then the matrix elements Z_{kj} are zero for all k. Since the jth column of the matrix H is undefined, it is necessary to replace this column by a suitable vector. We define a matrix H^{*}, which is obtained by replacing all columns corresponding to the dangling nodes by the article vector a, and then introduce an N×N matrix P given by
 P = αH^{*} + (1−α) a e^{T},

 where α is an appropriate constant, usually selected to be 0.85, and e^{T} denotes a row vector of N ones, so that every column of the matrix a e^{T} is equal to the article vector a. The journal influence vector, v, is defined to be the eigenvector corresponding to the largest eigenvalue of the matrix P. The ith component of the vector v has the meaning of a weighting factor representing the relative importance of the journal i within the group of journals in the database. Finally, the Eigenfactor score of the journal i, F_{i}, is calculated using
 F_{i} = 100 [H^{*}v]_{i} / Σ_j [H^{*}v]_{j}.

 According to this definition, the sum of the Eigenfactor scores of all journals in the database is equal to 100. Since this quantity is not normalized by the total number of papers published in a given journal, it tends to be larger for journals publishing a larger number of papers, if all other conditions are the same. A useful characteristic of the Eigenfactor is that it makes it possible to compare journals belonging to different academic disciplines directly, because the differences among disciplines are adjusted for in this metric. The article influence score I_{i} measures the influence of individual papers published in the journal i and is defined by
 I_{i} = 0.01 F_{i} / a_{i}.

 This quantity can be used as an alternative to the impact factor. The mean article in the entire JCR database has an article influence score of 1.
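The steps above can be sketched in Python for a toy database. This is a minimal illustration, not the official JCR implementation: the citation matrix, article counts, and the use of simple power iteration in place of an exact eigenvector solver are assumptions made for the example.

```python
import numpy as np

def eigenfactor_scores(Z, articles, alpha=0.85, iters=500):
    """Sketch of the Eigenfactor calculation. Z[i, j] = citations from
    journal j to items of journal i over the 5-year window; 'articles'
    holds each journal's paper count over the same window."""
    Z = Z.astype(float).copy()
    np.fill_diagonal(Z, 0.0)                      # drop journal self-citations
    n = len(articles)
    a = articles / articles.sum()                 # article vector
    col = Z.sum(axis=0)
    H = np.divide(Z, col, out=np.zeros_like(Z), where=col > 0)
    Hstar = H.copy()
    Hstar[:, col == 0] = a[:, None]               # dangling journals get column a
    P = alpha * Hstar + (1 - alpha) * a[:, None]  # every column of P sums to 1
    v = np.full(n, 1.0 / n)
    for _ in range(iters):                        # power iteration -> leading eigenvector
        v = P @ v
        v /= v.sum()
    w = Hstar @ v
    EF = 100 * w / w.sum()                        # Eigenfactor scores (sum to 100)
    AI = 0.01 * EF / a                            # article influence scores
    return EF, AI

# Toy 3-journal example with made-up citation counts
Z = np.array([[0, 5, 1],
              [3, 0, 2],
              [4, 2, 0]])
EF, AI = eigenfactor_scores(Z, np.array([10.0, 20.0, 30.0]))
print(f"{EF.sum():.1f}")  # 100.0
```

Power iteration converges here because the damping term (1−α) a e^{T} makes P a positive column-stochastic matrix, so its leading eigenvector is unique.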
CiteScore, Source Normalized Impact per Paper, and SCImago Journal Rank
 In this section, we review three journal metrics provided by the Scopus database, which are the CiteScore, the Source Normalized Impact per Paper (SNIP), and the SCImago Journal Rank (SJR).
 CiteScore
 The CiteScore is very similar to the impact factor. It is calculated using the Scopus data and is defined as the average number of citations per item received by the items published in the journal in the three previous years, rather than in the two previous years as in the case of the impact factor. Another difference from the impact factor is that both the numerator and the denominator include all document types.
 SNIP
 The SNIP was proposed by Moed [4] as a metric that adjusts for the different citation patterns of different academic disciplines. This metric is provided in Scopus and can be used instead of the impact factor. The SNIP is defined as
 SNIP=RIP/RDCP,
 where the acronyms RIP and RDCP stand for “raw impact per paper” and “relative database citation potential,” respectively. The RIP is the number of citations in the year X received by the papers published in a certain journal in the three previous years, (X−1), (X−2), and (X−3), divided by the total number of those papers. It is similar to the impact factor, except that a 3-year citation window is used and only citations of papers are counted, while those of errata and editorials are excluded. In order to define the RDCP, one first needs to define the DCP, the database citation potential. Let us consider the papers that, in the year X, cited the papers published in a certain journal in the three previous years, (X−1), (X−2), and (X−3). Among the references of these citing papers, we consider only the references published during the same 3-year period. The DCP is obtained by dividing the total number of those references by the number of citing papers. In this calculation, only citations of the journals belonging to the database are counted and other journals are ignored. The RDCP is obtained by normalizing the DCP by the median DCP of the database.
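Once the RIP and DCP values are in hand, the final SNIP is a simple ratio. The sketch below assumes the three inputs have already been computed as described above; all numbers are hypothetical.

```python
def snip(rip, dcp, median_dcp):
    """SNIP = RIP / RDCP, where RDCP = DCP / (median DCP of the database).
    A journal in a field with a high citation potential (large DCP) is
    scaled down; a journal in a low-citation field is scaled up."""
    rdcp = dcp / median_dcp
    return rip / rdcp

# Hypothetical journal: RIP = 2.4 citations per paper; its citing papers
# carry DCP = 4.0 in-window references each; database median DCP = 2.0.
print(snip(2.4, 4.0, 2.0))  # 1.2
```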
 SJR
 The SJR is provided by Scopus together with the SNIP [5]. It is calculated iteratively in the following manner. First, one introduces a vector S, which represents the relative importance of the journals belonging to the database of N journals; S_{i} is the weighting factor of the journal i. In the first stage of the iteration, the values of S_{i} are assigned arbitrarily. The final result does not depend on the choice of the initial values. In the next step, the updated values of S_{i} are calculated using the formula
 S_{i} = (1−d−e)/N + e a_{i} + d Σ_j H^{*}_{ij} S_{j},

 where the constants d and e are chosen to be d=0.85 and e=0.1, and the matrix H^{*} and the article vector a are defined similarly to the case of the Eigenfactor calculation, except that the 3-year citation window is used. Using the updated values of S_{i}, the calculation is repeated until all values converge. Finally, the SJR of the journal i is calculated using
 SJR_{i} = c S_{i} / A_{i},

 where A_{i} is the total number of papers published in the journal i during the 3-year period and c is a normalization constant.
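The iteration can be sketched as follows. The update rule and the final rescaling are simplified assumptions based only on the description above (damping constants d and e, matrix H*, article vector a), not the exact SCImago implementation, and the toy matrix is made up for the example.

```python
import numpy as np

def sjr_scores(Hstar, a, articles, d=0.85, e=0.1, tol=1e-12):
    """Sketch of the SJR iteration. Hstar: column-normalized citation
    matrix over the 3-year window (dangling columns already replaced by
    the article vector a); 'articles': papers per journal in the window."""
    n = len(a)
    S = np.full(n, 1.0 / n)      # arbitrary start; the fixed point is unique
    while True:
        S_new = (1 - d - e) / n + e * a + d * (Hstar @ S)
        if np.abs(S_new - S).max() < tol:
            break
        S = S_new
    per_paper = S_new / articles            # size-independent prestige per paper
    return per_paper / per_paper.mean()     # rescaled so the mean is 1 (assumed)

# Toy 3-journal example (columns of Hstar sum to 1)
Hstar = np.array([[0.0, 0.5, 0.3],
                  [0.6, 0.0, 0.7],
                  [0.4, 0.5, 0.0]])
a = np.array([0.2, 0.3, 0.5])
scores = sjr_scores(Hstar, a, np.array([20.0, 30.0, 50.0]))
print(f"{scores.mean():.1f}")  # 1.0
```

The loop always terminates: the linear part of the update is d·H*, whose spectral radius is at most d = 0.85 < 1, so the map is a contraction and the starting values do not matter, as stated in the text.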
h-index, g-index, and i10-index
 The h-index was proposed by Hirsch in 2005 [6] as a new metric for evaluating the research ability of an individual researcher. This index is calculated using all citations received by the papers published by a specific researcher. If we arrange those papers in decreasing order of the citations received by them, the h-index is the largest number h such that h papers have each been cited at least h times. Since it is possible to assign an h-index to the group of papers published in a specific journal in a specific year, it can also be used as a journal metric.
 Since the h-index is obtained by using the total number of citations of each paper, it increases monotonically with time. It has a shortcoming in that researchers with a small number of very influential papers have low indices. In order to correct this shortcoming, Leo Egghe proposed a modified index named the g-index. This index is defined as the largest number g such that the g most cited papers among a certain group of papers received at least g^{2} citations in total. The g-index is never smaller than the h-index. In addition to the h-index, Google Scholar provides a metric named the i10-index, which is the total number of papers authored by a certain researcher that have been cited at least 10 times.
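The three author-level indices above can be computed directly from a list of per-paper citation counts. The counts in the example are made up; note that some variants of the g-index pad the list with fictitious zero-citation papers so that g can exceed the number of papers, while the sketch below caps g at the number of papers.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, 1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers have at least g**2 citations
    in total (capped at the number of papers in this variant)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, 1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def i10_index(citations):
    """Number of papers cited at least 10 times."""
    return sum(1 for c in citations if c >= 10)

papers = [33, 30, 20, 15, 7, 4, 1]
print(h_index(papers), g_index(papers), i10_index(papers))  # 5 7 4
```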
Conclusion
 In this review, we have surveyed the definitions and the characteristics of various kinds of metrics used for the quantitative evaluation of scholarly journals. All of these metrics are obtained from the analysis of citation data. In addition to the metrics surveyed here, new kinds of metrics continue to be devised. More recently, interest in alternative metrics, or ‘altmetrics,’ which go beyond conventional citation analysis, has been growing rapidly. We emphasize, however, that no metric is perfect and that all metrics have limitations and problems. Therefore it is necessary not to rely too heavily on quantitative measures when we evaluate journals, papers, researchers, and institutions.
Notes

No potential conflict of interest relevant to this article was reported.
Table 1. Number of citations received in 2017 and its cumulative percentage, classified by the publication year of the cited items

Publication year    Citations in 2017    Cumulative percentage
2017                 23                   1.79
2016                 65                   6.85
2015                147                  18.29
2014                138                  29.03
2013                 58                  33.54
2012                 44                  36.97
2011                 51                  40.93
2010                 45                  44.44
2009                 68                  49.73
2008                 62                  54.55
2007–all            584                 100
Table 2. The aggregate impact factor, the median impact factor, the aggregate cited half-life, and the average number of citations per paper for several subject categories listed in the Journal Citation Reports in 2011 and 2013 (values given as 2011/2013)

Subject category                         Aggregate IF   Median IF     Aggregate cited half-life   Citations per paper
Cell biology                             5.760/5.816    3.263/3.333   6.9/7.2                     53.4/55.0
Chemistry, multidisciplinary             4.738/5.222    1.316/1.401   5.9/5.6                     40.9/44.6
Nanoscience & nanotechnology             4.698/4.902    1.918/1.768   3.8/4.1                     35.5/39.1
Astronomy & astrophysics                 4.242/4.462    1.683/1.676   6.8/7.0                     49.3/53.2
Materials science, multidisciplinary     3.107/3.535    1.132/1.380   5.2/5.4                     32.2/34.8
Physics, multidisciplinary               2.680/2.953    0.983/1.300   7.7/8.0                     30.4/33.8
Engineering, mechanical                  1.232/1.573    0.743/0.889   7.6/8.0                     25.2/28.1
Mathematics                              0.709/0.729    0.561/0.582   >10.0/>10.0                 19.8/21.0