If you regularly hear colleagues talking about things like Impact Factor, SJR, SNIP, CiteScore, or Google Scholar Metrics but don’t fully understand what these mean—we can help you. Here we provide a quick guide to the most relevant journal metrics that you can use when selecting a journal for your paper.
When you write a paper for a journal, at some stage—sooner rather than later—you must identify the journal you want to submit to. With the high number of journals available, this selection is not an easy exercise. You need to consider many factors: the aims and scope of the journal, its audience, the relevance of your paper to the journal, the journal’s popularity, its ranking and impact, its visibility in the field, its accessibility to your peers, its peer-review speed, and its acceptance rate.
It can easily feel confusing when your co-authors are rattling off journal numbers, scores, and metrics that you’ve heard of but don’t exactly understand. When it’s entirely up to you to decide, which journal should you really pick? Which of these indicators should you consider? What do they mean?
Knowing the most relevant journal metrics can help you to make a better decision regarding which journal you’ll aim for. We prepared the factsheet Journal metrics for you with a quick overview of the metrics discussed here that you can download for free.
1. What are journal metrics?
Journal metrics measure, compare, and rank journals as outlets for scholarly research publications. They aim to measure the impact and rank of a journal within a research field, as well as its usage and publication speed.
The oldest journal metric is the Journal Impact Factor, published since the mid-1970s. Since then, many alternative metrics have been created by publishers and related companies. Most of them build on the number of citations to items published in the journal.
Besides journal metrics, you can also find specific metrics for authors, for articles, data, or for the use and distribution of scholarly work on social media (Altmetrics). Here, we’ll focus on journal metrics alone.
2. Why use journal metrics?
Journals are not all the same. They differ in the quality of their contributions, in their visibility and prestige in their fields, in audience size, and in the number and topics of papers they publish. If you’ve done good research and spent a lot of time and care on writing a well-crafted paper, you’ll want to publish it in a good journal. But how do you identify a good journal? Journal metrics aim to support you in making that decision. It’s in the nature of each metric that it measures slightly different things, based on the different methodologies and sources it uses. Therefore, do not rely on the result of a single metric alone when deciding on a journal.
Journal metrics can also play an important role in research assessment and management, where the indicators are used to judge the qualities of researchers and their research output. They can also play a role in academic career development, or when you’re applying for an academic job.
3. Where do you find these metrics?
Two platforms provide access to most of the journal metrics: Web of Science and SCOPUS. They both calculate the metrics from the pool of journals indexed in their databases. But you can also find most metrics on their own separate websites, or you’ll find specific data on individual journals’ websites.
4. How not to use journal metrics?
Journal metrics—or journal rankings in general—need to be used with caution! Journal metrics are not universal for all disciplines, and they’re built on different time frames which can make them difficult to compare. They are also created only to compare journals with one another—not papers, authors, or institutes.
Therefore, they should not be used to assume that any paper within a journal, or any author published in it, reflects the journal’s rank as expressed by the journal metric. You can find great papers with a large impact in lower-ranked journals, and not every paper in a high-ranked journal is a great one or has a great impact.
There is a huge debate over the use and misuse of journal metrics in research assessment and career development. See Simons (2008), Pendlebury (2009), Marks et al. (2013), and Hicks et al. (2015) for further details. The San Francisco Declaration on Research Assessment, known as the DORA statement, recommends that journal-based metrics should not be used as a proxy for assessing the research quality of articles or researchers, or play a role in hiring or promotions. The Metric Tide report, commissioned by the UK government’s research councils, suggests that a combination of journal metrics, rather than a single one, be used in any form of assessment. So, use the metrics mentioned below with care!
5. Impact factor
The Journal Impact Factor or Impact Factor—often abbreviated as JIF or IF—is the most commonly used journal metric. It is built on citations of papers published within a journal. It provides the average number of times articles from a journal published in the past two years have been cited in a specific year.
It is calculated by dividing the number of citations received in a given year by the total number of articles published in the preceding two or five years, giving the 2-year IF or the 5-year IF. The 5-year IF tends to be higher than the 2-year one. For smaller journals, or for research fields where it takes longer for work to be cited, the 5-year IF is preferred.
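The arithmetic can be sketched in a few lines of Python. All journal figures below are invented illustration values, not data for any real journal:

```python
def impact_factor(citations_in_year, articles_in_window):
    """Citations received in the census year to articles published in the
    window (the 2 or 5 preceding years), divided by the number of those articles."""
    return citations_in_year / articles_in_window

# Hypothetical 2-year IF for 2021: citations in 2021 to articles from 2019-2020.
print(impact_factor(480, 240))   # 2.0

# Hypothetical 5-year IF: a longer citing window often yields a higher value.
print(impact_factor(1500, 600))  # 2.5
```

The same division underlies both variants; only the length of the publication window changes.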
Not all journals have an IF—only those included in the Science and Social Science Citation Indexes. The IF for each journal covered is published annually each summer in the Journal Citation Reports, a sub-database of the Web of Science by Clarivate. A higher IF indicates higher journal prestige. Since the IF reflects the yearly average number of citations to recent articles, an IF of 2.0 means that articles published in this journal in the preceding two years were cited, on average, twice in that year. The IF is not suitable for comparing impact across academic disciplines, but it provides a ranked value within one field.
See Larivière & Sugimoto (2019) for further details on the history, critique and the effect of the IF.
6. Immediacy index
The Immediacy Index from Web of Science expresses the average number of times an article is cited in the year it is published. It indicates how quickly articles in a journal are cited. It is of particular relevance for journals in topic areas with high urgency or cutting-edge research, where it is important to get the research out as quickly as possible. It is calculated by dividing the number of citations to papers published in a given year by the number of papers published in that year.
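As a sketch, the Immediacy Index is the same kind of ratio as the IF, restricted to the publication year itself. The numbers below are invented for illustration:

```python
def immediacy_index(same_year_citations, papers_in_year):
    """Citations received in a year to papers published in that same year,
    divided by the number of papers published that year."""
    return same_year_citations / papers_in_year

# Hypothetical journal: 120 papers published in a year, cited 30 times that year.
print(round(immediacy_index(30, 120), 2))  # 0.25
```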
7. Eigenfactor score
The Eigenfactor Score—accessible through the Web of Science—rates the total importance of a journal by not only counting the number of incoming citations, but also considering the significance of those citations. Citations from higher-ranked journals make a larger contribution to the Eigenfactor than citations from lower-ranked journals. The Eigenfactor scales with the size and importance of the journal: journals with a higher Eigenfactor have a higher impact on their fields. It is provided by Eigenfactor.org, a project initiated by two researchers from the University of Washington.
8. Source Normalised Impact per Paper (SNIP)
Source Normalised Impact per Paper (SNIP) is a field-normalised assessment of journal impact, which means it can be used for cross-comparison between fields of research with different publication and citation practices. Developed by Leiden University and using the SCOPUS database, it measures citation potential as the number of citations that a journal would be expected to receive for its subject field. SNIP is calculated over a 3-year time frame. Citations are weighted by the citation potential of the journal’s subject category. SNIP considers the context of a citation: the value of a single citation in a field where citations are less likely is greater than in fields where citations are very common.
9. Scimago Journal & Country Rank (SJR)
The Scimago Journal & Country Rank (SJR) is based on weighted citations in a specific year to papers published in the previous 3 years. Citations are weighted by the prestige of the citing journal, so that a citation from a top journal has more impact than a citation from a low-ranked journal. SJR uses the SCOPUS database and is calculated annually, considering both the number and the relevance of the incoming citations.
10. Google Scholar Metrics h5
Google’s h5 Index is not strictly a journal metric; it builds on the h-index, which is used as an indicator to measure the impact of a single author. Google’s Scholar Metrics h5 is based on the papers published by a journal over the previous 5 complete calendar years, with a minimum of 100 papers in this period. The h5 Index is the largest number h such that h articles published in that period have each been cited at least h times. The h5 Index therefore cannot be dominated by one or a few highly cited articles.
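The underlying h-index rule ("the largest h such that h articles have at least h citations each") can be sketched as follows, using invented citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers have been cited at least h times each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank   # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for a journal's papers over 5 years:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4: four papers have at least 4 citations each
```

Note that a single paper with 1,000 citations raises the h-index by at most one step, which is why the h5 Index resists being dominated by a few highly cited articles.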
11. CiteScore
The CiteScore metric by Elsevier is very similar to the Impact Factor. It measures the average number of citations received by articles published in a journal during a four-year window. It is available for journals indexed in the SCOPUS database. The CiteScore calculation only considers content that is typically peer-reviewed, such as articles, reviews, conference papers, book chapters, and data papers.
12. Views and downloads
A simple but effective measure is to look at the number of times a journal’s papers are viewed and downloaded. Larger journals will of course have higher numbers of readers, and since the numbers can only be captured on the publisher’s website, not all views and downloads will be covered by the metric. And, naturally, downloading or viewing a paper doesn’t mean it is read.
When selecting a journal for a paper, you can hardly ignore current journal metrics. To get a good picture of a journal’s performance and its ranking in your field, consider looking at several journal metrics. Download our free factsheet Journal metrics which gives you a quick overview.
- Factsheet Journal Metrics
- Elsevier, 2021. Measuring a journal’s impact.
- Hicks, D. et al. 2015. Bibliometrics: The Leiden Manifesto for research metrics. Nature 520, 429–431.
- Larivière, V., Sugimoto, C.R. 2019. The Journal Impact Factor: A Brief History, Critique, and Discussion of Adverse Effects. In: Glänzel, W., Moed, H.F., Schmoch, U., Thelwall, M. (eds) Springer Handbook of Science and Technology Indicators. Springer Handbooks. Springer, Cham.
- Marks, M.S. et al. 2013. Editorial: Misuse of journal impact factors in scientific assessment. Traffic 14, 611-612.
- Metric Tide Report. 2015. Report of the Independent Review of the Role of Metrics in Research Assessment and Management.
- Pendlebury, D.A. 2009. The use and misuse of journal metrics and other citation indicators. Archivum Immunologiae et Therapiae Experimentalis 57, 1-11. DOI 10.1007/s00005-009-0008-y.
- San Francisco Declaration on Research Assessment (DORA), 2012.
- Simons, K. 2008. The Misused Impact Factor. Science, 10 Oct 2008, Vol. 322, Issue 5899, p. 165. DOI: 10.1126/science.1165316.
- Springer, 2021. Journal metrics.
- Taylor & Francis, 2021. Understanding journal metrics.
- Wiley Authors Services, 2021. Understand journal metrics before you submit.
Do you want to successfully write and publish a journal paper? If so, please sign up to receive our free guides.
© 2021 Tress Academic
#WritingPapers, #PaperWriting, #JournalMetrics #PerformanceIndicators #Bibliometrics #ImpactFactor #SNIP