
Common Metrics

Common metrics used in assessing research are quantitative metrics based on the number of outputs published and the number of citations those outputs have received. They can be split into author metrics, article metrics (sometimes known as publication metrics) and journal metrics.

How can metrics be used?

  • To help decision making
  • As part of CVs and/or funding applications
  • To assess how a piece of research has performed
  • In some international league tables of universities

To find out about specific metrics and their uses, see the accompanying glossary.

Information about other, complementary metrics can be found on our guide to altmetrics.

Article Metrics

Article metrics typically look at the number of times an article has been cited. They can be as simple as a count of citations over the life of the article, or algorithms may be used to normalise the count for the article's discipline and/or age.
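
As an illustration of how such normalisation can work, the sketch below divides an article's citation count by the average count for articles of the same discipline and publication year. This is a simplified, hypothetical example (the figures are invented, and real normalised metrics use more detailed reference sets), but it shows why raw citation counts from different disciplines are not directly comparable.

    # Simplified sketch of field- and age-normalised citations.
    # The reference averages below are invented for illustration, not real data.
    def normalised_citations(citations, discipline, year, field_year_average):
        """Divide an article's citations by the average for its discipline and year."""
        # A value of 1.0 means the article is cited about as often as an average
        # article of the same discipline and age; 2.0 means twice as often.
        expected = field_year_average[(discipline, year)]
        return citations / expected

    # Hypothetical averages for two (discipline, year) groups.
    averages = {("medicine", 2020): 12.0, ("mathematics", 2020): 3.0}
    print(normalised_citations(24, "medicine", 2020, averages))    # 2.0
    print(normalised_citations(6, "mathematics", 2020, averages))  # 2.0

Here 24 citations in medicine and 6 in mathematics indicate the same relative performance once the discipline is taken into account.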

You can find article metrics in databases such as Web of Science, Scopus or Google Scholar, and as widgets on the online version of the publication, which can show views and downloads as well as citations.

Articles should only be compared with articles of a similar age and discipline. Please visit the Responsible Metrics page for more information.

Author Metrics

Author metrics in their simplest form tell you what an author has published and how many times they have been cited.

There are multiple sources that will enable you to keep track of an author's output, e.g. Scopus, Web of Science and Google Scholar. Each database indexes outputs and records citations in different ways, so you should always say which database you used and when.

You should always review a publication list critically as papers may be wrongly attributed to an author, particularly if they have a common name. Many databases now use an author identifier to minimise or prevent this, and there is a drive within the academic publishing community for authors to use personal identifier schemes such as ORCID.

Journal Metrics

Journal-level metrics measure, compare and often rank research journals and scholarly publications. They are quantitative tools for evaluating the relative importance of a journal over a set time period. Journal-level metrics should be used to support your own expert judgement and that of your colleagues when looking for a venue to publish in. They should never be used to assess an article or an author.

Common sources of journal metrics are the Journal Citation Reports (JCR) service from Clarivate and the SCImago Journal Rank (SJR), which is based on Scopus data. Other journal metrics build on the same underlying citation data with their own algorithms, e.g. the Eigenfactor (Web of Science) and the Source Normalized Impact per Paper (SNIP, Scopus). Please see the Glossary for more information and links.

Appropriate use - Use with caution

While all metrics should be used responsibly and appropriately, there are some metrics that need particular care.

H-Index

The h-index uses a calculation based on the citation counts of an author's published papers as a way of assessing the impact of an author's publications. It is the largest value h such that the author has published h papers that have each been cited at least h times. For example, an h-index of 20 means the researcher has 20 papers each of which has been cited 20 or more times.
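
As a simple illustration (using invented citation counts, not data from any real author), the h-index can be computed from a list of per-paper citation counts:

    # Sketch: compute an h-index from a list of per-paper citation counts.
    # The citation counts in the examples are invented for illustration.
    def h_index(citation_counts):
        """Return the largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for position, cites in enumerate(counts, start=1):
            if cites >= position:
                h = position
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4 - four papers have at least 4 citations each
    print(h_index([25, 8, 5, 3, 3]))  # 3 - one very highly cited paper does not raise the h-index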

The h-index is strongly influenced by discipline, publication volume, career length and effective career length. This makes it very difficult to use responsibly and consistently, especially when assessing Early Career Researchers or individuals with protected characteristics. The h-index has been severely criticised by some funders, including UKRI.

There are some variations of the h-index that try to improve it, but it is not clear if any can be used responsibly. We recommend avoiding the h-index, but if it is required, please contact the Library for support.

Journal Impact Factor (JIF)

The Journal Impact Factor is a quantitative tool for evaluating the relative importance of a journal. It measures how often the papers a journal published in the previous two years are cited in a given year. Journal Impact Factors are available in the Journal Citation Reports (JCR), accessed via Web of Science.
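
As a worked sketch of the calculation (the figures are invented for illustration), the impact factor for a given year is the number of citations received that year by items the journal published in the previous two years, divided by the number of citable items it published in those two years:

    # Sketch of the two-year Journal Impact Factor calculation with invented figures.
    def impact_factor(citations_to_previous_two_years, citable_items_previous_two_years):
        """Citations in the JCR year to items from the previous two years,
        divided by the number of citable items published in those two years."""
        return citations_to_previous_two_years / citable_items_previous_two_years

    # e.g. 600 citations in 2023 to articles published in 2021-2022,
    # and 150 citable items published across 2021-2022:
    print(impact_factor(600, 150))  # 4.0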

The impact factor can only be used as an indication of a journal’s overall influence; it cannot be used to assess the quality of individual papers or the work of specific individuals. It is necessary to be aware of the selective coverage of titles by the JCR, and the danger of placing too much reliance on the methods used.

For example:

  • There can be some disagreement about whether letters, editorials, etc. should count as a ‘citable’ article.
  • Those journals that publish more review articles are likely to get a higher number of cites than those which publish more research reports.
  • You can only compare impact factors within a subject area. Differences in citation patterns can make any other comparison meaningless, e.g. the average for medicine (general) is 4.391, while for maths it is 0.794.

You should only use the Journal Impact Factor to support your own expert judgement and that of your colleagues, e.g. as a tie-break when choosing a publication venue, and you should use more than one metric. For example, consider subject-normalised metrics such as the Article Influence Score (JCR) and the SNIP (Scopus).