Measuring Research Impact: Responsible Metrics

Use bibliometrics to measure the impact of your publications

Policy

Contact Us

For help or advice with any Measuring Research Impact / Bibliometrics query please email eprints@soton.ac.uk.

Also see our training offers below to help you understand how to use these resources appropriately.

Key Principles

The University of Southampton Responsible Research Metrics Policy sets out terms for using appropriate research metrics responsibly. The University is a signatory of DORA, the San Francisco Declaration on Research Assessment.

A brief overview of the key principles is given here, but for full details please familiarise yourself with the policy and the guidance provided.

  • The University acknowledges that there are different practices across and between disciplines.
  • Therefore, assessment processes should be tailored to the needs of the discipline and the role a candidate is being assessed for.
  • The policy does not aim to standardise or prohibit the use of metrics entirely; rather, it establishes the principles whereby appropriate metrics are used responsibly, especially concerning equality and diversity.

Where metrics are used, this should be made transparent to everyone involved. For example, if a suite of metrics is going to be used in a personal review alongside qualitative information, the appraisees should be made aware of this ahead of time.

It should be clear why a metric is being used. In practice, a question about the research outputs being assessed should be defined before selecting the most appropriate metrics to answer that question.

  • For example, if I am trying to assess the impact of an individual piece of research, it would be inappropriate to use a journal-level metric (such as the Journal Impact Factor) as a proxy; instead, I could use my expert judgement combined with field-weighted citation impact and the Altmetric.com Attention Score, compared with articles of a similar age.

Accompany any metrics used with explanations in unambiguous plain English, to ensure end users understand the data used, along with their reliability and limitations.

Where possible, normalise metrics to diminish the effects of comparing across disciplines, publication dates, and so on. If you cannot normalise, then only compare like for like, i.e. make comparisons within a single discipline or a limited period of publication.

Be mindful that chronological time is not the same as effective time. Individuals with protected characteristics may not be directly comparable because of reduced working hours or prolonged periods of absence.

Consistency of practice is highly important. If you cannot obtain reliable data or apply the same methods across the entire pool of research being assessed, you should not use metrics at all.

Know your data and their limitations.

Is the use of a metric necessary and, on balance, is it any more helpful than expert testimony? If not, metrics should not be used.

It is not appropriate to use metrics as the sole source of information when making decisions that will affect an individual’s personal circumstances. This is especially true of decisions affecting employment status, but it also applies to personal reputation in formal or informal contexts.

Is the metric you are using fit for purpose? All metrics should be tailored to the question being asked. If the methods or data do not fit the aims, do not use the metric at all (e.g. do not use a journal metric to infer the quality of an individual output).

The use of the journal impact factor and other journal rankings for assessing a person and individual outputs is effectively prohibited.

Where possible, others should be able to reproduce the results of a metric analysis; where a result cannot be independently verified, the appropriateness of that metric should be reassessed.

Where possible, verify observations by using more than one metric.

Legacy issues should not be an excuse. If a metric is no longer fit for purpose, it should not be used. Continuous review of commonly used metrics should take place, especially concerning appropriateness and equality.

Guidance

Before using metrics, ensure you have read and understood the University of Southampton’s Responsible Research Metrics Policy. Remember: journal citation metrics are not an appropriate indicator of the quality of an individual piece of work.

If you would like more information about using these metrics responsibly please contact ePrints@soton.ac.uk.

Altmetrics is a movement to provide information about the attention a research output receives on social media, in tabloids, on TV and radio, on Wikipedia, in public policy, and in other influential public spaces. See our Altmetrics page for more details.

Examples include Altmetric.com (integrated into ePrints Soton) and PlumX (integrated with Scopus and DelphiS).

Like citation metrics, altmetrics should only be used as an imprecise indicator of how much attention an article is receiving relative to articles of a similar age; while excellent scores may be indicative of excellent engagement, the opposite is not necessarily true.

Also like citation metrics, they cannot tell the difference between negative and positive mentions, i.e. negative media attention can also lead to a high score (e.g. the paper 'Attractiveness of women with rectovaginal endometriosis' scores highly in both Altmetric and PlumX).

Field-weighting or field-normalisation typically rescales a result (e.g. a citation count) by a factor reflecting the citation potential within a disciplinary field or among articles of a similar age. This approach makes metrics more comparable across years or disciplines and can help the user distinguish genuine citation performance from the citation behaviour of a particular disciplinary culture.
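
To make the idea concrete, the sketch below shows the arithmetic behind a field-weighted score: an output's citation count divided by the average citations received by comparable outputs (same field, type, and publication year). The function name and the figures are invented for illustration; in practice such values should come from a tool like SciVal rather than your own calculation, as the next paragraph explains.

```python
# Illustrative sketch only: all names and numbers are invented.
# Real field-weighted values (e.g. FWCI) come from tools such as SciVal.

def field_weighted_impact(citations: int, expected_citations: float) -> float:
    """Ratio of an output's citations to the average citations received by
    comparable outputs (same field, type, and publication year)."""
    if expected_citations <= 0:
        raise ValueError("expected citations must be positive")
    return citations / expected_citations

# An article with 12 citations in a field where similar articles average 8
# scores 1.5 (50% above the field average); the same 12 citations in a
# faster-citing field averaging 30 score only 0.4.
print(field_weighted_impact(12, 8.0))   # 1.5
print(field_weighted_impact(12, 30.0))  # 0.4
```

The same raw citation count can therefore sit well above average in one field and well below it in another, which is why unnormalised counts should not be compared across disciplines.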

This YouTube video on how SNIP and SJR (below), the field-weighted citation metrics provided by Scopus, are calculated can be useful for explaining this concept. Field-normalising metrics yourself is very difficult and time-consuming, so it is not recommended; see the Bibliometrics checklist matrix for normalised metrics you can use where appropriate.

The University of Southampton subscribes to SciVal, a benchmarking tool linked to Scopus.

  • SciVal can help analyse and compare citation and publication trends between individuals, research groups, institutions and organisations, countries, and sub-disciplines.
  • This can be useful for identifying potential collaborators and emerging topic areas into which your research could expand.
  • It is advised that you take part in one of the Library’s training offers to understand how to use these resources appropriately.
  • If you would like to know more about using SciVal contact ePrints@soton.ac.uk.

Use of Metrics for Recruitment, Promotion, etc.

The following processes are likely to involve some form of academic assessment, to which the Responsible Research Metrics Policy should be applied: probation, appraisal, recruitment, and promotion.

An assessor must review the Responsible Research Metrics Policy in detail and be able to demonstrate that, on balance, the metrics used for assessment are more suitable than expert judgement alone. The use of metrics to assess research performance should be considered in context. Where possible, we recommend including a reference to the source data, for example a web link and the date the information was accessed.

It is highly recommended that assessors ignore any reference an applicant makes to journal impact factors or journal rankings when assessing the quality of a piece of research. An applicant should not be penalised for including journal-based metrics, but if they fail to explain why the research is important, impactful, or rigorous, journal-based metrics should not be used as a substitute for this missing information; to do so would go against the Responsible Research Metrics Policy. The h-index is also commonly supplied by applicants in some disciplines. This index is biased and lacks precision, and while some of its biases are diminished when comparing like for like, it can still have a negative impact on individuals with protected characteristics.
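
For reference, the h-index is the largest number h such that an author has h outputs with at least h citations each. The minimal sketch below (with invented citation counts) illustrates one source of the bias mentioned above: because the index can only grow with the number of outputs, it favours long, uninterrupted publication records and disadvantages anyone with career breaks, regardless of per-output impact.

```python
# Minimal sketch of the h-index calculation; citation counts are invented.
def h_index(citations: list[int]) -> int:
    """Largest h such that h outputs have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two records with identical per-output impact but different lengths:
full_career = [25, 18, 12, 9, 7, 6, 5, 3, 2, 1]  # 10 outputs
after_break = [25, 18, 12, 9, 7]                 # 5 outputs, career break
print(h_index(full_career))  # 6
print(h_index(after_break))  # 5
```

The shorter record scores lower even though its outputs are, paper for paper, just as well cited, which is why like-for-like comparison alone does not remove this bias.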

If metrics will be used during an assessment, a suite of metrics capable of verifying each other's results should be used where possible, alongside expert judgement. Please ensure that the use of metrics is reassessed for each job role or promotion review cycle.

When preparing an application, CV, or cover letter, please be advised that there is no guarantee the metrics supplied will be taken into consideration. If metrics cannot be independently verified, are not considered robust or appropriate, or cannot be meaningfully compared in a consistent manner across all candidates, they will not be taken into consideration. However, you will not be penalised for supplying metrics. Read the job description, competency framework, or promotion guidance website carefully to understand what information can be included in support of an application.

If a metric is being used, assessors are asked to abide by the Responsible Research Metrics Policy and to consider protected characteristics when analysing the metrics; please see the policy for more information.

If an advert or assessor asks applicants to provide information about the impact or quality of their work, it is advised that you provide written context alongside quantitative evidence or other qualitative supporting evidence. If, on balance, you wish to supply metrics, you should provide contextual information to support the findings. Be aware that if the chosen metric is inappropriate, it will not be taken into consideration, in line with the Responsible Research Metrics Policy.

If you wish to supply metrics in support of your application, there is helpful information in the ‘Responsible Metrics Matrix’ in the Guidance section above.