
Bibliometrics - Assessing Research: Responsible Metrics

Contact Us

For help or advice with any research metrics query, please email eprints@soton.ac.uk.

Key principles

The University of Southampton is a signatory of DORA (the San Francisco Declaration on Research Assessment). This page highlights the key principles for using metrics responsibly. The University of Southampton Responsible Research Metrics Policy sets out the full terms for using appropriate research metrics responsibly. If you are assessing research, you should familiarise yourself with the policy and the guidance.

Different disciplines have different practices, and processes should be tailored to the needs of each assessment, but five key principles should be observed in every assessment: transparency, equality, appropriateness, reproducibility and continual reassessment.

If you have read the policy and are still unsure which metrics to use, please visit the Guidance page or contact us at eprints@soton.ac.uk.

Transparency

Define the question you want to answer before selecting the metrics you will use, and share that question with end users.

Provide an explanation in clear, simple language to ensure end-users understand the data used, its reliability and its limitations.

Think what, where and when:

  • What metrics are you using?
  • Where did your data come from (sources)?
  • When did you make your assessment? An assessment is a snapshot in time; the data sources you use will continue to update.

 

Equality

Metrics should be normalised. If you cannot normalise the metrics, then you should only compare like for like, i.e. make comparisons within a single discipline or over a limited time period.

Consistency is important. If you can't apply the same method across the research being assessed, you should not use that method.

Be aware that individuals with protected characteristics may not be directly comparable, due to time out of work or length of service. For example, you should not compare an early-career researcher with a researcher with 20 years' experience unless you have normalised the comparison to a suitable time frame.

 

What do we mean by normalised?

Normalisation (also called field-weighting or field-normalisation) is where a result (e.g. a citation count) is rescaled by the citation potential of its disciplinary field and of articles of a similar age. This makes metrics more comparable across years and disciplines, and helps the user see what might be genuine citation performance.

We do not recommend trying to field-normalise metrics yourself, as it is difficult and time-consuming; instead, use existing resources such as SciVal. A simplified sketch of the underlying calculation is shown below for illustration.
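To make the idea concrete, here is a minimal sketch in Python of how a field-normalised citation score could be calculated: each output's citation count is divided by the average citation count of comparable outputs (same field, same publication year). The papers, fields and citation counts below are invented for the example, and this is an illustration of the concept only, not a substitute for tools such as SciVal, which compute the Field-Weighted Citation Impact from much larger comparison sets.

```python
# Minimal sketch of field normalisation, for illustration only.
# All data below is invented; real assessments should use tools such as SciVal.

from collections import defaultdict
from statistics import mean

# Hypothetical outputs: (title, field, publication year, citation count)
outputs = [
    ("Paper A", "Oceanography", 2019, 12),
    ("Paper B", "Oceanography", 2019, 3),
    ("Paper C", "History",      2019, 4),
    ("Paper D", "History",      2019, 1),
]

# Group citation counts by (field, year) to estimate the citation
# potential of comparable outputs.
baseline = defaultdict(list)
for title, field, year, citations in outputs:
    baseline[(field, year)].append(citations)

expected = {key: mean(counts) for key, counts in baseline.items()}

# A normalised score of 1.0 means "cited as often as the average
# comparable output"; 2.0 means twice as often, and so on.
for title, field, year, citations in outputs:
    score = citations / expected[(field, year)]
    print(f"{title}: {score:.2f}")
```

In this toy example the History paper with 4 citations scores the same (1.60) as the Oceanography paper with 12 citations, which is the point of normalisation: each output is judged against the citation potential of its own field and publication year rather than by raw counts.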

Appropriateness

All metrics should be tailored to the question being asked; use metrics only for their intended purpose. For example, you must not use a journal metric to infer the quality of an individual output, and you must never use journal impact factors or other journal rankings to assess a person or their individual outputs. Journal impact factors should be used as a comparison tool, or as a tie-breaker when an author is deciding which journal would be more appropriate to publish in.

Metrics should only be used when necessary, and in conjunction with expert judgement rather than in isolation.

When assessing a person, metrics must not be used as the sole source of information. This applies especially to decisions about employment status, but also to judgements about personal reputation, whether in a formal or informal context.

 

Reproducibility

Anyone should be able to reproduce your results using the explanation you have provided. Using more than one metric is recommended, to help verify results.

 

Continually Reassess

Continually reassess commonly used metrics, especially with regard to appropriateness and equality. If a metric is no longer fit for purpose, it should not be used.