A standardized citation metric
Scientific impact is a difficult thing to measure. Nevertheless, a variety of indexes and metrics have been created that attempt to quantify it.
Many of these measures are based on citations: the number of times a scientific paper is referenced by another scientific paper. Unfortunately, it is not entirely clear what these citation-based metrics actually reflect, nor is it clear whether they should be used (or misused) as a measure of impact or excellence.
There are many citation metrics, and it is not clear which should be used. Researchers include such metrics in their curricula vitae, but the reported values are often inaccurate and not professionally calculated. Moreover, metrics do not systematically account for self-citation (incidental or strategic) or allow for comparisons between fields with different citation densities.
Ioannidis to the rescue: A composite citation metric
John Ioannidis is an influential researcher in the field of open science and research reproducibility. Recently, he and his colleagues published a paper in PLOS Biology that tries to overcome many of the problems with individual citation metrics. Specifically, they provide a comprehensive database of over 100,000 of the most-cited scientists across all scientific fields, as well as normative values for over 6 million researchers who have authored at least 5 scientific papers.
Researchers are ranked based on a composite indicator that considers six citation metrics, which are described below.
One version of the database reflects data over 22 years (1996 to 2018) and provides a measure of long-term performance. Another version of the database reflects citations from 2017 and provides a measure of performance in that single recent year. This latter database may be better suited for junior scientists who have not had as much time to accrue citations.
Calculating your own composite citation score
In their paper, Ioannidis et al. provide a simple formula that we can use to calculate our own composite citation score. Why would we do this? Well, if you are like me and you are not in the top 100,000 scientists that made it into the downloadable database, you can calculate your score and determine how well you rank in your field.
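As I understand the composite indicator, each of the six metrics is log-transformed and normalized against the largest value of that metric in the database, and the six ratios are summed, giving a score between 0 and 6. Here is a rough Python sketch of that calculation; the metric values and database-wide maxima below are made-up numbers for illustration, not values from the actual database.

```python
import math

# The six citation metrics used in the composite indicator.
# These values are hypothetical, for illustration only.
metrics = {
    "tc":    7705,    # total citations
    "h":     50,      # Hirsch h-index
    "hm":    18.829,  # coauthorship-adjusted Schreiber hm-index
    "ncs":   0,       # citations to papers as single author
    "ncsf":  1347,    # citations to papers as single or first author
    "ncsfl": 3730,    # citations to papers as single, first or last author
}

# Hypothetical database-wide maxima for each metric (assumed for this demo).
maxima = {
    "tc": 300000, "h": 200, "hm": 100,
    "ncs": 50000, "ncsf": 150000, "ncsfl": 250000,
}

def composite_score(metrics, maxima):
    """Sum of log-normalized metric ratios; ranges from 0 to 6."""
    return sum(
        math.log(1 + metrics[k]) / math.log(1 + maxima[k])
        for k in metrics
    )

print(round(composite_score(metrics, maxima), 4))
```

Note that a researcher who held the database maximum on every metric would score exactly 6, and a researcher with no citations at all would score 0.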
Interpreting the composite citation metric: an example
Rather than calculating my own (humiliating) composite citation score, I thought it would be interesting to compare a few of the scientists from the research institute where I work who actually made it to the prestigious top 100,000 list.
| Name | Rank | tc | h | hm | ncs | ncsf | ncsfl | c |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Weickert, C | 99,276 | 7,705 | 50 | 18.829 | 0 | 1,347 | 3,730 | 3.3969 |

- tc: total citations
- h: Hirsch h-index
- hm: coauthorship-adjusted Schreiber hm-index
- ncs: number of citations to papers as single author
- ncsf: number of citations to papers as single or first author
- ncsfl: number of citations to papers as single, first or last author
- c: composite citation score
The database also includes an estimate of the rate of self-citation. Among the top 100,000 scientists, the median percentage of self-citations was 12.7% [IQR: 8.6% to 17.7%]. If we again look at the scientists from my own research institute, most are within this range.
Ioannidis et al. make the point that care should be taken when comparing scientists across very different fields (e.g., historical studies, physics & astronomy, clinical medicine). However, given that my example researchers all work in a medical research institute, we see that their identified scientific fields are sensible and, for the most part, comparable. As such, their composite citation scores may serve as an indicator of their scientific impact and excellence.
| Name | Field 1 | Field 2 | Field 3 |
| --- | --- | --- | --- |
| Gandevia, S | Physiology | Neurology & Neurosurgery | Biomedical Research |
| Lord, S | Geriatrics | Gerontology | Clinical Medicine |
| Anstey, K | Geriatrics | Experimental Psychology | Clinical Medicine |
| Herbert, R | Rehabilitation | Orthopedics | Clinical Medicine |
| Schofield, P | Neurology & Neurosurgery | Psychiatry | Clinical Medicine |
| Taylor, J | Physiology | Neurology & Neurosurgery | Biomedical Research |
| Weickert, C | Psychiatry | Neurology & Neurosurgery | Clinical Medicine |
Quantifying scientific impact, excellence and quality will always be difficult. However, Ioannidis et al. have provided us with a rich database and a standardized citation metric that accounts for issues of self-citation.
Interestingly, rankings are generally similar on the long-term and single-year versions of the database for more senior scientists, but they do change somewhat for more junior scientists. Moreover, although the current version of the database only includes data up to 2017, it takes dramatic changes in citations and citation metric scores to cause a large jump in rankings. Thus, it is proposed that the current database and the information it provides will be valid for a few years.
Will this standardized citation metric be the one to rule them all? Only time will tell.
Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature. 520:429-431.
Ioannidis JPA, Baas J, Klavans R, Boyack KW (2019). A standardized citation metrics author database annotated for scientific field. PLoS Biol. 17:e3000384.
Ioannidis JP, Klavans R, Boyack KW (2016). Multiple citation indicators and their composite across scientific disciplines. PLoS Biol. 14:e1002501.