Journal impact factor = reliable research?
Academics and scientists, especially those in administrative positions looking to rate their peers/staff/candidates, have traditionally used impact factors as an indicator of quality and influence. Only the best, most trustworthy and innovative research gets published in journals with high impact factors. At least, that is what they would like us to believe…
What is an impact factor? The impact factor of an academic journal is the average number of citations received in a given year by articles the journal published during the two preceding years. As such, the impact factor has been used to judge the relative importance of a journal in its field.
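The standard two-year calculation can be sketched in a few lines; the journal and all figures below are made up purely for illustration:

```python
# Hypothetical figures for a journal's 2023 impact factor (illustrative only).
citations_in_2023 = 1450      # citations received in 2023 to items published in 2021-2022
citable_items = 200           # articles and reviews the journal published in 2021-2022

impact_factor_2023 = citations_in_2023 / citable_items
print(impact_factor_2023)     # 7.25
```

Note that the numerator counts citations from all source journals in the indexing database, so the figure depends heavily on which database (and which document types) are counted.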
However, impact factors are not without controversy. In fact, the San Francisco Declaration on Research Assessment, drafted in 2013, states that journal impact factors should not be used to assess a researcher’s work, including when hiring, promoting or awarding grant funds.
Despite the backlash against impact factors, many researchers continue to track them. Just last week, a colleague told me they were excited one of their papers was likely going to be accepted by a good journal: ‘it has an impact factor of 7.2’.
Other researchers refrain from using the words impact factor, but still hold a clear journal hierarchy in their head. What is this hierarchy based on? As far as I can tell, it mirrors journal impact factors. And this subjective hierarchy is transmitted to students and young investigators.
Regardless of whether we focus on journal impact factors or subjective hierarchy, do higher ranking journals publish higher quality, more reliable research?
Prestige, quality and reliability
In a recent paper, Brembs (2018) investigates this very question. Citing the fierce competition that exists amongst researchers for fellowships, tenure track positions and grant funds, Brembs argues that, to distinguish themselves from their peers, researchers are under pressure to publish in journals with high impact factors. Journals now have to distinguish between the “ground-breaking and the too-good-to-be-true data, submitted by desperate scientists, who face unemployment and/or lab closure without the next high-profile publication”.
In his article, Brembs (2018) considers various lines of evidence that account for, or are not affected by, the higher readership and scrutiny of papers published in higher ranking journals. Specifically, this evidence relates to crystallographic quality, effect sizes in gene-association studies, statistical power (neuroscience and psychology), experimental design in in vivo animal experiments, errors in genomics/cognitive neuroscience/psychology, criteria for evidence-based medicine, reliability metrics (psychology) and reproducibility.
What did he find? In the majority of cases, journals with high impact factors did not publish higher quality, more reliable research than journals with lower impact factors. In fact, journals with higher impact factors were in many cases significantly worse (e.g. crystallographic quality, effect-size in gene association studies, statistical power in cognitive neuroscience). That is, journals with higher impact factors published studies with poorer crystallographic quality, overestimated effect sizes in gene association studies, and lower statistical power in cognitive neuroscience.
Contrary to what might be expected, not a single study concluded that journals with high impact factors publish the most sound experiments.
Based on the currently available data, journals with high impact factors fare no better than journals with lower impact factors, and there is mounting evidence that they often fare worse on metrics of quality and reliability.
Keep this in mind the next time you read an article from one of the top journals, or when someone on an assessment panel remarks how impressive a candidate is because they have managed to publish several papers in high ranking journals.
Brembs B (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience. 12: 37.