Exploring the metrics and incentives of scientific productivity

The pressure to publish and current incentives that reward highly cited discoveries lead to research findings that are not reproducible and inadvertently drive the natural selection of bad science. Without better incentives, it is difficult to encourage scientists to invest the effort required to conduct reproducible and rigorous research. What kinds of metrics and incentives might reward scientists for conducting sound science?
Lindner and colleagues explored the metrics and incentives of scientific productivity in a sample of 110 Phase II-IV clinical trials in rheumatoid arthritis. For each trial, they extracted the reported result (positive, statistically significant, or negative, non-significant), the amount of research activity on the same topic (i.e. number of publications) during the 2 years before and 2 years after the trial, and the number of Scopus citations within 2 years of publication. What did they find?
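To make the unit of analysis concrete, the sketch below pictures one record per trial. This is only an illustrative Python schema; the field names are assumptions for exposition, not the authors' actual data dictionary.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One clinical trial in the sample (hypothetical schema, for illustration)."""
    trial_id: str
    outcome: str              # "positive" (statistically significant) or "negative"
    published: bool           # whether the trial's results were published
    topic: str                # e.g. "tumor necrosis factor", "A3 adenosine receptors"
    papers_2y_before: int     # publications on the same topic, 2 years before the trial
    papers_2y_after: int      # publications on the same topic, 2 years after the trial
    scopus_citations_2y: int  # Scopus citations within 2 years of publication
```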
Nearly half the trials failed to confirm their hypothesis and reported negative, non-significant results: 58 trials (53%) had positive outcomes and 52 (47%) had negative outcomes. Of the trials with positive outcomes, 93% were published, compared with only 62% of trials with negative outcomes. This is consistent with publication bias, also known as the file drawer problem.
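A back-of-the-envelope calculation using only the counts reported above shows how these publication rates skew the published record (rounding to whole trials is an approximation):

```python
# Counts and publication rates as reported in the study
positive_trials, negative_trials = 58, 52
pub_rate_positive, pub_rate_negative = 0.93, 0.62

# Expected number of published trials of each kind
published_positive = positive_trials * pub_rate_positive   # about 54 trials
published_negative = negative_trials * pub_rate_negative   # about 32 trials

share_positive_conducted = positive_trials / (positive_trials + negative_trials)
share_positive_published = published_positive / (published_positive + published_negative)

print(f"positive share of conducted trials: {share_positive_conducted:.0%}")  # 53%
print(f"positive share of published trials: {share_positive_published:.0%}")  # ~63%
```

In this rough calculation, selective publication inflates the apparent success rate of the field from 53% of trials conducted to roughly 63% of trials published.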
When the trials' research topics were linked with their conclusions, the investigators found that research activity did not decrease in topics where trials had reported negative outcomes. However, such topics already had very few trials to begin with.
The amount of research activity varied widely across topics. Some were heavily investigated (e.g. more than 1,000 papers on tumor necrosis factor and rheumatoid arthritis were published each year), while others received little attention (e.g. fewer than 10 papers on A3 adenosine receptors and rheumatoid arthritis). As research activity increased, the variety of topics examined decreased: many topics were examined in only one or two trials, while a few well-established topics were each examined in numerous trials.
Furthermore, citation counts rose with research activity, and publications reporting positive, statistically significant results were cited far more often than those reporting negative results.
Summary
As long as researchers are rewarded for publishing large numbers of statistically significant, highly cited papers, it will be difficult to encourage efforts to increase rigor and reproducibility. Current metrics and incentives neither reflect how scientific investigation in a field progresses nor reward researchers for conducting sound science. The study by Lindner et al. (2018) suggests that measures such as the number and variety of research topics investigated, publication success rates, and the number of publications relative to research topics may be useful metrics for quantifying innovation and encouraging good research practice, even when results are not statistically significant. Developing and validating such metrics would help change the selection pressure and reward scientists for publishing valid, reproducible results.
Reference
Lindner MD, Torralba KD, Khan NA (2018) Scientific productivity: An exploratory study of metrics and incentives. PLoS ONE 13(4): e0195321.