The natural selection of bad science: a second perspective

Note. This is a semi-independent summary of the same paper that Marty wrote about in the post here. I wrote it to offer a different perspective on a fascinating idea.

There is growing awareness, and deep concern, that most scientific findings are not reliable or valid. Many key researchers and research groups have called for (1) greater transparency (e.g. data and code sharing) and integrity in research practice, (2) improvements in research design, conduct, and analysis, and (3) better training and education of working scientists. These calls frequently emphasise the primary importance of improving the reproducibility of scientific findings. But is improving reproducibility sufficient to remedy the crisis of confidence in science?

In a fascinating paper, Smaldino & McElreath (2016) argue that calling for reproducibility at the level of practising scientists is not sufficient to fix the crisis of confidence in science. The authors posit that systemic and cultural determinants at the institutional level (where scientists are rewarded and advance based on high publication counts and citation indices) constitute a selection pressure so strong that, in a Darwinian sense, it favours dodgy practices that yield quick publications and selects against the research practices required to obtain reproducible and valid findings. In other words, current incentives drive the natural selection of bad science.

The idea that the pressure to publish inherently favours poor research practice is not new, but Smaldino & McElreath strengthen the argument by simulating a theoretical model of how practising scientists behave, and how the conduct of science evolves, under the current incentives for high publication counts and citation indices. Their simulations show that selection for high output leads to poorer methods and increasingly high false discovery rates. Importantly, (1) replication slows but does not stop this methodological deterioration, and (2) the modelled scientists are assumed to have utmost integrity: they never cheat. Research practice degrades simply because of its consequences for hiring and retention, which operate primarily through successful publication.
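The flavour of this dynamic can be conveyed with a toy agent-based sketch. To be clear, this is my own minimal caricature, not the authors' actual model: the effort–productivity trade-off, the base rate of true hypotheses, the fixed power, and the mutation step are all invented for illustration. Labs that invest less methodological effort run more studies and get more publishable "positive" results (many of them false), and selection copies the most-published lab.

```python
import random

random.seed(42)

BASE_RATE = 0.1   # assumed fraction of tested hypotheses that are true
N_LABS = 50
EPOCHS = 40       # number of selection events
STEPS_PER_EPOCH = 10
POWER = 0.8       # assumed probability a true effect gives a positive result

def false_positive_rate(effort):
    """Less methodological effort -> more false positives (toy assumption)."""
    return 0.05 + 0.45 * (1.0 - effort)

def productivity(effort):
    """More effort per study -> fewer studies per time step (toy assumption)."""
    return 1.0 - 0.8 * effort

class Lab:
    def __init__(self, effort):
        self.effort = effort      # methodological rigour in [0, 1]
        self.publications = 0

    def work(self):
        # Attempt a study; only positive results are publishable.
        if random.random() >= productivity(self.effort):
            return
        if random.random() < BASE_RATE:          # true hypothesis
            positive = random.random() < POWER
        else:                                    # false hypothesis
            positive = random.random() < false_positive_rate(self.effort)
        if positive:
            self.publications += 1

labs = [Lab(random.random()) for _ in range(N_LABS)]
initial_mean_effort = sum(l.effort for l in labs) / N_LABS

for _ in range(EPOCHS):
    for _ in range(STEPS_PER_EPOCH):
        for lab in labs:
            lab.work()
    # Selection: the least-published lab is replaced by a mutated copy
    # of the most-published lab; publication counts reset each epoch.
    labs.sort(key=lambda l: l.publications)
    child_effort = min(1.0, max(0.0, labs[-1].effort + random.gauss(0, 0.05)))
    labs[0] = Lab(child_effort)
    for lab in labs:
        lab.publications = 0

final_mean_effort = sum(l.effort for l in labs) / N_LABS
print(f"mean effort: {initial_mean_effort:.2f} -> {final_mean_effort:.2f}")
```

Note that no lab ever cheats here: each one honestly reports whatever results it gets. Mean effort still drifts downward, because low-effort labs produce more positive results per unit time and selection rewards output alone, which mirrors the paper's qualitative point.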

Their findings carry many important implications (some frankly discouraging). The authors contend that without institutional-level changes that reward good, valid science rather than sheer publication counts, current selection pressures will continue to encourage poor research practice, whether scientists act with complete integrity at best or intentionally engage in questionable research practices at worst.

Where does this leave us? In a sense, scientists owe a duty of care to funders and laypersons to conduct science with integrity, ideally within a community that can call out our biases when we cannot see them ourselves. But even if we were under no obligation and accountable to no one, how could we really understand what we study without good research practice?

Reference

Smaldino PE, McElreath R (2016). The natural selection of bad science. arXiv:1605.09511.

 
