Numbers don’t lie: False results and overestimated effects

Science is a process. Ideas are spawned, experiments are designed, data are collected, results are presented and conclusions are drawn. As with everything in life, sometimes the ideas scientists have are groundbreaking, while other times they are dead ends.

There is a long-standing expectation that research results need to be positive in order to be published. This is problematic in many ways.

As Thomas Edison stated:

“I have not failed 10,000 times—I've successfully found 10,000 ways that will not work.”

In modern-day science, the majority of those 10,000 ways would not be made public; they would not be published. Not because the findings are unimportant or useless, but because journals, editors and reviewers are biased towards publishing positive, statistically significant results.

Depending on the research area and the research question, genuine effects may be rare. If all well-conducted studies were in fact reported, there should be a plethora of papers reporting negative results and only a handful of successes.
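To make this concrete, consider a back-of-the-envelope scenario (the numbers are illustrative assumptions, not data from this post): 1,000 well-conducted studies, of which only 10% test a genuine effect, analysed at the conventional 5% significance threshold with 50% statistical power.

```python
# Illustrative sketch only: what an honestly reported literature would look
# like if genuine effects were rare. The numbers (10% genuine effects, 50%
# power, alpha = 0.05) are assumptions chosen for this example.

n_studies = 1000        # well-conducted studies, all of them reported
prior_true = 0.10       # fraction of hypotheses that reflect a genuine effect
power = 0.50            # chance a genuine effect reaches significance
alpha = 0.05            # chance a null effect reaches significance anyway

true_effects = n_studies * prior_true        # 100 studies of real effects
null_effects = n_studies - true_effects      # 900 studies of null effects

true_positives = true_effects * power        # 50 significant and real
false_positives = null_effects * alpha       # 45 significant but spurious
negatives = n_studies - true_positives - false_positives

print(f"Negative results:    {negatives:.0f} of {n_studies}")          # 905
print(f"Significant results: {true_positives + false_positives:.0f}")  # 95
print(f"  ...of which false: "
      f"{false_positives / (true_positives + false_positives):.0%}")   # ~47%
```

Under these assumptions, roughly nine out of ten honestly reported papers would be negative, and nearly half of the significant results would be false positives.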

The majority of scientists still rely on statistical significance to make their claims of success and failure, of positive findings and negative findings. However, given that many investigated effects are small and that studies are often statistically underpowered, the effects associated with statistically significant results will necessarily be overestimated. Thus, the bias towards publishing positive, statistically significant results widens the divide: negative results go unpublished, while positive results are published with overestimated effects.
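The overestimation half of this argument can be seen in a quick simulation. The sketch below is not from the post; it assumes a small true effect (Cohen's d of 0.2), small samples of 30 per group, and the usual p < 0.05 criterion, then looks only at the "publishable" significant results.

```python
# Minimal sketch of the significance filter: when a study is underpowered,
# the effect estimates that happen to cross p < 0.05 are systematically
# larger than the true effect. All numbers here are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2       # small true effect (Cohen's d), an assumption
n_per_group = 30        # small samples -> low statistical power
n_simulations = 10_000

significant_estimates = []
for _ in range(n_simulations):
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:                    # the "publishable" outcome
        significant_estimates.append(treatment.mean() - control.mean())

print(f"True effect:                    {true_effect:.2f}")
print(f"Mean estimate when significant: {np.mean(significant_estimates):.2f}")
# With these illustrative settings, only roughly one simulated study in ten
# reaches significance, and those that do report an average effect around
# three times larger than the true effect.
```

The point is not the exact numbers, which depend on the assumed effect size and sample size, but the direction: conditioning on statistical significance in underpowered studies inflates the published effect.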

In science, the pressure to find positive, statistically significant results (an evolutionary pressure of sorts) governs what research findings are published. Given these pressures, is it any wonder that many scientists unwittingly adopt research practices that increase their chances of finding statistically significant results? And when these practices are not sufficient, some will resort to spin in the hope that it will be enough to fool others and get their results published.

I am aware that this view of science seems bleak and pessimistic. Most researchers will feel that the above factors and pressures do not apply to them; they are aware of them and rise above them.

To the surprise of most people, I am optimistic about the future. But the numbers don’t lie. Many published effects are in fact false. Many published effects are in fact overestimated. Researchers need to understand the logic that underpins these statements. Doing so will make them better scientists, both in how they conduct their own research and in how they assess the research of others.
