False-positive findings and how to minimize them

As scientists, we collect data and look for patterns or differences. Because populations display variation and we cannot collect data from every member of a population, statistical results always carry some uncertainty. For example, it is common to set alpha to 0.05. This implies that if there is truly no difference or effect, there is a 5% chance of obtaining a statistically significant result purely by chance (a false positive).
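
To see what alpha = 0.05 means in practice, here is a minimal simulation sketch (the group sizes and number of simulated studies are illustrative assumptions, not values from the post): two groups are repeatedly drawn from the same population, so any significant t-test result is a false positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000   # number of simulated studies
n_per_group = 20         # illustrative sample size per group
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the same distribution: no true effect exists.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

# The observed rate should be close to alpha (about 0.05).
print(f"False-positive rate: {false_positives / n_experiments:.3f}")
```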

Read more

The Critical thinking and Appraisal Resource library (CARL) to understand and assess treatment claims

Every day, we are confronted by claims about the effects of treatments, many of which are not supported by evidence and are misleading. Without knowing how to assess such claims accurately, it is easy to overestimate the benefits of treatments and to underestimate their potential risks. To address these problems, Castle and colleagues developed the Critical thinking and Appraisal Resource library (CARL).

Read more

Implying “there’s a trend to statistical significance” is not trendy.

When a p value fails to reach the significance threshold, investigators sometimes imply that there is a “trend towards statistical significance”. This interpretation expresses the view that, had more subjects been tested, the p value would have become significant. Epidemiologists Wood and colleagues examined how the p value of a treatment effect changes when additional subjects are added to a study.
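
The intuition can be checked with a minimal simulation sketch (this is not Wood and colleagues' analysis; the sample sizes are illustrative and no true effect is assumed): studies whose initial p value falls just above 0.05 are followed up by doubling the sample, and we count how often the enlarged study actually reaches significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_initial, n_extra = 30, 30       # illustrative group sizes
true_effect = 0.0                 # assume no true difference between groups
n_sim = 20_000

near_misses, became_significant = 0, 0
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n_initial + n_extra)
    b = rng.normal(true_effect, 1.0, n_initial + n_extra)
    # p value from the initial sample only
    _, p_initial = stats.ttest_ind(a[:n_initial], b[:n_initial])
    if 0.05 < p_initial < 0.10:   # a "trend towards significance"
        near_misses += 1
        # p value after the extra subjects are added
        _, p_final = stats.ttest_ind(a, b)
        if p_final < 0.05:
            became_significant += 1

print(f"Near-misses that became significant after adding subjects: "
      f"{became_significant / near_misses:.2f}")
```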

Read more

Why most published findings are false: the effect of p-hacking

In our previous post, we revisited the Ioannidis argument in Why most published research findings are false. Other factors, such as p-hacking, can also increase the chance of reporting a false-positive result: a p value that is deemed statistically significant even though the underlying hypothesis is in fact false.

Researcher degrees of freedom

As scientists, we have many choices in how we collect, analyse, and report our data.
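
One concrete form of p-hacking is measuring several outcomes and reporting whichever one happens to reach significance. The sketch below (group sizes and number of outcomes are illustrative assumptions, not values from the post) shows how this inflates the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim = 10_000
n_per_group = 20          # illustrative sample size per group
n_outcomes = 5            # number of outcomes "tried" per study
alpha = 0.05

studies_reporting_significance = 0
for _ in range(n_sim):
    for _ in range(n_outcomes):
        # Every outcome is drawn under the null: no true effect anywhere.
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            studies_reporting_significance += 1
            break   # the p-hacker stops as soon as something "works"

# With 5 independent tries, roughly 1 - 0.95**5 ≈ 0.23 rather than 0.05.
print(f"Chance of reporting at least one 'significant' result: "
      f"{studies_reporting_significance / n_sim:.2f}")
```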

Read more

Why most published findings are false: revisiting the Ioannidis argument

It has been more than a decade since Ioannidis published his paper entitled Why most published research findings are false. Forstmeier et al. (2016) recently revisited the Ioannidis argument, and I thought it worthwhile to prepare a blog post on the topic to cement my understanding.

Looking for a novel effect

Let’s consider 1000 hypotheses we might want to test.
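
The arithmetic behind the argument can be laid out in a few lines. In the sketch below, the prior probability of a true hypothesis, the power, and alpha are illustrative assumptions rather than the values used in the post; the point is only how the positive predictive value falls out of them.

```python
# Back-of-the-envelope sketch of the Ioannidis argument for 1000 hypotheses.
n_hypotheses = 1000
prior_true = 0.10      # assumed fraction of hypotheses that are actually true
power = 0.80           # assumed probability of detecting a true effect
alpha = 0.05           # false-positive rate when the hypothesis is false

n_true = n_hypotheses * prior_true            # 100 true hypotheses
n_false = n_hypotheses - n_true               # 900 false hypotheses

true_positives = n_true * power               # 80 correct detections
false_positives = n_false * alpha             # 45 false alarms

ppv = true_positives / (true_positives + false_positives)
print(f"Significant results: {true_positives + false_positives:.0f}")
print(f"Fraction of significant results that are true: {ppv:.2f}")
print(f"Fraction that are false positives: {1 - ppv:.2f}")
```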

Read more

Calculating sample size using precision for planning

Most sample size calculations for independent or paired samples are based on the power to detect an effect of a certain size against a null hypothesis of no effect. Instead, Cumming and Calin-Jageman recommend planning studies for precision. The width of the 95% confidence interval (CI) indicates how precisely an effect has been estimated, so it is possible to choose a sample size that yields a suitably narrow 95% CI.
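
As a rough illustration of precision for planning, the sketch below uses a normal approximation to choose a per-group sample size that gives a desired 95% CI half-width for the difference between two independent means. The standard deviation and target margin of error are illustrative assumptions, and this is only a simplified sketch, not Cumming and Calin-Jageman's exact procedure.

```python
import math

sigma = 1.0            # assumed standard deviation (standardized units)
target_moe = 0.5       # desired half-width of the 95% CI for the mean difference
z = 1.96               # normal approximation for a 95% CI

# For two independent groups of size n each, the 95% CI half-width of the
# difference in means is roughly z * sigma * sqrt(2 / n). Solving for n:
n_per_group = math.ceil(2 * (z * sigma / target_moe) ** 2)

print(f"Approximate sample size per group: {n_per_group}")
```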

Read more