Implying “there’s a trend to statistical significance” is not trendy.

When a p value fails to reach the significance threshold, investigators sometimes imply there is a “trend towards statistical significance”. This interpretation expresses the view that, had more subjects been tested, the p value would have crossed the threshold. Epidemiologists Wood and colleagues examined how the p value of a treatment effect is likely to change when …
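To get a feel for the question, here is a minimal toy simulation of my own (not Wood and colleagues’ method; the effect size, group sizes, and the 0.05 < p ≤ 0.10 “trend” zone are all assumed for illustration):

```python
# Toy simulation: if a two-sample t-test lands just above p = 0.05,
# how often does adding subjects push it below 0.05?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n0, n_add, reps = 0.3, 50, 25, 10_000   # assumed effect and group sizes

became_sig = in_trend_zone = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n0 + n_add)   # control group
    b = rng.normal(d, 1.0, n0 + n_add)     # treated group, true effect d
    p0 = stats.ttest_ind(a[:n0], b[:n0]).pvalue
    if 0.05 < p0 <= 0.10:                  # the "trend" zone
        in_trend_zone += 1
        became_sig += stats.ttest_ind(a, b).pvalue < 0.05

print(f"P(p < 0.05 after adding {n_add}/group | 0.05 < p0 <= 0.10): "
      f"{became_sig / in_trend_zone:.2f}")
```

The point of such runs is that adding subjects can move the p value in either direction, so a just-above-threshold p is no guarantee of eventual significance.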

Read more

Why most published findings are false: the effect of p-hacking

In our previous post, we revisited the Ioannidis argument on Why most published research findings are false. Other factors, such as p-hacking, can also increase the chance of reporting a false-positive result: a p value deemed statistically significant even though the underlying hypothesis is in fact false. Researcher degrees of freedom: as scientists, we have …
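To make the mechanism concrete, here is a minimal sketch of my own (not the post’s code) of one researcher degree of freedom: testing several outcomes when every null is true and reporting only the smallest p value:

```python
# Testing k outcomes under a true null and keeping the smallest p value
# inflates the false-positive rate well above the nominal 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, reps = 30, 5, 10_000   # assumed group size and number of outcomes

false_pos = 0
for _ in range(reps):
    # Both groups come from the same distribution, so every null is true.
    ps = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
          for _ in range(k)]
    false_pos += min(ps) < 0.05

print(f"False-positive rate with {k} outcomes: {false_pos / reps:.3f}")
# For k independent tests the theoretical rate is 1 - 0.95**k, about 0.23 for k = 5.
```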

Read more

Why most published findings are false: revisiting the Ioannidis argument

It has been more than a decade since Ioannidis published his paper entitled Why most published research findings are false. Forstmeier et al. (2016) recently revisited the Ioannidis argument, and I thought it worthwhile to prepare a blog post on the topic to cement my understanding. Looking for a novel effect: let’s consider 1000 hypotheses we might want to test.
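The core arithmetic can be sketched with assumed (but typical) numbers; the post’s exact prior, alpha, and power may differ:

```python
# Ioannidis-style positive predictive value (PPV) with assumed inputs.
n_hyp, prior, alpha, power = 1000, 0.10, 0.05, 0.50

true_effects = n_hyp * prior                 # 100 hypotheses are actually true
true_pos = true_effects * power              # 50 real effects detected
false_pos = (n_hyp - true_effects) * alpha   # 45 false alarms from 900 nulls

ppv = true_pos / (true_pos + false_pos)
print(f"PPV: {ppv:.2f}")   # ~0.53: nearly half of significant findings are false
```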

Read more

Calculating sample size using precision for planning

Most sample size calculations for independent or paired samples are based on the power to detect an effect of a given size against the null hypothesis of no effect. Instead, Cumming and Calin-Jageman recommend planning studies for precision: the 95% confidence interval (CI) indicates how precisely an effect is estimated, so it is possible to plan a study to obtain a suitably narrow 95% CI …
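As a sketch of the general idea (my own implementation, not Cumming and Calin-Jageman’s; it assumes two independent groups with known common standard deviation and a chosen target half-width):

```python
# Smallest n per group so a 95% CI on a mean difference has half-width
# no greater than a target, assuming the sample SD equals sd.
import math
from scipy import stats

def n_per_group(target_halfwidth: float, sd: float = 1.0) -> int:
    n = 2
    while True:
        se = sd * math.sqrt(2.0 / n)           # SE of the mean difference
        t = stats.t.ppf(0.975, df=2 * n - 2)   # two-sided 95% critical value
        if t * se <= target_halfwidth:
            return n
        n += 1

for f in (0.5, 0.3, 0.2):
    print(f"half-width {f} SD units -> n = {n_per_group(f)} per group")
```

Unlike a power calculation, nothing here depends on rejecting a null; the sample size is driven entirely by how narrow you want the interval to be.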

Read more