Tag Archives: statistics

False-positive findings and how to minimize them

As scientists we collect data and look for patterns or differences. Because populations display variation and we cannot collect data from every member of a population, statistical results always carry some uncertainty. For example, it is common to set alpha to 0.05. This implies that if there is no difference or effect, there is a
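A minimal simulation (my own sketch, not from the post) of what alpha = 0.05 means in practice: when two samples come from the same population, so there is no true effect, roughly 5% of tests still return p < 0.05. The `two_sample_t_p` helper is an assumed, simplified Welch test using a normal approximation.

```python
import math
import random
import statistics

def two_sample_t_p(a, b):
    """Two-sided Welch test p-value, using a normal approximation
    to the t distribution (adequate for n around 50 per group)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
n_sims, alpha, hits = 2000, 0.05, 0
for _ in range(n_sims):
    # both samples drawn from the same population: the null is true
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_t_p(a, b) < alpha:
        hits += 1
print(hits / n_sims)  # close to 0.05
```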

Read more

Implying “there’s a trend to statistical significance” is not trendy.

When a p value fails to reach a threshold, investigators sometimes imply there is a “trend towards statistical significance”. This interpretation expresses the view that, had more subjects been tested, the p value would have reached significance. Epidemiologists Wood and colleagues examined how the p value of a treatment effect is likely to change when
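A hedged illustration of why this interpretation is risky (my own sketch, not the Wood et al. analysis): even with a fixed true effect and a fixed sample size, the p value varies enormously across identical replications, so a p just above 0.05 is a noisy quantity rather than a promise of future significance.

```python
import math
import random
import statistics

def welch_p(a, b):
    """Two-sided p-value via a normal approximation (simplified Welch test)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(2)
ps = []
for _ in range(1000):  # 1000 identical replications: true effect d = 0.4, n = 30/group
    a = [random.gauss(0.4, 1) for _ in range(30)]
    b = [random.gauss(0.0, 1) for _ in range(30)]
    ps.append(welch_p(a, b))
print(min(ps), max(ps))  # the p value ranges from near 0 to well above 0.5
```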

Read more

Why most published findings are false: the effect of p-hacking

In our previous post, we revisited the Ioannidis argument on Why most published research findings are false. Other factors, such as p-hacking, can also increase the chance of reporting a false-positive result: a result whose p value is deemed statistically significant even though the underlying hypothesis is false. Researcher degrees of freedom As scientists, we have
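A sketch of one researcher degree of freedom (an illustrative scenario of my own, not taken from the post): under a true null, an analyst who measures five outcomes and reports the study as positive if any outcome reaches p < 0.05 inflates the false-positive rate well beyond the nominal 5%.

```python
import math
import random
import statistics

def welch_p(a, b):
    """Two-sided p-value via a normal approximation (simplified Welch test)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(4)
n_sims, flagged = 1000, 0
for _ in range(n_sims):
    # five independent outcomes, none with a true effect
    ps = []
    for _ in range(5):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        ps.append(welch_p(a, b))
    if min(ps) < 0.05:  # "significant" if ANY outcome passes
        flagged += 1
print(flagged / n_sims)  # far above the nominal 0.05 (about 1 - 0.95**5)
```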

Read more

Why most published findings are false: revisiting the Ioannidis argument

It has been more than a decade since Ioannidis published his paper entitled Why most published research findings are false. Forstmeier et al. (2016) recently revisited the Ioannidis argument, and I thought it worthwhile to prepare a blog post on the topic to cement my understanding. Looking for a novel effect Let’s consider 1000 hypotheses we might want to test.
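The core of the argument can be worked through with simple arithmetic. The numbers below are illustrative assumptions in the spirit of the Ioannidis/Forstmeier reasoning (10% of hypotheses true, 50% power, alpha = 0.05), not figures from the post.

```python
n_hypotheses = 1000
prior_true = 0.10   # assume 100 of the 1000 hypotheses are actually true
power = 0.50        # probability of detecting a true effect
alpha = 0.05        # false-positive rate per false hypothesis

true_pos = n_hypotheses * prior_true * power           # 50 true discoveries
false_pos = n_hypotheses * (1 - prior_true) * alpha    # 45 false discoveries
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 2))  # ~0.53: barely half of "significant" findings are real
```

Under these assumptions, a statistically significant result is only slightly more likely than a coin flip to reflect a true effect, and lower power or a smaller prior makes the positive predictive value worse still.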

Read more

Calculating sample size using precision for planning

Most sample size calculations for independent or paired samples are based on the power to detect an effect of a given size, tested against a null hypothesis of no effect. Instead, Cumming and Calin-Jageman recommend planning studies to estimate effects with precision. The 95% confidence interval (CI) indicates the precision of an estimated effect. Therefore, it is possible to plan studies to obtain suitably narrow 95% CIs
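A minimal sketch of the precision-for-planning idea, under an assumed normal approximation (my formula, not necessarily the one Cumming and Calin-Jageman use): choose the per-group sample size so the expected 95% CI half-width on a mean difference between two independent groups is no wider than a target margin of error.

```python
import math

def n_per_group(sd, target_moe, z=1.96):
    """Smallest n per group such that z * sqrt(2 * sd**2 / n) <= target_moe,
    i.e. the expected 95% CI half-width on the mean difference meets the target."""
    return math.ceil(2 * (z * sd / target_moe) ** 2)

# e.g. to estimate a difference to within +/- 0.5 SD units:
print(n_per_group(sd=1.0, target_moe=0.5))  # 31 per group
```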

Read more

The likelihood ratio test: relevance and application

Suppose you conducted a study to compare an outcome between two independent groups of people, but realised later that the groups were unexpectedly different at baseline. This difference might affect how you interpret the findings. For example, you measured muscle stiffness in people with stroke and in healthy people. At the end of the study, you realised that on
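A generic illustration of a likelihood ratio test (my own sketch with made-up data, not the post's stroke example): compare a model that includes a baseline covariate against a nested model without it. For Gaussian errors the statistic LR = n * log(RSS_reduced / RSS_full) is approximately chi-square with 1 degree of freedom.

```python
import math
import random

def rss_intercept_only(y):
    """Residual sum of squares for the reduced model (mean only)."""
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def rss_with_covariate(y, x):
    """Residual sum of squares after simple linear regression of y on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

random.seed(3)
n = 100
x = [random.gauss(0, 1) for _ in range(n)]       # baseline covariate
y = [0.6 * xi + random.gauss(0, 1) for xi in x]  # outcome depends on it

lr = n * math.log(rss_intercept_only(y) / rss_with_covariate(y, x))
p = math.erfc(math.sqrt(lr / 2))  # chi-square(1) survival function
print(round(lr, 1), p < 0.05)     # large LR: the covariate improves the fit
```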

Read more