Tag Archives: significance testing

Reflections on p-values and confidence intervals

When we run a statistical test, we almost always obtain a p-value. Many statistical tests will also generate a confidence interval. Unfortunately, many scientists report the p-value and ignore the confidence interval. As pointed out by Rothman (2016) and the American Statistical Association, relying on p-values alone forces a false dichotomy between results that are significant and those that are non-significant. …

Read more
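
The point about reporting an interval alongside the p-value can be made concrete with a short sketch. This is not code from the post: it uses hypothetical data for two groups and assumes a recent SciPy release (1.11 or later), where the t-test result object exposes a confidence_interval() method.

```python
# A minimal sketch, assuming hypothetical data and SciPy >= 1.11:
# report a 95% confidence interval for a mean difference alongside the p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical measurements
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

# Welch's t-test for a difference in means gives the p-value.
result = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"p-value: {result.pvalue:.4f}")

# The same result object can report an interval estimate for the difference,
# which conveys effect size and precision rather than a single yes/no verdict.
ci = result.confidence_interval(confidence_level=0.95)
print(f"95% CI for the mean difference: ({ci.low:.2f}, {ci.high:.2f})")
```

Reporting the interval shows how large the difference might plausibly be, information that a bare "significant / non-significant" label discards.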

False-positive findings and how to minimize them

As scientists, we collect data and look for patterns or differences. Because populations display variation and we are unable to collect data from all members of a population, statistical results will always carry a level of uncertainty. For example, it is common to set alpha to 0.05. This implies that if there is no true difference or effect, there is still a 5% chance of obtaining a statistically significant result, that is, a false positive. …

Read more
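
The role of alpha in the excerpt above can be illustrated with a small simulation. This is a minimal sketch, not code from the post: it assumes two samples drawn from the same normal population (so the null hypothesis is true by construction) and a two-sample t-test, and it shows the proportion of "significant" results settling near alpha.

```python
# A minimal sketch (hypothetical simulation): when the null hypothesis is true,
# roughly a fraction alpha of all tests come out "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_tests = 10_000
false_positives = 0

for _ in range(n_tests):
    # Both samples come from the same population, so any
    # "significant" result here is a false positive.
    a = rng.normal(loc=0.0, scale=1.0, size=20)
    b = rng.normal(loc=0.0, scale=1.0, size=20)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"Observed false-positive rate: {false_positives / n_tests:.3f}")  # close to 0.05
```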