Reflections on p-values and confidence intervals

When we run a statistical test, we almost always obtain a p-value. Many statistical tests also generate a confidence interval. Unfortunately, many scientists report only the p-value and ignore the confidence interval.
As pointed out by Rothman (2016) and the American Statistical Association, relying on p-values forces a false dichotomy between results that are significant and those that are not. This practice reduces countless hours of work collecting and analysing data to a simple yea or nay.
Why are p-values more difficult to interpret than confidence intervals?
A p-value is computed by combining two separate quantities: the size of an effect and the precision of that estimate. Confidence intervals, in contrast, present these two quantities separately.
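To make this concrete, consider a one-sample t-test (an assumption for illustration; most common tests follow the same pattern). The statistic from which the p-value is derived divides the effect by its precision:

t = (observed effect) / (standard error of the effect)

Once these two quantities are fused into a single number, the p-value that follows cannot tell us whether it reflects a large effect measured imprecisely or a small effect measured precisely.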
The center of the confidence interval indicates the observed effect size. For example, if a new drug reduced resting heart rate by 8 [4 to 12] beats per minute (bpm; mean [95% confidence interval]), we know that the average effect of the medication across study participants was to reduce resting heart rate by 8 bpm.
“…it is convenient to draw the line at about the level at which we can say: ‘Either there is something in the treatment, or a coincidence has occurred such as does not occur more than once in twenty trials’. If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2 per cent point) or one in a hundred (the 1 per cent point).”
Ronald Fisher (1926)
The width of the confidence interval tells us about the precision of the observed effect: the narrower the interval, the more precise the estimate. Using the example above, if we repeated the study many times, 95% of the intervals constructed this way would contain the true population mean; informally, the plausible values for the true mean lie between 4 bpm and 12 bpm.
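As a minimal sketch of where such numbers could come from, the snippet below simulates per-participant heart-rate reductions and computes the mean and its 95% confidence interval with scipy. The data, sample size, and parameters are invented for illustration, not taken from any real trial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical per-participant reductions in resting heart rate (bpm);
# simulated data, not from any real study.
reductions = rng.normal(loc=8, scale=10, size=25)

n = len(reductions)
mean = reductions.mean()     # center of the CI: the observed effect size
sem = stats.sem(reductions)  # standard error: determines the CI's width
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)

print(f"Mean reduction: {mean:.1f} bpm, 95% CI [{ci_low:.1f} to {ci_high:.1f}]")
```

Both numbers a paper would report, the center and the width, fall out of the same two ingredients: the sample mean and its standard error.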
P-values combine the size of the effect and the precision of the effect to answer the question: if there were no effect in the overall population (i.e. the null hypothesis were true), what would be the chance of observing an effect as large as or larger than the one we found?
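Here is a short sketch of that question in code, again with simulated data: scipy's one-sample t-test returns the p-value under the null hypothesis of no effect. It also illustrates the fusion problem described above, since a large but noisy effect and a small but precise effect can produce p-values of similar magnitude.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two hypothetical samples, both simulated for illustration:
large_noisy = rng.normal(loc=8, scale=20, size=30)     # big effect, low precision
small_precise = rng.normal(loc=1, scale=2.5, size=30)  # small effect, high precision

for name, sample in [("large/noisy", large_noisy), ("small/precise", small_precise)]:
    # H0: the mean effect in the population is zero
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0)
    print(f"{name}: mean = {sample.mean():.1f} bpm, p = {p_value:.3f}")
```

Reporting the confidence intervals alongside the p-values would immediately show that these two situations are nothing alike: an effect of roughly 8 bpm with a wide interval versus roughly 1 bpm with a narrow one.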
Summary
It is hard to escape p-values. However, if we feel it necessary to report them, we should consider also reporting the associated confidence intervals.
Interpreting and discussing p-values is difficult. Interpreting confidence intervals is simple and informative: they acknowledge that our results are estimates, not yea or nay statements about reality.