Tag Archives: reproducibility

Reproducibility: The Transparency and Openness Promotion Guidelines

In a previous post, we profiled the EQUATOR network and reporting guidelines. These guidelines stress transparency in reporting study methods, and most are relevant to study designs in clinical research, such as randomised controlled trials, epidemiological studies and systematic reviews. In a different vein, the Transparency and Openness Promotion (TOP) Guidelines were developed to enhance transparency in reporting of study

Read more

The Open Science Framework, Part 2: adding files and contributors

In the previous post we learned how to create a project folder and pre-register a study protocol on the Open Science Framework (OSF). The OSF also serves as a good repository for storing data, and users can add contributors (i.e. collaborators) to projects. How is this done? To add files to a project, navigate to the OSF website, sign
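For readers who prefer to script file uploads rather than click through the web interface, below is a minimal sketch using the third-party osfclient Python package (not part of the OSF site itself). The project ID, token and file names are placeholders for illustration.

```python
# Minimal sketch of uploading a file with the third-party osfclient package
# (pip install osfclient). The project ID 'abc12', the token and the file
# names below are placeholders -- substitute your own project's values.
from osfclient import OSF

osf = OSF(token="YOUR_PERSONAL_ACCESS_TOKEN")  # token generated in OSF account settings
project = osf.project("abc12")                 # five-character OSF project ID
storage = project.storage("osfstorage")        # default OSF storage provider

# Upload a local file to the project's storage.
with open("data.csv", "rb") as fp:
    storage.create_file("data.csv", fp)

# List the files the project now contains.
for f in storage.files:
    print(f.path)
```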

Read more

The Open Science Framework, Part 1: pre-registering a study protocol

Pre-registration of study protocols increases research transparency by providing a time-stamped record of experimental or analysis decisions before studies are conducted. Protocol pre-registration is now mandatory or strongly encouraged for clinical trials, and is increasingly encouraged for basic science and pre-clinical research (e.g. see the Transparency and Openness Promotion Guidelines). The Open Science Framework (OSF) is an open source software

Read more

Reproducible research practices are underused in systematic reviews of biomedical interventions

Researchers are increasingly encouraged to implement reproducible research practices in their work. These practices include describing the data collected and used for analysis in detail, clearly reporting the analysis method and results, and sharing the dataset and statistical or analysis code. To determine how widely these practices are adopted, Page and colleagues (2017) investigated their implementation in systematic reviews

Read more

Why most published findings are false: the effect of p-hacking

In our previous post, we revisited the Ioannidis argument on Why most published research findings are false. Other factors such as p-hacking can also increase the chance of reporting a false-positive result. Such results have a p-value deemed statistically significant even though the underlying hypothesis is in fact false. Researcher degrees of freedom: as scientists, we have
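To make the point concrete, here is a small simulation of my own (an illustration, not code from the post) of one researcher degree of freedom, optional stopping: peeking at the data after each batch of observations and stopping as soon as p < 0.05. Both groups are drawn from the same distribution, so every "significant" result is a false positive, yet the hacked rate climbs well above the nominal 5%.

```python
# Toy simulation of optional stopping inflating the false-positive rate.
# Both groups come from the same normal distribution (no true effect).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 5_000
alpha = 0.05
looks = (20, 30, 40, 50)    # sample sizes per group at each interim look

fp_fixed = 0    # single test at the final sample size
fp_hacked = 0   # test after every batch, stop as soon as p < alpha

for _ in range(n_experiments):
    a = rng.normal(size=max(looks))
    b = rng.normal(size=max(looks))
    # Fixed-sample analysis: one test at n = 50 per group.
    if stats.ttest_ind(a, b).pvalue < alpha:
        fp_fixed += 1
    # Optional stopping: declare success if any interim look is "significant".
    if any(stats.ttest_ind(a[:n], b[:n]).pvalue < alpha for n in looks):
        fp_hacked += 1

print(f"fixed design     : {fp_fixed / n_experiments:.3f}")   # close to 0.05
print(f"optional stopping: {fp_hacked / n_experiments:.3f}")  # noticeably higher
```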

Read more

Why most published findings are false: revisiting the Ioannidis argument

It has been more than a decade since Ioannidis published his paper entitled Why most published research findings are false. Forstmeier et al. (2016) recently revisited the Ioannidis argument, and I thought it worthwhile to prepare a blog post on the topic to cement my understanding. Looking for a novel effect: let's consider 1000 hypotheses we might want to test.
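As a rough back-of-the-envelope illustration of the argument (the specific numbers below are my assumptions, not figures quoted from the post): if only 10% of 1000 hypotheses are true, tests have 50% power and alpha is 0.05, then about 50 true and 45 false positives reach significance, so barely half of the "significant" findings are real.

```python
# Back-of-the-envelope positive predictive value (PPV) calculation in the
# spirit of the Ioannidis argument. All input values are assumptions chosen
# for illustration.
n_hypotheses = 1000
prior_true = 0.10    # assumed fraction of hypotheses that are actually true
power = 0.50         # assumed probability of detecting a true effect
alpha = 0.05         # significance threshold

true_hypotheses = n_hypotheses * prior_true          # 100
false_hypotheses = n_hypotheses - true_hypotheses    # 900

true_positives = true_hypotheses * power             # 50 true effects detected
false_positives = false_hypotheses * alpha           # 45 false positives

ppv = true_positives / (true_positives + false_positives)
print(f"Significant findings       : {true_positives + false_positives:.0f}")
print(f"Positive predictive value  : {ppv:.2f}")     # roughly 0.53
```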

Read more