
Must read: Emanuel Wyler recommends a paper on a statistical fallacy

In studies with small sample sizes, the combination of few samples and measurement error can produce ‘statistically significant’ effects that are not actually present in the system under scrutiny.

Have you recently read an important paper from your field that you’d like to recommend to your colleagues? Send us the reference, along with a couple of lines about what grabbed your interest, and we’ll pass it along in Insights.

Emanuel Wyler from Markus Landthaler's lab recommends:

  • Eric Loken, Andrew Gelman (2017): “Measurement error and the replication crisis.” Science 355(6325), pp. 584-585. doi:10.1126/science.aal3618

“In empirical research, which most of us at the MDC do, experiments are performed to measure biological parameters. These measurements should inform us about underlying biological processes, but they are always blurred by noise that arises during measurement. A common view is that in (basic) research, studies are carried out with a small number of samples to explore and demonstrate previously unknown effects. If such effects then turn out to be ‘statistically significant’ despite the measurement noise, the general assumption is that the observation would become even clearer if the experiment were repeated with a large number of samples. Loken and Gelman show with a simulation embedded in their essay that the ‘despite’ is actually a ‘because’: the combination of small sample numbers and measurement error in fact produces effects that are not present in the system under scrutiny. In combination with an academic system that favors novelty over reliability and reproducibility, this leads to the so-called ‘replication crisis’: an ever-increasing body of scientific knowledge in which many observations may plainly be wrong.”
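The mechanism can be illustrated with a short simulation sketch. This is not the authors’ code, and the numbers below (a true correlation of 0.15, a noise standard deviation of 1.0, sample sizes of 20 and 3000) are illustrative assumptions: a weak true correlation is measured with noise, and only the runs that reach statistical significance are kept.

```python
# A rough illustration (not Loken and Gelman's own code) of the kind of
# simulation they describe: a weak true correlation, measured with noise,
# in small vs. large samples, looking only at "statistically significant" runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMS = 2000          # number of simulated experiments per sample size
TRUE_R = 0.15          # assumed weak true correlation
NOISE_SD = 1.0         # assumed measurement noise
ALPHA = 0.05           # significance threshold

def significant_runs(n):
    """Mean estimated correlation among runs with p < ALPHA, plus their count."""
    estimates = []
    for _ in range(N_SIMS):
        # Underlying variables with a weak true correlation
        x = rng.normal(size=n)
        y = TRUE_R * x + np.sqrt(1.0 - TRUE_R**2) * rng.normal(size=n)
        # Measurements blurred by noise
        x_obs = x + NOISE_SD * rng.normal(size=n)
        y_obs = y + NOISE_SD * rng.normal(size=n)
        r, p = stats.pearsonr(x_obs, y_obs)
        if p < ALPHA:
            estimates.append(r)
    return float(np.mean(estimates)), len(estimates)

for n in (20, 3000):
    mean_r, k = significant_runs(n)
    print(f"n={n}: {k} of {N_SIMS} runs significant, "
          f"mean estimated r among them = {mean_r:.2f} (true r = {TRUE_R})")
```

Under these assumptions, the large-sample runs should recover an estimate close to the (noise-attenuated) true correlation, while the small-sample runs that happen to clear the significance threshold should report a correlation several times larger than the true one, which is the ‘despite’ turning into a ‘because’.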

Article image: Papers pile by Niklas Bildhauer. This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.