Predicting the replicability of social and behavioural science claims in a crisis: The COVID-19 Preprint Replication Project

Abstract

Replications are important for assessing the reliability of published findings. However, they are costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives, such as eliciting predictions of replicability, could accelerate assessment for rapid policy implementation in a crisis. We elicited judgements from participants on 100 claims from preprints about an emerging area of research (the COVID-19 pandemic) using an interactive structured elicitation protocol, and we conducted 29 new high-powered replications. After interacting with their peers, participant groups with lower task expertise (‘beginners’) updated their estimates and confidence in their judgements significantly more than groups with greater task expertise (‘experienced’). For experienced individuals, the average accuracy was 0.57 after interaction, and they correctly classified 61 percent of claims; beginners' average accuracy was 0.58, and they correctly classified 69 percent of claims. The difference in accuracy between groups was not statistically significant, and their judgements on the full set of claims were correlated (r = .48). These results suggest that both beginners and more experienced participants using a structured process have some ability to make better-than-chance predictions about the reliability of ‘fast science’ under conditions of high uncertainty. However, given the importance of such assessments for making evidence-based critical decisions in a crisis, more research is required to understand who the right experts in forecasting replicability are and how their judgements ought to be elicited.

Publication
Nature Human Behaviour, forthcoming
