%0 Journal Article
%9 ACL: Articles in peer-reviewed journals indexed by AERES
%A Christie, A. P.
%A Abecasis, D.
%A Adjeroud, Mehdi
%A Alonso, J. C.
%A et al.
%T Quantifying and addressing the prevalence and bias of study designs in the environmental and social sciences
%D 2020
%L fdi:010080491
%G ENG
%J Nature Communications
%@ 2041-1723
%M ISI:000600150800017
%N 1
%P 6377 [11]
%R 10.1038/s41467-020-20142-y
%U https://www.documentation.ird.fr/hor/fdi:010080491
%> https://horizon.documentation.ird.fr/exl-doc/pleins_textes/divers21-01/010080491.pdf
%V 11
%W Horizon (IRD)
%X Building trust in science and evidence-based decision-making depends heavily on the credibility of studies and their findings. Researchers employ many different study designs that vary in their risk of bias to evaluate the true effect of interventions or impacts. Here, we empirically quantify, on a large scale, the prevalence of different study designs and the magnitude of bias in their estimates. Randomised designs and controlled observational designs with pre-intervention sampling were used by just 23% of intervention studies in biodiversity conservation, and 36% of intervention studies in social science. We demonstrate, through pairwise within-study comparisons across 49 environmental datasets, that these types of designs usually give less biased estimates than simpler observational designs. We propose a model-based approach to combine study estimates that may suffer from different levels of study design bias, discuss the implications for evidence synthesis, and how to facilitate the use of more credible study designs. Randomised controlled experiments are the gold standard for scientific inference, but environmental and social scientists often rely on different study designs. Here the authors analyse the use of six common study designs in the fields of biodiversity conservation and social intervention, and quantify the biases in their estimates.
%$ 020 ; 021