%0 Conference Proceedings
%9 ACTI : Communications avec actes dans un congrès international
%A Machicao, J.
%A Ben Abbes, A.
%A Meneguzzi, L.
%A Pizzigatti Corrêa, P.
%A Specht, A.
%A David, R.
%A Subsol, G.
%A Vellenich, D.F.
%A Devillers, Rodolphe
%A Stall, S.
%A Mouquet, N.
%A Chaumont, M.
%A Berti-Equille, Laure
%A Mouillot, D.
%T Reproducing Deep Learning experiments : common challenges and recommendations for improvement [poster]
%D 2022
%L fdi:010087181
%G ENG
%I
%P 1 multigr.
%R 10.5281/zenodo.6587694
%U https://www.documentation.ird.fr/hor/fdi:010087181
%> https://horizon.documentation.ird.fr/exl-doc/pleins_textes/2023-07/010087181.pdf
%W Horizon (IRD)
%X In computer science, there are growing efforts to improve reproducibility. However, it is still difficult to reproduce the experiments of other scientists, and even more so when it comes to Deep Learning (DL). Making a DL research experiment reproducible requires substantial work to document, verify, and make the system usable. These challenges are compounded by the inherent complexity of DL, including the number of (hyper)parameters, the huge volume of data, and the versioning of the learning model, among others. Based on the reproduction of three DL case studies on real-world tasks, such as poverty estimation from remote sensing imagery, we identified common reproduction problems. We therefore proposed a set of recommendations ('fixes') to overcome the issues a researcher may encounter, in order to improve reproducibility and replicability and reduce the likelihood of wasted effort. These strategies can be used as a "Swiss army knife" to move from DL to more general areas, as they are organized around (i) the quality of the dataset (and associated metadata), (ii) the Deep Learning method, (iii) the implementation, and (iv) the infrastructure used.
%B RDA.Research Data Alliance
%8 2022/06/20-23
%$ 122 ; 021