Publications by IRD scientists

Guerin Joris, Delmas K., Ferreira R., Guiochet J. (2023). Out-of-distribution detection is not all you need. In : Williams B. (ed.), Chen Y. (ed.), Neville J. (ed.). Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence. Washington : AAAI Press, 14829-14837. AAAI Conference on Artificial Intelligence ; Conference on Innovative Applications of Artificial Intelligence ; Symposium on Educational Advances in Artificial Intelligence, 37. ; 35. ; 13., Washington (USA), 2023/02/07-14. ISBN 978-1-57735-880-0.

Document title
Out-of-distribution detection is not all you need
Year of publication
2023
Document type
Book chapter
Authors
Guerin Joris, Delmas K., Ferreira R., Guiochet J.
In
Williams B. (ed.), Chen Y. (ed.), Neville J. (ed.), Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence
Source
Washington : AAAI Press, 2023, 14829-14837. ISBN 978-1-57735-880-0
Conference
AAAI Conference on Artificial Intelligence ; Conference on Innovative Applications of Artificial Intelligence ; Symposium on Educational Advances in Artificial Intelligence, 37. ; 35. ; 13., Washington (USA), 2023/02/07-14
Abstract
The usage of deep neural networks in safety-critical systems is limited by our ability to guarantee their correct behavior. Runtime monitors are components that aim to identify unsafe predictions and discard them before they can lead to catastrophic consequences. Several recent works on runtime monitoring have focused on out-of-distribution (OOD) detection, i.e., identifying inputs that differ from the training data. In this work, we argue that OOD detection is not a well-suited framework for designing efficient runtime monitors, and that it is more relevant to evaluate monitors based on their ability to discard incorrect predictions. We call this setting out-of-model-scope (OMS) detection and discuss its conceptual differences from OOD. We also conduct extensive experiments on popular datasets from the literature to show that studying monitors in the OOD setting can be misleading: (1) very good OOD results can give a false impression of safety, and (2) comparison under the OOD setting does not allow identifying the best monitor for detecting errors. Finally, we show that removing erroneous training data samples helps to train better monitors.
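The distinction the abstract draws between the two evaluation settings can be sketched minimally. The toy snippet below (illustrative only, not the paper's code or data) scores the same monitor decisions once against OOD labels and once against model correctness (the out-of-model-scope target); all sample values are hypothetical.

```python
# Toy sketch (not the paper's code): contrast OOD-based and OMS-based
# evaluation of a runtime monitor. All labels and decisions are illustrative.

def agreement(monitor_rejects, should_reject):
    # Fraction of samples where the monitor's decision matches the target.
    hits = sum(r == s for r, s in zip(monitor_rejects, should_reject))
    return hits / len(should_reject)

# Hypothetical per-sample ground truth for five inputs:
is_ood = [False, False, True, True, False]        # input differs from training data
model_correct = [True, False, False, True, True]  # did the network predict correctly?

# A monitor's binary decisions (True = discard the prediction):
monitor_rejects = [False, True, True, False, False]

# OOD setting: the monitor is rewarded for rejecting exactly the OOD inputs.
ood_score = agreement(monitor_rejects, is_ood)

# OMS setting: the monitor is rewarded for rejecting exactly the incorrect
# predictions, regardless of whether the input is in-distribution.
oms_score = agreement(monitor_rejects, [not c for c in model_correct])

print(f"OOD agreement: {ood_score:.2f}")  # 0.60
print(f"OMS agreement: {oms_score:.2f}")  # 1.00
```

In this toy example the monitor catches every model error (sample 1 is in-distribution yet mispredicted; sample 3 is OOD yet correctly handled), so it is perfect under OMS while looking mediocre under OOD scoring, mirroring the abstract's claim that the two settings can rank monitors differently.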
Classification
Computer science [122]
Location
Fonds IRD [F B010090489]
IRD identifier
fdi:010090489