Publications by IRD scientists

Berti-Equille Laure. (2019). Reinforcement learning for data preparation with active reward learning. In : El Yacoubi S. (ed.), Bagnoli F. (ed.), Pacini G. (ed.). Internet science. Cham : Springer, p. 121-132. (Lecture Notes in Computer Science ; 11938). International Conference on Internet Science : INSCI 2019, 6., Perpignan (FRA), 2019/12/02-05. ISBN 978-3-030-34769-7. ISSN 0302-9743.

Document title
Reinforcement learning for data preparation with active reward learning
Year of publication
2019
Document type
Book chapter
Authors
Berti-Equille Laure
In
El Yacoubi S. (ed.), Bagnoli F. (ed.), Pacini G. (ed.). Internet science
Source
Cham : Springer, 2019, p. 121-132 (Lecture Notes in Computer Science ; 11938). ISBN 978-3-030-34769-7. ISSN 0302-9743
Conference
International Conference on Internet Science : INSCI 2019, 6., Perpignan (FRA), 2019/12/02-05
Data cleaning and data preparation have been long-standing challenges in data science: "dirty" data leads to incorrect results, biases, and misleading conclusions. For a given dataset and data analytics task, a plethora of preprocessing techniques and alternative cleaning strategies are available, but they may produce dramatically different outputs of unequal result quality. Users generally do not know where to start or which methods to use for adequate data preparation. Most current work falls into two categories: (1) new data cleaning algorithms, specific to certain types of data anomalies that are usually considered in isolation and without a "pipeline vision" of the entire preprocessing strategy; and (2) automated machine learning (AutoML) approaches that optimize the hyper-parameters of a given ML model over a list of default preprocessing methods. We argue that more effort should be devoted to a principled and adaptive data preparation approach that helps the user, and learns from the user, to select the optimal sequence of data preparation tasks yielding the best quality of the final result. In this paper, we extend Learn2Clean, a method based on Q-learning, a model-free reinforcement learning technique that selects, for a given dataset, a given ML model, and a preselected quality performance metric, the optimal sequence of preprocessing tasks such that the quality metric is maximized. We discuss new results on semi-automating data preparation with "the human in the loop" using active reward learning and Q-learning.
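The abstract describes selecting a preprocessing sequence via Q-learning, with the quality metric of the final result serving as the reward. The following is a minimal illustrative sketch of that idea, not the Learn2Clean implementation: the step names, the reward stub, and the hyper-parameters are all assumptions made for the example.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical preprocessing steps; "train" ends an episode.
ACTIONS = ["impute", "dedup", "normalize", "outlier_removal", "train"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration
MAX_LEN = 6  # cap on pipeline length

def quality_metric(pipeline):
    # Stub standing in for the preselected quality metric (e.g. model
    # accuracy): rewards pipelines close to {impute, normalize, train}.
    return 1.0 - 0.2 * len(set(pipeline) ^ {"impute", "normalize", "train"})

q = defaultdict(float)  # maps (state, action) -> estimated value

for episode in range(2000):
    state, pipeline = "start", []
    while state != "train" and len(pipeline) < MAX_LEN:
        # Epsilon-greedy choice of the next preprocessing step.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        pipeline.append(action)
        # Reward arrives only at the end, once the model is trained.
        reward = quality_metric(pipeline) if action == "train" else 0.0
        best_next = 0.0 if action == "train" else max(q[(action, a)] for a in ACTIONS)
        # Standard Q-learning update.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = action

# Greedy rollout of the learned policy.
state, best_pipeline = "start", []
while state != "train" and len(best_pipeline) < MAX_LEN:
    state = max(ACTIONS, key=lambda a: q[(state, a)])
    best_pipeline.append(state)
print(best_pipeline)
```

Here the state is simply the last step applied; the paper's method operates on real datasets and metrics, and additionally learns the reward itself from user feedback (active reward learning) rather than assuming a fixed reward function.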
Classification
Artificial intelligence [122INTAR]
Descriptors
ARTIFICIAL INTELLIGENCE ; DATABASE ; DATA PROCESSING ; MODELLING ; LEARNING ; OPTIMIZATION
Location
IRD collection [F B010078412]
IRD identifier
fdi:010078412
Contact