%0 Journal Article
%9 ACL : Articles dans des revues avec comité de lecture répertoriées par l'AERES
%A Sokolovska, N.
%A Rizkalla, S.
%A Clément, K.
%A Zucker, Jean-Daniel
%T Continuous and discrete deep classifiers for data integration
%B Advances in intelligent data analysis XIV
%C Cham
%D 2015
%E Fromont, E.
%E De Bie, T.
%E Van Leeuwen, M.
%L fdi:010072191
%G ENG
%I Springer
%@ 978-3-319-24465-5
%M ISI:000389228500023
%N 9385
%P 264-274
%R 10.1007/978-3-319-24465-5_23
%U https://www.documentation.ird.fr/hor/fdi:010072191
%> https://www.documentation.ird.fr/intranet/publi/depot/2018-02-09/010072191.pdf
%W Horizon (IRD)
%X Data representation in a lower dimension is needed in applications where information comes from multiple high-dimensional sources. A final compact model has to be interpreted by human experts, and the interpretation of a classifier with discrete weights is much more straightforward. In this contribution, we propose a novel approach, called Deep Kernel Dimensionality Reduction, which is designed to learn layers of new compact data representations simultaneously. We show by experiments on standard and on real large-scale biomedical data sets that the proposed method embeds data in a new compact, meaningful representation and leads to a lower classification error than state-of-the-art methods. We also consider some state-of-the-art deep learners and their corresponding discrete classifiers. Our experiments illustrate that, although purely discrete models do not always perform better than real-valued classifiers, the trade-off between model accuracy and interpretability is quite reasonable.
%S Lecture Notes in Computer Science
%B IDA : Intelligent Data Analysis 2015
%8 2015/10/22-24
%$ 122 ; 050 ; 054