Lai S., Hu L., Wang J., Berti-Equille Laure, Wang D. (2024). Faithful vision-language interpretation via concept bottleneck models.
[s.l.]: [s.n.], 24 p. multigr. International Conference on Learning Representations, 12th, Vienna (AUT), 2024/05/07-2024/05/11.
Document title
Faithful vision-language interpretation via concept bottleneck models
Year of publication
2024
Document type
Conference paper
Auteurs
Lai S., Hu L., Wang J., Berti-Equille Laure, Wang D.
Source
[s.l.]: [s.n.], 2024,
24 p. multigr.
Conference
International Conference on Learning Representations, 12th, Vienna (AUT), 2024/05/07-2024/05/11
The demand for transparency in healthcare and finance has led to interpretable machine learning (IML) models, notably concept bottleneck models (CBMs), valued for their potential to combine strong performance with insight into deep neural networks. However, CBMs' reliance on manually annotated concept data poses challenges. Label-free CBMs have emerged to address this, but they remain unstable, which undermines their faithfulness as explanatory tools. To address this inherent instability, we introduce a formal definition of an alternative concept model, the Faithful Vision-Language Concept (FVLC) model, and present a methodology for constructing an FVLC that satisfies four critical properties. Our extensive experiments on four benchmark datasets using the Label-free CBM architecture demonstrate that FVLC outperforms the other baselines in stability against input and concept-set perturbations. Our approach incurs minimal accuracy degradation compared to the vanilla CBM, making it a promising solution for reliable and faithful model interpretation.
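For readers unfamiliar with the architecture the abstract builds on, the following is a minimal sketch of a generic concept bottleneck model in PyTorch: image features are mapped to human-interpretable concept scores, and the class prediction is computed from those scores alone (the "bottleneck"). This is an illustrative sketch only, not the FVLC construction described in the paper; the layer sizes, the sigmoid activation, and the assumption of a frozen 512-dimensional image encoder are all hypothetical choices for the example.

# Minimal sketch of a generic concept bottleneck model (illustration only;
# not the paper's FVLC method). Sizes and activations are assumptions.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, feature_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Maps backbone image features to human-interpretable concept scores.
        self.concept_layer = nn.Linear(feature_dim, num_concepts)
        # The label prediction uses ONLY the concept scores (the bottleneck).
        self.label_layer = nn.Linear(num_concepts, num_classes)

    def forward(self, features: torch.Tensor):
        concepts = torch.sigmoid(self.concept_layer(features))  # concept activations in [0, 1]
        logits = self.label_layer(concepts)                     # class logits from concepts alone
        return concepts, logits

# Example usage: embeddings from a (hypothetical) frozen 512-d image encoder,
# projected onto 128 concepts and 10 classes.
model = ConceptBottleneckModel(feature_dim=512, num_concepts=128, num_classes=10)
features = torch.randn(4, 512)        # a batch of 4 image embeddings
concepts, logits = model(features)    # interpretable concept scores and class logits

Because the final prediction depends only on the concept layer, perturbing the input or the concept set directly shifts the explanation a CBM produces; the stability properties discussed in the abstract concern how much such perturbations can change these concept activations.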