
    Viewing the process of generating counterfactuals as a source of knowledge -- Application to the Naive Bayes classifier

    Many explanation methods now exist for understanding the decisions of a machine learning algorithm, among them methods based on the generation of counterfactual examples. This article proposes to view this generation process as a source of knowledge that can be stored and reused later in different ways. The process is illustrated for the additive model and, more specifically, for the naive Bayes classifier, whose properties are shown to be well suited to this purpose.
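
    As a rough illustration of the idea, the sketch below (a minimal example, not the paper's algorithm) generates single-feature counterfactuals for a scikit-learn naive Bayes classifier on synthetic categorical data, and stores each (instance, counterfactual, changed feature) triple so it can be reused later; all names and data here are illustrative assumptions.

        # Minimal sketch, not the paper's algorithm: generate one-feature
        # counterfactuals for a naive Bayes classifier and store them as
        # reusable knowledge. Data and names are illustrative.
        import numpy as np
        from sklearn.naive_bayes import CategoricalNB

        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(200, 4))      # 4 categorical features in {0,1,2}
        y = (X[:, 0] + X[:, 1] > 2).astype(int)    # synthetic binary labels
        clf = CategoricalNB().fit(X, y)

        def counterfactual(x):
            """Return (z, j): a copy of x with feature j changed so that the
            predicted class flips, or None if no single change flips it."""
            pred = clf.predict([x])[0]
            for j in range(x.shape[0]):
                for v in range(3):
                    if v == x[j]:
                        continue
                    z = x.copy()
                    z[j] = v
                    if clf.predict([z])[0] != pred:
                        return z, j
            return None

        knowledge = []                             # stored counterfactual knowledge
        for x in X[:20]:
            result = counterfactual(x)
            if result is not None:
                z, j = result
                knowledge.append((x, z, j))

        # The stored triples can be mined later, e.g. to see which features
        # most often flip the decision.
        changed = np.array([j for _, _, j in knowledge], dtype=int)
        print(np.bincount(changed, minlength=X.shape[1]))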

    VCNet: A self-explaining model for realistic counterfactual generation

    Counterfactual explanation is a common class of methods for producing local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest modification of feature values that changes the decision predicted by a machine learning model. One of the challenges of counterfactual explanation is the efficient generation of realistic counterfactuals. To address this challenge, we propose VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator trained jointly, for regression or classification tasks. VCNet is able both to produce predictions and to generate counterfactual explanations without having to solve a separate minimisation problem. Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class. This is done by learning a variational autoencoder conditioned on the output of the predictor, in a joint-training fashion. We present an empirical evaluation on tabular datasets and across several interpretability metrics; the results are competitive with state-of-the-art methods.
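
    The sketch below gives a minimal PyTorch reading of this architecture; it is an illustrative assumption, not the authors' code. The decoder is conditioned on a class label, so a counterfactual is obtained by re-encoding the instance with its predicted class and decoding with the opposite one, with no per-instance optimisation. The label y_pred is assumed to come from the jointly trained predictor, which is omitted here.

        # Hedged sketch of a class-conditional VAE for counterfactuals; the
        # jointly trained predictor of VCNet is omitted, and all layer sizes
        # are illustrative.
        import torch
        import torch.nn as nn

        class CVAE(nn.Module):
            def __init__(self, d_in, d_lat=8, n_classes=2):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(d_in + n_classes, 32), nn.ReLU())
                self.mu = nn.Linear(32, d_lat)
                self.logvar = nn.Linear(32, d_lat)
                self.dec = nn.Sequential(nn.Linear(d_lat + n_classes, 32), nn.ReLU(),
                                         nn.Linear(32, d_in))
                self.n_classes = n_classes

            def forward(self, x, y_onehot):
                h = self.enc(torch.cat([x, y_onehot], dim=1))
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
                return self.dec(torch.cat([z, y_onehot], dim=1)), mu, logvar

            def counterfactual(self, x, y_pred):
                # Encode with the predicted class, decode with the other class;
                # no optimisation problem is solved per instance.
                y = nn.functional.one_hot(y_pred, self.n_classes).float()
                y_cf = nn.functional.one_hot(1 - y_pred, self.n_classes).float()
                h = self.enc(torch.cat([x, y], dim=1))
                return self.dec(torch.cat([self.mu(h), y_cf], dim=1))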

    Générer des explications contrefactuelles à l'aide d'un autoencodeur supervisé (Generating counterfactual explanations with a supervised autoencoder)

    In this work, we investigate the problem of generating counterfactual explanations that are close both to the data distribution and to the distribution of the target class, the objective being counterfactuals with likely (i.e. realistic) feature values. A counterfactual explanation is a modified version of the instance to be explained that answers the question: what would have to change to obtain a different prediction? We propose a method for generating realistic counterfactuals using class prototypes; the novelty of the approach is that these prototypes are obtained with a supervised autoencoder, which contributes a term to the generation objective that constrains the explanations to be close to the data distribution and to their target class. An empirical evaluation on an image dataset, across several interpretability metrics, shows results competitive with a state-of-the-art reference method.
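
    A minimal sketch of such a prototype-guided objective is given below; the names encoder and proto_target and the weights alpha and beta are illustrative assumptions, not the paper's notation. The prototype of a class is taken here to be the mean latent code of its training points under the supervised autoencoder.

        # Hedged sketch of a prototype-guided counterfactual loss; `encoder` is
        # the encoder of a supervised autoencoder, and the weights are illustrative.
        import torch

        def prototype_loss(x_cf, x, target_logit, proto_target, encoder,
                           alpha=1.0, beta=1.0):
            """A counterfactual x_cf should (i) raise the target class logit,
            (ii) stay close to the original instance x, and (iii) land near the
            latent prototype of the target class, i.e. in a realistic region."""
            pred_term = torch.relu(-target_logit)        # flip the prediction
            dist_term = torch.norm(x_cf - x, p=1)        # proximity to the instance
            proto_term = torch.norm(encoder(x_cf) - proto_target) ** 2
            return pred_term + alpha * dist_term + beta * proto_term

        # The prototype of class c would be, e.g.:
        #   proto_c = encoder(X_train[y_train == c]).mean(dim=0)
        # and x_cf can then be found by gradient descent on this loss.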

    Générer des explications contrefactuelles robustes (Generating robust counterfactual explanations)

    Counterfactual explanations have become a mainstay of the XAI field. This particularly intuitive form of explanation lets the user understand what small but necessary changes to a given situation would change a model's prediction. The quality of a counterfactual depends on several criteria: realism, actionability, validity, robustness, etc. In this paper, we are interested in the robustness of a counterfactual, and more precisely in robustness to changes of the counterfactual input. This form of robustness is particularly challenging, as it involves a trade-off between the robustness of the counterfactual and its proximity to the example being explained. We propose a new framework, CROCO, that generates robust counterfactuals while managing this trade-off effectively, and that guarantees the user a minimum level of robustness. An empirical evaluation on tabular datasets confirms the relevance and effectiveness of our approach.
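
    One loosely grounded way to picture this notion of robustness (an illustrative reading, not the CROCO algorithm) is a Monte Carlo estimate of how often small perturbations of the counterfactual still receive the target class, with a user-chosen floor on that estimate; all names and values below are assumptions.

        # Hedged sketch: estimate the probability that a perturbed counterfactual
        # is still classified as the target class. Not the CROCO algorithm.
        import numpy as np

        def robustness(predict, x_cf, target, sigma=0.1, n=500, seed=0):
            """Monte Carlo estimate of P(predict(x_cf + noise) == target) under
            Gaussian input perturbations of scale sigma."""
            rng = np.random.default_rng(seed)
            noise = rng.normal(0.0, sigma, size=(n, x_cf.shape[0]))
            return np.mean(predict(x_cf + noise) == target)

        # A counterfactual would be accepted only above a robustness floor,
        # trading some proximity for stability:
        #   ok = robustness(model.predict, x_cf, target) >= 0.95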
