Modifications of the Miller definition of contrastive (counterfactual) explanations
Miller recently proposed a definition of contrastive (counterfactual)
explanations based on the well-known Halpern-Pearl (HP) definitions of causes
and (non-contrastive) explanations. Crucially, the Miller definition was based
on the original HP definition of explanations, but this has since been modified
by Halpern; presumably because the original yields counterintuitive results in
many standard examples. More recently Borner has proposed a third definition,
observing that this modified HP definition may also yield counterintuitive
results. In this paper we show that the Miller definition inherits issues found
in the original HP definition. We address these issues by proposing two
improved variants based on the more robust modified HP and Borner definitions.
We analyse our new definitions and show that they retain the spirit of the
Miller definition where all three variants satisfy an alternative unified
definition that is modular with respect to an underlying definition of
non-contrastive explanations. To the best of our knowledge this paper also
provides the first explicit comparison between the original and modified HP
definitions.

Comment: Accepted by ECSQARU'2
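The HP definitions above are built on structural causal models. As a purely illustrative sketch (the model, the `forest_fire` example, and all names below are hypothetical and not taken from the paper), the simplest counterfactual building block is a "but-for" test: flipping a candidate cause and checking whether the outcome changes. The full HP definitions refine this test precisely because it fails in cases like overdetermination:

```python
# Illustrative sketch only: a toy structural equation and a "but-for"
# (counterfactual dependence) test, the simplest ingredient underlying
# the Halpern-Pearl style definitions discussed above. The example is
# hypothetical, not from the paper.

def forest_fire(lightning, match):
    """Structural equation: fire occurs if either cause is present."""
    return lightning or match

def but_for(model, context, var, actual, alternative):
    """Does flipping `var` from its actual value to an alternative
    change the model's outcome? (Counterfactual dependence.)"""
    actual_ctx = dict(context, **{var: actual})
    counter_ctx = dict(context, **{var: alternative})
    return model(**actual_ctx) != model(**counter_ctx)

# Lightning alone: it is a but-for cause of the fire.
print(but_for(forest_fire, {"match": False}, "lightning", True, False))  # True
# Lightning plus a dropped match (overdetermination): the naive test
# fails, which is what the full HP definitions are designed to handle.
print(but_for(forest_fire, {"match": True}, "lightning", True, False))  # False
```

The contrastive variants discussed in the paper additionally ask why the actual outcome occurred *rather than* some foil outcome, layered on top of a non-contrastive definition such as this one.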
Reinforcement learning for source location estimation: a multi-step approach
Gas leaks present an undeniable safety concern; the ability to swiftly and accurately detect the source of a leak, along with other pertinent details, is critical for effective emergency response. The limited precision of sensors and environmental noise introduce significant uncertainty and randomness, complicating the resolution of such problems. To address these challenges, this study introduces a new approach that integrates multi-step deep reinforcement learning algorithms with Bayesian inference to estimate source information. Compared with single-step reinforcement learning and Entrotaxis methods, the multi-step update mechanism allows the agent to locate the source's position more efficiently. This approach not only increases the search's success rate but also decreases the number of time steps needed for successful detection. Experiments conducted in continuous and discrete environments of equal scale and parameters corroborate the efficiency of our method in tracing the source of gas leaks.
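A minimal sketch of the Bayesian-inference component only, under simplifying assumptions: a posterior over candidate source locations on a grid is updated with each noisy sensor reading. The 1/(1+d) signal model and Gaussian likelihood below are invented for illustration (not the paper's dispersion model), and the reinforcement-learning policy that decides where the agent measures next is omitted entirely:

```python
import math

# Hypothetical sketch: Bayesian update of a posterior over candidate
# gas-source locations. Signal model and noise level are assumptions
# for illustration, not the paper's method.

def expected_signal(source, sensor):
    """Assumed signal model: concentration decays with distance."""
    return 1.0 / (1.0 + math.dist(source, sensor))

def update_posterior(posterior, sensor, reading, noise_sd=0.05):
    """One Bayesian update: weight each candidate source by the Gaussian
    likelihood of the observed reading, then renormalise."""
    new = {}
    for src, p in posterior.items():
        mu = expected_signal(src, sensor)
        new[src] = p * math.exp(-((reading - mu) ** 2) / (2 * noise_sd ** 2))
    z = sum(new.values()) or 1.0
    return {s: p / z for s, p in new.items()}

# Usage: a 10x10 grid, a hidden source at (7, 3), three readings taken
# at non-collinear sensor positions (noiseless here for clarity).
grid = [(x, y) for x in range(10) for y in range(10)]
posterior = {s: 1.0 / len(grid) for s in grid}
true_source = (7, 3)
for sensor in [(0, 0), (9, 0), (5, 5)]:
    reading = expected_signal(true_source, sensor)
    posterior = update_posterior(posterior, sensor, reading)
estimate = max(posterior, key=posterior.get)  # MAP estimate: (7, 3)
```

In the multi-step setting described above, the agent's policy would choose each successive sensing position so as to sharpen this posterior as quickly as possible.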
Multi-Granular Evaluation of Diverse Counterfactual Explanations
As a popular approach in Explainable AI (XAI), an increasing number of counterfactual explanation algorithms have been proposed in the context of making machine learning classifiers more trustworthy and transparent. This paper reports our evaluations of algorithms that can output diverse counterfactuals for one instance. We first evaluate the performance of DiCE-Random, DiCE-KDTree, DiCE-Genetic and Alibi-CFRL, taking XGBoost as the machine learning model for binary classification problems. Then, we compare their suggested feature changes with feature importance by SHAP. Moreover, our study highlights that synthetic counterfactuals, drawn from the input domain but not necessarily the training data, outperform native counterfactuals from the training data in terms of data privacy and validity. This research aims to guide practitioners in choosing the most suitable algorithm for generating diverse counterfactual explanations.
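To make the notion of "diverse counterfactuals for one instance" concrete, here is a hypothetical, self-contained sketch: for a toy two-feature credit classifier, it enumerates candidate points that flip the prediction, then greedily selects a set that is both close to the query instance and mutually spread out. The classifier, thresholds, and distance budget are all invented for illustration; libraries such as DiCE and Alibi evaluated above implement far more sophisticated versions of this search:

```python
import itertools

# Hypothetical illustration of diverse counterfactual generation.
# The classifier and all numeric thresholds are invented, not taken
# from the paper or from any library's implementation.

def classify(income, debt):
    """Toy classifier: approve (1) when income sufficiently exceeds debt."""
    return 1 if income - debt >= 50 else 0

def diverse_counterfactuals(query, k=3, min_spread=20):
    qi, qd = query
    # Candidate counterfactuals: grid points that flip the prediction.
    candidates = [
        (i, d)
        for i, d in itertools.product(range(0, 201, 10), range(0, 201, 10))
        if classify(i, d) != classify(qi, qd)
    ]
    # Prefer candidates close to the query (proximity)...
    candidates.sort(key=lambda c: abs(c[0] - qi) + abs(c[1] - qd))
    chosen = []
    for c in candidates:
        # ...but only keep those far from already-chosen ones (diversity).
        if all(abs(c[0] - o[0]) + abs(c[1] - o[1]) >= min_spread for o in chosen):
            chosen.append(c)
        if len(chosen) == k:
            break
    return chosen

# Usage: a rejected applicant with income 60 and debt 40 receives several
# distinct, valid ways to obtain approval.
cfs = diverse_counterfactuals((60, 40), k=3)
```

Every returned point is a valid counterfactual (it flips the prediction), and the pairwise-distance constraint is what makes the set diverse rather than three near-duplicates of the single nearest counterfactual.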