Modifications of the Miller definition of contrastive (counterfactual) explanations
Miller recently proposed a definition of contrastive (counterfactual)
explanations based on the well-known Halpern-Pearl (HP) definitions of causes
and (non-contrastive) explanations. Crucially, the Miller definition was based
on the original HP definition of explanations, but this has since been modified
by Halpern, presumably because the original yields counterintuitive results in
many standard examples. More recently, Borner proposed a third definition,
observing that the modified HP definition may also yield counterintuitive
results. In this paper we show that the Miller definition inherits issues found
in the original HP definition. We address these issues by proposing two
improved variants based on the more robust modified HP and Borner definitions.
We analyse our new definitions and show that they retain the spirit of the
Miller definition, in that all three variants satisfy an alternative unified
definition that is modular with respect to an underlying definition of
non-contrastive explanations. To the best of our knowledge, this paper also
provides the first explicit comparison between the original and modified HP
definitions.
Comment: Accepted by ECSQARU'2
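To illustrate the counterfactual reasoning underlying the definitions discussed above, here is a minimal sketch of checking a simple but-for (counterfactual) dependency in a toy structural causal model, in the spirit of the HP framework. The disjunctive "forest fire" model and all variable names are illustrative assumptions, not taken from the paper, and a but-for check is far weaker than the full HP definitions of causes and explanations.

```python
# A toy structural causal model: each endogenous variable is computed by a
# structural equation from an exogenous context and previously set variables.
EQUATIONS = {
    "lightning": lambda ctx, vals: ctx["u_lightning"],
    "match":     lambda ctx, vals: ctx["u_match"],
    "fire":      lambda ctx, vals: vals["lightning"] or vals["match"],
}
ORDER = ["lightning", "match", "fire"]  # acyclic evaluation order

def evaluate(ctx, do=None):
    """Evaluate the model under context `ctx`; interventions in `do`
    override the structural equations (Pearl's do-operator)."""
    do = do or {}
    vals = {}
    for var in ORDER:
        vals[var] = do[var] if var in do else EQUATIONS[var](ctx, vals)
    return vals

def but_for_cause(var, outcome, ctx):
    """X=x is a but-for cause of Y=y if Y=y actually holds and intervening
    to set X to its opposite value makes Y=y no longer hold."""
    actual = evaluate(ctx)
    if not actual[outcome]:
        return False
    counterfactual = evaluate(ctx, do={var: not actual[var]})
    return counterfactual[outcome] != actual[outcome]

# Only lightning strikes: lightning is a but-for cause of the fire.
print(but_for_cause("lightning", "fire",
                    {"u_lightning": True, "u_match": False}))  # True
# Overdetermination: both occur, so neither alone is a but-for cause --
# exactly the kind of case the full HP definitions are designed to handle.
print(but_for_cause("lightning", "fire",
                    {"u_lightning": True, "u_match": True}))   # False
```

The second call shows why the HP definitions go beyond simple but-for dependence: with both potential causes present, neither flips the outcome on its own.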
Evaluating contrastive explanations for AI planning with non-experts: a smart home battery scenario
Multi-Granular Evaluation of Diverse Counterfactual Explanations
As a popular approach in Explainable AI (XAI), an increasing number of counterfactual explanation algorithms have been proposed in the context of making machine learning classifiers more trustworthy and transparent. This paper reports our evaluation of algorithms that can output diverse counterfactuals for a single instance. We first evaluate the performance of DiCE-Random, DiCE-KDTree, DiCE-Genetic and Alibi-CFRL, taking XGBoost as the machine learning model for binary classification problems. Then, we compare their suggested feature changes with feature importance scores produced by SHAP. Moreover, our study highlights that synthetic counterfactuals, drawn from the input domain but not necessarily the training data, outperform native counterfactuals from the training data in terms of data privacy and validity. This research aims to guide practitioners in choosing the most suitable algorithm for generating diverse counterfactual explanations.
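The core idea evaluated above can be sketched in a self-contained way: search the input domain for "synthetic" points that flip the classifier's decision (validity), then select several of them so they are both close to the query and different from one another (diversity). The toy classifier, thresholds, and greedy selection below are illustrative assumptions, not the DiCE or Alibi implementations.

```python
from itertools import product

def classify(x):
    # Toy binary classifier over features (income, credit), each in 0..10.
    income, credit = x
    return 1 if income + 2 * credit >= 10 else 0

def generate_diverse_cfs(query, k=3):
    """Grid-search candidates from the input domain (synthetic points, not
    training instances), keep the valid ones (prediction flips), then pick
    k counterfactuals greedily: the nearest first, then those maximising
    the minimum distance to the counterfactuals already chosen."""
    target = 1 - classify(query)
    dist = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))
    candidates = [x for x in product(range(11), repeat=2)
                  if classify(x) == target]         # validity
    candidates.sort(key=lambda x: dist(x, query))   # proximity
    chosen = [candidates[0]]
    for _ in range(k - 1):
        # diversity: pick the remaining candidate farthest from the chosen set
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(dist(c, s) for s in chosen))
        chosen.append(best)
    return chosen

query = (2, 2)                       # classified 0 ("denied")
cfs = generate_diverse_cfs(query)
print(all(classify(c) != classify(query) for c in cfs))  # True
```

Real methods such as DiCE optimise proximity and diversity jointly rather than greedily, but the trade-off being balanced is the same.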
Measuring inconsistency in a network intrusion detection rule set based on Snort
In this preliminary study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules, which are based on Snort and incorporate regular expression (Regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. We measure the degree of inconsistency in formulae of such a rule set (using the Scoring function, Shapley inconsistency values, and the Blame measure for prioritized knowledge) and compare the
*This is a revised and significantly extended version of [1]
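The measurement step above can be illustrated with a minimal sketch of the Shapley inconsistency value (in the sense of Hunter and Konieczny) computed with the drastic inconsistency measure over a tiny propositional knowledge base. The three formulas stand in for translated rules and are purely illustrative; the paper's rule set and measures are richer.

```python
from itertools import combinations, product
from math import factorial
from fractions import Fraction

ATOMS = ("a", "b")
# Toy knowledge base: formulas represented as truth functions over a model.
KB = {
    "r1: a":  lambda m: m["a"],
    "r2: ~a": lambda m: not m["a"],
    "r3: b":  lambda m: m["b"],
}

def drastic(subset):
    """Drastic measure: 1 if the subset of formulas is unsatisfiable, else 0
    (checked by brute force over all assignments to the atoms)."""
    for values in product([False, True], repeat=len(ATOMS)):
        model = dict(zip(ATOMS, values))
        if all(KB[f](model) for f in subset):
            return 0
    return 1

def shapley_value(formula):
    """Shapley inconsistency value of `formula`: its weighted average
    marginal contribution to the inconsistency of subsets of the KB."""
    others = [f for f in KB if f != formula]
    n = len(KB)
    total = Fraction(0)
    for size in range(len(others) + 1):
        for rest in combinations(others, size):
            weight = Fraction(factorial(size) * factorial(n - size - 1),
                              factorial(n))
            total += weight * (drastic(rest + (formula,)) - drastic(rest))
    return total

for f in KB:
    print(f, shapley_value(f))
# The two contradictory rules share the blame equally; r3 contributes nothing.
```

Note that the values sum to the drastic inconsistency of the whole base, a standard property of Shapley-style measures that makes per-rule blame attributions directly comparable.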