17,207 research outputs found
This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning
With the ongoing rise of machine learning, the need for methods that explain decisions made by artificial intelligence systems is becoming increasingly important. For image classification tasks in particular, many
state-of-the-art tools to explain such classifiers rely on visual highlighting
of important areas of the input data. In contrast, counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image
in a way such that the classifier would have made a different prediction. By
doing so, the users of counterfactual explanation systems are equipped with a
completely different kind of explanatory information. However, methods for
generating realistic counterfactual explanations for image classifiers are
still rare. In this work, we present a novel approach to generate such
counterfactual image explanations based on adversarial image-to-image
translation techniques. Additionally, we conduct a user study to evaluate our
approach in a use case which was inspired by a healthcare scenario. Our results
show that our approach yields significantly better results with respect to mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
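The abstract above describes counterfactual image explanations produced via adversarial image-to-image translation. As a rough illustration of the general idea only (not the paper's actual architecture), the sketch below trains a small residual generator so that a fixed classifier assigns its outputs to a chosen target class while the outputs stay close to the original images; the adversarial realism term (a discriminator) is omitted, and all module names, layer sizes, and loss weights are assumptions.

```python
# Hedged sketch: counterfactual image explanations via image-to-image translation.
# A generator G maps an input image to a modified image that (a) stays close to
# the original and (b) is classified as the counterfactual target class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Residual image-to-image generator: adds a bounded modification to the input."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)

class TinyClassifier(nn.Module):
    """Stand-in for the pretrained classifier being explained."""
    def __init__(self, channels=1, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def counterfactual_step(generator, classifier, images, target_class, optimizer,
                        w_cf=1.0, w_prox=10.0):
    """One training step: push generated images toward the target class while
    keeping them close to the originals (realism/discriminator loss omitted)."""
    optimizer.zero_grad()
    counterfactuals = generator(images)
    logits = classifier(counterfactuals)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    loss_cf = F.cross_entropy(logits, target)        # flip the prediction
    loss_prox = F.l1_loss(counterfactuals, images)   # stay near the input
    loss = w_cf * loss_cf + w_prox * loss_prox
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    G, C = TinyGenerator(), TinyClassifier()
    opt = torch.optim.Adam(G.parameters(), lr=1e-3)
    batch = torch.rand(4, 1, 28, 28)                 # stand-in for real images
    print(counterfactual_step(G, C, batch, target_class=1, optimizer=opt))
```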
Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations
Interpretable machine learning seeks to understand the reasoning process of
complex black-box systems that have long been notorious for their lack of explainability.
One flourishing approach is through counterfactual explanations, which provide
suggestions on what a user can do to alter an outcome. Not only must a
counterfactual example counter the original prediction from the black-box
classifier, but it should also satisfy various constraints for practical applications. Diversity is one such critical constraint, yet it remains comparatively under-discussed. While diverse counterfactuals are ideal, it is computationally challenging to achieve diversity while simultaneously satisfying the other constraints. Furthermore,
there is a growing privacy concern over the released counterfactual data. To
this end, we propose a feature-based learning framework that effectively handles the counterfactual constraints and adds to the limited pool of privacy-preserving explanation models. We demonstrate the flexibility and effectiveness of our method in generating diverse counterfactuals that are both actionable and plausible. Our counterfactual engine is more efficient than counterparts of the same capacity while yielding the lowest re-identification risk.
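The abstract above targets counterfactuals that are diverse as well as actionable, plausible, and privacy-preserving. As a generic illustration of the diversity objective only (in the spirit of DiCE-style optimisation, not the paper's feature-based framework), the sketch below jointly optimises several candidate counterfactuals for a differentiable tabular classifier with a prediction-flipping term, a proximity term, and a pairwise-diversity term; the model, loss weights, and data are toy assumptions.

```python
# Hedged sketch of diverse counterfactual search for a differentiable classifier.
import torch
import torch.nn.functional as F

def diverse_counterfactuals(model, x, target_class, k=3, steps=200, lr=0.05,
                            w_prox=0.5, w_div=0.1):
    # Start K candidates from small random perturbations of the query point x.
    cfs = (x.repeat(k, 1) + 0.01 * torch.randn(k, x.size(-1))).requires_grad_(True)
    opt = torch.optim.Adam([cfs], lr=lr)
    target = torch.full((k,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss_flip = F.cross_entropy(model(cfs), target)      # counter the prediction
        loss_prox = (cfs - x).abs().mean()                    # proximity / sparsity
        # Mean pairwise squared distance between candidates (maximise spread).
        loss_div = -((cfs.unsqueeze(0) - cfs.unsqueeze(1)) ** 2).sum(-1).mean()
        (loss_flip + w_prox * loss_prox + w_div * loss_div).backward()
        opt.step()
    return cfs.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 2))       # toy black-box stand-in
    x = torch.randn(1, 4)
    print(diverse_counterfactuals(model, x, target_class=1))
```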
Probabilistic Action Language pBC+
We present ongoing research on a probabilistic extension of the action language BC+. Just as BC+ is defined as a high-level notation for answer set programs describing transition systems, the proposed language, which we call pBC+, is defined as a high-level notation for LP^{MLN} programs, a probabilistic extension of answer set programs.
As preliminary results, we illustrate how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in pBC+ and computed using an implementation of LP^{MLN}.
For future work, we plan to develop a compiler that automatically translates pBC+ descriptions into LP^{MLN} programs, and to support parameter learning in probabilistic action domains through LP^{MLN} weight learning. We will work on defining useful extensions of pBC+ to facilitate hypothetical/counterfactual reasoning. We will also look for real-world applications, possibly in robotic domains, to empirically study the performance of this approach to probabilistic reasoning in action domains.
Explaining Recommendation System Using Counterfactual Textual Explanations
Currently, a significant amount of research in artificial intelligence is devoted to improving the explainability and interpretability of deep learning models. It has been found that end-users trust a system more readily when they understand why it produced a particular output. Recommender systems are one example of systems for which considerable effort has been made to render their output more explainable. One method for
producing a more explainable output is using counterfactual reasoning, which
involves altering minimal features to generate a counterfactual item that
results in changing the output of the system. This process allows the
identification of input features that have a significant impact on the desired
output, leading to effective explanations. In this paper, we present a method
for generating counterfactual explanations for both tabular and textual
features. We evaluated the performance of our proposed method on three
real-world datasets and demonstrated a +5% improvement in finding effective features (based on model-based measures) compared to the baseline method.
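The abstract above relies on altering a minimal set of features so that the system's output changes. As a simplified illustration of that idea (not the paper's method), the sketch below greedily edits the fewest input features needed to push a toy recommender's score below a decision threshold; the scoring model, threshold, and zero-out edit are all assumptions.

```python
# Hedged sketch: counterfactual explanation by minimal feature alteration.
import torch

def greedy_counterfactual(score_fn, x, threshold=0.5, max_edits=None):
    """Return indices of a small feature set whose removal flips the decision."""
    x = x.clone()
    edited = []
    max_edits = max_edits or x.numel()
    while score_fn(x) >= threshold and len(edited) < max_edits:
        best_idx, best_score = None, float("inf")
        for i in range(x.numel()):
            if i in edited:
                continue
            trial = x.clone()
            trial[i] = 0.0                      # candidate edit: drop feature i
            s = score_fn(trial).item()
            if s < best_score:
                best_idx, best_score = i, s
        x[best_idx] = 0.0                       # keep the single most effective edit
        edited.append(best_idx)
    return edited, x

if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.rand(8)                           # toy linear "recommender" weights
    score_fn = lambda v: torch.sigmoid(w @ v - 1.0)
    user_item_features = torch.rand(8)
    edits, cf = greedy_counterfactual(score_fn, user_item_features)
    print("features altered:", edits, "new score:", score_fn(cf).item())
```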
- …