Generating Context-Aware Contrastive Explanations in Rule-based Systems
Human explanations are often contrastive, meaning that they do not answer the
indeterminate "Why?" question, but instead "Why P, rather than Q?".
Automatically generating contrastive explanations is challenging because the
contrastive event (Q) represents the expectation of a user in contrast to what
happened. We present an approach that predicts a potential contrastive event in
situations where a user asks for an explanation in the context of rule-based
systems. Our approach analyzes a situation that needs to be explained and then
selects the most likely rule a user may have expected instead of what the user
has observed. This contrastive event is then used to create a contrastive
explanation that is presented to the user. We have implemented the approach as
a plugin for a home automation system and demonstrate its feasibility in four
test scenarios. Comment: 2024 Workshop on Explainability Engineering (ExEn '24).
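The rule-selection step can be illustrated with a short sketch. The following Python snippet is our own hypothetical illustration (the names and the similarity heuristic are assumptions, not taken from the paper): candidate rules that did not fire are ranked by how well their conditions match the current situation, and the best match is taken as the contrastive event Q.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        conditions: set   # condition labels that must hold for the rule to fire
        action: str       # device action the rule triggers

    def pick_contrastive_rule(observed, candidates, situation):
        """Pick the rule a user most plausibly expected instead of `observed`.

        Hypothetical heuristic: score each other rule by the fraction of its
        conditions satisfied in the current situation and return the closest one.
        """
        best, best_score = None, -1.0
        for rule in candidates:
            if rule.name == observed.name:
                continue
            score = len(rule.conditions & situation) / max(len(rule.conditions), 1)
            if score > best_score:
                best, best_score = rule, score
        return best

    # Example: the light stayed off (vacation mode) although the user expected
    # the evening-lighting rule to switch it on.
    situation = {"motion_detected", "after_sunset", "vacation_mode"}
    observed = Rule("keep_light_off", {"vacation_mode"}, "light off")
    candidates = [
        Rule("evening_lighting", {"motion_detected", "after_sunset"}, "light on"),
        Rule("night_dimming", {"after_midnight"}, "light dimmed"),
    ]
    print(pick_contrastive_rule(observed, candidates, situation).name)  # evening_lighting

The contrastive explanation would then contrast the conditions of the observed rule with those of the selected rule, e.g. "the light stayed off because vacation mode is active, rather than turning on via the evening-lighting rule".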
Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
Post-hoc explanations of machine learning models are crucial for people to
understand and act on algorithmic predictions. An intriguing class of
explanations is through counterfactuals, hypothetical examples that show people
how to obtain a different prediction. We posit that effective counterfactual
explanations should satisfy two properties: feasibility of the counterfactual
actions given user context and constraints, and diversity among the
counterfactuals presented. To this end, we propose a framework for generating
and evaluating a diverse set of counterfactual explanations based on
determinantal point processes. To evaluate the actionability of
counterfactuals, we provide metrics that enable comparison of
counterfactual-based methods to other local explanation methods. We further
address necessary tradeoffs and point to causal implications in optimizing for
counterfactuals. Our experiments on four real-world datasets show that our
framework can generate a set of counterfactuals that are diverse and well
approximate local decision boundaries, outperforming prior approaches to
generating diverse counterfactuals. We provide an implementation of the
framework at https://github.com/microsoft/DiCE. Comment: 13 pages.
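As a rough illustration of the diversity criterion, the following sketch (our own simplification, not the DiCE implementation) scores a candidate set of counterfactuals with a determinantal-point-process-style quantity: the determinant of a similarity kernel, which grows as the candidates become more mutually dissimilar.

    import numpy as np

    def dpp_diversity(counterfactuals, length_scale=1.0):
        """Determinant of an RBF similarity kernel over candidate counterfactuals.

        A determinantal point process assigns probability proportional to this
        determinant, so maximizing it favors diverse sets. The kernel here is a
        simplified stand-in for the one used in the paper's framework.
        """
        diffs = counterfactuals[:, None, :] - counterfactuals[None, :, :]
        sq_dists = np.sum(diffs ** 2, axis=-1)
        kernel = np.exp(-sq_dists / (2.0 * length_scale ** 2))
        return float(np.linalg.det(kernel))

    # Two near-duplicate counterfactuals score far lower than two well-separated ones.
    near = np.array([[0.0, 0.0], [0.1, 0.0]])
    far = np.array([[0.0, 0.0], [2.0, 2.0]])
    print(dpp_diversity(near), dpp_diversity(far))  # ~0.01 vs ~1.0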
Analytic aspects of the shuffle product
There exist very lucid explanations of the combinatorial origins of rational
and algebraic functions, in particular with respect to regular and context free
languages. In the search to understand how to extend these natural
correspondences, we find that the shuffle product models many key aspects of
D-finite generating functions, a class which contains the algebraic functions. We consider
several different takes on the shuffle product, shuffle closure, and shuffle
grammars, and give explicit generating function consequences. In the process,
we define a grammar class that models D-finite generating functions.
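To make the link to generating functions concrete, a standard counting fact (well known, not specific to this paper) is that shuffling words of lengths m and n produces binomial(m+n, m) interleavings, so on disjoint alphabets the shuffle product of languages corresponds to multiplying exponential generating functions, a product under which the D-finite (holonomic) class is closed:

    \[
      c_n = \sum_{k=0}^{n} \binom{n}{k} a_k\, b_{n-k},
      \qquad
      \sum_{n \ge 0} c_n \frac{z^n}{n!}
        = \Bigl(\sum_{n \ge 0} a_n \frac{z^n}{n!}\Bigr)
          \Bigl(\sum_{n \ge 0} b_n \frac{z^n}{n!}\Bigr),
    \]
    where a_n and b_n count the length-n words of the two languages and c_n counts
    the words of their shuffle (disjoint alphabets ensure every interleaving is distinct).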
SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments
Explainability is crucial for complex systems like pervasive smart
environments, as they collect and analyze data from various sensors, follow
multiple rules, and control different devices, resulting in behavior that is not
trivial and thus should be explained to the users. The current approaches,
however, offer flat, static, and algorithm-focused explanations. User-centric
explanations, on the other hand, consider the recipient and context, providing
personalized and context-aware explanations. To address this gap, we propose an
approach to incorporate user-centric explanations into smart environments. We
introduce a conceptual model and a reference architecture for characterizing
and generating such explanations. Our work is the first technical solution for
generating context-aware and granular explanations in smart environments. Our
architecture implementation demonstrates the feasibility of our approach
through various scenarios. Comment: 22nd International Conference on Pervasive Computing and
Communications (PerCom 2024).
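As an illustration of what characterizing a user-centric explanation can involve, the following sketch is our own hypothetical data model (class and field names are assumptions, not the paper's reference architecture); it records the recipient, the situational context, and the level of detail an explanation should carry.

    from dataclasses import dataclass
    from enum import Enum

    class Granularity(Enum):
        SIMPLE = "simple"        # one-sentence summary for end users
        DETAILED = "detailed"    # includes the triggering rule and sensor values
        TECHNICAL = "technical"  # full trace for developers and debugging

    @dataclass
    class ExplanationRequest:
        recipient: str           # who asked, e.g. "resident" or "technician"
        context: dict            # situation data: time, location, active rules
        granularity: Granularity

    def render(request, cause, effect):
        """Tailor a generated explanation to the requested granularity."""
        if request.granularity is Granularity.SIMPLE:
            return f"{effect} because {cause}."
        return (f"{effect} because {cause} "
                f"(context: {request.context}; requested by: {request.recipient}).")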
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
Providing explanations in the context of Visual Question Answering (VQA)
presents a fundamental problem in machine learning. To obtain detailed insights
into the process of generating natural language explanations for VQA, we
introduce the large-scale CLEVR-X dataset that extends the CLEVR dataset with
natural language explanations. For each image-question pair in the CLEVR
dataset, CLEVR-X contains multiple structured textual explanations which are
derived from the original scene graphs. By construction, the CLEVR-X
explanations are correct and describe the reasoning and visual information that
is necessary to answer a given question. We conducted a user study to confirm
that the ground-truth explanations in our proposed dataset are indeed complete
and relevant. We present baseline results for generating natural language
explanations in the context of VQA using two state-of-the-art frameworks on the
CLEVR-X dataset. Furthermore, we provide a detailed analysis of the explanation
generation quality for different question and answer types. Additionally, we
study the influence of using different numbers of ground-truth explanations on
the convergence of natural language generation (NLG) metrics. The CLEVR-X
dataset is publicly available at
https://explainableml.github.io/CLEVR-X/.
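One detail worth illustrating is how multiple ground-truth explanations per question feed into reference-based NLG metrics: each extra reference gives the clipped n-gram counts another chance to match the generated wording. A minimal sketch using NLTK's sentence-level BLEU (the example sentences are ours, not CLEVR-X data):

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # Several ground-truth explanations for the same image-question pair.
    references = [
        "the large metal sphere is left of the red cube".split(),
        "there is a big metallic ball to the left of the red cube".split(),
    ]
    hypothesis = "the big metal ball is left of the red cube".split()

    smooth = SmoothingFunction().method1
    print(sentence_bleu(references[:1], hypothesis, smoothing_function=smooth))  # one reference
    print(sentence_bleu(references, hypothesis, smoothing_function=smooth))      # both references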
Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
This paper addresses the challenge of generating Counterfactual Explanations
(CEs), involving the identification and modification of the fewest necessary
features to alter a classifier's prediction for a given image. Our proposed
method, Text-to-Image Models for Counterfactual Explanations (TIME), is a
black-box counterfactual technique based on distillation. Unlike previous
methods, this approach requires solely the image and its prediction, omitting
the need for the classifier's structure, parameters, or gradients. Before
generating the counterfactuals, TIME introduces two distinct biases into Stable
Diffusion in the form of textual embeddings: the context bias, associated with
the image's structure, and the class bias, linked to class-specific features
learned by the target classifier. After learning these biases, we find the
optimal latent code by applying the classifier's predicted class token and then
regenerate the image using the target embedding as conditioning, producing the
counterfactual explanation. Extensive empirical studies validate that TIME can
generate explanations of comparable effectiveness even when operating within a
black-box setting. Comment: WACV 2024 camera-ready + supplementary material.