Generating Context-Aware Contrastive Explanations in Rule-based Systems
Human explanations are often contrastive, meaning that they do not answer the
indeterminate "Why?" question, but instead "Why P, rather than Q?".
Automatically generating contrastive explanations is challenging because the
contrastive event (Q) represents the expectation of a user in contrast to what
happened. We present an approach that predicts a potential contrastive event in
situations where a user asks for an explanation in the context of rule-based
systems. Our approach analyzes a situation that needs to be explained and then
selects the most likely rule a user may have expected instead of what the user
has observed. This contrastive event is then used to create a contrastive
explanation that is presented to the user. We have implemented the approach as
a plugin for a home automation system and demonstrate its feasibility in four
test scenarios. Comment: 2024 Workshop on Explainability Engineering (ExEn '24).
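The core idea of the approach above — approximating the contrastive event Q as the most plausible rule that did not fire — can be sketched as follows. This is a minimal illustration, not the paper's implementation; `Rule`, the `relevance` score, and the phrasing are all assumptions.

```python
# Hypothetical sketch: pick the rule a user most likely expected to fire
# (the contrastive event Q) and phrase a "Why P rather than Q?" answer.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    effect: str        # observable outcome the rule produces
    relevance: float   # assumed: how well the rule's context matches the situation

def contrastive_explanation(observed: Rule, candidates: list[Rule]) -> str:
    # Approximate the user's expectation Q as the most relevant rule
    # that did NOT fire; 'relevance' stands in for a user-expectation model.
    alternatives = [r for r in candidates if r.name != observed.name]
    expected = max(alternatives, key=lambda r: r.relevance)
    return (f"{observed.effect} happened (rule '{observed.name}') "
            f"rather than {expected.effect} (rule '{expected.name}'), "
            f"because the conditions of '{expected.name}' were not met.")

rules = [
    Rule("night_mode", "Lights dimmed", 0.4),
    Rule("away_mode", "Heating turned off", 0.9),
]
print(contrastive_explanation(rules[0], rules))
```

In a home automation plugin, the relevance score would come from analyzing the situation to be explained, e.g., how many of a rule's conditions were nearly satisfied.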
SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments
Explainability is crucial for complex systems like pervasive smart
environments, as they collect and analyze data from various sensors, follow
multiple rules, and control different devices resulting in behavior that is not
trivial and, thus, should be explained to the users. The current approaches,
however, offer flat, static, and algorithm-focused explanations. User-centric
explanations, on the other hand, consider the recipient and context, providing
personalized and context-aware explanations. To address this gap, we propose an
approach to incorporate user-centric explanations into smart environments. We
introduce a conceptual model and a reference architecture for characterizing
and generating such explanations. Our work is the first technical solution for
generating context-aware and granular explanations in smart environments. Our
architecture implementation demonstrates the feasibility of our approach
through various scenarios. Comment: 22nd International Conference on Pervasive
Computing and Communications (PerCom 2024).
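The contrast between flat, static explanations and user-centric ones can be illustrated with a toy generator that adapts granularity to the recipient and context. This is an illustrative sketch only; the function and its parameters are assumptions, not the SmartEx API.

```python
# Illustrative sketch (not the SmartEx model): tailor an explanation's
# granularity to the recipient and the current context.
def explain(event: str, cause: str, expertise: str, busy: bool) -> str:
    if busy:
        return f"{event}."                      # minimal, non-intrusive
    if expertise == "novice":
        return f"{event} because {cause}."      # simple causal explanation
    # experts get a pointer to the full rule trace (granular explanation)
    return f"{event} because {cause} (rule trace available on request)."

print(explain("Lights dimmed", "motion stopped", "novice", busy=False))
```

A flat approach would return the same string in all three cases; the point of a user-centric model is that recipient and context select among them.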
Explanation Needs in App Reviews: Taxonomy and Automated Detection
Explainability, i.e. the ability of a system to explain its behavior to
users, has become an important quality of software-intensive systems. Recent
work has focused on methods for generating explanations for various algorithmic
paradigms (e.g., machine learning, self-adaptive systems). There is relatively
little work on what situations and types of behavior should be explained. There
is also a lack of support for eliciting explainability requirements. In this
work, we explore the need for explanation expressed by users in app reviews. We
manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of
Explanation Needs. We also explore several approaches to automatically identify
Explanation Needs in app reviews. Our best classifier identifies Explanation
Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%.
Our work contributes to a better understanding of users' Explanation Needs.
Automated tools can help engineers focus on these needs and ultimately elicit
valid Explanation Needs.
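The weighted F-score reported above averages per-class F1 scores weighted by class support, so that frequent classes contribute proportionally more. A minimal pure-Python computation:

```python
# Weighted F1: per-class F1 averaged with weights proportional to class support.
from collections import Counter

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * support[c] / len(y_true)
    return total
```

This matches what libraries such as scikit-learn compute with `f1_score(..., average="weighted")`; binary "Explanation Need vs. none" labels would be one common setup for such a classifier.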
Interoperability of heterogeneous Systems of Systems: from requirements to a reference architecture
Interoperability stands as a critical hurdle in developing and overseeing distributed and collaborative systems. Thus, it becomes imperative to gain a deep comprehension of the primary obstacles hindering interoperability and the essential criteria that systems must satisfy to achieve it. With this objective, in the initial phase of this research we conducted a survey questionnaire involving stakeholders and practitioners engaged in distributed and collaborative systems. This effort resulted in the identification of eight essential interoperability requirements, along with their corresponding challenges. The second part of our study encompassed a critical review of the literature to assess the effectiveness of prevailing conceptual approaches and associated technologies in addressing the identified requirements. This analysis led to a set of components that promise to deliver the desired interoperability by addressing the requirements identified earlier. These elements subsequently form the foundation for the third part of our study: a reference architecture for interoperability-fostering frameworks, which is proposed in this paper. The results of our research can significantly impact the software engineering of interoperable systems by introducing their fundamental requirements and the best practices to address them, and by identifying the key elements of a framework facilitating interoperability in Systems of Systems.
Preface
This volume presents the proceedings of the 1st International Workshop
on Approaches for Making Data Interoperable (AMAR 2019) and the 1st
International Workshop on Semantics for Transport (Sem4Tra), held in
Karlsruhe, Germany, September 9, 2019, co-located with SEMANTiCS 2019.
Interoperability of data is an important factor in making transportation data
accessible; therefore, we present the topics alongside each other in these proceedings.
A Semantic-based Access Control Approach for Systems of Systems
Access control management in a System of Systems (i.e., a collaborative environment composed of a multitude of distributed autonomous organizations) is a challenging task. To answer the challenge, in this paper we propose a novel approach that incorporates semantic technologies in the Attribute-Based Access Control (ABAC) approach. Building on the basic principles of ABAC, our approach allows for a highly expressive modeling of the context in which access decisions are made, by providing mechanisms to describe rich relationships among entities, which can evolve over time. In addition, our system works in a truly decentralized manner, which makes it suitable for geographically distributed enterprise systems. We show the feasibility in practice of our approach through experimental results.
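The "rich relationships among entities" that distinguish this approach from plain attribute matching can be sketched as a decision that walks a chain of relations before granting access. All names and the policy below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of a relationship-aware (semantic) ABAC-style decision.
# RELATIONS maps (entity, relation) to the entities reachable via that relation.
RELATIONS = {
    ("alice", "member_of"): {"org_a"},
    ("org_a", "partner_of"): {"org_b"},
}

def related(subject: str, relation_path: list[str], target: str) -> bool:
    # Follow a chain of relations, e.g. member_of -> partner_of.
    frontier = {subject}
    for rel in relation_path:
        frontier = set().union(*(RELATIONS.get((e, rel), set()) for e in frontier))
    return target in frontier

def access_allowed(subject: str, resource_owner: str) -> bool:
    # Example policy: grant access if the subject belongs to an organization
    # that is a partner of the organization owning the resource.
    return related(subject, ["member_of", "partner_of"], resource_owner)
```

A classic ABAC engine would compare flat attribute values; here the decision additionally traverses relationships, which is what the semantic modeling buys. In a decentralized deployment, each organization would answer the relation lookups for its own entities.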
Explaining the Unexplainable
This is supplementary material to the paper "Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks". In an online survey experiment with 162 participants, we analyze the impact of misleading explanations on users' perceived and demonstrated trust in a system that performs a hardly assessable task in an unreliable manner.
Mersedeh Sadeghi, Daniel Pöttgen, Patrick Ebel, and Andreas Vogelsang. 2024. Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks. In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’24), July 1–4, 2024, Cagliari, Italy. ACM, New York, NY, USA, 17 pages.
https://doi.org/10.1145/3627043.365957
Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks
To increase trust in systems, engineers strive to create explanations that are as accurate as possible. However, if the system's accuracy is compromised, providing explanations for its incorrect behavior may inadvertently lead to misleading explanations. This concern is particularly pertinent when the correctness of the system is difficult for users to judge. In an online survey experiment with 162 participants, we analyze the impact of misleading explanations on users' perceived and demonstrated trust in a system that performs a hardly assessable task in an unreliable manner. Participants who used a system that provided potentially misleading explanations rated their trust significantly higher than participants who saw the system's prediction alone. They also aligned their initial prediction with the system's prediction significantly more often. Our findings underscore the importance of exercising caution when generating explanations, especially in tasks that are inherently difficult to evaluate. The paper and supplementary materials are available at https://doi.org/10.17605/osf.io/azu72