
    Explanation Needs in App Reviews: Taxonomy and Automated Detection

    Explainability, i.e., the ability of a system to explain its behavior to users, has become an important quality of software-intensive systems. Recent work has focused on methods for generating explanations for various algorithmic paradigms (e.g., machine learning, self-adaptive systems). There is relatively little work on which situations and types of behavior should be explained, and there is a lack of support for eliciting explainability requirements. In this work, we explore the need for explanation expressed by users in app reviews. We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs. We also explore several approaches to automatically identify Explanation Needs in app reviews. Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%. Our work contributes to a better understanding of users' Explanation Needs. Automated tools can help engineers focus on these needs and ultimately elicit valid Explanation Needs.
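    As a rough illustration of the kind of detection pipeline such a study might use, the sketch below shows a simple text classifier evaluated with a weighted F-score. This is not the authors' classifier; the example reviews, labels, and model choice are invented for illustration only.

    # Hypothetical sketch: detecting Explanation Needs in app reviews.
    # Not the authors' classifier; it only illustrates a text-features +
    # supervised-model pipeline evaluated with a weighted F-score.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Invented example reviews; label 1 = the review expresses an Explanation Need.
    reviews = [
        "Why did the app suddenly log me out? No reason was given.",
        "Great app, five stars!",
        "The suggested route changed and I have no idea why.",
        "Works fine after the last update.",
    ]
    labels = [1, 0, 1, 0]

    X_train, X_test, y_train, y_test = train_test_split(
        reviews, labels, test_size=0.5, random_state=0, stratify=labels
    )

    # TF-IDF features plus logistic regression as a simple baseline model.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(X_train, y_train)

    # Weighted F-score, the metric named in the abstract (the 86% reported there
    # refers to the authors' evaluation, not to this toy example).
    pred = clf.predict(X_test)
    print("weighted F1:", f1_score(y_test, pred, average="weighted"))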

    A Semantic-based Access Control Approach for Systems of Systems

    Access control management in a System of Systems, i.e., a collaborative environment composed of a multitude of distributed autonomous organizations, is a challenging task. To answer this challenge, in this paper we propose a novel approach that incorporates semantic technologies into the Attribute-Based Access Control (ABAC) approach. Building on the basic principles of ABAC, our approach allows for a highly expressive modeling of the context in which access decisions are made, by providing mechanisms to describe rich relationships among entities, which can evolve over time. In addition, our system works in a truly decentralized manner, which makes it suitable for geographically distributed enterprise systems. We demonstrate the practical feasibility of our approach through experimental results.
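    To make the ABAC idea referenced above concrete, here is a minimal sketch of an attribute-based access decision. It only illustrates the general ABAC principle (decisions derived from attributes of subject, resource, and context); the policy, attribute names, and helper classes are hypothetical and do not reproduce the paper's semantic or decentralized machinery.

    # Hypothetical ABAC sketch: a policy is a set of conditions over the
    # attributes of the subject, the resource, and the request context.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    Attributes = Dict[str, str]
    Condition = Callable[[Attributes, Attributes, Attributes], bool]

    @dataclass
    class Policy:
        name: str
        conditions: List[Condition] = field(default_factory=list)

        def permits(self, subject: Attributes, resource: Attributes,
                    context: Attributes) -> bool:
            # Access is granted only if every condition holds.
            return all(cond(subject, resource, context) for cond in self.conditions)

    # Invented example policy: engineers of the owning organization may read
    # telemetry data during business hours.
    policy = Policy(
        name="owning-org-engineer-read",
        conditions=[
            lambda s, r, c: s["role"] == "engineer",
            lambda s, r, c: s["org"] == r["owner_org"],
            lambda s, r, c: c["time"] == "business-hours",
        ],
    )

    subject = {"role": "engineer", "org": "org-a"}
    resource = {"owner_org": "org-a", "type": "telemetry"}
    context = {"time": "business-hours"}

    print("access granted:", policy.permits(subject, resource, context))  # True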

    Explaining the Unexplainable

    This is supplementary material to the paper "Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks". In an online survey experiment with 162 participants, we analyze the impact of misleading explanations on users' perceived and demonstrated trust in a system that performs a hardly assessable task in an unreliable manner. Mersedeh Sadeghi, Daniel Pöttgen, Patrick Ebel, and Andreas Vogelsang. 2024. Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks. In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP '24), July 1–4, 2024, Cagliari, Italy. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3627043.365957

    Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks

    To increase trust in systems, engineers strive to create explanations that are as accurate as possible. However, if the system's accuracy is compromised, providing explanations for its incorrect behavior may inadvertently produce misleading explanations. This concern is particularly pertinent when the correctness of the system is difficult for users to judge. In an online survey experiment with 162 participants, we analyze the impact of misleading explanations on users' perceived and demonstrated trust in a system that performs a hardly assessable task in an unreliable manner. Participants who used a system that provided potentially misleading explanations rated their trust significantly higher than participants who saw the system's prediction alone. They also aligned their initial prediction with the system's prediction significantly more often. Our findings underscore the importance of exercising caution when generating explanations, especially in tasks that are inherently difficult to evaluate. The paper and supplementary materials are available at https://doi.org/10.17605/osf.io/azu72.