
    Generating Context-Aware Contrastive Explanations in Rule-based Systems

    Human explanations are often contrastive, meaning that they do not answer the indeterminate "Why?" question, but instead "Why P, rather than Q?". Automatically generating contrastive explanations is challenging because the contrastive event (Q) represents the expectation of a user in contrast to what happened. We present an approach that predicts a potential contrastive event in situations where a user asks for an explanation in the context of rule-based systems. Our approach analyzes a situation that needs to be explained and then selects the most likely rule a user may have expected instead of what the user has observed. This contrastive event is then used to create a contrastive explanation that is presented to the user. We have implemented the approach as a plugin for a home automation system and demonstrate its feasibility in four test scenarios. Comment: 2024 Workshop on Explainability Engineering (ExEn '24).
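The abstract describes selecting the rule a user most likely expected and turning it into a "Why P rather than Q?" answer. A minimal Python sketch of that idea, assuming rules are condition/action pairs and that condition overlap with the current context stands in for the paper's (unspecified) selection criterion; both assumptions are for illustration only:

```python
# Minimal sketch of contrastive explanation selection for a rule-based
# home-automation setting. The similarity heuristic (condition overlap with
# the current context) is an assumption, not the plugin's actual logic.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: set[str]   # context facts that must hold for the rule to fire
    action: str            # device action the rule triggers

def expected_alternative(fired: Rule, rules: list[Rule], context: set[str]) -> Rule:
    """Pick the rule a user most plausibly expected instead of the fired one."""
    candidates = [r for r in rules if r.name != fired.name]
    # Heuristic: the more of a rule's conditions match the current context,
    # the more plausible it is as the user's expectation (assumption).
    return max(candidates, key=lambda r: len(r.conditions & context))

def contrastive_explanation(fired: Rule, expected: Rule, context: set[str]) -> str:
    missing = expected.conditions - context
    return (f"'{fired.action}' happened because {', '.join(sorted(fired.conditions & context))}. "
            f"'{expected.action}' did not happen because {', '.join(sorted(missing))} was not the case.")

rules = [
    Rule("night_light", {"motion", "after_sunset"}, "turn on hallway light"),
    Rule("energy_save", {"motion", "bright_outside"}, "open the blinds"),
]
context = {"motion", "after_sunset"}
fired = rules[0]
alt = expected_alternative(fired, rules, context)
print(contrastive_explanation(fired, alt, context))
```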

    SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments

    Explainability is crucial for complex systems like pervasive smart environments, as they collect and analyze data from various sensors, follow multiple rules, and control different devices, resulting in behavior that is not trivial and should therefore be explained to users. Current approaches, however, offer flat, static, and algorithm-focused explanations. User-centric explanations, on the other hand, consider the recipient and context, providing personalized and context-aware explanations. To address this gap, we propose an approach to incorporate user-centric explanations into smart environments. We introduce a conceptual model and a reference architecture for characterizing and generating such explanations. Our work is the first technical solution for generating context-aware and granular explanations in smart environments. Our architecture implementation demonstrates the feasibility of our approach through various scenarios. Comment: 22nd International Conference on Pervasive Computing and Communications (PerCom 2024).
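The abstract stays at the level of a conceptual model and reference architecture. Purely as an illustration of what "considering the recipient and context" and "granular explanations" could look like in code, here is a small Python sketch; the Recipient and SystemEvent fields and the granularity rule are invented assumptions, not SmartEx's model:

```python
# Minimal sketch of a user-centric explanation generator that tailors
# content and granularity to the recipient and their situation. The
# attributes and the granularity rule below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recipient:
    expertise: str        # e.g. "novice" or "expert"
    busy: bool            # current situational context of the recipient

@dataclass
class SystemEvent:
    action: str           # what the smart environment did
    triggering_rule: str  # which rule caused it
    sensor_values: dict   # raw readings behind the decision

def explain(event: SystemEvent, user: Recipient) -> str:
    # Pick granularity from recipient context: a busy or novice user gets a
    # short, high-level answer; an expert gets rule and sensor detail.
    if user.busy or user.expertise == "novice":
        return f"I {event.action} because of rule '{event.triggering_rule}'."
    details = ", ".join(f"{k}={v}" for k, v in event.sensor_values.items())
    return (f"I {event.action}: rule '{event.triggering_rule}' fired "
            f"on readings {details}.")

event = SystemEvent("dimmed the lights", "movie_mode", {"tv": "on", "lux": 40})
print(explain(event, Recipient(expertise="expert", busy=False)))
```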

    Intelligibility and user control of context-aware application behaviours

    Context-aware applications adapt their behaviours according to changes in user context and user requirements. Research and experience have shown that such applications will not always behave the way users expect. This may lead to loss of users' trust in and acceptance of these systems. Hence, context-aware applications should (1) be intelligible (e.g., able to explain to users why they decided to behave in a certain way), and (2) allow users to exploit the revealed information and apply appropriate feedback to control the application behaviours according to their individual preferences to achieve a more desirable outcome. Without appropriate mechanisms for explanation and control of application adaptations, the usability of the applications is limited. This paper describes our ongoing research and development of a conceptual framework that supports intelligibility of model-based context-aware applications and user control of their adaptive behaviours. The goal is to improve the usability of context-aware applications.
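As a minimal sketch of the explain-and-control loop the abstract argues for (intelligibility plus user feedback), the toy class below answers "why" for its last adaptation and lets an explicit user preference override future adaptations. The heating example, class name, and thresholds are assumptions made purely for illustration:

```python
# Toy explain-then-control loop: the application reveals why it adapted and
# lets the user push back with a preference that overrides future adaptations.
class AdaptiveHeating:
    def __init__(self):
        self.user_setpoint = None          # explicit user preference, if any
        self.last_reason = ""

    def adapt(self, occupancy: bool) -> float:
        if self.user_setpoint is not None:
            self.last_reason = "you set a fixed temperature"
            return self.user_setpoint
        self.last_reason = ("the room is occupied" if occupancy
                            else "the room is empty")
        return 21.0 if occupancy else 16.0

    def why(self) -> str:                  # intelligibility: explain behaviour
        return f"Temperature was chosen because {self.last_reason}."

    def set_preference(self, temp: float): # user control: feedback overrides adaptation
        self.user_setpoint = temp

app = AdaptiveHeating()
print(app.adapt(occupancy=False), app.why())
app.set_preference(19.5)                   # user disagrees with the adaptation
print(app.adapt(occupancy=False), app.why())
```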

    C-Rex: A Comprehensive System for Recommending In-Text Citations with Explanations

    Finding suitable citations for scientific publications can be challenging and time-consuming. To this end, context-aware citation recommendation approaches that recommend publications as candidates for in-text citations have been developed. In this paper, we present C-Rex, a web-based demonstration system available at http://c-rex.org for context-aware citation recommendation based on the Neural Citation Network [5] and millions of publications from the Microsoft Academic Graph. Our system is one of the first online context-aware citation recommendation systems and the first to incorporate not only a deep learning recommendation approach, but also explanation components to help users better understand why papers were recommended. In our offline evaluation, our model performs similarly to the one presented in the original paper and can serve as a basic framework for further implementations. In our online evaluation, we found that the explanations of recommendations increased users’ satisfaction.
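C-Rex itself builds on the Neural Citation Network [5] and the Microsoft Academic Graph, neither of which is reproduced here. Purely as a stand-in to illustrate the pipeline shape the abstract describes (score candidates against the citation context, then show why each was recommended), here is a term-overlap toy in Python; the tokenizer and scoring are assumptions, not C-Rex's method:

```python
# Toy context-aware citation recommender with a lightweight explanation:
# rank candidates by term overlap with the citation context and report the
# shared terms as the reason for each recommendation.
def tokens(text: str) -> set[str]:
    return {w.lower().strip(".,") for w in text.split() if len(w) > 3}

def recommend(context: str, candidates: dict[str, str], k: int = 2):
    """Return (title, score, overlapping terms) for the top-k candidates."""
    ctx = tokens(context)
    scored = []
    for title, abstract in candidates.items():
        overlap = ctx & tokens(abstract)
        scored.append((title, len(overlap), sorted(overlap)))
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

papers = {
    "Attention Is All You Need": "transformer architecture based on attention mechanisms",
    "Deep Residual Learning": "residual connections for training deep convolutional networks",
}
context = "We encode the citation context with a transformer attention model"
for title, score, why in recommend(context, papers):
    print(f"{title} (score {score}) - shared terms: {', '.join(why)}")
```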

    Context-aware explainable recommendations over knowledge graphs

    Knowledge graphs contain rich semantic relationships related to items, and incorporating such relationships into recommender systems helps to explore the latent connections of items, thus improving the accuracy of prediction and enhancing the explainability of recommendations. However, such explainability is not adapted to users' contexts, which can significantly influence their preferences. In this work, we propose CA-KGCN (Context-Aware Knowledge Graph Convolutional Network), an end-to-end framework that models users' preferences adapted to their contexts and incorporates the rich semantic relationships in the knowledge graph related to items. The framework captures users' attention to different factors, namely contexts and features of items, and can therefore provide explanations adapted to the given context. Experiments on three real-world datasets show the effectiveness of our framework in modeling users' context-adapted preferences and explaining the generated recommendations.
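To make the "attention to different factors" idea concrete, here is a small numpy sketch of context-aware attention over an item's knowledge-graph features; the dimensions, the additive query, and the dot-product scorer are illustrative assumptions rather than CA-KGCN's actual layers:

```python
# Sketch: weight each knowledge-graph feature of an item by its relevance to
# the user-in-context query, then score the item with the resulting
# context-aware item representation. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 8
user = rng.normal(size=d)                 # user embedding
context = rng.normal(size=d)              # context embedding (e.g. time, mood)
item_features = rng.normal(size=(5, d))   # embeddings of the item's KG neighbours

# Attention: relevance of each feature to the user-in-context query.
query = user + context
logits = item_features @ query
weights = np.exp(logits - logits.max())
weights /= weights.sum()                  # softmax over the item's features

item_repr = weights @ item_features       # context-aware item representation
score = float(user @ item_repr)           # predicted preference for this item

# The attention weights double as an explanation: the highest-weighted
# feature is the one that most drove the recommendation in this context.
print("score:", round(score, 3), "| most influential feature:", int(weights.argmax()))
```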