
    CBR driven interactive explainable AI.

    Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requires employing a combination of these explainers. We refer to such combinations as explanation strategies. This paper introduces iSee - Intelligent Sharing of Explanation Experience, an interactive platform that facilitates the reuse of explanation strategies and promotes best practices in XAI by employing the Case-based Reasoning (CBR) paradigm. iSee uses an ontology-guided approach to effectively capture explanation requirements, while a behaviour-tree-driven conversational chatbot captures user experiences of interacting with the explanations and provides feedback. In a case study, we illustrate the iSee CBR system's capabilities by formalising a real-world radiograph fracture detection system and demonstrating how each interactive tool facilitates the CBR processes.
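    A minimal sketch of the CBR retrieve step that this kind of platform relies on: given a new set of explanation requirements, find the most similar stored explanation experience and reuse its strategy. The case attributes, weights, and example strategies below are illustrative assumptions, not the iSee implementation.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ExplanationCase:
        """A stored explanation experience: requirements plus the strategy that worked."""
        domain: str                     # e.g. "radiology"
        model_type: str                 # e.g. "CNN classifier"
        user_intent: str                # e.g. "debug", "build trust"
        strategy: list = field(default_factory=list)  # ordered list of explainer names

    def similarity(query: dict, case: ExplanationCase) -> float:
        """Weighted exact-match similarity over symbolic requirement attributes."""
        weights = {"domain": 0.4, "model_type": 0.3, "user_intent": 0.3}
        return sum(w for attr, w in weights.items()
                   if query.get(attr) == getattr(case, attr))

    def retrieve(query: dict, case_base: list) -> ExplanationCase:
        """Return the most similar stored case; its strategy is the candidate for reuse."""
        return max(case_base, key=lambda c: similarity(query, c))

    case_base = [
        ExplanationCase("radiology", "CNN classifier", "build trust",
                        strategy=["Grad-CAM", "nearest-neighbour examples"]),
        ExplanationCase("finance", "gradient boosting", "debug",
                        strategy=["SHAP", "counterfactuals"]),
    ]
    best = retrieve({"domain": "radiology", "model_type": "CNN classifier",
                     "user_intent": "build trust"}, case_base)
    print(best.strategy)
    ```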

    Explaining the uncertainty: understanding small-scale farmers’ cultural beliefs and reasoning of drought causes in Gaza Province, Southern Mozambique

    This paper explores small-scale farmers’ cultural beliefs about the causes of drought events and the reasoning behind those beliefs. Cultural beliefs vary across countries, regions, communities, and social groups; this paper takes the case of farmers from Gaza Province in southern Mozambique as its focus. Findings show that the farmers have limited knowledge and understanding of the scientific explanation of drought. Thus, farmers’ beliefs about the causes of drought are strongly based on indigenous (the power of spirits) and Christian philosophies that attribute drought to supernatural forces, such as ancestors or God, and frame it as a punishment for (some unknown) wrongdoings. Farmers have a distinct and under-explored repertoire of possible wrongdoings to justify the punishments implied by those cultural beliefs. Some of their reasoning is static, while some is mutable and based on their observation and perception of negative, unexpected, or harmful recent or current events in their surrounding environment, which they believe could have been avoided or prevented. Farmers’ beliefs about drought causes, and their underlying reasoning for those beliefs, are what primarily influence their perception of their own capacity to adapt, their motivation to respond, and their behavioural responses. Yet their social groups exert a great influence on their choices of response. The paper concludes that more context-specific investigations into the socio-psychological nature of farmers’ beliefs are required prior to interventions, in order to better help farmers respond to future drought risks.

    Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations

    Local explanations provide heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to their visual straightforwardness, the method has been one of the most popular explainable AI (XAI) approaches for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent view of local explanations: a valuable and indispensable aid in building CNNs, yet a process that exhausts them due to the heuristic nature of detecting vulnerabilities. Moreover, steering the CNN based on the vulnerabilities learned from the diagnosis seemed highly challenging. To bridge this gap, we designed DeepFuse, the first interactive design that realizes a direct feedback loop between a user and a CNN for diagnosing and revising the CNN's vulnerabilities using local explanations. DeepFuse helps CNN engineers systematically search for "unreasonable" local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotations so that the model does not repeat similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants made a more accurate and "reasonable" model than the current state-of-the-art. Participants also found that the way DeepFuse guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable. Comment: 32 pages, 6 figures, 5 tables. Accepted for publication in the Proceedings of the ACM on Human-Computer Interaction (PACM HCI), CSCW 202
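    A minimal sketch of one simple local-explanation method, occlusion-based saliency, of the kind an engineer might inspect for "unreasonable" regions. This is a generic stand-in, not DeepFuse itself; the placeholder model, image size, and patch size are assumptions.

    ```python
    import torch
    import torch.nn as nn

    model = nn.Sequential(                     # placeholder CNN; any image classifier works
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)
    )
    model.eval()

    def occlusion_heatmap(image: torch.Tensor, target: int, patch: int = 8) -> torch.Tensor:
        """Score each image region by how much masking it lowers the target-class output."""
        _, h, w = image.shape
        with torch.no_grad():
            base = model(image.unsqueeze(0))[0, target].item()
            heat = torch.zeros(h // patch, w // patch)
            for i in range(0, h, patch):
                for j in range(0, w, patch):
                    occluded = image.clone()
                    occluded[:, i:i + patch, j:j + patch] = 0.0   # mask one region
                    score = model(occluded.unsqueeze(0))[0, target].item()
                    heat[i // patch, j // patch] = base - score   # importance of that region
        return heat

    heatmap = occlusion_heatmap(torch.rand(3, 32, 32), target=3)
    print(heatmap)  # coarse saliency map over image regions
    ```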

    Narrative based Postdictive Reasoning for Cognitive Robotics

    Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real-world considerations is a challenging and crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks. Against the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction-triggered abnormality detection and re-planning for real-time robot control: (a) narrative-based knowledge about the environment is obtained via a larger smart-environment framework; and (b) abnormalities are postdicted from the stable models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of an approximate epistemic action-theory-based planner implemented via a translation to answer-set programming. Comment: Commonsense Reasoning Symposium, Ayia Napa, Cyprus, 201
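    A toy illustration (not the paper's system) of postdicting an abnormality as a stable model of an answer-set program, assuming the clingo Python package is installed. The scenario, predicate names, and encoding are illustrative assumptions: a move action did not have its expected effect, so the solver abduces an abnormality at that step.

    ```python
    import clingo

    PROGRAM = """
    move(door, 0).                 % the robot executed a move action at step 0
    observed_at(start, 1).         % ... but was still observed at the start afterwards

    % expected effect of the move, unless the step was abnormal
    expected_at(door, 1) :- move(door, 0), not ab(0).

    % the observation contradicts the expectation, so an abnormality must be postdicted
    :- observed_at(start, 1), expected_at(door, 1).
    { ab(T) : move(_, T) }.        % abnormalities are abducible
    #minimize { 1,T : ab(T) }.     % prefer as few abnormalities as possible
    #show ab/1.
    """

    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("postdicted:", m.symbols(shown=True)))
    # expected output: postdicted: [ab(0)]
    ```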