CBR driven interactive explainable AI.
Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requires employing a combination of these explainers. We refer to such combinations as explanation strategies. This paper introduces iSee - Intelligent Sharing of Explanation Experience, an interactive platform that facilitates the reuse of explanation strategies and promotes best practices in XAI by employing the Case-based Reasoning (CBR) paradigm. iSee uses an ontology-guided approach to effectively capture explanation requirements, while a behaviour-tree-driven conversational chatbot captures user experiences of interacting with the explanations and provides feedback. In a case study, we illustrate the iSee CBR system's capabilities by formalising a real-world radiograph fracture detection system and demonstrating how each interactive tool facilitates the CBR processes.
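As a rough illustration of the retrieve step in such a CBR cycle, the following minimal Python sketch matches a query's explanation requirements against prior cases and returns the closest explanation strategy. It is not the iSee implementation; the requirement attributes and explainer names are hypothetical.

# Illustrative CBR retrieval sketch (not the iSee platform); attribute names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Case:
    requirements: dict                              # e.g. {"ai_task": ..., "data_type": ..., "intent": ...}
    strategy: list = field(default_factory=list)    # ordered list of explainer names

def similarity(query: dict, case: Case) -> float:
    """Fraction of shared requirement attributes on which query and case agree."""
    keys = query.keys() & case.requirements.keys()
    if not keys:
        return 0.0
    return sum(query[k] == case.requirements[k] for k in keys) / len(keys)

def retrieve(query: dict, case_base: list[Case]) -> Case:
    """Retrieve: return the most similar prior explanation experience."""
    return max(case_base, key=lambda c: similarity(query, c))

case_base = [
    Case({"ai_task": "classification", "data_type": "image", "intent": "debugging"},
         ["Grad-CAM", "counterfactual examples"]),
    Case({"ai_task": "classification", "data_type": "tabular", "intent": "trust"},
         ["SHAP", "nearest-neighbour cases"]),
]
best = retrieve({"ai_task": "classification", "data_type": "image", "intent": "trust"}, case_base)
print(best.strategy)

In a full CBR cycle, the retrieved strategy would then be reused, revised with the chatbot-collected user feedback, and retained as a new case.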
Explaining the uncertainty: understanding small-scale farmers’ cultural beliefs and reasoning of drought causes in Gaza Province, Southern Mozambique
This paper explores small-scale farmers’ cultural beliefs about the causes of drought events and the reasoning behind those beliefs. Cultural beliefs vary across countries, regions, communities, and social groups; this paper takes the case of farmers from Gaza Province in southern Mozambique as its focus. Findings show that the farmers have limited knowledge and understanding of the scientific explanation of drought. Thus, farmers’ beliefs about the causes of drought are strongly based on indigenous (the power of spirits) and Christian philosophies that attribute drought to supernatural forces, such as ancestors or God, and frame it as a punishment for (some unknown) wrongdoing. Farmers have a distinct and under-explored repertoire of possible wrongdoings to justify the punishments implied by those cultural beliefs. Some of their reasoning is static, while some is mutable, based on their observation and perception of negative, unexpected, or harmful recent or current events in their surrounding environment, which they believe could have been avoided or prevented. Farmers’ beliefs about drought causes, and their underlying reasoning for those beliefs, are what will primarily influence their perception of their own capacity to adapt, their motivation to respond, and their behavioral responses. Yet their social groups exert a great influence on their choices of response. The paper concludes that more context-specific investigations into the socio-psychological nature of farmers’ beliefs are required prior to interventions, in order to better help farmers respond to future drought risks.
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
Local explanations provide heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Owing to its visual straightforwardness, the method has become one of the most popular explainable AI (XAI) techniques for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalence about local explanations: they are seen as a valuable and indispensable aid in building CNNs, yet the process exhausts engineers because detecting vulnerabilities remains heuristic. Moreover, steering the CNNs based on the vulnerabilities learned from diagnosis seemed highly challenging. To mitigate this gap, we designed DeepFuse, the first interactive design that realizes a direct feedback loop between a user and CNNs for diagnosing and revising a CNN's vulnerabilities using local explanations. DeepFuse helps CNN engineers systematically search for "unreasonable" local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. It then steers the model based on the given annotations so that the model does not repeat similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants built a more accurate and "reasonable" model than the current state of the art. Participants also found that the way DeepFuse guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.

Comment: 32 pages, 6 figures, 5 tables. Accepted for publication in the Proceedings of the ACM on Human-Computer Interaction (PACM HCI), CSCW 202
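The local explanations the paper builds on are saliency heatmaps of the kind produced by Grad-CAM. The following minimal Grad-CAM-style sketch in PyTorch, assuming a standard torchvision ResNet-18 and a dummy input tensor, shows how such a heatmap can be computed; it is illustrative only and is not the DeepFuse implementation.

# Minimal Grad-CAM-style local explanation sketch (illustrative; not DeepFuse itself).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()     # pretrained weights optional for this illustration

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out             # feature maps of the last conv stage

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]       # gradients w.r.t. those feature maps

layer = model.layer4                      # last convolutional stage of ResNet-18
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def gradcam(x: torch.Tensor) -> torch.Tensor:
    """Return an [H, W] heatmap for the top-1 class of a single-image batch x."""
    logits = model(x)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()
    # Channel weights = global-average-pooled gradients (the Grad-CAM weighting).
    w = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

# Usage with a dummy image tensor (replace with a real preprocessed image):
heatmap = gradcam(torch.randn(1, 3, 224, 224))

A workflow like DeepFuse's would surface such heatmaps to the engineer, collect annotations on the "unreasonable" ones, and fine-tune the model against them.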
Narrative based Postdictive Reasoning for Cognitive Robotics
Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real-world considerations is a challenging and crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks.

Against the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction-triggered abnormality detection and re-planning for real-time robot control: (a) narrative-based knowledge about the environment is obtained via a larger smart environment framework; and (b) abnormalities are postdicted from the stable models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of an approximate epistemic action theory-based planner implemented via a translation to answer-set programming.

Comment: Commonsense Reasoning Symposium, Ayia Napa, Cyprus, 201
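For readers unfamiliar with postdiction over answer-set programs, the following minimal sketch uses the clingo Python API to postdict an abnormality from an observation that contradicts the expected effect of an action. The tiny program and its fluent names are invented for illustration and are not the authors' wheelchair-robot model.

# Minimal abnormality postdiction with answer-set programming (illustrative sketch).
import clingo

PROGRAM = """
% The robot moved towards the door between steps 0 and 1.
move(0).

% Normally, moving succeeds: the robot is at the door at step 1.
at_door(1) :- move(0), not ab(0).

% Observation reported by the smart environment: the robot is NOT at the door.
:- at_door(1).

% An abnormality may or may not have occurred; postdiction keeps only the
% explanation that is consistent with the observation.
{ ab(0) }.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("postdicted:", m))   # the stable model contains ab(0)

Here the only stable model includes ab(0): the abnormality is inferred after the fact because it is the only way to reconcile the action's expected effect with the observed narrative.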
Theory formation by abduction: initial results of a case study based on the chemical revolution
Abduction is the process of constructing explanations. This chapter suggests that automated abduction is a key to advancing beyond the "routine theory revision" methods developed in early AI research towards automated reasoning systems capable of "world model revision" — dramatic changes in systems of beliefs such as occur in children's cognitive development and in scientific revolutions. The chapter describes a general approach to automating theory revision based upon computational methods for theory formation by abduction. The approach is based on the idea that, when an anomaly is encountered, the best course is often simply to suppress parts of the original theory thrown into question by the contradiction and to derive an explanation of the anomalous observation based on relatively solid, basic principles. This process of looking for explanations of unexpected new phenomena can lead by abductive inference to new hypotheses that can form crucial parts of a revised theory. As an illustration, the chapter shows how some of Lavoisier's key insights during the Chemical Revolution can be viewed as examples of theory formation by abduction
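A toy sketch of the underlying inference pattern is given below: single-step abduction over rules with designated abducible premises, i.e. proposing the assumable premise that would explain a surprising observation. The rule names loosely echo the Chemical Revolution example but are invented here and are not taken from the chapter's actual system.

# Tiny abductive-reasoning sketch (illustrative; rule and fact names are invented).
RULES = {
    # conclusion: set of premises that jointly yield it
    "gains_weight_on_calcination": {"combines_with_part_of_air"},
    "loses_weight_on_calcination": {"releases_phlogiston"},
}
ABDUCIBLES = {"combines_with_part_of_air", "releases_phlogiston"}

def abduce(observation: str) -> set[str]:
    """Return the abducible premises that would explain the observation."""
    explanations = set()
    for conclusion, premises in RULES.items():
        if conclusion == observation and premises <= ABDUCIBLES:
            explanations |= premises
    return explanations

# Anomaly: metals gain weight when calcined, contradicting the phlogiston account.
print(abduce("gains_weight_on_calcination"))   # {'combines_with_part_of_air'}

In the chapter's terms, the contradicted parts of the old theory are suppressed and the abduced hypothesis becomes a candidate building block of the revised theory.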