Utilising Explanations to Mitigate Robot Conversational Failures
This paper presents an overview of robot failure detection work from HRI and adjacent fields, using failures as an opportunity to examine robot explanation behaviours. As humanoid robots remain experimental tools in the early 2020s, interactions with robots take place overwhelmingly in controlled environments that typically study specific interactional phenomena. Such interactions suffer from a lack of real-world and large-scale experimentation and tend to ignore the 'imperfectness' of the everyday user. Robot explanations can be used to approach and mitigate failures by expressing robot legibility and incapability, and by drawing on the notion of common ground. In this paper, I discuss how failures present opportunities for explanations in interactive conversational robots, and what the potential is at the intersection of HRI and explainability research.
Neuro-symbolic Computation for XAI: Towards a Unified Model
The idea of integrating symbolic and sub-symbolic approaches to make intelligent systems (IS) understandable and explainable is at the core of new fields such as neuro-symbolic computing (NSC). This work lies under the umbrella of NSC and aims at a twofold objective. First, we present a set of guidelines for building explainable IS, which leverage logic induction and constraints to integrate symbolic and sub-symbolic approaches. Then, we reify the proposed guidelines into a case study to show their effectiveness and potential, presenting a prototype built on top of some NSC technologies.
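To make the "logic constraints in sub-symbolic training" guideline concrete, here is a minimal sketch (not taken from the paper, and not its prototype) of one common neuro-symbolic pattern: a hand-written rule "A implies B" is encoded as a differentiable fuzzy-logic penalty and added to a neural network's loss. All names (ConstrainedNet, loss_fn, lam) and dimensions are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): integrating a symbolic
# constraint into sub-symbolic training via a differentiable penalty.
# Toy setup: two binary outputs A and B, with the rule "A implies B"
# encoded in product fuzzy logic as the violation term p_A * (1 - p_B).

import torch
import torch.nn as nn

class ConstrainedNet(nn.Module):
    def __init__(self, in_dim: int = 8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                  nn.Linear(16, 2))  # logits for A and B

    def forward(self, x):
        return torch.sigmoid(self.body(x))  # probabilities p_A, p_B

def loss_fn(probs, targets, lam: float = 0.5):
    # Data-fitting term plus a weighted penalty for rule violations.
    bce = nn.functional.binary_cross_entropy(probs, targets)
    p_a, p_b = probs[:, 0], probs[:, 1]
    constraint = (p_a * (1.0 - p_b)).mean()  # degree of "A -> B" violation
    return bce + lam * constraint

# usage
net = ConstrainedNet()
x = torch.randn(32, 8)
y = torch.randint(0, 2, (32, 2)).float()
loss = loss_fn(net(x), y)
loss.backward()
```

The design choice here is typical of the constraint-based NSC family: the symbolic knowledge never needs to be differentiated symbolically; it is compiled once into a soft penalty, so standard gradient descent handles the integration.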
State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding
With more complex AI systems used by non-AI experts to complete daily tasks, there is an increasing effort to develop methods that produce explanations of AI decision making that are understandable by non-AI experts. Towards this effort, leveraging higher-level concepts to produce concept-based explanations has become a popular approach. Most concept-based explanations have been developed for classification techniques, and we posit that the few existing methods for sequential decision making are limited in scope. In this work, we first contribute desiderata for defining "concepts" in sequential decision making settings. Additionally, inspired by the Protégé Effect, which states that explaining knowledge often reinforces one's own learning, we explore whether concept-based explanations can provide a dual benefit: to the RL agent, by improving its learning rate, and to the end-user, by improving their understanding of the agent's decision making. To this end, we contribute a unified framework, State2Explanation (S2E), that learns a joint embedding model between state-action pairs and concept-based explanations, and leverages the learned model to both (1) inform reward shaping during an agent's training, and (2) provide explanations to end-users at deployment for improved task performance. Our experimental validations in Connect 4 and Lunar Lander demonstrate the success of S2E in providing this dual benefit: successfully informing reward shaping and improving the agent's learning rate, as well as significantly improving end-user task performance at deployment time.
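As a rough sketch of the joint-embedding idea the abstract describes (this is not the authors' implementation, and all dimensions, names, and the cosine-similarity bonus are assumptions), one could encode state-action pairs and concept-explanation features into a shared space and reuse their alignment score as a shaping bonus during RL training:

```python
# Hedged sketch of the S2E joint-embedding idea (not the authors' code):
# embed (state, action) pairs and concept explanations into a shared
# space, then reuse their similarity as an auxiliary shaping reward.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedder(nn.Module):
    def __init__(self, state_dim=16, n_actions=7, n_concepts=10, dim=32):
        super().__init__()
        self.sa_enc = nn.Linear(state_dim + n_actions, dim)  # state-action encoder
        self.ex_enc = nn.Linear(n_concepts, dim)             # explanation encoder

    def forward(self, state, action_onehot, concept_vec):
        z_sa = F.normalize(self.sa_enc(torch.cat([state, action_onehot], -1)), dim=-1)
        z_ex = F.normalize(self.ex_enc(concept_vec), dim=-1)
        return z_sa, z_ex

def shaping_bonus(model, state, action_onehot, concept_vec, scale=0.1):
    # Higher alignment between the (s, a) pair and its concept-based
    # explanation yields a larger auxiliary reward.
    z_sa, z_ex = model(state, action_onehot, concept_vec)
    return scale * (z_sa * z_ex).sum(-1)  # cosine similarity of unit vectors

# usage: add the bonus to the environment reward during training
model = JointEmbedder()
s = torch.randn(1, 16)
a = F.one_hot(torch.tensor([3]), 7).float()
c = torch.rand(1, 10)  # concept-based explanation features (hypothetical encoding)
r_total = 1.0 + shaping_bonus(model, s, a, c)
```

At deployment, the same learned embedding could be queried in the other direction, retrieving the explanation whose embedding is nearest to the current state-action pair.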
On the role of Computational Logic in Data Science: representing, learning, reasoning, and explaining knowledge
In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability of supporting such bridges between CL and DS in practice. After identifying gaps and opportunities, we propose the notion of a logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended in many DS-related directions.