Utilising Explanations to Mitigate Robot Conversational Failures

Abstract

This paper presents an overview of robot failure detection work from HRI and adjacent fields, using failures as an opportunity to examine robot explanation behaviours. With humanoid robots remaining experimental tools in the early 2020s, interactions with robots are situated overwhelmingly in controlled environments, typically studying various interactional phenomena. Such interactions suffer from a lack of real-world and large-scale experimentation and tend to ignore the 'imperfectness' of the everyday user. Robot explanations can be used to approach and mitigate failures by expressing robot legibility and incapability, and from the perspective of common ground. In this paper, I discuss how failures present opportunities for explanations in interactive conversational robots and what the potential is for the intersection of HRI and explainability research.
