
    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; and natural, multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article attempts to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we finally show how explicit knowledge management, both symbolic and geometric, proves instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
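    To illustrate the per-agent knowledge modelling and perspective-taking the abstract refers to, the sketch below keeps a separate belief store for the robot and for the human and propagates to the human's model only the facts the robot estimates the human can currently perceive. All names (AgentModel, update_from_perspective, the example facts) are hypothetical stand-ins, not the authors' actual implementation.

```python
# Illustrative sketch only: a toy per-agent symbolic knowledge base with a
# crude perspective-taking step. Names are hypothetical, not the authors' API.
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """Holds one agent's (robot or human) beliefs as subject-predicate-object triples."""
    name: str
    beliefs: set = field(default_factory=set)  # e.g. ("mug", "isOn", "table")

    def assert_fact(self, triple):
        self.beliefs.add(triple)

    def knows(self, triple):
        return triple in self.beliefs

def update_from_perspective(robot: AgentModel, human: AgentModel,
                            visible_to_human: set):
    """Copy into the human's model only facts about objects the human can see
    (a stand-in for geometric perspective-taking over the shared scene)."""
    for triple in robot.beliefs:
        if triple[0] in visible_to_human:
            human.assert_fact(triple)

robot = AgentModel("robot")
human = AgentModel("human")
robot.assert_fact(("mug", "isOn", "table"))
robot.assert_fact(("keys", "isIn", "drawer"))   # occluded from the human

update_from_perspective(robot, human, visible_to_human={"mug"})
print(human.knows(("mug", "isOn", "table")))    # True
print(human.knows(("keys", "isIn", "drawer")))  # False -> divergent mental models
```

    Keeping one such model per agent is what lets the robot reason about what its human partner does and does not know, which the abstract argues is central to natural dialogue and joint task achievement.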

    Facilitating Reliable Autonomy with Human-Robot Interaction

    Autonomous robots are increasingly deployed to complex environments in which we cannot predict all possible failure cases a priori. Robustness to failures can be provided by humans acting in the roles of: (1) developers, who can iteratively incorporate robustness into the robot system; (2) collocated bystanders, who can be approached for aid; and (3) remote teleoperators, who can be contacted for guidance. However, assisting the robot in any of these roles places demands on the human's time and effort. This dissertation develops modules that reduce the frequency and duration of failure interventions in order to increase the reliability of autonomous robots while also reducing the demand on humans. In pursuit of that goal, the dissertation makes the following contributions: (1) A development paradigm for autonomous robots that separates task specification from error recovery. The paradigm reduces the burden on developers while making the robot robust to failures. (2) A model for gauging the interruptibility of collocated humans. A human-subjects study shows that using the model can reduce the time expended by the robot during failure recovery. (3) A human-subjects experiment on the effects of decision support provided to remote operators during failures. The results show that humans need both diagnoses and action recommendations as decision support during an intervention. (4) An evaluation of model features and unstructured Machine Learning (ML) techniques for learning robust suggestion models from intervention data, in order to reduce developer effort. The results indicate that careful crafting of features can improve performance, but that without such feature selection, current ML algorithms lack robustness in a domain where the robot's observations are heavily influenced by the user's actions.
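    The first contribution, separating task specification from error recovery, can be pictured with the sketch below: nominal task steps are written independently, and recovery handlers are registered separately and invoked only when a step fails. The TaskExecutor class, its on_failure decorator, and the grasp example are illustrative assumptions, not the dissertation's actual framework.

```python
# Minimal sketch, assuming a step-based executor; names are hypothetical.
class TaskExecutor:
    def __init__(self):
        self.recoveries = {}                  # failure type -> recovery handler

    def on_failure(self, exc_type):
        """Register a recovery handler separately from the task specification."""
        def register(handler):
            self.recoveries[exc_type] = handler
            return handler
        return register

    def run(self, steps):
        """Execute nominal steps; delegate failures to the registered recoveries."""
        for step in steps:
            try:
                step()
            except Exception as err:
                handler = self.recoveries.get(type(err))
                if handler is None:
                    raise                      # unknown failure: escalate to a human
                handler(err, step)             # recover, then continue the task

executor = TaskExecutor()

class GraspFailure(Exception):
    pass

@executor.on_failure(GraspFailure)
def recover_grasp(err, step):
    # Placeholder recovery: in practice this might re-detect the object
    # and retry the grasp before resuming the task.
    print(f"recovering from '{err}' raised by {step.__name__}")

def pick_mug():
    raise GraspFailure("mug slipped")

def place_mug():
    print("placed mug")

executor.run([pick_mug, place_mug])
```

    Because the recovery logic lives outside the task specification, developers can add or refine handlers for newly observed failure modes without rewriting the nominal task, which is the burden reduction the abstract claims.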