Solving Ambiguities with Perspective Taking
Humans constantly generate and resolve ambiguities while interacting with each other in their everyday activities. Hence, a robot that is able to resolve ambiguous situations is essential if we aim at fluent and acceptable human-robot interaction. We propose a strategy that combines three mechanisms to clarify ambiguous situations generated by the human partner. We implemented our approach and successfully performed validation tests in several different situations, both in simulation and with the HRP-2 robot.
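The abstract above does not detail its three mechanisms, but one classic perspective-taking filter can be sketched: discard referent candidates the human cannot see from their own viewpoint, and treat the ambiguity as solved when exactly one candidate survives. Everything below (the `Obj` class, the field-of-view test, the object names) is a hypothetical illustration, not the paper's implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Obj:
    name: str
    x: float
    y: float

def visible_from(vx, vy, heading_rad, obj, fov_rad=math.radians(120)):
    """An object counts as 'visible' if it falls inside the viewer's field of view."""
    angle = math.atan2(obj.y - vy, obj.x - vx)
    diff = abs((angle - heading_rad + math.pi) % (2 * math.pi) - math.pi)
    return diff <= fov_rad / 2

def resolve(candidates, vx, vy, heading_rad):
    """Keep only candidates the human can see from their perspective.
    Exactly one left -> ambiguity solved; otherwise return None (ask instead)."""
    seen = [o for o in candidates if visible_from(vx, vy, heading_rad, o)]
    return seen[0] if len(seen) == 1 else None

objs = [Obj("mug_behind", -1.0, 0.0), Obj("mug_in_front", 1.0, 0.2)]
# Human at the origin facing +x: only the mug in front is in view.
print(resolve(objs, 0.0, 0.0, 0.0).name)  # -> mug_in_front
```

In a real system the visibility test would come from geometric reasoning over the perceived scene rather than a 2-D cone, but the decision structure — filter by the partner's viewpoint, then either commit or ask — is the same.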
Which One? Grounding the Referent Based on Efficient Human-Robot Interaction
In human-robot interaction, a robot must be prepared to handle possible ambiguities generated by a human partner. In this work we propose a set of strategies that allow a robot to identify the referent when the human partner refers to an object with incomplete information, i.e. an ambiguous description. Moreover, we propose the use of an ontology to store and reason over the robot's knowledge to ease clarification and thereby improve interaction. We validate our work through simulation and through two real robotic platforms performing two tasks: a daily-life situation and a game.
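The core grounding step described above — matching a partial description against the robot's knowledge to find consistent referents — can be sketched with a flat dictionary standing in for the ontology. The knowledge base contents, property names, and function name are all assumptions for illustration; the paper's ontology-based reasoner is far richer.

```python
# Hypothetical knowledge base: each object maps to the properties the robot knows about it.
kb = {
    "cup_1":  {"type": "cup",  "colour": "red",  "on": "table"},
    "cup_2":  {"type": "cup",  "colour": "blue", "on": "shelf"},
    "book_1": {"type": "book", "colour": "red",  "on": "table"},
}

def ground_referent(description):
    """Return every object consistent with a partial description.
    One match -> the referent is grounded; several -> the robot should ask a
    clarification question about a property that splits the candidates."""
    return [name for name, props in kb.items()
            if all(props.get(k) == v for k, v in description.items())]

print(ground_referent({"type": "cup"}))                 # ambiguous: both cups match
print(ground_referent({"type": "cup", "on": "shelf"}))  # unique referent: cup_2
```

An ontology improves on this lookup by letting the match succeed through inferred facts (e.g. a "mug" description matching a "cup" instance via a subclass relation), which is what eases clarification in practice.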
Artificial Cognition for Social Human-Robot Interaction: An Implementation
Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
Manipulation planning under changing external forces
This paper presents a planner that enables robots to manipulate objects under changing external forces. In particular, we focus on the scenario where a human applies a sequence of forceful operations, e.g. cutting and drilling, on an object that is held by a robot. The planner produces an efficient manipulation plan by choosing stable grasps on the object, by intelligently deciding when the robot should change its grasp on the object as the external forces change, and by choosing subsequent grasps such that they minimize the number of regrasps required in the long term. Furthermore, as it switches from one grasp to the other, the planner solves bimanual regrasping in the air by using an alternating sequence of bimanual and unimanual grasps. We also present a conic formulation to address the force uncertainties inherent in human-applied external forces, with which the planner can robustly assess the stability of a grasp configuration without sacrificing planning efficiency. We provide a planner implementation on a dual-arm robot and present a variety of simulated and real human-robot experiments to show the performance of our planner.
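The regrasp-minimization aspect of such a planner can be sketched in isolation: assume a stability check (the paper's conic formulation) has already labelled, for each forceful operation, which grasps can resist it. A greedy rule — keep the current grasp while it stays stable, and when forced to switch, pick a grasp that covers the longest run of upcoming operations — then minimizes the number of regrasps. This sketch is an assumption about one sub-problem only; it omits the stability computation and the bimanual in-air regrasp sequencing entirely.

```python
def plan_grasps(stable_sets):
    """stable_sets[i] is the set of grasps stable for operation i.
    Greedy longest-run selection: when a regrasp is unavoidable, choose the
    grasp that remains stable for the most consecutive upcoming operations,
    which minimizes the total number of regrasps over the sequence."""
    plan, i, n = [], 0, len(stable_sets)
    while i < n:
        best, best_len = None, 0
        for g in stable_sets[i]:
            j = i
            while j < n and g in stable_sets[j]:  # run length this grasp covers
                j += 1
            if j - i > best_len:
                best, best_len = g, j - i
        if best is None:
            raise ValueError(f"no stable grasp for operation {i}")
        plan.extend([best] * best_len)
        i += best_len
    return plan

# Hypothetical stability labels: drilling allows grasps A or B, cutting only B, sanding only A.
ops = [{"A", "B"}, {"B"}, {"A", "B"}, {"A"}]
print(plan_grasps(ops))  # -> ['B', 'B', 'B', 'A']: a single regrasp
```

Starting with grasp A would also satisfy the first operation but would force two regrasps; looking ahead to the longest stable run is what keeps the count at one.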
Human-Robot Interaction: Tackling the AI Challenges
Human-Robot interaction is an area full of challenges for artificial intelligence: dynamic, partially unknown environments that are not originally designed for autonomous machines; a large variety of situations and objects to deal with, with possibly complex semantics; physical interactions with humans that require fine, low-latency control; representation and management of several mental models; pertinent situation assessment skills... the list goes on. This article sheds light on some key decisional issues that are to be tackled for a cognitive robot to share space and tasks with a human, and presents our take on these challenges. We adopt a constructive approach based on the identification and the effective implementation of individual and collaborative skills. These cognitive abilities cover geometric reasoning and situation assessment mainly based on perspective-taking and affordances, management and exploitation of each agent's (human and robot) knowledge in separate cognitive models, natural multi-modal communication, "human-aware" task planning, and human and robot interleaved plan achievement. We present our design choices, the articulation between the diverse deliberative components of the robot, and experimental results, and eventually discuss the strengths and weaknesses of our approach. It appears that explicit knowledge management, both symbolic and geometric, proves to be key as it pushes for a different, more semantic way to address the decision-making issue in human-robot interactions.