
    Collaboration, Dialogue, and Human-Robot Interaction


    Toward an Argumentation-based Dialogue framework for Human-Robot Collaboration

    Successful human-robot collaboration with a common goal requires peer interaction in which humans and robots cooperate and complement each other's expertise. Formal human-robot dialogue in which there is peer interaction is still in its infancy, though. My research recognizes three aspects of human-robot collaboration that call for dialogue: responding to discovery, pre-empting failure, and recovering from failure. In these scenarios the partners need the ability to challenge, persuade, exchange and expand beliefs about a joint action in order to collaborate through dialogue. My research identifies three argumentation-based dialogues: a persuasion dialogue to resolve disagreement, an information-seeking dialogue to expand individual knowledge, and an inquiry dialogue to share knowledge. A theoretical logic-based framework, a formalized dialogue protocol based on argumentation theory, and argumentation-based dialogue games were developed to provide dialogue support for peer interaction. The work presented in this thesis is the first to apply argumentation theory and three different logic-based argumentation dialogues to human-robot collaboration. The research presented in this thesis demonstrates a practical, real-time implementation in which persuasion, inquiry, and information-seeking dialogues are applied to shared decision making for human-robot collaboration in a treasure hunt game domain. My research investigates whether adding peer interaction, enabled through argumentation-based dialogue, to an HRI system improves system performance and user experience during a collaborative task when compared to an HRI system that is capable of only supervisory interaction with minimal dialogue. Results from user studies in physical and simulated human-robot collaborative environments, which involved 108 human participants who interacted with a robot as peer and supervisor, are presented in this thesis. My research contributes to both the human-robot interaction (HRI) and argumentation communities. First, it brings into HRI a structured method for a robot to maintain its beliefs, to reason using those beliefs, and to interact with a human as a peer via argumentation-based dialogues. The structured method allows the human-robot collaborators to share beliefs, respond to discovery, expand beliefs to recover from failure, challenge beliefs, or resolve conflicts by persuasion. Second, it allows a robot to challenge a human, or a human to challenge a robot, to prevent human or robot errors. Third, my research provides a comprehensive subjective and objective analysis of the effectiveness of an HRI system with peer interaction that is enabled through argumentation-based dialogue. I compare this peer interaction to a system that is capable of only supervisory interaction with minimal dialogue. My research addresses the harder questions for human-robot collaboration: what kind of human-robot dialogue support can enhance peer interaction? How can we develop models to formalize those features? How can we ensure that those features really help, and how do they help? Human-robot dialogue that can aid shared decision making, support the expansion of individual or shared knowledge, and resolve disagreements between collaborative human-robot teams will be much sought after as human society transitions from a world of robot-as-a-tool to robot-as-a-partner. My research presents a version of peer interaction, enabled through argumentation-based dialogue, that allows humans and robots to work together as partners.
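    The abstract above describes a formalized dialogue protocol and dialogue games based on argumentation theory, but gives no protocol details. Purely as an illustration of what such a turn-based argumentation dialogue game can look like, the following minimal Python sketch defines hypothetical moves (assert, challenge, concede) and a commitment-store legality rule; the move names and rules are assumptions for illustration, not the thesis's actual protocol.

    # Illustrative sketch only: a toy argumentation-based dialogue game in the
    # spirit of the persuasion/inquiry/information-seeking dialogues described
    # above. Move names and legality rules are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Move:
        speaker: str   # "human" or "robot"
        locution: str  # e.g. "assert", "challenge", "concede"
        content: str   # the belief or argument at issue

    @dataclass
    class DialogueGame:
        commitments: dict = field(default_factory=lambda: {"human": set(), "robot": set()})
        history: list = field(default_factory=list)

        def legal(self, move: Move) -> bool:
            # Hypothetical protocol rule: a challenge is only legal against a
            # belief the other party has already committed to.
            other = "robot" if move.speaker == "human" else "human"
            if move.locution == "challenge":
                return move.content in self.commitments[other]
            return True

        def play(self, move: Move) -> None:
            if not self.legal(move):
                raise ValueError(f"illegal move: {move}")
            if move.locution in ("assert", "concede"):
                # Asserting or conceding adds to the speaker's commitment store.
                self.commitments[move.speaker].add(move.content)
            self.history.append(move)

    # Example: the robot asserts a belief about the treasure hunt domain and the
    # human challenges it, opening a persuasion sub-dialogue.
    game = DialogueGame()
    game.play(Move("robot", "assert", "the blue box is in room 2"))
    game.play(Move("human", "challenge", "the blue box is in room 2"))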

    A Preliminary Study of Peer-to-Peer Human-Robot Interaction

    The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.

    Towards Dialogue Dimensions for a Robotic Tutor in Collaborative Learning Scenarios

    There have been some studies applying robots to education, and recent research on socially intelligent robots shows robots as partners that collaborate with people. On the other hand, serious games and interaction technologies have also proved to be important pedagogical tools, enhancing collaboration and interest in the learning process. This paper relates to the collaborative scenario in the EMOTE EU FP7 project, and its main goal is to develop and present the dialogue dimensions for a robotic tutor in a collaborative learning scenario grounded in human studies. Overall, seven dialogue dimensions of teacher-student interaction were identified from data collected over 10 sessions of a collaborative serious game. Preliminary results regarding the teachers' perspective on the students' interaction suggest that student collaboration led to learning during the game. In addition, students seem to have learned a number of concepts as they played the game. We also present the protocol that was followed for the purposes of future data collection in human-human and human-robot interaction in similar scenarios.

    Explorations in engagement for humans and robots

    This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports on findings of experiments with human participants who interacted with a robot when it either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.
    Comment: 31 pages, 5 figures, 3 tables

    An Augmented Reality Human-Robot Collaboration System

    This article discusses an experimental comparison of three user interface techniques for interaction with a remotely located robot. A typical interface for such a situation is to teleoperate the robot using a camera that displays the robot's view of its work environment. However, the operator often has a difficult time maintaining situation awareness due to this single egocentric view. Hence, a multimodal system was developed enabling the human operator to view the robot in its remote work environment through an augmented reality interface, the augmented reality human-robot collaboration (AR-HRC) system. The operator uses spoken dialogue, reaches into the 3D representation of the remote work environment, and discusses intended actions of the robot. The result of the comparison was that the AR-HRC interface was found to be most effective, increasing accuracy by 30% while reducing the number of close calls in operating the robot by a factor of roughly three. It thus provides the means to maintain spatial awareness and gives the users the feeling of working in a true collaborative environment.

    Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction

    Recognition of social signals, from human facial expressions or prosody of speech, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community that investigates user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users' impressions of the robot after a conversation. We find that happiness in the user's recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as the number of human turns or the number of sentences per robot utterance) correlate with perceiving a robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, while linguistic and non-linguistic features more often predict perceived robot intelligence and interpretability. As such, these characteristics may in future be used as an online reward signal for in-situ Reinforcement Learning-based adaptive human-robot dialogue systems.
    Comment: Robo-NLP workshop at ACL 2017. 9 pages, 5 figures, 6 tables
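    The analysis described above correlates per-interaction multimodal features with post-interaction ratings. As a rough illustration only, the Python sketch below computes Pearson correlations between two hypothetical features (mean recognised happiness, number of human turns) and toy likeability and intelligence ratings; the feature names and data are assumptions, not the paper's dataset or method.

    # Illustrative sketch only: correlating multimodal interaction features with
    # questionnaire ratings. Feature names and toy data are hypothetical.
    import numpy as np
    from scipy.stats import pearsonr

    # One row per interaction: [mean happiness score, number of human turns]
    features = np.array([
        [0.82, 14],
        [0.40, 22],
        [0.65, 18],
        [0.20, 30],
        [0.90, 12],
    ])
    likeability = np.array([4.5, 3.0, 3.8, 2.5, 4.8])   # e.g. 5-point scale
    intelligence = np.array([3.2, 4.1, 3.6, 4.4, 3.0])

    for name, column in [("happiness", features[:, 0]), ("human turns", features[:, 1])]:
        r_like, p_like = pearsonr(column, likeability)
        r_int, p_int = pearsonr(column, intelligence)
        print(f"{name}: likeability r={r_like:.2f} (p={p_like:.2f}), "
              f"intelligence r={r_int:.2f} (p={p_int:.2f})")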

    Methodology and themes of human-robot interaction: a growing research field

    Original article can be found at: http://www.intechweb.org/journal.php?id=3 Distributed under the Creative Commons Attribution License. Users are free to read, print, download and use the content or part of it so long as the original author(s) and source are correctly credited.
    This article discusses challenges of Human-Robot Interaction, which is a highly inter- and multidisciplinary area. Themes that are important in current research in this lively and growing field are identified, and selected work relevant to these themes is discussed.
    Peer reviewed

    Challenges in Collaborative HRI for Remote Robot Teams

    Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations, such as off-shore energy platforms. In order for these teams of robots to truly be beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a "mediator robot" that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study which investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher trust overall if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result here, as well as other challenges and interaction techniques for human-robot collaboration.
    Comment: 9 pages. Peer-reviewed position paper accepted at the CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems, May 2019, Glasgow, UK