
    Evaluating the Augmented Reality Human-Robot Collaboration System

    This paper discusses an experimental comparison of three user interface techniques for interaction with a mobile robot located remotely from the user. A typical means of operating a robot in such a situation is to teleoperate it using visual cues from a camera that displays the robot's view of its work environment. However, the operator often has a difficult time maintaining awareness of the robot in its surroundings due to this single ego-centric view. Hence, a multi-modal system has been developed that allows the remote human operator to view the robot in its work environment through an Augmented Reality (AR) interface. The operator is able to use spoken dialog, reach into the 3D graphic representation of the work environment, and discuss the intended actions of the robot to create a true collaboration. This study compares the typical ego-centric driven view to two versions of an AR interaction system in an experiment remotely operating a simulated mobile robot. One interface provides an immediate response from the remotely located robot. In contrast, the Augmented Reality Human-Robot Collaboration (AR-HRC) System interface enables the user to discuss and review a plan with the robot prior to execution. The AR-HRC interface was the most effective, increasing accuracy by 30% with less variance while reducing the number of close calls in operating the robot by a factor of roughly three. It thus provides the means to maintain spatial awareness and gives users the feeling of working in a true collaborative environment.

    An Augmented Reality Human-Robot Collaboration System

    This invited article discusses an experimental comparison of three user interface techniques for interaction with a remotely located robot. A typical interface for such a situation is to teleoperate the robot using a camera that displays the robot's view of its work environment. However, the operator often has a difficult time maintaining situation awareness due to this single egocentric view. Hence, a multimodal system was developed enabling the human operator to view the robot in its remote work environment through an augmented reality interface, the augmented reality human-robot collaboration (AR-HRC) system. The operator uses spoken dialogue, reaches into the 3D representation of the remote work environment, and discusses the intended actions of the robot. The result of the comparison was that the AR-HRC interface was found to be the most effective, increasing accuracy by 30% while reducing the number of close calls in operating the robot by a factor of roughly three. It thus provides the means to maintain spatial awareness and gives users the feeling of working in a true collaborative environment.
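
    The plan-review interaction that distinguishes the AR-HRC interface can be pictured as a small command loop: the robot proposes a motion plan, the user inspects it in the shared 3D view, and nothing is executed until the plan is approved. The Python sketch below is a minimal illustration of that loop under stated assumptions, not the authors' implementation; Waypoint, propose_plan and the console prompts are hypothetical stand-ins for the AR scene and the spoken dialogue.

    # Minimal sketch of a plan-review loop in the spirit of AR-HRC.
    # All types and helpers here are hypothetical illustrations.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Waypoint:
        x: float
        y: float

    def propose_plan(goal: Waypoint) -> List[Waypoint]:
        """Stand-in planner: straight-line interpolation towards the goal."""
        return [Waypoint(goal.x * t / 5, goal.y * t / 5) for t in range(1, 6)]

    def review_loop(goal: Waypoint) -> None:
        plan = propose_plan(goal)
        while True:
            # In the real system the plan is rendered in the AR view.
            print("Proposed plan:", [(round(w.x, 2), round(w.y, 2)) for w in plan])
            answer = input("approve / modify / cancel? ").strip().lower()
            if answer == "approve":
                print("Executing", len(plan), "waypoints")  # the robot moves only now
                return
            if answer == "cancel":
                return
            # 'modify': the user would reach into the 3D scene; here we
            # just re-plan towards a nudged goal as a placeholder.
            goal = Waypoint(goal.x + 0.1, goal.y)
            plan = propose_plan(goal)

    if __name__ == "__main__":
        review_loop(Waypoint(2.0, 1.0))

    The essential design choice is that execution is gated on explicit approval, which is what separates the AR-HRC condition from the immediate-response interface.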

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods of disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions (mixed reality, augmented reality, and a monitor as the baseline), using objective measures such as time and accuracy and subjective measures such as engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall showed a preference for the augmented reality condition over the monitor and mixed reality conditions.
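
    The disambiguation rule shared by all three display conditions can be reduced to: act immediately when exactly one object matches the verbal description, otherwise highlight the candidates and ask a single follow-up question. The sketch below is a minimal illustration of that rule, not the paper's system; the object list, the attribute matching and the [highlight] print standing in for the mixed reality, augmented reality or monitor rendering are all hypothetical.

    # Minimal sketch of visual reference disambiguation: highlight all
    # matching candidates and ask one follow-up question instead of guessing.
    # Object data and the highlight call are hypothetical.
    from typing import Dict, List

    objects: List[Dict[str, str]] = [
        {"id": "cup_1", "colour": "red", "type": "cup"},
        {"id": "cup_2", "colour": "red", "type": "cup"},
        {"id": "cup_3", "colour": "blue", "type": "cup"},
    ]

    def matches(obj: Dict[str, str], request: Dict[str, str]) -> bool:
        return all(obj.get(k) == v for k, v in request.items())

    def resolve(request: Dict[str, str]) -> str:
        candidates = [o for o in objects if matches(o, request)]
        if not candidates:
            raise ValueError("no matching object")
        if len(candidates) == 1:
            return candidates[0]["id"]
        # Ambiguous: highlight every candidate in the chosen display
        # modality, then ask the user to pick one.
        for i, o in enumerate(candidates, 1):
            print(f"[highlight] {i}: {o['id']}")
        choice = int(input("Which one did you mean? "))
        return candidates[choice - 1]["id"]

    if __name__ == "__main__":
        print("Picking:", resolve({"colour": "red", "type": "cup"}))

    The point of the comparison in the paper is not this rule itself but where the highlights are rendered: on a monitor, in augmented reality, or in mixed reality.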

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is a representation of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in communication, using multimodalities, that will support a proposed Interactive Robot System (IRS) as an AI-mounted robotic platform to advance the speed and quality of military operational and tactical decision making.
    Design/methodology/approach – This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.
    Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.
    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all of the expert knowledge and skills required to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military-robot communication is the ultimate goal of this research.
    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human-machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsal.
    Social implications – A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.
    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communication. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments. At this time, however, there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot for military communication.

    Increasing robot autonomy via motion planning and an augmented reality interface

    Recently, there has been a growing interest in robotic systems that are able to share workspaces and collaborate with humans. Such collaborative scenarios require efficient mechanisms to communicate human requests to a robot, as well as to transmit robot interpretations and intents to humans. Recent advances in augmented reality (AR) technologies have provided an alternative for such communication. Nonetheless, most of the existing work in human-robot interaction with AR devices is still limited to robot motion programming or teleoperation. In this paper, we present an alternative approach to commanding and collaborating with robots. Our approach uses an AR interface that allows a user to specify high-level requests to a robot and to preview, approve or modify the computed robot motions. The proposed approach exploits the robot's decision-making capabilities instead of requiring low-level motion specifications from the user. The latter is achieved by using a motion planner that can deal with high-level goals corresponding to regions in the robot configuration space. We present a proof of concept to validate our approach in different test scenarios, and we discuss its applicability in collaborative environments.
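
    The planner's ability to accept goals that are regions in configuration space, rather than single configurations, can be illustrated with a toy two-dimensional example: sample any configuration that satisfies the goal predicate, connect it to the start, and show the result for approval before execution. The sketch below is a deliberately naive stand-in for the paper's planner; the rejection-sampling search, the disc-shaped goal region and the bounds are all hypothetical.

    # Minimal sketch of planning to a goal *region*: any configuration
    # satisfying the goal predicate is an acceptable target. This toy
    # example ignores obstacles and uses rejection sampling.
    import random
    from typing import Callable, List, Tuple

    Config = Tuple[float, float]

    def plan_to_region(start: Config,
                       in_goal: Callable[[Config], bool],
                       bounds: Tuple[float, float] = (-3.0, 3.0),
                       max_samples: int = 10000) -> List[Config]:
        """Sample a configuration inside the goal region, then connect it
        to the start by straight-line interpolation."""
        for _ in range(max_samples):
            q = (random.uniform(*bounds), random.uniform(*bounds))
            if in_goal(q):
                return [(start[0] + (q[0] - start[0]) * t / 10,
                         start[1] + (q[1] - start[1]) * t / 10)
                        for t in range(11)]
        raise RuntimeError("could not sample the goal region")

    if __name__ == "__main__":
        # Goal region: a disc of radius 0.5 around (2, 2) in configuration space.
        in_disc = lambda q: (q[0] - 2.0) ** 2 + (q[1] - 2.0) ** 2 < 0.25
        path = plan_to_region((0.0, 0.0), in_disc)
        print(f"Previewing a {len(path)}-state path before execution")  # AR preview step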

    Configuration of skilled tasks for execution in multipurpose and collaborative service robots

    Several highly versatile mobile robots have been introduced during the last ten years. Some of these robots work among people in exhibitions and other public places, such as museums and shopping centers. Unlike industrial robots, which are typically found only in manufacturing environments, service robots can be found in a variety of places, ranging from homes to offices and from hospitals to restaurants. Developing mobile robots that work co-operatively with humans raises not only interaction problems but also problems in getting tasks accomplished. In an unstructured and dynamic environment this is not readily achievable because of the high complexity of robot perception and motion. Such tasks require high-level perception and locomotion systems, not to mention control systems for all levels of task control: the lowest levels control the motors and sensors of the robots, and the highest are sophisticated task planners for complex and useful tasks. Human-friendly communication can be seen as an important factor in getting robots into our homes. In this work a new task configuration concept is proposed for multipurpose service robots. The concept gives guidelines for a software architecture and task-managing system. The task configuration process provides a new method that makes it easier to configure a new task for a robot; the idea is the same as when one person tells another how a task should be performed. A novel method for executing tasks with service robots is also presented: interpretive execution, which keeps the focus on only one micro task at a time, makes it possible to modify plans during their execution. Multimodal interaction is an important feature for providing collaboration between humans and robots, and it reduces the workload of the user by administering task configuration and execution. A novel solution for using multimodal human-robot interaction (HRI) as part of the task description is presented. This thesis is a case study reporting the results of developing a task-managing platform (from configuration to execution) for multipurpose service robots and studying its performance and use with several test cases. The platform was implemented on the WorkPartner multipurpose service robot. The structure and operation of the platform have proved useful, and several tasks have been carried out successfully.
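
    Interpretive execution as described above amounts to an executor that commits to exactly one micro task at a time and re-reads the remaining plan between steps, so the plan stays editable while the robot runs. The sketch below illustrates this idea under stated assumptions; it is not the WorkPartner platform, and the task names and the mid-execution recovery step are hypothetical.

    # Minimal sketch of interpretive execution: commit to one micro task
    # at a time so the remaining plan can be modified during execution.
    from collections import deque
    from typing import Callable, Deque

    def interpretive_execute(plan: Deque[str],
                             run: Callable[[str], None]) -> None:
        while plan:
            micro_task = plan.popleft()  # focus on exactly one step
            run(micro_task)
            # Between steps the plan may have been edited by the user or
            # the task manager; the loop simply sees the updated queue.

    if __name__ == "__main__":
        plan: Deque[str] = deque(["move_to_shelf", "grasp_box", "move_to_table"])

        def run(task: str) -> None:
            print("executing:", task)
            if task == "grasp_box":
                # Example of a mid-execution modification: insert a recovery step.
                plan.appendleft("regrasp_box")

        interpretive_execute(plan, run)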

    Human-Machine Interfaces for Service Robotics

    The abstract is in the attachment.