2,680 research outputs found

    Modeling teamwork of multi-human multi-agent teams

    Teamwork is important when humans work together with automated agents to perform tasks requiring monitoring, coordination, and complex decision making. While human-agent teams can bring many benefits, such as higher productivity, adaptability, and creativity, they may also fail for various reasons, so it is important to understand the tradeoffs involved in teamwork. The purpose of this research is to investigate the process and outcomes of human-agent teamwork by running experiments and building quantitative simulation models. Preliminary results are discussed, as well as future directions. We expect this research to deepen the understanding of human-agent teamwork and provide recommendations for the design of teams and agents to support teamwork. This research is sponsored by the Office of Naval Research and the Air Force Office of Scientific Research.

    A Survey of User Interfaces for Robot Teleoperation

    Robots are used today to accomplish many tasks in society, be it in industry, at home, or as assistive tools in the aftermath of tragic incidents. The human-robot systems currently developed span a broad variety of applications and are typically very different from one another. The interaction techniques designed for each system are also very different, although some effort has been directed toward defining common properties and strategies for guiding human-robot interaction (HRI) development. This work aims to present the state of the art in teleoperation interaction techniques between robots and their users. By presenting potentially useful design models and motivating discussion of topics to which the research community has recently paid little attention, we also suggest solutions to some of the design and operational problems being faced in this area.

    A Hierarchical Variable Autonomy Mixed-Initiative Framework for Human-Robot Teaming in Mobile Robotics

    This paper presents a Mixed-Initiative (MI) framework for addressing the problem of control authority transfer between a remote human operator and an AI agent when cooperatively controlling a mobile robot. Our Hierarchical Expert-guided Mixed-Initiative Control Switcher (HierEMICS) leverages information on the human operator's state and intent, and its control-switching policies are organized in a criticality hierarchy. An experimental evaluation was conducted in a high-fidelity simulated disaster response and remote inspection scenario, comparing HierEMICS with a state-of-the-art Expert-guided Mixed-Initiative Control Switcher (EMICS) in the context of mobile robot navigation. Results suggest that HierEMICS reduces conflicts for control between the human and the AI agent, a fundamental challenge in both the MI control paradigm and the related shared control paradigm. Additionally, we provide statistically significant evidence of improved navigational safety (i.e., fewer collisions), more efficient LOA (Level of Autonomy) switching, and reduced conflict for control. Comment: 6 pages, 4 figures, ICHMS 2022; the first two authors contributed equally.
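
    The abstract does not spell out the switching logic, but the idea of a criticality-ordered control switcher can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical example under assumed signals and thresholds (operator workload, input activity, collision risk); the class names and rules are illustrative and are not taken from the HierEMICS implementation.

```python
# Hypothetical sketch of a hierarchical control-authority switcher.
# The criticality levels, signal names, and thresholds are illustrative
# assumptions, not the HierEMICS code.

from dataclasses import dataclass
from enum import IntEnum


class Controller(IntEnum):
    AI_AGENT = 0
    HUMAN = 1


@dataclass
class OperatorState:
    workload: float          # estimated cognitive load in [0, 1]
    input_activity: float    # recent teleoperation activity in [0, 1]


@dataclass
class RobotState:
    collision_risk: float    # e.g. from proximity sensors, in [0, 1]
    localization_error: float


def switch_control(op: OperatorState, robot: RobotState,
                   current: Controller) -> Controller:
    """Walk a criticality hierarchy from most to least critical rule;
    the first rule that fires decides who holds control authority."""
    # Level 1 (most critical): imminent collision -> AI safety takeover.
    if robot.collision_risk > 0.8:
        return Controller.AI_AGENT
    # Level 2: operator engaged and not overloaded -> human keeps/gets control.
    if op.input_activity > 0.5 and op.workload < 0.7:
        return Controller.HUMAN
    # Level 3: operator overloaded or idle -> delegate to the AI agent.
    if op.workload > 0.9 or op.input_activity < 0.1:
        return Controller.AI_AGENT
    # Default: keep the current controller to avoid oscillating switches.
    return current


if __name__ == "__main__":
    op = OperatorState(workload=0.95, input_activity=0.05)
    robot = RobotState(collision_risk=0.2, localization_error=0.1)
    print(switch_control(op, robot, Controller.HUMAN))  # -> Controller.AI_AGENT
```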

    The Internet of Robotic Things: A review of concept, added value and applications

    The Internet of Robotic Things is an emerging vision that brings together pervasive sensors and objects with robotic and autonomous systems. This survey examines how the merger of robotic and Internet of Things technologies will advance the abilities of both the current Internet of Things and the current robotic systems, thus enabling the creation of new, potentially disruptive services. We discuss some of the new technological challenges created by this merger and conclude that a truly holistic view is needed but currently lacking. Funding agency: imec ACTHINGS High Impact initiative.

    Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    In the modern information age, the quantity and complexity of spatiotemporal data are increasing rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data can overwhelm users with information and impose a high cognitive load. To support safety-critical and time-critical decision making in dynamic environments, and to make managing, browsing, and searching spatiotemporal data easy and effective, we propose an asynchronous, scalable, and comprehensive method for organizing, displaying, and interacting with spatiotemporal data that allows operators to navigate through the spatiotemporal information rather than through the environments being examined, while maintaining all necessary global and local situation awareness. To empirically demonstrate the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. We designed and conducted a user study and interaction-mode experiments. The Event-Lens system showed a consistent advantage in performance measures of the multiple-moving-target marking task. It was also found that participants' attentional control, spatial ability, and action video gaming experience affected their overall performance.
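
    As a rough illustration of the asynchronous prioritization idea described above, the sketch below keeps a priority queue of snapshots arriving from multiple sensor feeds and releases them to the operator display one at a time, most urgent first. The event fields, the speed-based priority rule, and the class names are assumptions made for illustration, not the Event-Lens implementation.

```python
# Minimal sketch of asynchronous prioritization of multi-sensor events.
# All names, fields, and the priority rule are illustrative assumptions.

import asyncio
import itertools
from dataclasses import dataclass


@dataclass
class SensorEvent:
    sensor_id: str
    timestamp: float
    snapshot: bytes      # e.g. an image chip of the moving target


class EventLensQueue:
    """Collects events from many sensor feeds asynchronously and releases
    them to the operator display one at a time, most urgent first."""

    def __init__(self) -> None:
        # Entries are (priority, sequence number, event); lower priority
        # values are delivered first, and the counter breaks ties stably.
        self._queue: asyncio.PriorityQueue = asyncio.PriorityQueue()
        self._counter = itertools.count()

    async def push(self, event: SensorEvent, target_speed: float) -> None:
        # Assumed priority rule: faster-moving targets are more urgent.
        priority = -target_speed
        await self._queue.put((priority, next(self._counter), event))

    async def next_for_display(self) -> SensorEvent:
        _, _, event = await self._queue.get()
        return event


async def demo() -> None:
    lens = EventLensQueue()
    await lens.push(SensorEvent("cam-1", 0.0, b"..."), target_speed=2.0)
    await lens.push(SensorEvent("cam-7", 0.1, b"..."), target_speed=9.0)
    first = await lens.next_for_display()
    print(first.sensor_id)   # cam-7: the faster target is shown first


if __name__ == "__main__":
    asyncio.run(demo())
```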

    From teleoperation to the cognitive human-robot interface

    Robots are slowly moving from factories to mines, construction sites, public places and homes. This new type of robot or robotized working machine – the field and service robot (FSR) – should be capable of performing different kinds of tasks in unstructured, changing environments, not only among humans but through continuous interaction with humans. The main requirements for an FSR are mobility, advanced perception capabilities, high "intelligence" and easy interaction with humans. Although mobility and perception capabilities are no longer bottlenecks, they can still be greatly improved. The main bottlenecks are intelligence and the human-robot interface (HRI). Despite huge efforts in "artificial intelligence" research, robots and computers are still very "stupid", and no major advances are on the horizon. This emphasizes the importance of the HRI. In subtasks where high-level cognition or intelligence is needed, the robot has to ask the operator for help. In addition to task commands and supervision, the HRI has to provide the possibility of exchanging information about the task and the environment through continuous dialogue, and even methods for direct teleoperation. The thesis describes the development from teleoperation to service robot interfaces and analyses the usability aspects of both teleoperation/telepresence systems and robot interfaces based on high-level cognitive interaction. The analogy between the development of teleoperation interfaces and that of HRIs is also pointed out. The teleoperation and telepresence interfaces are studied on the basis of a set of experiments in which telepresence systems with different levels of enhancement were tested in different driving-type tasks. The study is concluded by comparing the usability aspects and the feeling of presence in a telepresence system. HRIs are studied with an experimental service robot, WorkPartner. Different kinds of direct teleoperation, dialogue and spatial information interfaces are presented and tested, and the concepts of the cognitive interface and common presence are introduced. Finally, the usability aspects of a human-service robot interface are discussed and evaluated.
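
    The "ask the operator when high-level cognition is needed" pattern described in this abstract can be sketched as a simple mission loop that falls back to a dialogue with the operator, or to direct teleoperation, whenever the robot's confidence in a subtask is low. The confidence threshold, prompts, and function names below are hypothetical and are not drawn from the WorkPartner interface.

```python
# Hypothetical sketch of a robot mission loop that asks the operator for
# help on low-confidence subtasks. Names and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class Subtask:
    name: str
    confidence: float   # robot's confidence that it can complete the subtask alone


def execute_autonomously(task: Subtask) -> None:
    print(f"[robot] executing '{task.name}' autonomously")


def ask_operator(task: Subtask) -> str:
    # In a real interface this would be a speech/GUI dialogue or a request
    # to switch to direct teleoperation; here it is a console prompt.
    return input(f"[robot] I am unsure how to '{task.name}'. "
                 f"Please advise (instruction or 'teleop'): ")


def run_mission(subtasks: list[Subtask], confidence_threshold: float = 0.7) -> None:
    for task in subtasks:
        if task.confidence >= confidence_threshold:
            execute_autonomously(task)
        else:
            advice = ask_operator(task)
            if advice.strip().lower() == "teleop":
                print(f"[robot] handing '{task.name}' over to direct teleoperation")
            else:
                print(f"[robot] applying operator guidance: {advice}")


if __name__ == "__main__":
    run_mission([Subtask("drive to loading dock", 0.9),
                 Subtask("identify the correct pallet", 0.4)])
```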
