3,748 research outputs found

    Magician simulator — A realistic simulator for heterogeneous teams of autonomous robots

    We report on the development of a new simulation environment for use in Multi-Robot Learning, Swarm Robotics, Robot Teaming, Human Factors and Operator Training. The simulator provides a realistic environment for examining methods for localization and navigation, sensor analysis, object identification and tracking, as well as strategy development, interface refinement and operator training (based on various degrees of heterogeneity, robot teaming, and connectivity). The simulation additionally incorporates real-time human-robot interaction and allows hybrid operation with a mix of simulated and real robots and sensor inputs.
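    The hybrid operation can be pictured as one controller driving a common robot interface with interchangeable simulated and physical backends. Below is a minimal sketch of that pattern in Python; the class and method names are illustrative assumptions, not the Magician simulator's actual API.

    ```python
    # Hypothetical sketch of hybrid sim/real operation behind one interface.
    from abc import ABC, abstractmethod

    class Robot(ABC):
        """Common interface so team logic is agnostic to robot type."""

        @abstractmethod
        def read_sensors(self) -> dict: ...

        @abstractmethod
        def send_velocity(self, vx: float, wz: float) -> None: ...

    class SimulatedRobot(Robot):
        def __init__(self, pose=(0.0, 0.0, 0.0)):
            self.pose = list(pose)

        def read_sensors(self) -> dict:
            # Synthetic readings generated from simulator state.
            return {"pose": tuple(self.pose), "source": "sim"}

        def send_velocity(self, vx: float, wz: float) -> None:
            # Integrate a toy kinematic model instead of driving hardware.
            self.pose[0] += vx
            self.pose[2] += wz

    class RealRobot(Robot):
        def __init__(self, connection):
            self.connection = connection  # e.g. a serial or network handle

        def read_sensors(self) -> dict:
            return {"pose": self.connection.get_pose(), "source": "real"}

        def send_velocity(self, vx: float, wz: float) -> None:
            self.connection.drive(vx, wz)

    def step_team(team: list[Robot], controller) -> None:
        # The controller never needs to know which members are simulated
        # and which are physical, which is what enables hybrid teams.
        for robot in team:
            vx, wz = controller(robot.read_sensors())
            robot.send_velocity(vx, wz)
    ```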

    Wait, I'm tagged?! Toward AR in Project Aquaticus

    Human-robot teaming to perform complex tasks in a large environment is limited by the human’s ability to make informed decisions. We aim to use augmented reality to convey critical information to the human to reduce cognitive workload and increase situational awareness. By bridging previous Project Aquaticus work to virtual reality in Unity 3D, we are creating a testbed to easily and repeatedly measure the effectiveness of augmented reality information display solutions to support competitive gameplay. We expect human-robot teaming performance to improve due to the increased situational awareness and reduced stress that the augmented reality data display provides.

    Explainability in human-robot teaming

    In human-robot teaming, a key to the team’s success is that the robot and human teammates collaborate in a coordinated manner: each teammate should be aware of what the other is about to do and is likely to need. In this context, a robot is expected to understand its human teammate’s intentions and performance and to explain its own actions, decisions, and rationale. Moreover, the ability to model the human teammate’s expectations enables the robot to behave in ways the human can understand and anticipate, leading to effective teaming. By building a mental model, the robot can understand the impact of its own behaviour on the mental model of the human, and the desirable traits of human-robot teaming, including fluent behaviour, adaptability, trust-building, effective communication, and explainability, can all be achieved. In this work, we introduce a human-robot teaming scenario that considers all five desirable traits, with the main focus on explainability and effective communication. Using general model reconciliation, the human teammate’s expectations of the robot are modelled and explanations are generated. In a scenario involving a Care-O-bot 4 service robot and a human teammate, we assume that the robot detects the human’s current task (by analysing body gestures) and predicts the human’s next action and expectation of the robot. In a task with reciprocal interdependence, the robot coordinates its behaviour and acts accordingly by picking up the relevant tool. Through explanation and communication, the robot further conveys the outcome of its decision to the human teammate and adapts its action by handing over the tool when the human wants it.
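    Model reconciliation can be read as searching for the smallest set of updates to the human's model that makes the robot's plan valid in the updated model; the explanation is exactly that set of updates. The sketch below assumes a flat set-of-facts representation (real planners use richer PDDL-style models, and all names here are illustrative, not the paper's formulation).

    ```python
    # Minimal sketch of model reconciliation over set-of-facts models.
    from itertools import combinations

    def explanation(robot_model: set, human_model: set, plan_valid) -> set:
        """Find a smallest set of model updates (facts the human is missing
        or holds incorrectly) that makes the robot's plan valid in the
        human's updated model. `plan_valid(model)` checks the robot's
        plan against a candidate model."""
        diff = (robot_model - human_model) | (human_model - robot_model)
        for size in range(len(diff) + 1):
            for updates in combinations(sorted(diff), size):
                candidate = set(human_model)
                for fact in updates:
                    if fact in robot_model:
                        candidate.add(fact)      # human was missing this fact
                    else:
                        candidate.discard(fact)  # human held an outdated fact
                if plan_valid(candidate):
                    return set(updates)          # minimal explanation
        return diff  # fall back to the full model difference

    # Toy usage: the human does not know the tool is on the shelf, so the
    # robot explains that single fact rather than its whole model.
    robot_model = {"tool_on_shelf", "gripper_free"}
    human_model = {"gripper_free"}
    is_valid = lambda m: "tool_on_shelf" in m and "gripper_free" in m
    print(explanation(robot_model, human_model, is_valid))
    # -> {'tool_on_shelf'}
    ```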

    Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming

    We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. The behavior of the human and the robot is assumed to be based on some reward function that each tries to optimize. We use a new human trust-behavior model that enables the robot to learn and adapt to the human's preferences in real time during their interaction using Bayesian Inverse Reinforcement Learning. We present three strategies for the robot to interact with a human: a non-learner strategy, in which the robot assumes that the human's reward function is the same as its own; a non-adaptive-learner strategy, which learns the human's reward function for performance estimation but still optimizes the robot's own reward function; and an adaptive-learner strategy, which learns the human's reward function for performance estimation and also optimizes this learned reward function. Results show that adapting to the human's reward function results in the highest trust in the robot.
    Comment: 6 pages, 6 figures, AAAI Fall Symposium on Agent Teaming in Mixed-Motive Situation
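    The Bayesian Inverse Reinforcement Learning step can be pictured as maintaining a posterior over candidate reward weights and updating it from each observed human choice under a Boltzmann-rational choice model. The sketch below makes that concrete for a small discrete candidate set; the variable names and toy numbers are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch of one Bayesian IRL posterior update.
    import numpy as np

    def update_posterior(posterior, candidate_weights, features, chosen, beta=2.0):
        """One Bayesian update of P(w | data) after observing the human
        choose action `chosen` among actions with feature rows `features`.

        posterior:         (K,) prior over K candidate weight vectors
        candidate_weights: (K, D) reward weights, reward = w @ phi(a)
        features:          (A, D) feature vector per available action
        chosen:            index of the action the human actually took
        """
        rewards = candidate_weights @ features.T        # (K, A)
        # Boltzmann likelihood: P(a | w) proportional to exp(beta * reward)
        logits = beta * rewards
        logits -= logits.max(axis=1, keepdims=True)     # numeric stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        posterior = posterior * probs[:, chosen]        # Bayes rule
        return posterior / posterior.sum()

    # Toy usage: two candidate preferences (speed vs. safety); the human
    # repeatedly picks the safe action, so belief shifts toward w = [0, 1].
    weights = np.array([[1.0, 0.0],    # values speed
                        [0.0, 1.0]])   # values safety
    actions = np.array([[1.0, 0.1],    # fast, risky
                        [0.2, 1.0]])   # slow, safe
    belief = np.array([0.5, 0.5])
    for _ in range(3):
        belief = update_posterior(belief, weights, actions, chosen=1)
    print(belief)  # mass concentrates on the safety-preferring weights
    ```

    Under such a posterior, the adaptive-learner strategy would optimize the expected learned reward, while the non-adaptive learner would use the posterior only to estimate the human's performance and keep optimizing the robot's own reward.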