
    Wait, I'm tagged?! Toward AR in Project Aquaticus

    Human-robot teaming to perform complex tasks in a large environment is limited by the human's ability to make informed decisions. We aim to use augmented reality to convey critical information to the human, reducing cognitive workload and increasing situational awareness. By bridging previous Project Aquaticus work to virtual reality in Unity 3D, we are creating a testbed to easily and repeatedly measure the effectiveness of augmented reality information display solutions to support competitive gameplay. We expect human-robot teaming performance to improve due to the increased situational awareness and reduced stress that the augmented reality data display provides.

    Evaluating the Impact of Personalized Value Alignment in Human-Robot Interaction: Insights into Trust and Team Performance Outcomes

    This paper examines the effect of real-time, personalized alignment of a robot's reward function to the human's values on trust and team performance. We present and compare three distinct robot interaction strategies: a non-learner strategy, where the robot presumes the human's reward function mirrors its own; a non-adaptive-learner strategy, in which the robot learns the human's reward function for trust estimation and human behavior modeling but still optimizes its own reward function; and an adaptive-learner strategy, in which the robot learns the human's reward function and adopts it as its own. Two human-subject experiments with a total of 54 participants were conducted. In both experiments, the human-robot team searches for potential threats in a town, going through search sites sequentially. We model the interaction between the human and the robot as a trust-aware Markov Decision Process (trust-aware MDP) and use Bayesian Inverse Reinforcement Learning (IRL) to estimate the human's reward weights as they interact with the robot. In Experiment 1, we start our learning algorithm with an informed prior of the human's values/goals; in Experiment 2, we start it with an uninformed prior. Results indicate that when starting with a good informed prior, personalized value alignment does not seem to benefit trust or team performance. On the other hand, when an informed prior is unavailable, alignment to the human's values leads to high trust and higher perceived performance while maintaining the same objective team performance.
    Comment: 10 pages, 9 figures, to be published in the ACM/IEEE International Conference on Human-Robot Interaction.
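    As a rough illustration of the estimation step this abstract describes (not the authors' implementation), the sketch below shows Bayesian IRL over linear reward weights via Metropolis-Hastings, assuming a Boltzmann-rational choice model; the feature vectors, rationality parameter BETA, and standard-normal prior are assumptions made for the example.

```python
# Minimal Bayesian IRL sketch: infer reward weights w from observed human
# choices, assuming (not from the paper) reward = w . phi(option) and a
# softmax (Boltzmann) choice model. All parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
BETA = 2.0  # assumed rationality: higher = more deterministic human choices

def log_likelihood(w, choices):
    """Log P(observed choices | w) under the softmax choice model."""
    total = 0.0
    for feats, chosen in choices:          # feats: (n_options, n_features)
        utils = BETA * feats @ w
        total += utils[chosen] - np.log(np.exp(utils).sum())
    return total

def posterior_samples(choices, n_features, n_samples=2000, step=0.1):
    """Metropolis-Hastings over reward weights with a standard-normal prior."""
    w = np.zeros(n_features)
    log_p = log_likelihood(w, choices) - 0.5 * w @ w
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.standard_normal(n_features)
        log_p_new = log_likelihood(w_new, choices) - 0.5 * w_new @ w_new
        if np.log(rng.random()) < log_p_new - log_p:   # accept/reject
            w, log_p = w_new, log_p_new
        samples.append(w.copy())
    return np.array(samples)

# Toy usage: at each site the human picks option 0 of two, with two
# hypothetical features (e.g., threat risk, time cost).
choices = [(np.array([[1.0, 0.2], [0.1, 1.0]]), 0) for _ in range(20)]
w_hat = posterior_samples(choices, n_features=2).mean(axis=0)
print("estimated reward weights:", w_hat)
```

    In the adaptive-learner condition, the posterior mean of these weights would be the quantity the robot swaps in as its own reward function; in the non-adaptive-learner condition it would feed only trust estimation and behavior prediction.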

    Update NPS / January 2021

    USMC Commandant, Senior Leaders Commend Fall Quarter Graduates; USMC Assistant Commandant Explores Emerging Concepts at NPS; Developing the Defensive Playbook Against Large-Scale Drone Swarms; CRUSER Funds FY21 Robotics and Autonomous Systems Research

    NPS in the News Weekly Media Report - Dec. 1-7, 2020


    Digital twin-enabled human-robot collaborative teaming towards sustainable and healthy built environments

    Development of sustainable and healthy built environments (SHBE) is highly advocated to achieve collective societal good. Part of the pathway to SHBE is the engagement of robots to manage ever-more-complex facilities for tasks such as inspection and disinfection. However, despite the increasing advancement of robot intelligence, it is still “mission impossible” for robots to independently undertake such open-ended problems as facility management, calling for a need to “team up” robots with humans. Leveraging the digital twin's ability to capture real-time data and inform decision-making via dynamic simulation, this study aims to develop a human-robot teaming framework for facility management to achieve sustainability and healthiness in built environments. A digital twin-enabled prototype system is developed based on the framework. Case studies showed that the framework can safely and efficiently incorporate robotics into facility management tasks (e.g., patrolling, inspection, and cleaning) by allowing humans to plan, oversee, manage, and cooperate with the robot via the digital twin's bi-directional mechanism. The study lays out a high-level framework, under which purposeful efforts can be made to unlock the digital twin's full potential for human-robot collaboration in facility management towards SHBE.
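    To make the bi-directional mechanism concrete, here is a minimal sketch (not the paper's prototype) of the flow it describes: the human plans tasks through the twin, the twin dispatches them to the robot, and the robot streams state back for human oversight. All class and method names are hypothetical.

```python
# Hypothetical digital-twin mediation layer: human -> twin -> robot for
# commands, robot -> twin -> human for live state. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class RobotProxy:
    """Stands in for the physical robot's control/telemetry interface."""
    position: str = "dock"
    status: str = "idle"

    def execute(self, task: str, location: str) -> None:
        self.position, self.status = location, f"doing:{task}"

@dataclass
class DigitalTwin:
    robot: RobotProxy
    task_queue: list = field(default_factory=list)
    live_state: dict = field(default_factory=dict)

    def plan(self, task: str, location: str) -> None:
        # Human -> twin: queue (and, in a real system, simulate/validate)
        # the task before committing it to the physical robot.
        self.task_queue.append((task, location))

    def dispatch(self) -> None:
        # Twin -> robot: forward the next validated task.
        task, location = self.task_queue.pop(0)
        self.robot.execute(task, location)

    def sync(self) -> None:
        # Robot -> twin: mirror real-time state for human oversight.
        self.live_state = {"position": self.robot.position,
                           "status": self.robot.status}

twin = DigitalTwin(RobotProxy())
twin.plan("inspection", "floor-3-corridor")   # human plans via the twin
twin.dispatch()                               # twin commands the robot
twin.sync()                                   # twin mirrors robot state
print(twin.live_state)                        # human monitors through the twin
```

    The key design choice the abstract implies is that the human never commands the robot directly: the twin sits in the middle so plans can be simulated against live facility data before execution.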

    Using Trust in Automation to Enhance Driver-(Semi)Autonomous Vehicle Interaction and Improve Team Performance

    Trust in robots has been gathering attention from multiple directions, as it has special relevance in theoretical descriptions of human-robot interactions. It is essential for reaching high acceptance and usage rates of robotic technologies in society, as well as for enabling effective human-robot teaming. Researchers have been trying to model the development of trust in robots to improve the overall “rapport” between humans and robots. Unfortunately, miscalibration of trust in automation is a common issue that jeopardizes the effectiveness of automation use. It happens when a user's trust levels are not appropriate to the capabilities of the automation being used. Users can under-trust the automation, failing to use functionalities that the machine can perform correctly because of a “lack of trust”; or over-trust it, using the machine, due to an “excess of trust”, in situations where its capabilities are not adequate. The main objective of this work is to examine drivers' trust development in the automated driving system (ADS). We aim to model how risk factors (e.g., false alarms and misses from the ADS) and the short-term interactions associated with these risk factors influence the dynamics of drivers' trust in the ADS. The driving context facilitates the instrumentation to measure trusting behaviors, such as drivers' eye movements and usage time of the automated features. Our findings indicate that a reliable characterization of drivers' trusting behaviors, and a consequent estimation of trust levels, is possible. We expect that these techniques will permit the design of ADSs able to adapt their behaviors to adjust drivers' trust levels. This capability could avoid under- and over-trusting, which could harm drivers' safety or performance.
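    One common way to formalize the trust dynamics this abstract alludes to is a discrete-time update in which trust rises after correct automation behavior and drops after false alarms or misses. The sketch below is an assumed linear form for illustration, not the authors' fitted model; the gain and event effects are placeholders.

```python
# Illustrative linear trust-dynamics model (assumed, not the paper's model):
# trust_{k+1} = trust_k + gain * effect(event), clipped to [0, 1].
# Misses are penalized more than false alarms here purely as an example.

EVENT_EFFECT = {"correct": +1.0, "false_alarm": -1.5, "miss": -2.5}  # assumed

def update_trust(trust: float, event: str, gain: float = 0.1) -> float:
    """One step of the linear trust update, clipped to the unit interval."""
    trust += gain * EVENT_EFFECT[event]
    return min(1.0, max(0.0, trust))

trust = 0.5  # neutral initial trust (assumed)
for event in ["correct", "correct", "false_alarm", "miss", "correct"]:
    trust = update_trust(trust, event)
    print(f"after {event:>11}: trust = {trust:.2f}")
```

    In the study's setting, the latent trust state would be estimated from observed trusting behaviors (eye movements, automation usage time) rather than set directly, and an adaptive ADS could use the estimate to steer trust back toward calibration.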

    Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control

    Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is possible and necessary in safety- and mission-critical domains like aerospace. Safe, trusted, and ethical use of AI are often treated interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, or have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of human-AI teaming in aerospace system control, where humans may be in, on, or out of the loop of decision-making.