4,029 research outputs found

    Multi-Robot Coordination and Scheduling for Deactivation & Decommissioning

    Get PDF
    Large quantities of high-level radioactive waste were generated during WWII. This waste is being stored in facilities such as double-shell tanks in Washington and the Waste Isolation Pilot Plant in New Mexico. Due to the dangerous nature of radioactive waste, these facilities must undergo periodic inspections so that leaks are detected quickly. In this work, we provide a set of methodologies to aid in the monitoring and inspection of these hazardous facilities. This allows inspection of dangerous regions without a human operator, and of locations a person would not physically be able to enter. First, we describe a sensor-equipped robot that uses a modified A* path-planning algorithm to navigate a complex environment under a tether constraint. This is then augmented with an adaptive informative path planning approach that assimilates sensor data into a Gaussian Process distribution model. The model's predictive outputs are used to adaptively plan the robot's path, quickly mapping and localizing areas from an unknown field of interest. The work was validated in extensive simulation testing and early hardware tests. Next, we focus on how to assign tasks to a heterogeneous set of robots. Task assignment is done in a manner that allows for task-robot dependencies, prioritization of tasks, collision checking, and more realistic travel estimates, among other improvements over the state of the art. Simulation testing of this work shows an increase in the number of tasks completed ahead of a deadline. Finally, we consider the case where robots are not able to complete planned tasks fully autonomously and require operator assistance during parts of their planned trajectory. We present a sampling-based methodology for allocating operator attention across multiple robots, or across different parts of a more sophisticated robot. This allows a few operators to oversee large numbers of robots, making the robotic infrastructure more scalable. This work was tested in simulation for both multi-robot deployments and high degree-of-freedom robots, and was also tested in multi-robot hardware deployments. The work here can allow robots to carry out complex tasks, autonomously or with operator assistance. Altogether, these three components provide a comprehensive approach toward robotic deployment for the deactivation and decommissioning tasks faced by the Department of Energy.
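
    As a loose illustration of the adaptive informative path planning step, the Python sketch below fits a Gaussian Process to the readings gathered so far and moves to the neighbouring grid cell with the highest upper-confidence estimate. The grid size, synthetic hotspot, sensor noise, kernel, and UCB acquisition rule are all assumptions made for this example; the dissertation's tether-aware A* planner and hardware details are not reproduced here.

```python
# Rough sketch of GP-based adaptive informative path planning on a 20x20 grid.
# The hotspot location, noise level, and acquisition rule are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in range(20) for y in range(20)], dtype=float)

def sense(p):
    """Hypothetical radiation reading: a hotspot near (5, 15) plus sensor noise."""
    return np.exp(-np.sum((p - [5.0, 15.0]) ** 2) / 20.0) + 0.01 * rng.standard_normal()

pos = np.array([0.0, 0.0])
X, y = [pos.copy()], [sense(pos)]

for _ in range(40):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), alpha=1e-3).fit(X, y)
    # Candidate moves: 4-connected neighbours that stay on the grid.
    moves = [pos + d for d in ([1, 0], [-1, 0], [0, 1], [0, -1])
             if 0 <= (pos + d)[0] < 20 and 0 <= (pos + d)[1] < 20]
    mu, sigma = gp.predict(np.array(moves), return_std=True)
    pos = moves[int(np.argmax(mu + sigma))]        # UCB: prefer high, uncertain readings
    X.append(pos.copy()); y.append(sense(pos))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), alpha=1e-3).fit(X, y)
print("estimated hotspot near:", grid[int(np.argmax(gp.predict(grid)))])
```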

    Experimental Validation of the Reliability-Aware Multi-UAV Coverage Path Planning Problem

    Get PDF
    Unmanned aerial vehicles (UAVs) have become crucial for various applications, necessitating reliable and time-constrained performance. Multi-UAV solutions offer advantages but require effective coordination. Traditional coverage path planning methods overlook uncertainties and individual UAV failures. To address this, reliability-aware multi-UAV coverage path planning methods optimise task allocation to maximise mission completion probabilities given a failure model. This paper presents an experimental validation of the reliability-aware approach, specifically one using a Greedy Genetic Algorithm (GGA). We evaluate the GGA's performance in real-world environments, comparing observed mission reliability to the computed reliability and comparing it against a traditional multi-UAV method. The experimental validation demonstrates the practical viability and effectiveness of the reliability-aware approach, showing a significant improvement in mission reliability despite the inevitable mismatch between real and assumed failure models.
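
    To make the reliability-aware allocation idea concrete, here is a toy model in which each UAV fails at an exponential rate, the mission succeeds only if every assigned cell fits within each UAV's endurance, and the allocation maximising the resulting product of survival probabilities is found by brute force. The failure rates, coverage times, and exhaustive search are illustrative stand-ins for the paper's failure model and Greedy Genetic Algorithm.

```python
# Toy reliability-aware task allocation under an exponential failure model.
# Numbers and the brute-force search are assumptions; the paper uses a GGA instead.
import math
from itertools import product

cell_minutes = [6.0, 4.0, 5.0, 3.0, 7.0]            # hypothetical coverage time per cell
uavs = [{"rate": 0.004, "endurance": 18.0},          # failures per minute, flight-time budget
        {"rate": 0.010, "endurance": 25.0}]

def mission_reliability(assignment):
    """assignment[k] = UAV index for cell k; returns P(all cells covered)."""
    load = [0.0] * len(uavs)
    for cell, u in enumerate(assignment):
        load[u] += cell_minutes[cell]
    if any(load[u] > uavs[u]["endurance"] for u in range(len(uavs))):
        return 0.0                                    # infeasible: endurance exceeded
    return math.prod(math.exp(-uavs[u]["rate"] * load[u]) for u in range(len(uavs)))

# Exhaustive search is fine at this toy scale (2^5 allocations).
best = max(product(range(len(uavs)), repeat=len(cell_minutes)), key=mission_reliability)
print(best, round(mission_reliability(best), 4))
```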

    Coordinated Multi-Robot Shared Autonomy Based on Scheduling and Demonstrations

    Full text link
    Shared autonomy methods, where a human operator and a robot arm work together, have enabled robots to complete a range of complex and highly variable tasks. Existing work primarily focuses on one human sharing autonomy with a single robot. By contrast, in this paper we present an approach for multi-robot shared autonomy that enables one operator to provide real-time corrections across two coordinated robots completing the same task in parallel. Sharing autonomy with multiple robots presents fundamental challenges. The human can only correct one robot at a time, and without coordination, the human may be left idle for long periods of time. Accordingly, we develop an approach that aligns the robots' learned motions to best utilize the human's expertise. Our key idea is to leverage Learning from Demonstration (LfD) and time warping to schedule the motions of the robots based on when they may require assistance. Our method uses variability in operator demonstrations to identify the types of corrections an operator might apply during shared autonomy, leverages flexibility in how quickly the task was performed in demonstrations to aid in scheduling, and iteratively estimates the likelihood of when corrections may be needed to ensure that only one robot at a time is completing an action requiring assistance. Through a preliminary simulated study, we show that our method can decrease the overall time spent sanding by iteratively estimating the times when each robot could need assistance and generating an optimized schedule that allows the operator to provide corrections to each robot during these times.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
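
    The scheduling intuition can be shown with a much simpler stand-in: if each robot's trajectory contains intervals where corrections are likely, delaying one robot until those intervals no longer overlap guarantees the operator is never needed in two places at once. The fixed-delay search and hard-coded intervals below are illustrative only; the paper derives both the intervals and the schedule from demonstrations via LfD and time warping.

```python
# Toy scheduler: delay robot 2 just enough that intervals flagged as "likely to need
# operator corrections" never overlap across the two robots. Interval values are made up.
def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def min_delay(assist_1, assist_2, horizon=60.0, step=0.5):
    """Smallest start delay for robot 2 so no assistance intervals overlap."""
    d = 0.0
    while d <= horizon:
        shifted = [(s + d, e + d) for s, e in assist_2]
        if not any(overlaps(a, b) for a in assist_1 for b in shifted):
            return d
        d += step
    return None

# Assistance windows (seconds) estimated from demonstrations -- illustrative numbers.
robot_1 = [(5.0, 12.0), (30.0, 38.0)]
robot_2 = [(6.0, 14.0), (32.0, 40.0)]
print("delay robot 2 by", min_delay(robot_1, robot_2), "s")
```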

    Teamwork in controlling multiple robots

    Get PDF
    Simultaneously controlling increasing numbers of robots requires multiple operators working together as a team. Helping operators allocate attention among different robots and determining how to structure the human-robot team to promote performance and reduce workload are critical questions in these settings. To this end, we investigated the effect of team structure and search guidance on operators' performance, subjective workload, work processes, and communication. To investigate team structure in an urban search and rescue setting, we compared a pooled condition, in which team members shared control of 24 robots, with a sector condition, in which each team member controlled half of the robots. For search guidance, a notification was given when the operator spent too much time on one robot and either suggested or forced the operator to switch to another robot. A total of 48 participants completed the experiment, with two persons forming one team. The results show that automated search guidance neither increased nor decreased overall performance. However, suggested search guidance decreased average task completion time in sector teams. Search guidance also influenced operators' teleoperation behaviors. For team structure, pooled teams experienced lower subjective workload than sector teams. Pooled teams communicated more than sector teams, but sector teams teleoperated more than pooled teams. This work was supported by the United States Office of Naval Research and the United States Air Force Office of Scientific Research.
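
    A minimal sketch of the search-guidance rule described above, assuming a dwell-time threshold and a "most-neglected robot" suggestion; the study's actual thresholds, interface, and forced-switch behaviour are not specified here.

```python
# Hypothetical search-guidance timer: after the operator dwells too long on one robot,
# suggest (or force) switching to the robot that has been neglected the longest.
class SearchGuidance:
    def __init__(self, robots, dwell_limit=20.0, mode="suggest"):
        self.neglect = {r: 0.0 for r in robots}   # seconds since each robot was attended
        self.dwell_limit = dwell_limit
        self.mode = mode                          # "suggest" or "force"
        self.active, self.dwell = None, 0.0

    def tick(self, active_robot, dt):
        """Call every dt seconds with the robot the operator is currently viewing."""
        if active_robot != self.active:
            self.active, self.dwell = active_robot, 0.0
        self.dwell += dt
        for r in self.neglect:
            self.neglect[r] = 0.0 if r == active_robot else self.neglect[r] + dt
        if self.dwell > self.dwell_limit:
            target = max(self.neglect, key=self.neglect.get)
            verb = "switch to" if self.mode == "force" else "consider switching to"
            return f"{verb} {target}"
        return None

guide = SearchGuidance(robots=[f"robot_{i}" for i in range(24)], mode="suggest")
for _ in range(30):                 # operator stays on robot_3 for 30 simulated seconds
    notice = guide.tick("robot_3", dt=1.0)
print(notice)                       # e.g. "consider switching to robot_0"
```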

    Asynchronous control with ATR for large robot teams

    Get PDF
    In this paper, we discuss and investigate the advantages of an asynchronous display, called an "image queue", tested for an urban search and rescue foraging task. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment by selecting a small number of images that together cover large portions of the area searched. This asynchronous approach allows operators to search through a large amount of data gathered by autonomous robot teams, and allows comprehensive and scalable displays that provide a network-centric perspective for unmanned ground vehicles (UGVs). In the reported experiment, automatic target recognition (ATR) was used to augment utilities based on visual coverage in selecting imagery for presentation to the operator. In the cued condition, a box was drawn around the region in which a possible target was detected. In the no-cue condition, no box was drawn, although the target detection probability continued to play a role in the selection of imagery. We found that operators using the image queue displays missed fewer victims and relied on teleoperation less often than those using streaming video. Image queue users in the no-cue condition did better at avoiding false alarms and reported lower workload than those in the cued condition. Copyright 2011 by the Human Factors and Ergonomics Society, Inc. All rights reserved.
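
    The selection logic behind the image queue can be approximated by a greedy utility rule: repeatedly pick the frame that adds the most uncovered area, weighted by its ATR detection score (which, per the study, informs selection in both conditions, while the cue box is purely a display difference). The frame contents, weight, and greedy rule below are made up for illustration and are not the system's actual utilities.

```python
# Toy "image queue" frame selection: greedily pick frames that together add the most
# uncovered area, weighted by each frame's ATR detection score.
def select_queue(frames, k=3, atr_weight=5.0):
    """frames: dicts with 'id', 'cells' (set of map cells seen), 'atr' (0..1 score)."""
    chosen, covered = [], set()
    for _ in range(k):
        def utility(f):
            return len(f["cells"] - covered) + atr_weight * f["atr"]
        best = max((f for f in frames if f["id"] not in chosen), key=utility)
        chosen.append(best["id"])
        covered |= best["cells"]
    return chosen

frames = [
    {"id": 0, "cells": {1, 2, 3, 4}, "atr": 0.1},
    {"id": 1, "cells": {3, 4, 5},    "atr": 0.9},   # frame with a likely victim
    {"id": 2, "cells": {6, 7},       "atr": 0.0},
    {"id": 3, "cells": {1, 2},       "atr": 0.2},
]
print(select_queue(frames, k=2))    # ATR feeds selection; the cue box is display-only
```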

    Autonomous surveillance robots: a decision-making framework for networked multiagent systems

    Get PDF
    This article proposes an architecture for an intelligent surveillance system, where the aim is to mitigate the burden on humans in conventional surveillance systems by incorporating intelligent interfaces, computer vision, and autonomous mobile robots. Central to the intelligent surveillance system is the application of research into planning and decision making in this novel context. In this article, we describe the robot surveillance decision problem and explain how the integration of components in our system supports fully automated decision making. Several concrete scenarios deployed in real surveillance environments exemplify both the flexibility of our system to experiment with different representations and algorithms and the portability of our system into a variety of problem contexts. Moreover, these scenarios demonstrate how planning enables robots to effectively balance surveillance objectives, autonomously performing the job of human patrols and responders. This work was partially supported by the Portuguese Fundação para a Ciência e a Tecnologia (FCT), through strategic funding for the Institute for Systems and Robotics/Laboratory for Robotics and Engineering Systems (ISR/LARSyS) under grant PEst-OE/EEI/LA0021/2013 and through the Carnegie Mellon Portugal Program under grant CMU-PT/SIA/0023/2009. This study also received national funds through the FCT, with reference UID/CEC/S0021/2013, and through grant FCT UID/EEA/50009/2013 of ISR/LARSyS.

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Get PDF
    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.