50 research outputs found

    SA-reCBS: Multi-robot task assignment with integrated reactive path generation

    In this paper, we study the multi-robot task assignment and path-finding problem (MRTAPF), where a number of agents are required to visit all given goal locations while avoiding collisions with each other. We propose SA-reCBS, a novel two-layer algorithm that cascades simulated annealing with conflict-based search to solve this problem. Compared to other approaches to MRTAPF, the advantage of SA-reCBS is that it does not require goals to be pre-bundled into as many groups as there are robots; instead, a subset of the agents can be dispatched to visit all goals along collision-free paths. We test the algorithm on various simulation instances and compare it with state-of-the-art algorithms. The results show that SA-reCBS performs better, with a higher success rate, lower computation time, and better objective values.
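The two-layer idea — an annealing outer loop over goal-to-agent assignments, with a path cost supplied by an inner search — can be sketched as follows. This is an illustrative sketch only: the inner conflict-based search is replaced here by a cheap Manhattan-tour cost, and the names (`anneal_assignment`, `route_cost`) are hypothetical, not from the paper.

```python
import math
import random

def route_cost(start, goals):
    # Manhattan tour visiting the goals in list order -- a cheap
    # stand-in for the collision-free path cost CBS would return.
    cost, pos = 0, start
    for g in goals:
        cost += abs(pos[0] - g[0]) + abs(pos[1] - g[1])
        pos = g
    return cost

def total_cost(assign, starts, goals):
    # Sum of per-agent route costs under a goal-to-agent assignment.
    routes = [[] for _ in starts]
    for g, i in zip(goals, assign):
        routes[i].append(g)
    return sum(route_cost(s, r) for s, r in zip(starts, routes))

def anneal_assignment(starts, goals, iters=2000, t0=5.0, seed=0):
    # Outer layer: simulated annealing over the goal-to-agent map.
    rng = random.Random(seed)
    cur = [rng.randrange(len(starts)) for _ in goals]
    cur_cost = total_cost(cur, starts, goals)
    best, best_cost = cur[:], cur_cost
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9  # linear cooling schedule
        cand = cur[:]
        cand[rng.randrange(len(goals))] = rng.randrange(len(starts))
        c = total_cost(cand, starts, goals)
        if c < best_cost:
            best, best_cost = cand[:], c
        # Metropolis rule: always take improvements, sometimes worse moves.
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
    return best, best_cost
```

Note that the sketch optimizes only the assignment, not the visiting order within each agent's route; the paper's reactive path generation is far more involved.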

    Event Camera and LiDAR based Human Tracking for Adverse Lighting Conditions in Subterranean Environments

    In this article, we propose a novel LiDAR and event camera fusion modality for subterranean (SubT) environments for fast and precise object and human detection in a wide variety of adverse lighting conditions, such as low or no light, high-contrast zones, and in the presence of blinding light sources. In the proposed approach, information from the event camera and LiDAR are fused to localize a human or an object-of-interest in a robot's local frame. The local detection is then transformed into the inertial frame and used to set references for a Nonlinear Model Predictive Controller (NMPC) for reactive tracking of humans or objects in SubT environments. The proposed novel fusion uses intensity filtering and K-means clustering on the LiDAR point cloud, and frequency filtering and connectivity clustering on the events induced in an event camera by the returning LiDAR beams. The centroids of the clusters in the event camera and LiDAR streams are then paired to localize reflective markers present on safety vests and signs in SubT environments. The efficacy of the proposed scheme has been experimentally validated in a real SubT environment (a mine) with a Pioneer 3AT mobile robot. The experimental results show real-time performance for human detection, and the NMPC-based controller allows for reactive tracking of a human or object of interest, even in complete darkness.

    Comment: Accepted at IFAC World Congress 202
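The LiDAR half of the fusion — intensity filtering followed by clustering to obtain marker centroids — can be sketched roughly as below. A minimal sketch, assuming 2-D points and a deterministic farthest-point initialization for k-means; the names `filter_by_intensity` and `lidar_marker_centroids` are illustrative, not from the paper.

```python
def filter_by_intensity(returns, threshold):
    # Keep only high-intensity returns: retro-reflective markers
    # (vests, signs) reflect far more strongly than rock walls.
    return [(x, y) for x, y, intensity in returns if intensity >= threshold]

def lidar_marker_centroids(points, k, iters=20):
    # Plain k-means with farthest-point initialization (deterministic).
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                          + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids
```

In the paper these LiDAR-side centroids are then paired with centroids from the event-camera stream; that pairing step is not shown here.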

    NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge

    This paper presents and discusses the algorithms, hardware, and software architecture developed by Team CoSTAR (Collaborative SubTerranean Autonomous Robots) for the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims to enable resilient and modular autonomy by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, along with the specific results and lessons learned from fielding this solution on the challenging courses of the DARPA Subterranean Challenge competition.

    The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the Defense Advanced Research Projects Agency (DARPA).
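The "belief space" mentioned above is simply the space of probability distributions over world states, and a single discrete Bayesian measurement update over such a distribution takes only a few lines. This is a generic illustration of the concept, not code from NeBula.

```python
def bayes_update(belief, likelihood):
    # One discrete measurement update: multiply the prior belief by the
    # measurement likelihood of each state, then renormalize so the
    # posterior is again a probability distribution.
    posterior = [b * l for b, l in zip(belief, likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Belief-space planning then means choosing actions by their effect on such distributions (e.g., preferring actions that reduce uncertainty), rather than on a single point estimate of the state.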

    Perception Aware Guidance Framework for Micro Aerial Vehicles

    Micro Aerial Vehicles (MAVs) are platforms that have received significant research attention within the robotics community, since they combine a simple mechanical design with versatile movement. These platforms are suitable for executing complex tasks in situations that are impossible or dangerous for a human operator, as well as for reducing operating costs and increasing the overall efficiency of an operation. Until now they have been integrated mainly in the photography and filming industry, but more and more effort is directed towards remote reconnaissance and inspection applications. Moreover, instead of carrying only sensors, these platforms can be endowed with lightweight dexterous robotic arms that expand their operational workspace and allow active interaction with the environment, capabilities that can be vital for applications such as payload transportation and infrastructure maintenance. The main objective of this thesis is to establish the concept of the resource-constrained aerial robotic scout and to present perception-aware frameworks for guidance of the platform and the aerial manipulator, as part of the enabling technology towards fully autonomous capabilities. The majority of the work targets the application scenario of MAV deployments in subterranean environments for search and rescue missions, infrastructure inspection, and other tasks. A key challenge when deploying aerial platforms in dark and cluttered underground tunnels is the lack of illumination, which degrades the performance of visual sensors. Visual feedback from the robot is essential for inspection and reconnaissance tasks; this thesis therefore evaluates methods for low-light image enhancement in real environments and on datasets collected from flying vehicles, and proposes a preprocessing methodology for the visual data that enhances 3D mapping of the area.
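As a point of reference for the low-light image enhancement discussed above, the simplest baseline is power-law (gamma) correction through a lookup table; the methods evaluated in the thesis are more elaborate, and this sketch (with the hypothetical name `enhance_low_light`) is illustrative only.

```python
def enhance_low_light(image, gamma=0.4):
    # Power-law (gamma) correction via an 8-bit lookup table; with
    # gamma < 1, dark pixels are brightened much more than bright ones.
    # `image` is a nested list of 8-bit grey values.
    lut = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [[lut[v] for v in row] for row in image]
```

Precomputing the 256-entry lookup table keeps the per-pixel work to a single array index, which matters on resource-constrained onboard computers.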
    Another required capability is navigation along the tunnel. This thesis establishes a robocentric Nonlinear Model Predictive Control (NMPC) framework for fast, fully autonomous navigation of quadrotors in featureless dark tunnel environments. Additionally, this work leverages the processing of a single camera to generate direction commands along the tunnel axis while regulating the platform's altitude. Finally, combining the agility of MAVs with the dexterity of robotic arms leads to a new era of Aerial Robotic Workers (ARWs) with advanced capabilities suitable for complex task execution, a technology with the potential to transform infrastructure maintenance tasks. The development of efficient and reliable perception modules that guide the aerial platform to the desired target areas and support the respective manipulation tasks is an essential step towards this goal. Thus, this work also establishes a visual guidance system to assist the aerial platform before any physical interaction is applied. The proposed system is structured around a robust object tracker and features stereo vision for target position extraction, towards an autonomous aerial robotic worker.
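The receding-horizon principle behind NMPC-style tunnel navigation — predict short control sequences, score each rollout, apply only the first control, repeat — can be illustrated with a sampling-based stand-in. A true NMPC solves a continuous optimization over the dynamics; here a unicycle model and a small discrete control set are assumed, and all names are hypothetical.

```python
import math
from itertools import product

def simulate(state, controls, dt):
    # Roll out simple unicycle dynamics; state = (x, y, heading).
    x, y, th = state
    traj = []
    for v, w in controls:
        th += w * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        traj.append((x, y, th))
    return traj

def predictive_control(state, horizon=3, dt=0.2):
    # Enumerate short control sequences (fixed forward speed, three yaw
    # rates), score each rollout for staying on the tunnel axis (y = 0)
    # while progressing along x, and return only the first control of
    # the best sequence -- the receding-horizon idea.
    options = [(1.0, w) for w in (-0.5, 0.0, 0.5)]
    best_u, best_cost = None, float("inf")
    for seq in product(options, repeat=horizon):
        cost = 0.0
        for x, y, th in simulate(state, seq, dt):
            cost += y ** 2 + 0.1 * th ** 2 - 0.01 * x
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u
```

At each control step the whole procedure is rerun from the newly measured state, which is what makes the scheme reactive to disturbances.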