
    Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review

    The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions for overcoming these challenges.
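    To make the classical-versus-learning distinction above concrete, the following is a minimal Python sketch (not drawn from the paper; all class and method names are hypothetical) of a hybrid pipeline in which a learned traversability model replaces only the costmap stage of an otherwise classical perceive-map-plan loop:

        # Hypothetical hybrid navigation pipeline: classical SLAM and planning
        # wrapped around a learned traversability stage. Illustrative only.
        class HybridNavigationPipeline:
            def __init__(self, slam, traversability_net, planner):
                self.slam = slam                              # classical localisation/mapping
                self.traversability_net = traversability_net  # learned stage
                self.planner = planner                        # classical path planner

            def step(self, rgb_image, lidar_scan, goal):
                pose, height_map = self.slam.update(lidar_scan)
                # Learned component: per-cell traversability cost in [0, 1]
                cost_map = self.traversability_net.predict(rgb_image, height_map)
                # Classical component: plan a path over the learned cost map
                return self.planner.plan(pose, goal, cost_map)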

    Traversability Estimation from RGB Images and Height Map

    Traversability estimation is an important task for autonomous mobile robots, which should be able to assess the traversability of their surroundings in order to navigate safely. In this thesis, a method for merging depth measurements, in the form of height maps, with RGB images is proposed. Our approach builds on state-of-the-art methods for analysing both modalities, namely convolutional neural networks, trained with self-supervised learning on real datasets. The datasets cover several environments, such as mines, hallways, staircases, and other common outdoor terrains (grass, road, pavement). Our network provides correct estimates on easier terrain, such as hallways or flat ground, and acceptable results on challenging terrain, such as staircases or soft obstacles (e.g., high grass).
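    As a rough illustration of the RGB-plus-heightmap fusion described above, here is a minimal PyTorch sketch of a two-branch convolutional network; the abstract does not specify the architecture, so the layer sizes and the late-fusion design are assumptions for illustration:

        import torch
        import torch.nn as nn

        class RGBHeightFusionNet(nn.Module):
            """Per-pixel traversability from an RGB image and an aligned
            single-channel height map (illustrative architecture)."""
            def __init__(self):
                super().__init__()
                self.rgb_branch = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                self.height_branch = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                # Concatenate the two feature maps and predict a [0, 1] score.
                self.head = nn.Sequential(
                    nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 1), nn.Sigmoid())

            def forward(self, rgb, height):
                features = torch.cat(
                    [self.rgb_branch(rgb), self.height_branch(height)], dim=1)
                return self.head(features)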

    Learning to See Physical Properties with Active Sensing Motor Policies

    Knowledge of terrain's physical properties inferred from color images can aid in making efficient robotic locomotion plans. However, unlike image classification, it is unintuitive for humans to label image patches with physical properties. Without labeled data, building a vision system that takes as input the observed terrain and predicts physical properties remains challenging. We present a method that overcomes this challenge by self-supervised labeling of images captured by robots during real-world traversal with physical property estimators trained in simulation. To ensure accurate labeling, we introduce Active Sensing Motor Policies (ASMP), which are trained to explore locomotion behaviors that increase the accuracy of estimating physical parameters. For instance, the quadruped robot learns to swipe its foot against the ground to estimate the friction coefficient accurately. We show that the visual system trained with a small amount of real-world traversal data accurately predicts physical parameters. The trained system is robust and works even with overhead images captured by a drone despite being trained on data collected by cameras attached to a quadruped robot walking on the ground. Comment: In CoRL 2023. Website: https://gmargo11.github.io/active-sensing-loco
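    The self-supervised labeling loop described above can be sketched as follows; this is a schematic reconstruction, not the authors' code, and build_selfsupervised_dataset, traversal_log, and property_estimator are hypothetical names:

        def build_selfsupervised_dataset(traversal_log, property_estimator):
            """traversal_log yields (image_patch, proprioceptive_window) pairs
            recorded during real-world traversal; property_estimator, trained
            in simulation, maps proprioception to a physical parameter such
            as the friction coefficient."""
            dataset = []
            for image_patch, proprio in traversal_log:
                label = property_estimator(proprio)  # e.g. estimated friction
                dataset.append((image_patch, label))
            return dataset

        # A vision model is then trained by ordinary supervised regression on
        # the (image_patch, label) pairs, with no human labeling involved.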

    GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments

    We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images. Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation. We propose a bottleneck transformer-based deep neural network architecture that uses a novel group-wise attention mechanism to distinguish between navigability levels of different terrains. Our group-wise attention heads enable the network to explicitly focus on the different groups and improve the accuracy. In addition, we propose a dynamic weighted cross entropy loss function to handle the long-tailed nature of the dataset. We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation. We compare our approach with prior work on these datasets and achieve an improvement over the state-of-the-art mIoU by 6.74-39.1% on RUGD and 3.82-10.64% on RELLIS-3D.
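    The abstract does not give the exact weighting schedule of the dynamic weighted cross entropy loss, but a common way to realise the idea is to reweight classes by their inverse frequency recomputed per batch; the following PyTorch sketch illustrates that assumption:

        import torch
        import torch.nn.functional as F

        def dynamic_weighted_ce(logits, targets, num_classes, eps=1.0):
            """logits: (N, C, H, W) segmentation scores; targets: (N, H, W)
            integer labels. Rare classes in the batch get larger weights."""
            counts = torch.bincount(targets.flatten(),
                                    minlength=num_classes).float()
            weights = counts.sum() / (counts + eps)          # inverse frequency
            weights = weights / weights.sum() * num_classes  # mean weight ~= 1
            return F.cross_entropy(logits, targets, weight=weights)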

    ConservationBots: Autonomous Aerial Robot for Fast Robust Wildlife Tracking in Complex Terrains

    Today, the most widespread, widely applicable technology for gathering data relies on experienced scientists armed with handheld radio telemetry equipment to locate low-power radio transmitters attached to wildlife from the ground. Although aerial robots can transform labor-intensive conservation tasks, the realization of autonomous systems for tackling task complexities under real-world conditions remains a challenge. We developed ConservationBots, small aerial robots for tracking multiple, dynamic, radio-tagged wildlife. The aerial robot achieves robust localization performance and fast task completion times (significant for energy-limited aerial systems) while avoiding close encounters with potentially counter-productive disturbances to wildlife. Our approach overcomes the technical and practical problems posed by combining a lightweight sensor with new concepts: i) planning to determine both trajectory and measurement actions guided by an information-theoretic objective, which allows the robot to strategically select near-instantaneous range-only measurements to achieve faster localization, and time-consuming sensor rotation actions to acquire bearing measurements and achieve robust tracking performance; ii) a bearing detector more robust to noise; and iii) a tracking algorithm formulation robust to missed and false detections experienced in real-world conditions. We conducted extensive studies: simulations built upon complex signal propagation over high-resolution elevation data on diverse geographical terrains; field testing; studies with wombats (Lasiorhinus latifrons; nocturnal, vulnerable species dwelling in underground warrens); and tracking comparisons with a highly experienced biologist to validate the effectiveness of our aerial robot and demonstrate its significant advantages over the manual method. Comment: 33 pages, 21 figures.
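    The information-theoretic planning idea in point i) can be illustrated with a simple sketch over a particle-filter belief: each candidate action (a trajectory plus a choice of range-only or bearing measurement) is scored by its expected entropy reduction per unit time. This is an illustrative reconstruction, not the authors' implementation; simulate_measurement, update, and the duration attribute are placeholders:

        import numpy as np

        def entropy(weights):
            w = weights / weights.sum()
            return -np.sum(w * np.log(w + 1e-12))

        def select_action(particles, weights, actions,
                          simulate_measurement, update):
            h_now = entropy(weights)
            best_action, best_gain = None, -np.inf
            for action in actions:
                # Predicted observation under this action (model-specific).
                z = simulate_measurement(particles, weights, action)
                posterior = update(particles, weights, action, z)
                # Expected information gain per unit time: fast range-only
                # measurements compete with slow but informative bearings.
                gain = (h_now - entropy(posterior)) / action.duration
                if gain > best_gain:
                    best_action, best_gain = action, gain
            return best_action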

    Adaptive and intelligent navigation of autonomous planetary rovers - A survey

    The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, there has still been a heavy reliance on human tele-operators to drive these systems. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of the Martian terrains. The development of a truly autonomous rover system capable of navigating effectively in such environments requires intelligent and adaptive methods fitting for a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing some ongoing challenges and promising future research directions from the perspectives of the authors.

    Autonomous navigation strategies for UGVs/UAVs


    Vision based obstacle detection for all-terrain robots

    Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Electrical and Computer Engineering. This dissertation presents a solution to the problem of obstacle detection in all-terrain environments, with particular interest for mobile robots equipped with a stereo vision sensor. Despite the advantages of vision over other kinds of sensors, such as low cost, light weight and a reduced energy footprint, its usage still presents a series of challenges. These include the difficulty in dealing with the considerable amount of generated data, and the robustness required to manage high levels of noise. Such problems can be diminished by making hard assumptions, like considering that the terrain in front of the robot is planar. Although this saves considerable computation, such simplifications are not necessarily acceptable in more complex environments, where the terrain may be considerably uneven. This dissertation proposes to extend a well-known obstacle detector that relaxes the aforementioned planar-terrain assumption, thus rendering it more adequate for unstructured environments. The proposed extensions involve: (1) the introduction of a visual saliency mechanism to focus the detection on regions most likely to contain obstacles; (2) voting filters to diminish sensitivity to noise; and (3) the fusion of the detector with a complementary method to create a hybrid, and thus more robust, solution. Experimental results obtained with demanding all-terrain images show that, with the proposed extensions, an improvement in robustness and computational efficiency over the original algorithm is observed.
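    As a rough sketch of extensions (1) and (2) above, the following uses OpenCV's spectral-residual saliency (available in opencv-contrib, here a stand-in for whichever saliency mechanism the dissertation actually uses) to gate a placeholder base detector, followed by a simple neighbourhood voting filter:

        import numpy as np
        import cv2

        def detect_obstacles(image, base_detector, votes_needed=3):
            # 1) Saliency: restrict analysis to regions likely to hold obstacles.
            saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
            ok, saliency_map = saliency.computeSaliency(image)
            mask = (saliency_map > saliency_map.mean()).astype(np.uint8)

            # 2) Base detector (placeholder) returns a binary obstacle map,
            #    which is kept only inside the salient mask.
            obstacle_map = base_detector(image) * mask

            # 3) Voting filter: keep a pixel only if enough of its 3x3
            #    neighbourhood also voted "obstacle", suppressing noise.
            votes = cv2.boxFilter(obstacle_map.astype(np.float32), -1, (3, 3),
                                  normalize=False)
            return (votes >= votes_needed).astype(np.uint8)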