    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving: it assists Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge, there is no dedicated systematic review in this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, which offers detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results from various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website https://github.com/ShiPC-AI/LPR-Survey. Comment: 26 pages, 13 figures, 5 tables.
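
    To make the problem formulation concrete, the sketch below implements the descriptor-based retrieval loop that most LPR pipelines share: each scan is compressed to a compact global descriptor, and a query is matched against the database by nearest-neighbour search under a distance threshold. This is a minimal illustration, not a method from the survey; the range-histogram descriptor and the threshold value are assumptions standing in for real descriptors such as Scan Context or learned embeddings.

```python
import numpy as np

def range_histogram_descriptor(points, bins=64, max_range=80.0):
    """Compress an (N, 3) LiDAR scan into a compact global descriptor:
    a normalized histogram of point ranges (a toy stand-in for Scan
    Context, PointNetVLAD embeddings, etc.)."""
    ranges = np.linalg.norm(points, axis=1)
    hist, _ = np.histogram(ranges, bins=bins, range=(0.0, max_range))
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist.astype(float)

def query_place(db_descriptors, query_descriptor, threshold=0.2):
    """Nearest-neighbour retrieval: return (index, distance) of the best
    match, or None when nothing is closer than the acceptance threshold."""
    dists = np.linalg.norm(db_descriptors - query_descriptor, axis=1)
    best = int(np.argmin(dists))
    return (best, float(dists[best])) if dists[best] < threshold else None

# Toy usage: scans are Gaussian blobs at distinct locations, so their
# range distributions differ; a noisy revisit of scan 3 should match it.
rng = np.random.default_rng(0)
scans = [rng.normal(rng.uniform(-40, 40, 3), 5.0, (2048, 3)) for _ in range(10)]
db = np.stack([range_histogram_descriptor(s) for s in scans])
revisit = scans[3] + rng.normal(0.0, 0.1, (2048, 3))
print(query_place(db, range_histogram_descriptor(revisit)))
```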

    Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera

    The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis examines LiDAR's advancements in autonomous robotic systems, focusing on its role in simultaneous localization and mapping (SLAM) methodologies and on LiDAR-as-a-camera tracking of Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: the multi-modal LiDAR SLAM benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset by adding more data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps. We also supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations, pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images formed from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. We also evaluate the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
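
    The LiDAR-as-a-camera idea hinges on rendering the sensor's reflectivity returns as a panoramic image that an off-the-shelf detector such as YOLOv5 can consume. Below is a hedged sketch of one plausible spherical projection from a reflectivity-annotated point cloud to such an image; the resolution and vertical field of view are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

def reflectivity_panorama(points, reflectivity, width=1024, height=64,
                          v_fov_deg=(-22.5, 22.5)):
    """Project an (N, 3) point cloud with per-point reflectivity into a
    panoramic 'signal' image, emulating LiDAR-as-a-camera output."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-9)
    yaw = np.arctan2(y, x)                              # horizontal angle
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))        # vertical angle
    lo, hi = np.radians(v_fov_deg[0]), np.radians(v_fov_deg[1])
    u = ((yaw + np.pi) / (2.0 * np.pi) * width).astype(int) % width
    v = ((hi - pitch) / (hi - lo) * height).astype(int)
    img = np.zeros((height, width), dtype=np.float32)
    ok = (v >= 0) & (v < height)
    img[v[ok], u[ok]] = reflectivity[ok]
    return img

# The single-channel panorama can then be tiled to three channels and fed
# to a detector, e.g. a custom YOLOv5 model (weight file hypothetical):
#   model = torch.hub.load('ultralytics/yolov5', 'custom', path='uav.pt')
#   results = model(np.stack([img] * 3, axis=-1))
```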

    Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review

    The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation; more recent learning-based methods may replace either the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a level of sophistication appropriate to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable versus non-traversable, rough or smooth terrain) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping, and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions for overcoming these challenges.

    Have I been here before? Learning to Close the Loop with LiDAR Data in Graph-Based SLAM

    This work presents an extension of graph-based SLAM methods that exploits the potential of 3D laser scans for loop detection. Every high-dimensional point cloud is replaced by a compact global descriptor, and a trained detector decides whether a loop exists. Searching for loops is performed locally within a variable-size window to account for odometry drift. Since closing a wrong loop has fatal consequences, extensive verification is performed before acceptance. The proposed algorithm is implemented as an extension of the widely used state-of-the-art library RTAB-Map, and several experiments show the improvement: during SLAM with a mobile service robot in changing indoor and outdoor campus environments, our approach improves on RTAB-Map in the total number of closed loops. Especially in the presence of significant environmental changes, which typically lead to failure, our extension makes localization possible. Experiments with a car in traffic (KITTI benchmark) show the general applicability of our approach, with results comparable to the state-of-the-art LiDAR method LOAM. The developed ROS package is freely available.
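
    The detect-then-verify pattern described above can be summarized in a few lines. This is a hedged sketch of the general scheme, not the RTAB-Map extension itself: the drift model that widens the search window, the descriptor distance threshold, and the overlap-based verification are all illustrative placeholders (a real system would verify candidates with ICP registration before accepting a loop).

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_loops(descriptors, positions, q, drift_rate=0.02,
                    desc_thresh=0.15, min_gap=50):
    """Find loop candidates for query index q. The spatial search radius
    grows with the number of scans since the candidate was visited, a
    crude proxy for accumulated odometry drift."""
    hits = []
    for i in range(max(q - min_gap, 0)):       # skip recent, trivially close scans
        radius = 5.0 + drift_rate * (q - i)    # metres; assumed drift model
        if (np.linalg.norm(positions[i] - positions[q]) < radius and
                np.linalg.norm(descriptors[i] - descriptors[q]) < desc_thresh):
            hits.append(i)
    return hits

def verify_loop(scan_a, scan_b, inlier_dist=0.5, min_inlier_ratio=0.6):
    """Cheap geometric check: after aligning the candidate pair (ICP step
    omitted here), accept only if most points of scan_a have a close
    neighbour in scan_b. A wrong loop corrupts the whole pose graph, so
    the real verification must be strict."""
    dists, _ = cKDTree(scan_b).query(scan_a)
    return float(np.mean(dists < inlier_dist)) >= min_inlier_ratio
```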

    Perception and localization techniques for navigation in agricultural environment and experimental results

    Notoriously, the agricultural work environment is very harsh: operators carry out every job manually, often in extreme weather conditions (heat, cold, and rain), with working hours that can last from dawn to sunset. Recently, the application of automation in agriculture has led to the development of increasingly autonomous robots, able to take care of different tasks and avoid obstacles, to collaborate and interact with human operators, and to collect data from the surrounding environment. These data can then be shared with the user, informing them about soil moisture or the critical health conditions of a single plant. Thus was born the concept of precision agriculture, in which the robot performs its tasks according to the environmental conditions it detects, distributing fertilizers or water only where necessary and optimizing treatments and its energy resources. The proposed thesis project consists in the development of a tractor prototype able to operate automatically in semi-structured agricultural environments, such as orchards organized in rows, navigating autonomously by means of a laser scanner. In particular, the work is divided into three steps. The first consists in the design and construction of a tracked robot, realized entirely in the laboratory, from the mechanical, electric, and electronic subsystems up to the software structure. The second is the development of a navigation and control system that makes a generic robot able to move autonomously in the orchard using a laser scanner as its main sensor. To achieve this goal, a localization algorithm based on row estimation has been developed, together with a control law that regulates the kinematics of the robot. The third step consists of experimental tests, with the aim of validating both the robot and the developed navigation algorithm.
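
    The row-based localization and the kinematic control law lend themselves to a compact illustration. The sketch below fits a line to the laser points of each row, takes the corridor centreline as the reference, and steers a unicycle-like (tracked) robot with a proportional law on the lateral and heading errors. The gains and the line model are assumptions for illustration, not the controller developed in the thesis.

```python
import numpy as np

def fit_row(points):
    """Least-squares line y = m*x + b through the (N, 2) laser points of
    one tree row, expressed in the robot frame (x forward, y left)."""
    m, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return m, b

def row_following_command(left_pts, right_pts, k_lat=0.8, k_head=1.2, v=0.5):
    """Steer along the corridor centreline between two rows.

    Lateral error: centreline offset at the robot (x = 0).
    Heading error: angle between the robot's x-axis and the centreline.
    Returns (linear velocity, angular velocity); positive omega turns left.
    """
    ml, bl = fit_row(left_pts)
    mr, br = fit_row(right_pts)
    m_c, b_c = (ml + mr) / 2.0, (bl + br) / 2.0   # centreline parameters
    lateral_err = b_c                             # >0: centreline is to the left
    heading_err = np.arctan(m_c)                  # >0: centreline bends left ahead
    omega = k_lat * lateral_err + k_head * heading_err
    return v, omega
```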

    An autonomous ultra-wide band-based attitude and position determination technique for indoor mobile laser scanning

    Mobile laser scanning (MLS) has been widely used for three-dimensional (3D) city modelling data collection, for example by the Google cars used for Google Maps/Earth. Building Information Modelling (BIM) has recently emerged and become prominent, and 3D models of buildings are essential for it. Static laser scanning is usually used to generate 3D models for BIM, but this method is inefficient if a building is very large or has many turns and narrow corridors. This paper proposes using MLS for BIM 3D data collection. The positions and attitudes of the mobile laser scanner are important for the correct georeferencing of the 3D models. This paper proposes using three high-precision ultra-wide band (UWB) tags to determine the positions and attitudes of the mobile laser scanner. The accuracy of the UWB-based MLS 3D models is assessed by comparing the coordinates of target points, as measured by static laser scanning and a total station survey.
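
    Three rigidly mounted tags are the minimum needed to recover all six degrees of freedom of the scanner: given the tag positions measured in the UWB frame and their known coordinates in the scanner's body frame, the rigid transform (attitude and position) follows from a Kabsch least-squares fit. The sketch below shows that geometry; the tag mounting coordinates are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rigid_transform(body_pts, world_pts):
    """Kabsch fit: rotation R and translation t with world = R @ body + t.

    body_pts: (3, 3) tag coordinates in the scanner body frame (known
    from mounting); world_pts: (3, 3) tag positions measured by UWB."""
    cb, cw = body_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (body_pts - cb).T @ (world_pts - cw)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ cb
    return R, t   # R encodes the attitude, t the position

# Illustrative mounting: tags 0.5 m apart in an L-shape on the scanner.
body = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])
yaw = np.radians(30)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
world = (R_true @ body.T).T + np.array([10.0, 4.0, 1.2])
R, t = rigid_transform(body, world)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])))   # recovered yaw, approx. 30
```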

    System Development of an Unmanned Ground Vehicle and Implementation of an Autonomous Navigation Module in a Mine Environment

    The exploration and exploitation of underground mines yield valuable insights, but they also involve great risks and challenges, such as accidents that have claimed many lives. To avoid such accidents, miners have carried out inspections of large mines, which is not always economically feasible and puts the inspectors' safety at risk. Despite progress in the development of robotic systems and of autonomous navigation, localization, and mapping algorithms, these environments remain particularly demanding for such systems. The successful implementation of an autonomous unmanned system will allow mine workers to assess the structural integrity of the roof and pillars through the generation of high-fidelity 3D maps. These maps will allow miners to respond rapidly to growing hazards with proactive measures, such as sending workers to build or rebuild support structures to prevent accidents. The objective of this research is the development, implementation, and testing of a robust unmanned ground vehicle (UGV) that can operate in mine environments for extended periods of time. To achieve this, a custom skid-steer four-wheeled UGV is designed to operate in these challenging underground mine environments. To navigate autonomously, the UGV employs a Light Detection and Ranging (LiDAR) sensor and a tactical-grade inertial measurement unit (IMU) for localization and mapping through a tightly coupled LiDAR-Inertial Odometry via Smoothing and Mapping framework (LIO-SAM). The autonomous navigation module was implemented based on a fast likelihood-based collision-avoidance planner, with an extension for human-guided navigation, and a terrain-traversability analysis framework. To operate successfully and generate high-fidelity 3D maps, the system was rigorously tested in different environments and terrains to verify its robustness. To assess its capabilities, several localization, mapping, and autonomous navigation missions were carried out in a coal mine environment. These tests allowed the system to be verified and tuned so that it could autonomously navigate and generate high-fidelity maps.
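
    Terrain-traversability analysis of the kind mentioned above is commonly done by rasterizing the LiDAR cloud into a 2-D grid and scoring each cell by local slope and roughness. The sketch below is a minimal hedged version of that idea; the cell size and thresholds are illustrative assumptions, and the thesis builds on LIO-SAM and an existing traversability framework rather than on this exact code.

```python
import numpy as np

def traversability_grid(points, cell=0.5, max_slope=0.35, max_rough=0.10):
    """Label grid cells of an (N, 3) point cloud as traversable.

    Slope is approximated by the height spread across a cell divided by
    the cell size; roughness by the in-cell height standard deviation.
    Thresholds would be tuned per vehicle in practice."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                          # shift indices to start at 0
    shape = tuple(ij.max(axis=0) + 1)
    grid = np.zeros(shape, dtype=bool)
    flat = ij[:, 0] * shape[1] + ij[:, 1]         # one flat index per cell
    order = np.argsort(flat)
    starts = np.unique(flat[order], return_index=True)[1]
    for group in np.split(order, starts[1:]):     # points grouped by cell
        z = points[group, 2]
        slope = (z.max() - z.min()) / cell
        grid[ij[group[0], 0], ij[group[0], 1]] = (
            slope < max_slope and z.std() < max_rough)
    return grid
```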

    Close Formation Flight Missions Using Vision-Based Position Detection System

    In this thesis, a formation flight architecture is described, along with the implementation and evaluation of a state-of-the-art vision-based algorithm for estimating and tracking a leader vehicle within a close-formation configuration. A vision-based algorithm that uses the Darknet architecture and a formation flight control law for tracking and following a leader with the desired clearance in the forward and lateral directions are developed and implemented. The architecture runs on a flight computer that handles the process in real time while integrating navigation sensors and a stereo camera. Numerical simulations, along with indoor and outdoor flight tests, demonstrate the detection and tracking capabilities, providing a low-cost, compact, and lightweight solution to the problem of estimating the location of other cooperative or non-cooperative flying vehicles within a formation architecture.
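
    The leader-following loop above can be made concrete in a few lines: a detector returns a bounding box in each stereo image, the disparity between the matched boxes gives the leader's relative position, and a proportional law regulates the follower toward the desired forward and lateral clearance. The camera parameters and gains below are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

def leader_position(bbox_left, bbox_right, fx=600.0, baseline=0.12, cx=320.0):
    """Triangulate the leader from matched bounding-box centres in a
    rectified stereo pair; each bbox is (x_min, y_min, x_max, y_max) px."""
    u_left = (bbox_left[0] + bbox_left[2]) / 2.0
    u_right = (bbox_right[0] + bbox_right[2]) / 2.0
    disparity = max(u_left - u_right, 1e-6)
    forward = fx * baseline / disparity        # range along camera axis [m]
    lateral = (u_left - cx) * forward / fx     # offset from optical axis [m]
    return np.array([forward, lateral])

def formation_command(rel_pos, desired=(5.0, 0.0), gains=(0.4, 0.6)):
    """Proportional formation law: velocity commands that drive the
    measured forward/lateral clearance toward the desired one."""
    err = rel_pos - np.asarray(desired)
    return np.asarray(gains) * err             # (v_forward, v_lateral)

# Usage: the leader detected at matched boxes in the left/right images.
rel = leader_position((300, 200, 360, 260), (280, 200, 340, 260))
print(rel, formation_command(rel))
```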