
    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control, meeting key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are also reviewed. In addition, this study describes a real-world road test that demonstrates a modern multi-sensor SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure through the integration of Lidar and GNSS/INS, and that an online localization solution with four to five cm accuracy can be achieved from this pre-generated map via online Lidar scan matching tightly fused with the inertial system.
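
    A minimal sketch of the map-based localization step described above: one point-to-point ICP loop that aligns a live Lidar scan to a pre-built point-cloud map, using numpy and scipy. In the paper's pipeline a GNSS/INS prior would seed the pose and the fusion is tighter; the identity initialization and 2D simplification here are our own assumptions, not the authors' implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_2d(scan, map_pts, iters=20):
            """Rigidly align scan (N,2) to map_pts (M,2); returns R (2,2), t (2,)."""
            R, t = np.eye(2), np.zeros(2)
            tree = cKDTree(map_pts)            # nearest-neighbour lookup into the map
            for _ in range(iters):
                cur = scan @ R.T + t           # scan under the current pose estimate
                _, idx = tree.query(cur)       # correspondences: closest map points
                tgt = map_pts[idx]
                cs, ct = cur.mean(0), tgt.mean(0)
                # Closed-form rigid fit (Kabsch/SVD) between the matched, centred sets
                U, _, Vt = np.linalg.svd((cur - cs).T @ (tgt - ct))
                dR = Vt.T @ U.T
                if np.linalg.det(dR) < 0:      # guard against a reflection solution
                    Vt[-1] *= -1
                    dR = Vt.T @ U.T
                R, t = dR @ R, dR @ (t - cs) + ct   # compose the incremental update
            return R, t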

    Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization

    In this paper, we propose a novel semantic localization algorithm that exploits multiple sensors and has precision on the order of a few centimeters. Our approach does not require detailed knowledge about the appearance of the world, and our maps require orders of magnitude less storage than the maps utilized by traditional geometry- and LiDAR-intensity-based localizers. This is important, as self-driving cars need to operate in large environments. Towards this goal, we formulate the problem in a Bayesian filtering framework and exploit lanes, traffic signs, and vehicle dynamics to localize robustly with respect to a sparse semantic map. We validate the effectiveness of our method on a new highway dataset consisting of 312 km of roads. Our experiments show that the proposed approach is able to achieve 0.05 m lateral and 1.12 m longitudinal accuracy on average while taking up only 0.3% of the storage required by previous LiDAR-intensity-based approaches.
    Comment: 8 pages, 4 figures, 4 tables; 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019).
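
    As a hedged illustration of the Bayesian filtering formulation (the straight-lane geometry at y = 0, the noise scales, and all helper names below are our assumptions, not the paper's), a tiny particle filter can fuse a vehicle-dynamics prediction with a camera-measured lateral offset to a mapped lane line:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        parts = np.zeros((N, 3))                  # particles: [x, y, heading]
        parts[:, 1] = rng.normal(0.0, 1.0, N)     # initial lateral uncertainty
        w = np.full(N, 1.0 / N)

        def predict(parts, v, yaw_rate, dt):
            """Vehicle-dynamics prior: unicycle motion plus process noise."""
            parts[:, 2] += yaw_rate * dt + rng.normal(0, 0.01, len(parts))
            parts[:, 0] += v * dt * np.cos(parts[:, 2]) + rng.normal(0, 0.05, len(parts))
            parts[:, 1] += v * dt * np.sin(parts[:, 2]) + rng.normal(0, 0.05, len(parts))

        def update(w, parts, z_lat, sigma=0.1):
            """Weight particles by the observed lateral offset z_lat to the lane."""
            lik = np.exp(-0.5 * ((z_lat - parts[:, 1]) / sigma) ** 2)
            w = w * lik + 1e-300                  # avoid total weight collapse
            return w / w.sum()

        for _ in range(50):                       # drive straight, observe 0.3 m offset
            predict(parts, v=10.0, yaw_rate=0.0, dt=0.1)
            w = update(w, parts, z_lat=0.3)
            idx = rng.choice(N, N, p=w)           # multinomial resampling
            parts, w = parts[idx], np.full(N, 1.0 / N)

        print("lateral estimate:", parts[:, 1].mean())   # converges near 0.3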

    A Reconfigurable Framework for Vehicle Localization in Urban Areas

    Accurate localization is essential for autonomous vehicle operation in dense urban areas. To ensure safety, positioning algorithms should implement fault detection and fallback strategies. While many strategies stop the vehicle once a failure is detected, this work proposes a new framework that includes an improved reconfiguration module to evaluate the failure scenario and offer alternative positioning strategies, allowing continued driving in a degraded mode until a critical failure is detected. Furthermore, since many sensor failures can be temporary, such as a GPS signal interruption, the proposed approach allows a return to the non-fault state while resetting the alternative algorithms used during the temporary failure. The proposed localization framework is validated in a series of experiments carried out in a simulation environment. Results demonstrate correct localization for the driving task even in the presence of sensor failures, stopping the vehicle only when a fully degraded state is reached. Moreover, the reconfiguration strategies have proven to consistently reset the accumulated drift of the alternative positioning algorithms, improving overall performance and bounding the mean error.
    This research was funded by the University of the Basque Country UPV/EHU, grants GIU19/045 and PIF19/181, and by the Government of the Basque Country, grants IT914-16, KK-2021/00123, and IT949-16.
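
    A sketch of the reconfiguration-and-reset idea (the class, mode names, and anchoring scheme are illustrative assumptions, not the paper's API): a supervisor demotes localization to an odometry fallback when the primary sensor fails and, on recovery, re-anchors the fallback so its accumulated drift is discarded.

        from dataclasses import dataclass

        @dataclass
        class Pose:
            x: float
            y: float

        class LocalizationSupervisor:
            def __init__(self):
                self.mode = "GPS"                   # nominal positioning source
                self.odom_offset = Pose(0.0, 0.0)   # odometry-to-global anchor

            def step(self, gps_ok: bool, gps: Pose, odom: Pose) -> Pose:
                if gps_ok:
                    self.mode = "GPS"
                    # Re-anchor the odometry frame so its accumulated drift is
                    # reset before the next outage
                    self.odom_offset = Pose(gps.x - odom.x, gps.y - odom.y)
                    return gps
                # Degraded mode: keep driving on anchored dead-reckoning
                self.mode = "ODOM_FALLBACK"
                return Pose(odom.x + self.odom_offset.x, odom.y + self.odom_offset.y)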

    BEV-Locator: An End-to-end Visual Semantic Localization Network Using Multi-View Images

    Accurate localization ability is fundamental in autonomous driving. Traditional visual localization frameworks approach the semantic map-matching problem with geometric models, which rely on complex parameter tuning and thus hinder large-scale deployment. In this paper, we propose BEV-Locator: an end-to-end visual semantic localization neural network using multi-view camera images. Specifically, a visual BEV (Bird's-Eye-View) encoder extracts and flattens the multi-view images into BEV space, while the semantic map features are structurally embedded as a sequence of map queries. A cross-modal transformer then associates the BEV features and the semantic map queries, and the localization information of the ego-car is recursively queried out by cross-attention modules. Finally, the ego pose can be inferred by decoding the transformer outputs. We evaluate the proposed method on the large-scale nuScenes and Qcraft datasets. The experimental results show that BEV-Locator is able to estimate vehicle poses under versatile scenarios, effectively associating the cross-modal information from multi-view images and global semantic maps. The experiments report satisfactory accuracy, with mean absolute errors of 0.052 m, 0.135 m, and 0.251° in lateral translation, longitudinal translation, and heading angle, respectively.
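
    A minimal PyTorch sketch of the cross-attention step described above (the dimensions, head count, query pooling, and class name are our illustrative assumptions, not BEV-Locator's architecture): learned semantic-map queries attend over flattened BEV features, and a small head regresses the ego-pose offset.

        import torch
        import torch.nn as nn

        class PoseDecoder(nn.Module):
            def __init__(self, d=256, heads=8, n_queries=32):
                super().__init__()
                self.map_queries = nn.Parameter(torch.randn(n_queries, d))
                self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)
                self.head = nn.Linear(d, 3)             # -> (dx, dy, dyaw)

            def forward(self, bev_feats):               # bev_feats: (B, H*W, d)
                B = bev_feats.shape[0]
                q = self.map_queries.unsqueeze(0).expand(B, -1, -1)
                # Map queries attend over the BEV features (keys and values)
                attended, _ = self.cross_attn(q, bev_feats, bev_feats)
                return self.head(attended.mean(dim=1))  # pool queries, regress pose

        pose = PoseDecoder()(torch.randn(2, 50 * 50, 256))   # (2, 3) pose offsets
        print(pose.shape)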

    Real-time performance-focused on localisation techniques for autonomous vehicle: a review


    InstaGraM: Instance-level Graph Modeling for Vectorized HD Map Learning

    Inferring traffic objects such as lane information is of foremost importance for the deployment of autonomous driving. Previous approaches focus on offline construction of an HD map inferred with GPS localization, which is insufficient for globally scalable autonomous driving. To alleviate these issues, we propose an online HD map learning framework that detects HD map elements from onboard sensor observations. We represent the map elements as a graph, and we propose InstaGraM, instance-level graph modeling of the HD map that brings accurate and fast end-to-end vectorized HD map learning. Along with the graph modeling strategy, we propose an end-to-end neural network composed of three stages: unified BEV feature extraction, map graph component detection, and association via graph neural networks. Comprehensive experiments on a public open dataset show that our proposed network outperforms previous models by up to 13.7 mAP with up to 33.8x faster computation time.
    Comment: Workshop on Vision-Centric Autonomous Driving (VCAD) at the Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
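
    A hedged sketch of the association stage (our own simplification, not InstaGraM's exact network): detected map vertices carry learned embeddings, pairwise dot products score candidate edges, and an edge is kept only where the two vertices are each other's best match, linking vertices into polyline instances.

        import numpy as np

        def associate(emb):                     # emb: (V, d) vertex embeddings
            s = emb @ emb.T                     # pairwise association scores
            np.fill_diagonal(s, -np.inf)        # forbid self-edges
            best = s.argmax(axis=1)             # each vertex's preferred partner
            # Keep only mutually consistent pairs (i prefers j and j prefers i)
            return [(i, j) for i, j in enumerate(best) if best[j] == i and i < j]

        rng = np.random.default_rng(1)
        emb = rng.normal(size=(6, 16))          # toy embeddings for six vertices
        print(associate(emb))                   # e.g. [(0, 4), (2, 5)]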