
    Evaluating the Capability of OpenStreetMap for Estimating Vehicle Localization Error

    Accurate localization is an important part of successful autonomous driving. Recent studies suggest that, when using map-based localization methods, the representation and layout of real-world phenomena within the prebuilt map is a source of error. To date, investigations have been limited to 3D point clouds and normal distribution (ND) maps. This paper explores the potential of using OpenStreetMap (OSM) as a proxy to estimate vehicle localization error. Specifically, the experiment uses random forest regression to estimate mean 3D localization error from map matching using LiDAR scans and ND maps. Six map evaluation factors were defined for 2D geographic information in a vector format. Initial results for a 1.2 km path in Shinjuku, Tokyo, show that vehicle localization error can be estimated with 56.3% model prediction accuracy using only two existing OSM data layers. When OSM data quality issues (inconsistency and completeness) were addressed, model prediction accuracy improved to 73.1%.
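    The modeling step this abstract describes, regressing localization error on 2D map-derived factors with a random forest, can be sketched in outline with standard tools. The snippet below is a minimal, hypothetical reconstruction using scikit-learn: the two features and the synthetic data are invented stand-ins for the OSM layers and ground-truth errors, not the paper's actual dataset.

```python
# Minimal sketch: estimating localization error from 2D map factors with
# random forest regression (scikit-learn). Feature names and synthetic data
# are illustrative assumptions, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500  # hypothetical path segments

# Hypothetical map evaluation factors extracted from OSM vector layers.
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),   # building coverage ratio around segment
    rng.uniform(5.0, 40.0, n),  # road width [m]
])
# Synthetic target: mean 3D localization error [m], loosely tied to features.
y = 0.05 + 0.3 * (1 - X[:, 0]) + 0.002 * X[:, 1] + rng.normal(0, 0.02, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out segments: {model.score(X_test, y_test):.3f}")
```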

    Real-time performance-focused localisation techniques for autonomous vehicles: a review


    Estimating Autonomous Vehicle Localization Error Using 2D Geographic Information

    Accurately and precisely knowing the location of the vehicle is a critical requirement for safe and successful autonomous driving. Recent studies suggest that error for map-based localization methods is tightly coupled with the surrounding environment. Considering this relationship, it is therefore possible to estimate localization error by quantifying the representation and layout of real-world phenomena. To date, existing work on estimating localization error has been limited to using self-collected 3D point cloud maps. This paper investigates the use of pre-existing 2D geographic information datasets as a proxy to estimate autonomous vehicle localization error. Seven map evaluation factors were defined for 2D geographic information in a vector format, and random forest regression was used to estimate localization error for five experiment paths in Shinjuku, Tokyo. In the best model, the results show that it is possible to estimate autonomous vehicle localization error with 69.8% of predictions within 2.5 cm and 87.4% within 5 cm.
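    The headline figures here (69.8% of predictions within 2.5 cm, 87.4% within 5 cm) are threshold accuracies rather than a single regression score. A small helper like the hypothetical one below computes that metric from predicted and ground-truth per-segment errors; the arrays are placeholders.

```python
# Hypothetical helper: fraction of predictions whose absolute deviation
# from the ground-truth localization error lies within a tolerance.
import numpy as np

def within_tolerance(y_true: np.ndarray, y_pred: np.ndarray, tol_m: float) -> float:
    """Return the share of predictions with |y_true - y_pred| <= tol_m (metres)."""
    return float(np.mean(np.abs(y_true - y_pred) <= tol_m))

# Placeholder arrays standing in for per-segment localization errors [m].
y_true = np.array([0.10, 0.12, 0.08, 0.30, 0.05])
y_pred = np.array([0.11, 0.10, 0.12, 0.22, 0.06])
print(f"within 2.5 cm: {within_tolerance(y_true, y_pred, 0.025):.1%}")
print(f"within 5 cm:   {within_tolerance(y_true, y_pred, 0.050):.1%}")
```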

    Robust 3D IMU-LIDAR Calibration and Multi Sensor Probabilistic State Estimation

    Autonomous robots are highly complex systems. In order to operate in dynamic environments, adaptability in their decision-making algorithms is a must. Thus, the internal and external information that robots obtain from sensors is critical to re-evaluate their decisions in real time. Accuracy is key in this endeavor, both from the hardware side and the modeling point of view. In order to guarantee the highest performance, sensors need to be correctly calibrated. To this end, some parameters are tuned so that the particular realization of a sensor best matches a generalized mathematical model. This step grows in complexity with the integration of multiple sensors, which is generally a requirement in order to cope with the dynamic nature of real world applications. This project aims to deal with the calibration of an inertial measurement unit, or IMU, and a Light Detection and Ranging device, or LiDAR. An offline batch optimization procedure is proposed to optimally estimate the intrinsic and extrinsic parameters of the model. Then, an online state estimation module that makes use of the aforementioned parameters and the fusion of LiDAR-inertial data for local navigation is proposed. Additionally, it incorporates real time corrections to account for the time-varying nature of the model, essential to deal with exposure to continued operation and wear and tear.
    Keywords: sensor fusion, multi-sensor calibration, factor graphs, batch optimization, Gaussian Processes, state estimation, LiDAR-inertial odometry, Error State Kalman Filter, Normal Distributions Transform
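    One well-known sub-problem inside such an IMU-LiDAR extrinsic calibration is recovering the fixed rotation between the two sensors from paired incremental rotations. The sketch below, on synthetic noise-free data, illustrates the classic axis-alignment (Kabsch/SVD) solution; it is a deliberately simplified stand-in for the batch optimization the thesis describes, not its actual procedure.

```python
# Sketch: estimate the IMU-LiDAR extrinsic rotation R_x from paired
# incremental rotations. Conjugation r_imu = R_x r_lidar R_x^-1 implies the
# rotation vectors satisfy a_i = R_x @ b_i, a point-set alignment problem
# solvable with Kabsch/SVD. Synthetic, noise-free data for illustration.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)
R_true = R.from_euler("xyz", [5.0, -3.0, 10.0], degrees=True)  # unknown extrinsic

axes_imu, axes_lidar = [], []
for _ in range(50):
    r_lidar = R.from_rotvec(rng.normal(0, 0.2, 3))   # LiDAR odometry increment
    r_imu = R_true * r_lidar * R_true.inv()          # same motion in IMU frame
    axes_imu.append(r_imu.as_rotvec())
    axes_lidar.append(r_lidar.as_rotvec())

A = np.array(axes_imu)    # targets a_i
B = np.array(axes_lidar)  # sources b_i

# Kabsch: R_x = argmin_R sum ||a_i - R b_i||^2 via SVD of H = B^T A.
U, _, Vt = np.linalg.svd(B.T @ A)
d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflection
R_est = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

print("estimated extrinsic (deg):",
      R.from_matrix(R_est).as_euler("xyz", degrees=True))
```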

    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Along the signal processing chain from radar detections to vehicle control, this work discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. Radar segmentation of the (static) environment is achieved by a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph SLAM SERALOC. Based on the semantic radar SLAM map, an exemplary autonomous parking functionality is implemented in a real test vehicle. Along a recorded reference path, the function parks solely on the basis of radar perception, with previously unattained positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labeled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets of this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation network RadarNet achieves 28.97% mIoU on six classes. In addition, an automated radar labeling framework, SeRaLF, is presented, which supports radar labeling multimodally by means of reference cameras and LiDAR. For coherent mapping, a radar signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end specifically adapted to radar, with radar odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby improved, and the first semantic radar graph SLAM for arbitrary static environments is realized. Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph SLAM is evaluated by means of a purely radar-based autonomous parking functionality. Averaged over 42 autonomous parking maneuvers (mean speed 3.73 km/h, mean maneuver length 172.75 m), a median absolute pose error of 0.235 m and an end-pose error of 0.2443 m are achieved, surpassing comparable radar localization results by ≈ 50%. The map accuracy of changed, re-mapped places over a mean mapping distance of 165 m yields ≈ 56% map consistency at a mean deviation of 0.163 m. For autonomous parking, an existing trajectory planner and controller approach was used.
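    The NDT (Normal Distributions Transform) registration mentioned above represents a reference scan as one Gaussian per voxel and scores a candidate alignment by the likelihood of the new scan under those Gaussians. The 2D numpy sketch below illustrates only this scoring step under simplifying assumptions (voxel size, regularization, and data are illustrative); a full NDT front end would also optimize the transform rather than just evaluate candidates.

```python
# Illustrative NDT scoring in 2D: fit a Gaussian per occupied voxel of a
# reference scan, then score a transformed query scan by summed likelihoods.
# A real NDT front end would optimize the transform with Newton steps.
import numpy as np

def build_ndt(points: np.ndarray, voxel: float):
    """Map voxel index -> (mean, inverse covariance) for voxels with >= 3 points."""
    cells = {}
    keys = np.floor(points / voxel).astype(int)
    for k in set(map(tuple, keys)):
        pts = points[(keys == k).all(axis=1)]
        if len(pts) >= 3:
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)  # regularize degenerate cells
            cells[k] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(cells, points, voxel: float) -> float:
    score = 0.0
    for p in points:
        cell = cells.get(tuple(np.floor(p / voxel).astype(int)))
        if cell is not None:
            mu, icov = cell
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)
    return score

rng = np.random.default_rng(1)
ref = rng.normal(0, 1, (400, 2))          # reference "scan"
cells = build_ndt(ref, voxel=0.5)
for deg in (0.0, 5.0, 20.0):              # candidate rotations of the same scan
    t = np.radians(deg)
    Rm = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    print(f"rotation {deg:4.1f} deg -> NDT score {ndt_score(cells, ref @ Rm.T, 0.5):.1f}")
```

    As expected, the unrotated scan attains the highest score, and the score decays as the candidate transform moves points out of their fitted voxel Gaussians.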

    Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review

    Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review surveys the image processing and sensor fusion techniques vital to ensuring vehicle safety and efficiency, focusing on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. It also explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data, and examines localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, persistent limitations remain: a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of the technology in real-world scenarios. This literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    Localization in urban environments. A hybrid interval-probabilistic method

    Ensuring safety has become a paramount concern with the increasing autonomy of vehicles and the advent of autonomous driving. One of the most fundamental tasks of increased autonomy is localization, which is essential for safe operation. To quantify safety requirements, the concept of integrity has been introduced in aviation, based on the ability of a system to provide timely and correct alerts when its safe operation can no longer be guaranteed. It is therefore necessary to assess the localization's uncertainty to determine the system's operability. In the literature, probability theory and set-membership theory are the two predominant approaches providing mathematical tools to assess uncertainty. Probabilistic approaches often provide accurate point-valued results but tend to underestimate the uncertainty. Set-membership approaches reliably estimate the uncertainty but can be overly pessimistic, producing inappropriately large uncertainties and no point-valued results. While underestimating the uncertainty can lead to misleading information and dangerous system failures without warning, overly pessimistic uncertainty estimates render the system inoperative for practical purposes, as warnings are fired more often. This doctoral thesis studies the symbiotic relationship between set-membership-based and probabilistic localization approaches and combines them into a unified hybrid localization approach that enables safe operation without being overly pessimistic in its uncertainty estimation. In the scope of this work, a novel Hybrid Probabilistic- and Set-Membership-based Coarse and Refined (HyPaSCoRe) Localization method is introduced. This method localizes a robot in a building map in real time and considers two types of hybridization. On the one hand, set-membership approaches are used to robustify and control probabilistic approaches; on the other hand, probabilistic approaches are used to reduce the pessimism of set-membership approaches by augmenting them with further probabilistic constraints. The method consists of three modules: visual odometry, coarse localization, and refined localization. The HyPaSCoRe Localization uses a stereo camera system, a LiDAR sensor, and GNSS data, focusing on localization in urban canyons where GNSS data can be inaccurate. The visual odometry module computes the relative motion of the vehicle. The coarse localization module uses set-membership approaches to narrow down the feasible set of poses and provides the set of most likely poses inside the feasible set using a probabilistic approach. The refined localization module then reduces the pessimism of the uncertainty estimate by incorporating probabilistic constraints into the set-membership approach. The experimental evaluation of the HyPaSCoRe shows that it maintains the integrity of the uncertainty estimation while providing accurate, most likely point-valued solutions in real time. This new hybrid localization approach contributes to the development of safe and reliable algorithms in the context of autonomous driving.
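    As a toy illustration of the hybrid idea, and not the HyPaSCoRe algorithm itself, the sketch below contracts an axis-aligned feasible box from bounded-error range measurements (the set-membership side) and then constrains a Gaussian point estimate to that box (the probabilistic side). All landmarks, ranges, and bounds are invented.

```python
# Toy hybrid localization: set-membership contraction plus a probabilistic
# point estimate. Bounded-error ranges to known landmarks yield a guaranteed
# feasible box; the Gaussian mean is then constrained to that box.
import numpy as np

landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
ranges = np.array([6.4, 5.9, 4.2])   # measured distances [m] (invented)
err_bound = 0.5                      # guaranteed measurement error bound [m]

# Each measurement confines the pose to the ring |x - lm| in [r - b, r + b];
# intersect the axis-aligned boxes enclosing those rings.
lo = np.full(2, -np.inf)
hi = np.full(2, np.inf)
for lm, r in zip(landmarks, ranges):
    lo = np.maximum(lo, lm - (r + err_bound))
    hi = np.minimum(hi, lm + (r + err_bound))
print("feasible box:", lo, hi)

# Probabilistic estimate (e.g. from odometry): a Gaussian mean. The hybrid
# step returns the most likely pose inside the feasible set; for an
# axis-aligned box and diagonal covariance that is the mean clipped to it.
mean = np.array([4.2, 3.1])
point_estimate = np.clip(mean, lo, hi)
print("hybrid point estimate:", point_estimate)
```

    With these numbers the probabilistic mean falls outside the guaranteed set in one axis, and the hybrid step snaps it back to the nearest feasible pose, which is the behavior the thesis aims for: point-valued answers that never contradict the set-membership guarantee.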