
    Estimating Autonomous Vehicle Localization Error Using 2D Geographic Information

    Accurately and precisely knowing the location of the vehicle is a critical requirement for safe and successful autonomous driving. Recent studies suggest that the error of map-based localization methods is tightly coupled with the surrounding environment. Given this relationship, it should be possible to estimate localization error by quantifying the representation and layout of real-world phenomena. To date, existing work on estimating localization error has been limited to self-collected 3D point cloud maps. This paper investigates the use of pre-existing 2D geographic information datasets as a proxy for estimating autonomous vehicle localization error. Seven map evaluation factors were defined for 2D geographic information in a vector format, and random forest regression was used to estimate localization error for five experiment paths in Shinjuku, Tokyo. In the best model, the results show that it is possible to estimate autonomous vehicle localization error with 69.8% of predictions within 2.5 cm and 87.4% within 5 cm.
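    The regression setup described above can be sketched as follows. This is a minimal illustration, not the paper's code: the feature matrix, target values, and the "within 2.5 cm" threshold computation all use invented synthetic data, and only the general shape (seven map-derived factors in, localization error out, random forest regressor) follows the abstract.

    ```python
    # Hypothetical sketch: a random forest maps 2D map-evaluation factors
    # to localization error. All data here is synthetic, for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the seven map evaluation factors per road segment.
    n_segments = 200
    X = rng.uniform(0.0, 1.0, size=(n_segments, 7))

    # Invented ground truth (in meters): error grows where the first
    # factor (e.g. feature density) is low, plus noise.
    y = 0.05 * (1.0 - X[:, 0]) + 0.01 * rng.standard_normal(n_segments)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)
    pred = model.predict(X)

    # Fraction of predictions within 2.5 cm of the true error, mirroring
    # the paper's reporting style (the number itself is meaningless here).
    within_2_5cm = float(np.mean(np.abs(pred - y) < 0.025))
    ```
    
    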

    Find your Way by Observing the Sun and Other Semantic Cues

    In this paper we present a robust, efficient and affordable approach to self-localization that requires neither GPS nor knowledge of the appearance of the world. Towards this goal, we utilize freely available cartographic maps and derive a probabilistic model that exploits semantic cues in the form of sun direction, the presence of an intersection, road type, speed limit, and the ego-car trajectory to produce very reliable localization results. Our experimental evaluation shows that our approach localizes much faster (in terms of driving time), with less computation and more robustly than competing approaches that ignore semantic information.
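    The core idea, fusing a semantic observation into a belief over map positions, can be illustrated with a toy discrete Bayes filter. This is a simplified stand-in for the paper's probabilistic model: the candidate positions, road types, and the `p_correct` likelihood value are all invented.

    ```python
    # Toy discrete Bayes filter: maintain a belief over candidate map
    # positions and reweight it with one semantic cue (road type).
    candidate_road_types = ["residential", "highway", "residential", "highway", "residential"]
    belief = [1.0 / 5] * 5  # uniform prior over 5 candidate positions

    def update(belief, road_types, observed_type, p_correct=0.9):
        """Multiply the belief by the observation likelihood, then normalize."""
        posterior = [
            b * (p_correct if t == observed_type else 1.0 - p_correct)
            for b, t in zip(belief, road_types)
        ]
        total = sum(posterior)
        return [p / total for p in posterior]

    # Observing "highway" concentrates probability on highway segments.
    belief = update(belief, candidate_road_types, "highway")
    ```

    In the paper's full model, several such cues (sun direction, intersections, speed limits) and the ego-motion would each contribute a likelihood term in the same multiplicative fashion.
    
    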

    Evaluating the Capability of OpenStreetMap for Estimating Vehicle Localization Error

    Accurate localization is an important part of successful autonomous driving. Recent studies suggest that when using map-based localization methods, the representation and layout of real-world phenomena within the prebuilt map is a source of error. To date, investigations have been limited to 3D point clouds and normal distribution (ND) maps. This paper explores the potential of using OpenStreetMap (OSM) as a proxy to estimate vehicle localization error. Specifically, the experiment uses random forest regression to estimate mean 3D localization error from map matching using LiDAR scans and ND maps. Six map evaluation factors were defined for 2D geographic information in a vector format. Initial results for a 1.2 km path in Shinjuku, Tokyo, show that vehicle localization error can be estimated with 56.3% model prediction accuracy using only two existing OSM data layers. When OSM data quality issues (inconsistency and incompleteness) were addressed, model prediction accuracy improved to 73.1%.

    Towards Outdoor Localization Using GIS, Vision System and Stochastic Error Propagation

    The paper presents a method for robot localization (estimating the position and attitude of the robot with respect to a model) when the robot is on the move and does not necessarily have good GPS coverage. The robot discussed is a remote-controlled vehicle with a GIS database and an onboard camera. The method starts with an initial vehicle configuration (steering-wheel angle, speed) and an initial point in the GIS mapped to an initial point in the camera's image. Then, for each small displacement of the vehicle, the linear and angular velocities are calculated, and a formula developed in the paper for error adjustment is applied when there is a good GPS reading. The result of the calculation determines the uncertainty of the location and can be used, together with the 3D GIS data, to project areas of uncertainty for features of interest onto the camera image. For example, if the GIS data contains fire hydrants and the calculations show a high degree of location uncertainty, the camera image will be overlaid with large ellipses around the fire hydrants, whereas a small degree of uncertainty yields smaller ellipses. An experiment testing the method is discussed in the paper, along with a good review of prior work on localization techniques.
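    The kind of stochastic error propagation described above can be sketched with a standard unicycle motion model: the pose covariance is pushed through the motion-model Jacobian at each step, growing unless corrected by a good GPS reading. This is a generic textbook sketch, not the paper's formula; the process-noise values are invented.

    ```python
    # Illustrative covariance propagation for a unicycle-style vehicle.
    import numpy as np

    def propagate(pose, cov, v, omega, dt, Q):
        """One prediction step: update pose (x, y, theta) and its covariance."""
        x, y, theta = pose
        new_pose = np.array([
            x + v * dt * np.cos(theta),
            y + v * dt * np.sin(theta),
            theta + omega * dt,
        ])
        # Jacobian of the motion model with respect to the state.
        F = np.array([
            [1.0, 0.0, -v * dt * np.sin(theta)],
            [0.0, 1.0,  v * dt * np.cos(theta)],
            [0.0, 0.0,  1.0],
        ])
        return new_pose, F @ cov @ F.T + Q

    pose = np.zeros(3)          # start at the origin, heading along x
    cov = np.eye(3) * 1e-4      # small initial uncertainty
    Q = np.eye(3) * 1e-3        # invented per-step process noise
    for _ in range(10):
        pose, cov = propagate(pose, cov, v=1.0, omega=0.1, dt=0.1)
    ```

    The growing 2x2 position block of `cov` is what would be drawn as an uncertainty ellipse around projected GIS features in the camera image.
    
    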

    Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions

    Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes as well as weather and seasonal variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net. (Accepted to CVPR 2018 as a spotlight.)