
    Evaluating the Capability of OpenStreetMap for Estimating Vehicle Localization Error

    Accurate localization is an important part of successful autonomous driving. Recent studies suggest that when using map-based localization methods, the representation and layout of real-world phenomena within the prebuilt map is a source of error. To date, investigations have been limited to 3D point clouds and normal distribution (ND) maps. This paper explores the potential of using OpenStreetMap (OSM) as a proxy to estimate vehicle localization error. Specifically, the experiment uses random forest regression to estimate mean 3D localization error from map matching using LiDAR scans and ND maps. Six map evaluation factors were defined for 2D geographic information in a vector format. Initial results for a 1.2 km path in Shinjuku, Tokyo, show that vehicle localization error can be estimated with 56.3% model prediction accuracy using only two existing OSM data layers. When OSM data quality issues (inconsistency and completeness) were addressed, the model prediction accuracy improved to 73.1%.
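    A minimal sketch of the kind of pipeline described above, assuming scikit-learn and synthetic stand-in data in place of the paper's actual OSM-derived factors and error measurements:

```python
# Hedged sketch: OSM-derived map evaluation factors used as features
# for a random forest that predicts mean 3D localization error per
# road segment. The synthetic data below stands in for the real factors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in feature matrix: one row per path segment, six assumed map
# evaluation factors computed from 2D vector OSM data.
X = rng.uniform(size=(200, 6))
# Stand-in target: mean 3D localization error (metres) per segment.
y = 0.05 + 0.3 * X[:, 0] + 0.1 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out segments:", model.score(X_test, y_test))
```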

    Estimating Autonomous Vehicle Localization Error Using 2D Geographic Information

    Accurately and precisely knowing the location of the vehicle is a critical requirement for safe and successful autonomous driving. Recent studies suggest that errors for map-based localization methods are tightly coupled with the surrounding environment. Considering this relationship, it is therefore possible to estimate localization error by quantifying the representation and layout of real-world phenomena. To date, existing work on estimating localization error has been limited to using self-collected 3D point cloud maps. This paper investigates the use of pre-existing 2D geographic information datasets as a proxy to estimate autonomous vehicle localization error. Seven map evaluation factors were defined for 2D geographic information in a vector format, and random forest regression was used to estimate localization error for five experiment paths in Shinjuku, Tokyo. In the best model, the results show that it is possible to estimate autonomous vehicle localization error with 69.8% of predictions within 2.5 cm and 87.4% within 5 cm.
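    The reported figures can be read as a tolerance-based accuracy metric, i.e. the fraction of error predictions falling within a given distance of the ground truth; a small sketch under that assumption, with made-up values:

```python
# Assumed interpretation of "x% of predictions within y cm":
# the share of predicted localization errors that lie within a
# tolerance of the true error. All values here are illustrative.
import numpy as np

def within_tolerance(y_true, y_pred, tol_m):
    """Fraction of predictions within tol_m metres of the true error."""
    return np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true)) <= tol_m)

# Made-up true and predicted errors (metres).
y_true = np.array([0.10, 0.32, 0.05, 0.21])
y_pred = np.array([0.12, 0.30, 0.11, 0.19])

print("within 2.5 cm:", within_tolerance(y_true, y_pred, 0.025))
print("within 5 cm:  ", within_tolerance(y_true, y_pred, 0.05))
```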

    A dynamic two-dimensional (D2D) weight-based map-matching algorithm

    Existing map-matching (MM) algorithms primarily localize positioning fixes along the centerline of a road and have largely ignored road width as an input. Consequently, vehicle lane-level localization, which is essential for stringent Intelligent Transport System (ITS) applications, is difficult to accomplish, especially with positioning data from low-cost GPS sensors. This paper aims to address this limitation by developing a new dynamic two-dimensional (D2D) weight-based MM algorithm incorporating dynamic weight coefficients and road width. To enable vehicle lane-level localization, a road segment is virtually expressed as a matrix of homogeneous grids referenced to the road centerline. These grids are then used to map-match positioning fixes, as opposed to matching on a road centerline as in traditional MM algorithms. In the developed algorithm, vehicle location identification on a road segment is based on a total weight score, which is a function of four different weights: (i) proximity, (ii) kinematic, (iii) turn-intent prediction, and (iv) connectivity. Different parameters representing network complexity and positioning quality are used to assign the relative importance of the different weight scores by employing an adaptive regression method. To demonstrate the transferability of the developed algorithm, it was tested using 5,830 GPS positioning points collected in Nottingham, UK, and 7,414 GPS positioning points collected in Mumbai and Pune, India. The developed algorithm, using stand-alone GPS position fixes, identifies the correct links 96.1% (Nottingham data) and 98.4% (Mumbai-Pune data) of the time. In terms of correct lane identification, the algorithm provides accurate matching for 84% (Nottingham) and 79% (Mumbai-Pune) of the fixes obtained by stand-alone GPS. Using the same methodology adopted in this study, the accuracy of lane identification could be further enhanced if localization data from additional sensors (e.g. a gyroscope) are utilized. The ITS industry and vehicle manufacturers can implement this D2D map-matching algorithm for liability-critical and in-vehicle information systems and services such as advanced driver assistance systems (ADAS).
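    A hedged sketch of the total-weight-score idea: each candidate grid cell receives a weighted sum of the four weights, and the highest-scoring cell is taken as the match. The coefficients and per-cell values below are illustrative, not those produced by the paper's adaptive regression:

```python
# Illustrative scoring of candidate grid cells for one GPS fix.
# Weight values and coefficients are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class GridCell:
    cell_id: int
    proximity: float      # closeness of the GPS fix to the cell
    kinematic: float      # agreement with vehicle speed/heading
    turn_intent: float    # turn-intent prediction weight
    connectivity: float   # link connectivity weight

def total_score(cell, coeff):
    """Weighted sum of the four weights for one candidate cell."""
    return (coeff["proximity"] * cell.proximity
            + coeff["kinematic"] * cell.kinematic
            + coeff["turn_intent"] * cell.turn_intent
            + coeff["connectivity"] * cell.connectivity)

def match_fix(candidate_cells, coeff):
    """Pick the grid cell with the highest total weight score."""
    return max(candidate_cells, key=lambda c: total_score(c, coeff))

coeff = {"proximity": 0.4, "kinematic": 0.3,
         "turn_intent": 0.2, "connectivity": 0.1}
cells = [GridCell(1, 0.9, 0.7, 0.5, 0.8),
         GridCell(2, 0.6, 0.9, 0.8, 0.7)]
print("matched cell:", match_fix(cells, coeff).cell_id)
```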

    Analysing the effects of sensor fusion, maps and trust models on autonomous vehicle satellite navigation positioning

    This thesis analyzes the effects of maps, sensor fusion, and trust models on autonomous vehicle satellite positioning. The aim is to analyze the localization improvements that commonly used sensors, technologies, and techniques provide to autonomous vehicle positioning. The thesis includes both a survey of localization techniques used in other research, together with their reported localization accuracy, and experiments in which the effects of different technologies and techniques on lateral position accuracy are reviewed. The requirements for safe autonomous driving are strict, and while the performance of an average global navigation satellite system (GNSS) receiver alone may not be adequate for accurate positioning, it may still provide valuable position data to an autonomous vehicle. For the vehicle, this position data may provide valuable information about its absolute position on the globe, improve localization accuracy through sensor fusion, and act as an independent data source for sensor trust evaluation. Through empirical experimentation, the effects of sensor fusion and trust functions with an inertial measurement unit (IMU) on GNSS lateral position accuracy are measured and analyzed. The experiments include measurements from both consumer-grade devices mounted on a conventional automobile and high-end devices mounted on a truck that is capable of autonomous driving in a monitored environment. The maps and LIDAR measurements used in the experiments are prone to error, which is taken into account in the analysis of the data.
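    A minimal sketch, not the thesis code, of how a trust function might scale the influence of a GNSS fix when fusing it with an IMU-propagated lateral position; the gating heuristic and all values are assumptions:

```python
# Illustrative GNSS/IMU fusion with a trust weight. The trust model
# here simply discounts GNSS fixes that disagree strongly with the
# IMU prediction; real systems use richer error models.

def trust(innovation_m, gate_m=2.0):
    """Trust in [0, 1]: lower when the GNSS fix deviates far from
    the predicted position (innovation beyond the gate is ignored)."""
    return max(0.0, 1.0 - abs(innovation_m) / gate_m)

def fuse(predicted_m, gnss_m, base_gain=0.5):
    """Blend IMU prediction and GNSS fix, scaled by the trust value."""
    k = base_gain * trust(gnss_m - predicted_m)
    return predicted_m + k * (gnss_m - predicted_m)

# IMU dead-reckoning says 1.20 m lateral offset, GNSS reports 1.00 m;
# the fused estimate lies between the two.
print(fuse(predicted_m=1.20, gnss_m=1.00))
```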

    Visual Place Recognition in Changing Environments

    Localization is an essential capability of mobile robots, and place recognition is an important component of localization. Only with precise localization can robots reliably plan, navigate, and understand the environment around them. The main task of visual place recognition algorithms is to recognize, based on the visual input, whether the robot has previously seen a given place in the environment. Cameras are among the most popular sensors robots obtain information from. They are lightweight, affordable, and provide detailed descriptions of the environment in the form of images. Cameras have been shown to be useful for a vast variety of emerging applications, from virtual and augmented reality to autonomous cars or even fleets of autonomous cars, all of which need precise localization. Nowadays, state-of-the-art methods are able to reliably estimate the position of a robot using image streams. One of the big remaining challenges is the ability to localize a camera given an image stream in the presence of drastic visual appearance changes in the environment. Visual appearance changes may have a variety of causes: camera-related factors such as changes in exposure time; camera position-related factors, e.g. the scene being observed from a different position or viewing angle; occlusions; and factors that stem from natural sources, such as seasonal changes, different weather conditions, and illumination changes. These effects change the way the same place in the environment appears in the image and can lead to situations where it becomes hard even for humans to recognize the place. The performance of traditional visual localization approaches, such as FABMAP or DBow, also decreases dramatically in the presence of strong visual appearance changes.

    The techniques presented in this thesis aim at improving visual place recognition capabilities for robotic systems in the presence of dramatic visual appearance changes. To reduce the effect of visual changes on image matching performance, we exploit sequences of images rather than individual images. This becomes possible because robotic systems collect data sequentially and not in random order. We formulate the visual place recognition problem under strong appearance changes as a problem of matching image sequences collected by a robotic system at different points in time. A key insight here is that matching sequences reduces the ambiguities in the data associations. This allows us to establish image correspondences between different sequences and thus recognize whether two images represent the same place in the environment. To search for image correspondences, we construct a graph that encodes the potential matches between the sequences while preserving the sequentiality of the data. The shortest path through such a data association graph provides the valid image correspondences between the sequences.

    Robots operating reliably in an environment should be able to recognize a place in an online manner, not only after having recorded all data beforehand. As opposed to collecting image sequences and then determining the associations between the sequences offline, a real-world system should be able to make a decision for every incoming image. In this thesis, we therefore propose an algorithm that performs visual place recognition in changing environments in an online fashion between the query and the previously recorded reference sequences. For every incoming query image, our algorithm checks whether the robot is in the previously seen environment, i.e. whether there exists a matching image in the reference sequence, and whether the current measurement is consistent with previously obtained query images. Additionally, to recognize places in an online manner, a robot needs to recognize that it has left the previously mapped area and to relocalize when it re-enters the environment covered by the reference sequence. We thus relax the assumption that the robot always travels within the previously mapped area and propose an improved graph-based matching procedure that allows for visual place recognition in the case of partially overlapping image sequences.

    To achieve long-term autonomy, we further increase the robustness of our place recognition algorithm by incorporating information from multiple image sequences collected along different overlapping and non-overlapping routes. This allows us to grow the coverage of the environment in terms of area as well as scene appearances. The reference dataset then contains more images to match against, which increases the probability of finding a matching image and can lead to improved localization. To deploy a robot that performs localization in large-scale environments over extended periods of time, however, collecting a reference dataset may be a tedious, resource-consuming, and in some cases intractable task. Avoiding an explicit map collection stage fosters faster deployment of robotic systems in the real world, since no map has to be collected beforehand. With our visual place recognition approach the map collection stage can be skipped, as its general formulation allows us to incorporate information from a publicly available source, e.g. Google Street View, into our framework. This enables us to perform place recognition on already existing publicly available data and thus avoid a costly mapping phase. We additionally show how to organize the images from the publicly available source into sequences to perform out-of-the-box visual place recognition at city scale without previously collecting the otherwise required reference image sequences. All approaches described in this thesis have been published in peer-reviewed conference papers and journal articles, and most of the presented contributions have been released publicly as open source software.
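    A simplified sketch of sequence-based matching under assumptions: a cost matrix between query and reference image descriptors defines a graph, and a forward-moving minimum-cost path through it respects the sequential order of the data. This illustrates the idea, not the thesis' exact graph construction:

```python
# Toy sequence matching: cosine distances between image descriptors
# form a cost matrix; a dynamic-programming shortest path that only
# moves forward in both sequences preserves sequentiality.
import numpy as np

def cost_matrix(query_desc, ref_desc):
    """Pairwise cosine distance between query and reference descriptors."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    r = ref_desc / np.linalg.norm(ref_desc, axis=1, keepdims=True)
    return 1.0 - q @ r.T

def min_cost_path(C):
    """Minimum-cost path from (0, 0) to (-1, -1), stepping right,
    down, or diagonally (forward in both sequences)."""
    n, m = C.shape
    D = np.full((n, m), np.inf)
    D[0, 0] = C[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(D[i - 1, j] if i > 0 else np.inf,
                            D[i, j - 1] if j > 0 else np.inf,
                            D[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            D[i, j] = C[i, j] + best_prev
    return D[-1, -1]

# Random 8-D descriptors stand in for real image features.
rng = np.random.default_rng(0)
C = cost_matrix(rng.normal(size=(5, 8)), rng.normal(size=(6, 8)))
print("path cost:", min_cost_path(C))
```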

    Smartphone-based vehicle telematics: a ten-year anniversary

    Just as it has irrevocably reshaped social life, the fast growth of smartphone ownership is now beginning to revolutionize the driving experience and change how we think about automotive insurance, vehicle safety systems, and traffic research. This paper summarizes the first ten years of research in smartphone-based vehicle telematics, with a focus on user-friendly implementations and the challenges that arise due to the mobility of the smartphone. Notable academic and industrial projects are reviewed, and system aspects related to sensors, energy consumption, and human-machine interfaces are examined. Moreover, we highlight the differences between traditional and smartphone-based automotive navigation, and survey the state of the art in smartphone-based transportation mode classification, vehicular ad hoc networks, cloud computing, driver classification, and road condition monitoring. Future advances are expected to be driven by improvements in sensor technology, evidence of the societal benefits of current implementations, and the establishment of industry standards for sensor fusion and driver assessment.

    Camera localization using trajectories and maps

    We propose a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera using observations of moving objects and a schematic map of the passable areas of the environment. Our approach takes advantage of static and dynamic information on the scene structure through prior probability distributions for object dynamics. The proposed approach restricts the plausible positions where the sensor can be located while taking into account the inherent ambiguity of the given setting. The framework samples from the posterior probability distribution for the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate while explicitly isolating ambiguous settings. The proposed approach is evaluated in synthetic and real environments, showing satisfactory performance in both ambiguous and unambiguous settings.
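    A minimal Metropolis-Hastings sketch for sampling a camera pose from a posterior; the log-posterior below is a placeholder, not the paper's trajectory-and-map likelihood, and all constants are assumptions:

```python
# Toy MCMC over a camera pose (x, y, heading). A real implementation
# would score how well observed object trajectories project into the
# passable areas of the map from each candidate pose.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(pose):
    """Placeholder posterior peaked around an assumed 'true' pose."""
    true_pose = np.array([10.0, 5.0, 0.3])
    scale = np.array([1.0, 1.0, 0.1])
    return -0.5 * np.sum(((pose - true_pose) / scale) ** 2)

def sample(n_iter=5000):
    step = np.array([0.5, 0.5, 0.05])   # proposal step sizes (assumed)
    pose = np.array([0.0, 0.0, 0.0])
    samples = []
    for _ in range(n_iter):
        proposal = pose + step * rng.normal(size=3)
        # Accept with the standard Metropolis-Hastings ratio.
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(pose):
            pose = proposal
        samples.append(pose.copy())
    return np.array(samples)

print("posterior mean pose:", sample().mean(axis=0))
```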