    Improving SLAM with Drift Integration

    Localization without prior knowledge can be a difficult task for a vehicle. One answer to this problem lies in the Simultaneous Localization And Mapping (SLAM) approach, in which a map of the surroundings is built while simultaneously being used for localization. However, SLAM algorithms tend to drift over time, making the localization inconsistent. In this paper, we propose to model the drift as a localization bias and to integrate it into a general architecture. The latter allows any feature-based SLAM algorithm to be used while taking advantage of the drift integration. Building on previous work, we extend the bias concept and propose a new architecture that drastically improves the performance of our method, both in terms of the computational power and the memory required. We validate this framework on real data under different scenarios. We show that taking the drift into account allows us to maintain consistency and improve localization accuracy at almost no additional cost.
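
    As an illustration of the drift-as-bias idea described above, here is a minimal sketch (not the paper's implementation): a planar pose estimate is augmented with a bias state that absorbs the accumulated SLAM drift. The class name, state layout, and noise values are assumptions made for the example.

```python
import numpy as np

class BiasedLocalizer:
    """Minimal sketch: SLAM drift modeled as a bias appended to the state."""

    def __init__(self):
        self.x = np.zeros(3)          # SLAM pose estimate (x, y, yaw)
        self.b = np.zeros(3)          # drift modeled as a bias on that pose
        self.P = np.eye(6) * 0.01     # joint covariance of [pose, bias]

    def predict(self, q_pose=0.02, q_bias=0.001):
        # The bias follows a random walk: it is allowed to grow slowly,
        # capturing the accumulation of SLAM drift over time (values assumed).
        self.P += np.diag([q_pose] * 3 + [q_bias] * 3)

    def correct_with_absolute_fix(self, z, R):
        # An absolute measurement (e.g., a geo-referenced landmark) observes
        # the true pose = SLAM pose - bias, so H exposes both state blocks.
        H = np.hstack([np.eye(3), -np.eye(3)])
        y = z - (self.x - self.b)
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        dx = K @ y
        self.x += dx[:3]
        self.b += dx[3:]
        self.P = (np.eye(6) - K @ H) @ self.P

    def consistent_pose(self):
        # Debiased pose: the SLAM estimate with the inferred drift removed.
        return self.x - self.b
```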

    Robust Self-Tuning Data Association for Geo-Referencing Using Lane Markings

    Localization in maps built from aerial imagery offers many advantages, such as global consistency, geo-referencing, and the availability of publicly accessible data. However, the landmarks that can be observed both in aerial imagery and by on-board sensors are limited, which leads to ambiguities or aliasing during data association. Building upon a highly informative representation (one that allows efficient data association), this paper presents a complete pipeline for resolving these ambiguities. Its core is a robust self-tuning data association that adapts the search area depending on a pseudo-entropy of the measurements. Additionally, to smooth the final result, we adjust the information matrix for the associated data as a function of the relative transform produced by the data association process. We evaluate our method on real data from urban and rural scenarios around the city of Karlsruhe, Germany. We compare state-of-the-art outlier mitigation methods with our self-tuning approach, demonstrating a considerable improvement, especially in outer-urban scenarios. This work was supported by the Regional Valencian Community Government and the European Regional Development Fund (ERDF) through the project PROMETEO/2021/075 and under grants ACIF/2019/088 and BEFPI/2021/069.
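
    The abstract's core idea, a search area that self-tunes based on a pseudo-entropy of the measurements, could be sketched as follows. The entropy definition, radii, and gain are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def pseudo_entropy(weights):
    """Shannon entropy of normalized association weights (assumed definition)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    w = w[w > 0]
    return float(-(w * np.log(w)).sum())

def adapt_search_radius(radius, weights, r_min=0.5, r_max=10.0, gain=0.5):
    # Normalize entropy by its maximum (uniform weights) so the update is
    # scale-free in the number of candidate associations.
    n = len(weights)
    if n < 2:
        return radius
    h = pseudo_entropy(weights) / np.log(n)
    # High ambiguity -> widen the search area; low ambiguity -> tighten it.
    radius *= 1.0 + gain * (2.0 * h - 1.0)
    return float(np.clip(radius, r_min, r_max))

# Usage: one dominant candidate shrinks the radius, near-ties grow it.
print(adapt_search_radius(2.0, [0.9, 0.05, 0.05]))  # < 2.0
print(adapt_search_radius(2.0, [0.4, 0.3, 0.3]))    # > 2.0
```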

    CES-515 Towards Localization and Mapping of Autonomous Underwater Vehicles: A Survey

    Autonomous Underwater Vehicles (AUVs) have been used for a wide range of commercial, military, and research tasks, and the fundamental functions underpinning a successful AUV are its localization and mapping abilities. This report reviews the relevant elements of localization and mapping for AUVs. First, a brief introduction to the concept and historical development of AUVs is given; then a relatively detailed description of the sensor systems used for AUV navigation is provided. As the main part of the report, a comprehensive investigation of simultaneous localization and mapping (SLAM) for AUVs is conducted, including application examples. Finally, a brief conclusion is given.

    Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization

    In this paper we propose a novel semantic localization algorithm that exploits multiple sensors and achieves precision on the order of a few centimeters. Our approach does not require detailed knowledge about the appearance of the world, and our maps require orders of magnitude less storage than the maps used by traditional geometry- and LiDAR-intensity-based localizers. This is important, as self-driving cars need to operate in large environments. Towards this goal, we formulate the problem in a Bayesian filtering framework and exploit lanes, traffic signs, and vehicle dynamics to localize robustly with respect to a sparse semantic map. We validate the effectiveness of our method on a new highway dataset consisting of 312 km of roads. Our experiments show that the proposed approach achieves 0.05 m lateral and 1.12 m longitudinal accuracy on average while taking up only 0.3% of the storage required by previous LiDAR-intensity-based approaches. Comment: 8 pages, 4 figures, 4 tables; 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019).
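
    To make the Bayesian-filtering formulation concrete, here is a hedged particle-filter measurement update in the same spirit: lane observations tightly constrain the lateral direction, while sparse sign detections constrain the longitudinal one. The map interface (the two callables) and the noise values are assumed for the sketch, not taken from the paper.

```python
import numpy as np

def update_weights(particles, weights, lane_offset_meas, sign_range_meas,
                   lane_offset_fn, sign_range_fn,
                   sigma_lane=0.05, sigma_sign=1.0):
    """particles: (N, 3) array of (x, y, yaw) pose hypotheses.
    lane_offset_fn / sign_range_fn: assumed map queries that predict the
    lateral offset to the lane boundary and the range to a mapped sign."""
    for i, p in enumerate(particles):
        # Lateral evidence: predicted vs. measured offset to the lane marking.
        err_lane = lane_offset_meas - lane_offset_fn(p)
        # Longitudinal evidence: predicted vs. measured range to the sign.
        err_sign = sign_range_meas - sign_range_fn(p)
        weights[i] *= (np.exp(-0.5 * (err_lane / sigma_lane) ** 2)
                       * np.exp(-0.5 * (err_sign / sigma_sign) ** 2))
    weights /= weights.sum()
    return weights
```

    The asymmetric sigmas mirror the reported accuracies: lane markings pin down the lateral axis at centimeter level, whereas longitudinal correction relies on the sparser sign observations.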

    Vision-based navigation with reality-based 3D maps

    This research is focused on developing a vision-based navigation system for positioning and navigation in GPS-degraded environments. The main research contributions are summarized as follows:
    a. A new concept of a 3D map, consisting mainly of geo-referenced images, has been introduced. In this research it provides the map-matching function for vision-based positioning.
    b. A method of vision-based positioning using photogrammetric methodologies has been proposed. It obtains geometric information about the navigation environment from the 3D map through SIFT-based image matching and uses photogrammetric space resection to solve for the position in 6 degrees of freedom (see the sketch after this abstract). The algorithms have been tested in an indoor environment, reaching an accuracy of around 10 cm.
    c. A multi-level outlier detection scheme for the vision-based navigation system has been developed. It combines RANSAC with data snooping: the former deals with a high percentage of mismatches, while data snooping removes outliers from different sources in the least-squares adjustment for both the 3D mapping and positioning solutions.
    d. A deficiency of using RANSAC for outlier detection in image matching and homography estimation has been identified. A novel method combining cross-correlation with feature-based image matching has been proposed; it evaluates the RANSAC homography estimate and improves image-matching performance. The method has been successfully applied in the vision-based navigation solution to find the corresponding view in the database and improve the final positioning accuracy.
    e. The positioning performance of the system has been evaluated through analysis of the mathematical model and through experiments, focusing on various image-matching conditions and methods and their impact on system performance. The strengths and weaknesses of the system have been revealed and investigated.
    f. The vision-based navigation system has been extended from indoor to outdoor use with corresponding changes. Besides the camera, it also takes advantage of multiple built-in sensors, including a GPS receiver and a digital compass, to assist the visual methods in outdoor environments. Experiments demonstrate that such a system can largely improve position accuracy in areas where stand-alone GPS is degraded, and that it can easily be adopted on mobile devices.
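
    A compact sketch of the positioning step from contributions b and c, under stated assumptions: SIFT matching of the query image against the geo-referenced 3D map, followed by pose recovery. OpenCV's PnP + RANSAC stands in here for the thesis's photogrammetric space resection and least-squares adjustment with data snooping; the function and argument names are invented for the example.

```python
import cv2
import numpy as np

def localize(query_img, map_desc, map_points_3d, K):
    """map_desc: SIFT descriptors of map features; map_points_3d: their
    geo-referenced 3D coordinates (row-aligned); K: camera intrinsics."""
    sift = cv2.SIFT_create()
    kp, desc = sift.detectAndCompute(query_img, None)

    # Ratio-test matching against the 3D map's descriptor database.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(desc, map_desc, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]

    obj = np.float32([map_points_3d[m.trainIdx] for m in good])
    img = np.float32([kp[m.queryIdx].pt for m in good])

    # RANSAC rejects gross mismatches; in the thesis, remaining outliers
    # are removed by data snooping in the least-squares adjustment.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None
```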

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005. This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated, pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of the marine environment (e.g., unstructured terrain, low-overlap imagery, a moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting from the information matrix consistent marginal covariances useful for data association. In summary, this thesis advances the state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, collected from a survey of the RMS Titanic (a path length of over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception. This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
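
    The following toy example (2-D, linearized, not the thesis's actual formulation) illustrates why the view-based information form stays exactly sparse: a relative-pose constraint between views i and j touches only the four corresponding blocks of the information matrix, so the fill pattern mirrors the graph of view constraints.

```python
import numpy as np
from scipy.sparse import lil_matrix

def add_relative_pose_constraint(Lam, eta, i, j, delta, Omega):
    """Lam: sparse information matrix, eta: information vector.
    delta: measured pose j minus pose i (d-dim), Omega: d x d precision."""
    d = len(delta)
    si, sj = i * d, j * d
    # Measurement z = x_j - x_i, so the Jacobian has blocks [-I, +I] and
    # J^T Omega J fills only the (i,i), (j,j), (i,j), (j,i) blocks.
    Lam[si:si+d, si:si+d] += Omega
    Lam[sj:sj+d, sj:sj+d] += Omega
    Lam[si:si+d, sj:sj+d] -= Omega
    Lam[sj:sj+d, si:si+d] -= Omega
    eta[si:si+d] -= Omega @ delta
    eta[sj:sj+d] += Omega @ delta

# Usage: n views with 3-dof poses; only constrained blocks are ever filled,
# so the matrix stays sparse without any pruning approximation.
n, d = 100, 3
Lam, eta = lil_matrix((n * d, n * d)), np.zeros(n * d)
add_relative_pose_constraint(Lam, eta, 4, 5,
                             np.array([1.0, 0.0, 0.1]), np.eye(3) * 10.0)
```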

    Automated Map Reading: Image Based Localisation in 2-D Maps Using Binary Semantic Descriptors

    We describe a novel approach to image-based localisation in urban environments using semantic matching between images and a 2-D map. It contrasts with the vast majority of existing approaches, which use image-to-image database matching. We use highly compact binary descriptors to represent semantic features at locations, significantly increasing scalability compared with existing methods and offering the potential for greater invariance to variable imaging conditions. The approach is also more akin to human map reading, making it better suited to human-system interaction. The binary descriptors indicate the presence or absence of semantic features relating to buildings and road junctions in discrete viewing directions. We use CNN classifiers to detect the features in images and match descriptor estimates with a database of location-tagged descriptors derived from the 2-D map. In isolation the descriptors are not sufficiently discriminative, but when concatenated sequentially along a route, their combination becomes highly distinctive and allows localisation even with imperfect classifiers. Performance is further improved by taking into account left and right turns over a route. Experimental results obtained using Google StreetView and OpenStreetMap data show that the approach has considerable potential, achieving localisation accuracy of around 85% on routes of approximately 200 meters. Comment: 8 pages, submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018.
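
    A minimal sketch of the route-matching idea, with an assumed descriptor layout: per-location binary descriptors are individually weak, but concatenating them along a route yields a joint descriptor that can be matched by Hamming distance even when some bits come from classifier errors.

```python
import numpy as np

def route_descriptor(location_descs):
    """Concatenate per-location binary descriptors along a route."""
    return np.concatenate(location_descs)

def best_route_match(query_route, db_routes):
    # Hamming distance between bit vectors; small per-location errors
    # barely move the joint distance, so matching tolerates imperfect
    # classifiers, as the abstract notes.
    dists = [np.count_nonzero(query_route != r) for r in db_routes]
    return int(np.argmin(dists)), min(dists)

# Usage (assumed layout): 4-bit descriptors (e.g., building/junction in four
# viewing directions) over a 5-location route, against a small map database.
rng = np.random.default_rng(0)
db = [rng.integers(0, 2, size=20, dtype=np.uint8) for _ in range(100)]
noisy = db[42].copy()
noisy[[3, 11]] ^= 1                       # simulate two classifier errors
idx, dist = best_route_match(noisy, db)   # -> index 42 at distance 2
```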