
    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    Many generic position-estimation algorithms are vulnerable to ambiguity introduced by non-unique landmarks. In addition, the available high-dimensional image data is not fully exploited when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating a list of candidate position estimates through triangulation. Reconstruction and comparison then rank the candidates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data show a marked improvement in accuracy compared with the established random sample consensus (RANSAC) method. LTRC is also robust against inaccurate map data.
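    To make the triangulation and reconstruction-and-comparison steps concrete, the Python sketch below triangulates a position hypothesis from a pair of known landmarks and ranks hypotheses by the bearing residuals of the remaining landmarks. The global-frame bearing assumption, the function names, and the error metric are illustrative choices, not the paper's actual LTRC implementation.

```python
# Illustrative sketch of the triangulation + reconstruction/comparison idea behind
# LTRC-style global localization. All names, the global-frame bearing assumption,
# and the residual metric are hypothetical and may differ from the paper.
import itertools
import math

import numpy as np


def triangulate(p1, p2, bearing1, bearing2):
    """Intersect the two rays cast from landmarks p1, p2 back toward the robot.

    bearing1/bearing2 are global-frame bearings (radians) from the robot to each
    landmark, so the robot lies along direction (bearing + pi) from each landmark.
    Returns None for near-parallel rays (no unique fix).
    """
    d1 = np.array([math.cos(bearing1 + math.pi), math.sin(bearing1 + math.pi)])
    d2 = np.array([math.cos(bearing2 + math.pi), math.sin(bearing2 + math.pi)])
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * d1


def rank_candidates(landmarks, bearings):
    """Generate position hypotheses from landmark pairs, then rank them by how
    well the *remaining* bearings are reconstructed from each hypothesis."""
    scored = []
    for i, j in itertools.combinations(range(len(landmarks)), 2):
        pos = triangulate(landmarks[i], landmarks[j], bearings[i], bearings[j])
        if pos is None:
            continue
        # Reconstruction/comparison: wrap predicted-minus-observed bearing residuals.
        err = 0.0
        for k, (lm, b) in enumerate(zip(landmarks, bearings)):
            if k in (i, j):
                continue
            pred = math.atan2(lm[1] - pos[1], lm[0] - pos[0])
            err += abs(math.atan2(math.sin(pred - b), math.cos(pred - b)))
        scored.append((err, pos))
    scored.sort(key=lambda s: s[0])
    return scored  # lowest-residual hypothesis first
```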

    Information-based view initialization in visual SLAM with a single omnidirectional camera

    This paper presents a novel mechanism for initializing new views within the map-building process of an EKF-based visual SLAM (Simultaneous Localization and Mapping) approach using omnidirectional images. In the presence of non-linearities, the EKF is likely to compromise the final estimate. In particular, the omnidirectional observation model induces non-linear errors and thus becomes a potential source of uncertainty. To deal with this issue, we propose a novel view-initialization mechanism that accounts for information gains and losses more efficiently. The main outcome of this contribution is a reduction of the map uncertainty and, consequently, a more consistent final estimate. The approach relies on a Gaussian Process to infer an information distribution model from sensor data. This model represents the existence probabilities of feature points, and analysis of their information content leads to the proposed view-initialization scheme. To demonstrate the suitability and effectiveness of the approach, we present a series of real-data experiments conducted with a robot equipped with a camera sensor and a map model based solely on omnidirectional views. The results reveal a beneficial reduction not only in the uncertainty but also in the error of the pose and map estimates.
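    The following Python sketch shows one way a Gaussian Process could drive such a view-initialization decision: fit a GP over past view bearings and their observed feature information, then trigger a new view where the predicted information is low or uncertain. The choice of inputs, the information score, and the threshold rule are assumptions made for illustration, not the paper's exact model.

```python
# Minimal sketch of a GP-based "where to initialize the next view" decision.
# The inputs (view bearing angles), the score (feature counts), and the gain
# threshold are illustrative assumptions, not the paper's formulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def should_init_view(view_angles, feature_counts, candidate_angle,
                     gain_threshold=0.5):
    """Fit a GP over past view bearings -> information scores and decide whether
    a candidate bearing is informative enough to initialize a new view."""
    X = np.asarray(view_angles, dtype=float).reshape(-1, 1)
    y = np.asarray(feature_counts, dtype=float)
    y = (y - y.mean()) / (y.std() + 1e-9)          # normalise the information score
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2))
    gp.fit(X, y)
    mean, std = gp.predict(np.array([[candidate_angle]]), return_std=True)
    # Crude information-gain proxy: high predictive uncertainty and/or a low
    # predicted score means the map knows little about this region -> add a view.
    gain = std[0] - mean[0]
    return gain > gain_threshold
```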

    Improving Omnidirectional Camera-Based Robot Localization Through Self-Supervised Learning

    Autonomous agents in any environment require accurate and reliable position and motion estimation to complete their required tasks. Many different sensor modalities have been used for this purpose, including GPS, ultra-wideband (UWB), visual simultaneous localization and mapping (SLAM), and light detection and ranging (LiDAR) SLAM. Many traditional positioning systems do not take advantage of recent advances in machine learning. In this work, an omnidirectional-camera position-estimation system relying primarily on a learned model is presented. The positioning system benefits from the wide field of view provided by an omnidirectional camera. Recent developments in self-supervised learning for generating useful features from unlabeled data are also assessed, and a novel radial patch pretext task for omnidirectional images is presented. The resulting implementation is a robot localization and tracking algorithm that can be adapted to a variety of environments, such as warehouses and college campuses. Further experiments with additional sensor types, including 3D LiDAR, 60 GHz wireless, and ultra-wideband localization systems using machine learning, are also explored. A fused learned localization model combining multiple sensor modalities is evaluated against the individual sensor models.
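    A radial patch pretext task can be sketched as follows: cut patches along rays from the centre of an omnidirectional image and ask a model to predict which angular sector each patch came from. The patch geometry, sector count, and classification target below are illustrative assumptions rather than the thesis's exact formulation.

```python
# Hedged sketch of a "radial patch" pretext task for omnidirectional images.
# Geometry and labels are illustrative assumptions, not the thesis's exact task.
import numpy as np


def radial_patch(image, angle, r_min, r_max, samples=32, width=9):
    """Sample a (samples x width) patch along the ray at `angle` (radians) from
    the centre of a fisheye-style omnidirectional image (numpy array, H x W[x C])."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(r_min, r_max, samples)
    # Offsets perpendicular to the ray give the patch its width.
    offsets = np.arange(width) - (width - 1) / 2.0
    ys = cy + radii[:, None] * np.sin(angle) + offsets[None, :] * np.cos(angle)
    xs = cx + radii[:, None] * np.cos(angle) - offsets[None, :] * np.sin(angle)
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    return image[ys, xs]


def pretext_batch(image, n_sectors=8, r_min=20, r_max=120):
    """Build (patch, sector-label) pairs: the self-supervised target is simply
    which angular sector the patch was cut from."""
    patches, labels = [], []
    for k in range(n_sectors):
        angle = 2 * np.pi * (k + np.random.rand()) / n_sectors
        patches.append(radial_patch(image, angle, r_min, r_max))
        labels.append(k)
    return np.stack(patches), np.array(labels)
```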