    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Moreover, when these techniques are extended to vision-based localization, the available high-dimensional image data is not fully exploited. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating a list of candidate position estimates through triangulation. Reconstruction and comparison then rank the candidates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data show a marked improvement in accuracy over the established random sample consensus (RANSAC) method. LTRC is also robust against inaccurate map data.
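
    The triangulation stage admits a compact illustration. The sketch below estimates a planar position from absolute bearings to two matched landmarks; it assumes the robot's heading is known and the map is 2-D, and the helper name `triangulate_position` is illustrative. The actual LTRC pipeline additionally handles ambiguous matches and ranks the candidate estimates by reconstruction and comparison.

```python
import numpy as np

def triangulate_position(l1, l2, theta1, theta2):
    """Estimate a planar robot position from absolute bearings to two
    landmarks at known map positions l1 and l2 (illustrative helper,
    not the paper's implementation).

    theta1, theta2 -- bearings (radians) from the robot to each landmark,
                      expressed in the map frame (heading assumed known).
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # The position p satisfies p + r_i * d_i = l_i for unknown ranges r_i;
    # eliminating p leaves a 2x2 linear system in (r1, r2).
    A = np.column_stack((d1, -d2))
    r1, _ = np.linalg.solve(A, np.asarray(l1, float) - np.asarray(l2, float))
    return np.asarray(l1, float) - r1 * d1

# Landmarks at (0, 0) and (4, 0), robot actually at (2, 2):
print(triangulate_position((0, 0), (4, 0), np.arctan2(-2, -2), np.arctan2(-2, 2)))
```

    With three or more landmarks the same construction yields several pairwise estimates, which is where a ranking stage such as LTRC's reconstruction and comparison becomes necessary.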

    Precise Bearing Angle Measurement Based on Omnidirectional Conic Sensor and Defocusing

    Recent studies on multi-robot localization have shown that the uncertainty of robot location can be considerably reduced by optimally fusing odometry with the relative angles of sight (bearings) among the team members. This, however, requires that each robot be able to detect the other members at large distances and over a wide field of view; robustness and precision in estimating the relative angle of sight are also of high importance. In this paper we show how all of these requirements can be met by employing an omnidirectional sensor made up of a conic mirror and a simple webcam. We use differently colored lights to distinguish the robots and optical defocusing to identify the lights. We show that defocusing increases the detection range to several meters, compensating for the loss of resolution inherent in the omnidirectional view, without sacrificing robustness or precision. To allow a real-time implementation of light tracking, we use a recent tree-based union-find technique for color segmentation and region merging. We also present a self-calibration technique based on an Extended Kalman Filter to derive the intrinsic parameters of the robot-sensor system. The performance of the approach is demonstrated through experimental results.
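
    The tree-based union-find used for color segmentation and region merging can be sketched as a disjoint-set forest with path compression and union by size. The pixel-scan loop and the `same_color` predicate below are assumptions for illustration, not the authors' exact implementation.

```python
class UnionFind:
    """Disjoint-set forest with path halving and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra           # attach smaller tree under larger
        self.size[ra] += self.size[rb]

def merge_regions(labels, h, w, same_color):
    """Merge 4-connected pixels whose color classes match.
    labels: row-major list of per-pixel color classes, length h * w.
    same_color: predicate on two color classes (illustrative assumption)."""
    uf = UnionFind(h * w)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w and same_color(labels[i], labels[i + 1]):
                uf.union(i, i + 1)
            if y + 1 < h and same_color(labels[i], labels[i + w]):
                uf.union(i, i + w)
    return [uf.find(i) for i in range(h * w)]  # region id per pixel
```

    Once a light's region is isolated, its image centroid gives the bearing almost directly: for a conic mirror whose axis is vertical, the azimuth of a scene point maps to the polar angle of its image about the mirror center.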

    Structure from motion using omni-directional vision and certainty grids

    This thesis describes a method to create local maps from an omni-directional vision system (ODVS) mounted on a mobile robot. Range finding is performed by a structure-from-motion method, which recovers the three-dimensional positions of objects in the environment from omni-directional images. Map-making is then accomplished using certainty grids to fuse information from multiple readings into a two-dimensional world model. The system is demonstrated both on noise-free data from a custom-built simulator and on real data from an omni-directional vision system on board a mobile robot. Finally, to account for the particular error characteristics of a real omni-directional vision sensor, a new sensor model for the certainty grid framework is created and compared to the traditional sonar sensor model.
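
    The certainty-grid fusion step can be sketched per cell in log-odds form. The inverse sensor model below (the L_OCC / L_FREE constants and the straight-line ray traversal) is a generic placeholder; the thesis develops a sensor model tailored to the ODVS error characteristics.

```python
import math

# Assumed generic inverse sensor model: odds applied to a cell observed
# as occupied or as free (placeholder values, not from the thesis).
L_OCC = math.log(0.7 / 0.3)
L_FREE = math.log(0.3 / 0.7)

def update_ray(grid, x0, y0, x1, y1):
    """Fuse one range reading into the certainty grid: cells along the
    ray count as evidence of free space, the endpoint as evidence of an
    obstacle. grid maps (x, y) cells to accumulated log-odds."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for k in range(steps + 1):
        cell = (round(x0 + (x1 - x0) * k / steps),
                round(y0 + (y1 - y0) * k / steps))
        hit = (k == steps)
        grid[cell] = grid.get(cell, 0.0) + (L_OCC if hit else L_FREE)

def occupancy(grid, cell):
    """Posterior occupancy probability of a cell (0.5 if never observed)."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid.get(cell, 0.0)))
```

    Because each reading only adds to a cell's log-odds, evidence from multiple viewpoints fuses in any order, which is what lets the structure-from-motion ranges accumulate into a stable two-dimensional world model.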

    Sound Localization for Robot Navigation

    Système de stéréovision panoramique (Panoramic Stereovision System)

    Get PDF
    To obtain a complete reconstruction of its environment, a mobile robot needs a global view. Panoramic vision, unlike conventional vision, fully addresses this need. This paper gives a general overview of the optical sensors used to date for panoramic vision, together with our own solution to the problem. We present a new passive panoramic stereovision system based on the rotation of two CCD line sensors. The objective is to recover the geometry of the observed scenes using matching techniques that exploit the simplifications introduced by the system architecture to match primitives, while respecting a real-time constraint.
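
    Once a primitive has been matched between the two panoramas, range recovery reduces to planar triangulation from the angular disparity. The sketch below assumes two viewpoints separated by a known baseline, with azimuths measured from the baseline direction; the epipolar geometry of the actual rotating-CCD system is simplified away here.

```python
import math

def range_from_disparity(baseline, theta1, theta2):
    """Triangulate a matched primitive seen from two panoramic viewpoints
    a known `baseline` apart along the x-axis (illustrative helper).

    theta1, theta2 -- azimuths (radians) of the primitive at the first and
    second viewpoints, measured from the baseline direction; assumes the
    point lies on one side of the baseline (0 < theta1 < theta2 < pi).
    """
    disparity = theta2 - theta1  # angular disparity between the two views
    if abs(math.sin(disparity)) < 1e-9:
        raise ValueError("rays nearly parallel: depth unobservable")
    # Law of sines in the triangle (viewpoint 1, viewpoint 2, point).
    r1 = baseline * math.sin(theta2) / math.sin(disparity)
    return r1, (r1 * math.cos(theta1), r1 * math.sin(theta1))

# Baseline 1 m, point at (0.5, 1.0) relative to the first viewpoint:
print(range_from_disparity(1.0, math.atan2(1.0, 0.5), math.atan2(1.0, -0.5)))
```

    As with any stereo rig, range precision degrades as the angular disparity shrinks, so distant primitives benefit most from a wide effective baseline.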