204 research outputs found

    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
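
    The image-matching step underlying such a visual compass can be sketched compactly. The following Python sketch is ours, not the authors' implementation: it assumes grayscale panoramic images in which a pure yaw rotation appears as a circular horizontal pixel shift, and the function name visual_compass is illustrative.

    import numpy as np

    def visual_compass(reference, current):
        """Heading change (degrees) between two grayscale panoramas."""
        h, w = reference.shape
        errors = np.empty(w)
        for shift in range(w):
            # A circular column shift of a panorama corresponds to a yaw rotation.
            rotated = np.roll(current, shift, axis=1)
            errors[shift] = np.mean((reference.astype(float) - rotated) ** 2)
        angle = int(np.argmin(errors)) * 360.0 / w
        # Map into (-180, 180] so small counter-rotations come out negative.
        return angle - 360.0 if angle > 180.0 else angle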

    Appearance-based heading estimation: The visual compass


    Combined visual odometry and visual compass for off-road mobile robots localization

    In this paper, we present the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching: the robot displacement is estimated through a matching process between two consecutive images. Standard visual odometry has been improved with a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed: one camera points at the ground under the robot, and the other looks at the surrounding environment. Comparisons with popular localization approaches, through physical experiments in off-road conditions, have shown the satisfactory behavior of the proposed strategy.
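
    The template-matching displacement estimate described above can be illustrated with a short phase-correlation sketch. This is our illustration, not the paper's code: estimate_shift is a hypothetical name, and the camera calibration and ground-plane scaling needed to convert pixels to metres are omitted.

    import numpy as np

    def estimate_shift(prev, curr):
        """(dy, dx) integer pixel shift such that curr is roughly prev moved by (dy, dx)."""
        f_prev = np.fft.fft2(prev.astype(float))
        f_curr = np.fft.fft2(curr.astype(float))
        cross = np.conj(f_prev) * f_curr
        cross /= np.abs(cross) + 1e-9          # keep only the phase
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = prev.shape
        # Wrap into a signed range so backward motion comes out negative.
        if dy > h // 2:
            dy -= h
        if dx > w // 2:
            dx -= w
        return int(dy), int(dx)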

    Uncalibrated Visual Compass from Omnidirectional Line Images with Application to Attitude MAV Estimation

    This paper presents a new algorithm, based on previous results of the authors, for estimating the yaw angle of a robot equipped with an omnidirectional camera undergoing a 6-DoF rigid motion. Our real-time algorithm is uncalibrated, robust to noisy data, and relies only on the projection of 3-D parallel lines as image features. Numerical and real-world experiments conducted with an eye-in-hand robot manipulator, used to simulate the 3-D motion of a micro unmanned aerial vehicle (MAV), show the accuracy and reliability of our estimation algorithm.
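
    The geometry behind line-based yaw estimation can be made concrete with a generic calibrated vanishing-point construction; note that this is not the paper's uncalibrated algorithm, and the names vanishing_point, yaw_change, and the calibration matrix K are our assumptions. Image lines that project a bundle of parallel 3-D lines all pass through one vanishing point v, so l.v = 0 for every homogeneous line l.

    import numpy as np

    def vanishing_point(lines):
        """Least-squares vanishing point of homogeneous image lines (N x 3)."""
        # v is the right singular vector for the smallest singular value.
        _, _, vt = np.linalg.svd(lines)
        return vt[-1]

    def yaw_change(lines_t0, lines_t1, K):
        """Yaw increment (radians) from the rotation of the vanishing direction."""
        d0 = np.linalg.inv(K) @ vanishing_point(lines_t0)  # back-project to a ray
        d1 = np.linalg.inv(K) @ vanishing_point(lines_t1)
        # Compare the headings of both rays in the horizontal (x-z) plane.
        return np.arctan2(d1[0], d1[2]) - np.arctan2(d0[0], d0[2])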

    Humanoid robot navigation: getting localization information from vision

    In this article, we present our work to provide a navigation and localization system on a constrained humanoid platform, the NAO robot, without modifying the robot's sensors. First, we implement a simple, lightweight version of classical monocular Simultaneous Localization and Mapping (SLAM) algorithms, adapted to the CPU and camera quality, which for the moment proves insufficient on this platform. From our work on keypoint tracking, we identify that some keypoints can still be tracked accurately at little cost, and use them to build a visual compass. This compass is then used to correct the robot's walk, since it makes accurate control of the robot's orientation possible.
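
    The keypoint-based compass the abstract mentions can be sketched as follows; this is our approximation, not the NAO implementation. Assuming an approximately pure rotation between frames and a pinhole camera with horizontal focal length fx in pixels, the median horizontal displacement of tracked keypoints maps to a yaw increment (the tracker itself is elided).

    import numpy as np

    def yaw_from_tracks(pts_prev, pts_curr, fx):
        """Yaw increment (radians) from matched keypoints (two N x 2 pixel arrays)."""
        dx = pts_curr[:, 0] - pts_prev[:, 0]
        # The median is robust to the few tracks that drift onto moving objects.
        return float(np.arctan2(np.median(dx), fx))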

    Using deep autoencoders to investigate image matching in visual navigation

    This paper discusses the use of deep autoencoder networks to find a compressed representation of an image, which can be used for visual navigation. Images reconstructed from the compressed representation are tested to see if they retain enough information to be used as a visual compass (in which an image is matched with another to recall a bearing/movement direction), as this ability is at the heart of a visual route navigation algorithm. We show that both reconstructed images and compressed representations from different layers of the autoencoder can be used in this way, suggesting that a compact image code is sufficient for visual navigation and that deep networks hold promise for finding optimal visual encodings for this task.
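
    The idea of matching in code space rather than pixel space can be sketched by swapping an encoder into the rotational-matching loop of a visual compass. The one-layer encode below merely stands in for the paper's trained autoencoder (its architecture and weights are not specified here); the point is only where the comparison happens.

    import numpy as np

    def encode(image, weights):
        """Illustrative one-layer encoder: flatten, project, squash."""
        return np.tanh(weights @ image.astype(float).ravel())

    def best_rotation(reference, current, weights, steps=360):
        """Column shift of current whose code best matches the reference code."""
        h, w = reference.shape
        ref_code = encode(reference, weights)
        shifts = np.linspace(0, w, steps, endpoint=False).astype(int)
        dists = [np.linalg.norm(ref_code - encode(np.roll(current, s, axis=1), weights))
                 for s in shifts]
        return int(shifts[int(np.argmin(dists))])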

    Sky segmentation with ultraviolet images can be used for navigation

    Inspired by ant navigation, we explore a method for sky segmentation using ultraviolet (UV) light. A standard camera is adapted to allow collection of outdoor images containing light in the visible range, in UV only, and in green only. Automatic segmentation of the sky region using UV only is significantly more accurate and far more consistent than with visible wavelengths over a wide range of locations, times, and weather conditions, and can be accomplished with a very low-complexity algorithm. We apply this method to obtain compact binary (sky vs. non-sky) images from panoramic UV images taken along a 2 km route in an urban environment. Using either sequence SLAM or a visual compass on these images produces reliable localisation and orientation on a subsequent traversal of the route under different weather conditions.
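
    A very low-complexity segmentation of the kind described can be illustrated with a global Otsu threshold on the UV channel; this is our stand-in for the paper's rule, which may differ, and both function names are illustrative.

    import numpy as np

    def otsu_threshold(img):
        """Classic Otsu threshold for an 8-bit single-channel image."""
        prob = np.bincount(img.ravel(), minlength=256).astype(float)
        prob /= prob.sum()
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            m0 = (np.arange(t) * prob[:t]).sum() / w0
            m1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
            if var > best_var:
                best_t, best_var = t, var
        return best_t

    def segment_sky(uv_image):
        """Boolean mask: True where UV intensity is above threshold (sky is bright in UV)."""
        return uv_image > otsu_threshold(uv_image)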
