    Concurrent Initialization for Bearing-Only SLAM

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The choice of sensors has a large impact on the algorithms used for SLAM. Early SLAM approaches focused on range sensors such as sonar rings or lasers. However, cameras have become increasingly common because they yield rich information and are well suited to embedded systems: they are light, cheap, and power saving. Unlike range sensors, which provide both range and angular information, a camera is a projective sensor that measures the bearing of image features, so depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: bearing-only SLAM methods, which rely mainly on special techniques for initializing features in the system, enabling the use of bearing sensors (such as cameras) in SLAM. In this work a novel and robust method, called Concurrent Initialization, is presented, inspired by combining the complementary advantages of the undelayed and delayed methods, the two most common approaches to the problem. The key is to use two kinds of feature representations concurrently, for both the undelayed and delayed stages of the estimation. Simulation results show that the proposed method surpasses the performance of previous schemes.
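    The depth-unobservability point above can be made concrete with a planar bearing model. The sketch below is our illustration, not the paper's implementation (the function and variable names are assumptions): two features at different ranges along the same ray yield identical bearing measurements, which is exactly why depth cannot be recovered from a single observation.

        # Minimal planar bearing-only measurement model (illustrative sketch).
        import numpy as np

        def bearing_measurement(robot_pose, feature_xy):
            # Bearing from a robot at (x, y, theta) to a 2-D feature, in the robot frame.
            x, y, theta = robot_pose
            dx, dy = feature_xy[0] - x, feature_xy[1] - y
            return np.arctan2(dy, dx) - theta  # range sqrt(dx**2 + dy**2) is never observed

        robot = np.array([0.0, 0.0, 0.0])
        near = np.array([2.0, 1.0])
        far = near * 10.0  # same ray, ten times the depth
        # Both calls print the same bearing (about 0.4636 rad):
        print(bearing_measurement(robot, near), bearing_measurement(robot, far))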

    Robot Collaboration for Simultaneous Map Building and Localization


    Monocular SLAM for Visual Odometry: A Full Approach to the Delayed Inverse-Depth Feature Initialization Method

    This paper describes in detail a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors, simultaneously building a map of the surroundings that the robot needs in order to track its position. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) arguably represents the hardest variant of SLAM: a single camera, moving freely through its environment, is the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. In this work, detailed formulation, extended discussion, and experiments with real data are presented to validate the proposal and to show its performance.
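    For reference, the inverse-depth parameterization that delayed inverse-depth initialization builds on codes a feature by the camera position at first observation, the direction of the observation ray, and the inverse of the depth along that ray. The sketch below is a generic illustration under assumed axis conventions, not necessarily the exact ones used in the paper.

        # Inverse-depth feature -> 3-D Euclidean point (illustrative sketch).
        import numpy as np

        def inverse_depth_to_point(feature):
            # feature = (x0, y0, z0, theta, phi, rho):
            #   (x0, y0, z0) camera position at first observation,
            #   theta, phi   azimuth and elevation of the observation ray,
            #   rho          inverse depth (1/d) along that ray.
            x0, y0, z0, theta, phi, rho = feature
            ray = np.array([np.cos(phi) * np.sin(theta),
                            -np.sin(phi),
                            np.cos(phi) * np.cos(theta)])  # unit ray direction
            return np.array([x0, y0, z0]) + ray / rho

        # A feature first seen from the origin along the optical (z) axis, 5 m away:
        print(inverse_depth_to_point(np.array([0, 0, 0, 0.0, 0.0, 1 / 5.0])))  # [0. 0. 5.]

    Parameterizing depth by its inverse keeps the measurement equation well behaved even for distant, low-parallax features (rho near zero), which is what makes the representation attractive for feature initialization.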

    SLAM con mediciones angulares: método por triangulación estocástica

    SLAM (simultaneous localization and mapping) is a technique in which a robot or autonomous vehicle operates in an a priori unknown environment, using only its on-board sensors, while building a map of its surroundings that it simultaneously uses to localize itself. The sensors have a large impact on the algorithms used in SLAM. A camera is a projective sensor that measures the bearing of image features, so depth (range) cannot be obtained from a single measurement. This has motivated the emergence of a new family of SLAM methods: bearing-only SLAM methods, which are mainly based on special techniques for initializing features in the system, enabling the use of bearing sensors (such as cameras) in SLAM. This article presents a practical method for initializing new features in bearing-only SLAM systems.
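    The article's derivation is not reproduced here, but the underlying idea of stochastic triangulation, estimating a feature's position distribution from two noisy bearings taken at different poses, can be sketched with a simple Monte Carlo version (all names and the noise level below are assumptions for illustration):

        # Monte Carlo two-view triangulation from noisy bearings (illustrative sketch).
        import numpy as np

        def intersect_rays(p1, b1, p2, b2):
            # Intersect two planar rays with origins p1, p2 and global bearings b1, b2.
            d1 = np.array([np.cos(b1), np.sin(b1)])
            d2 = np.array([np.cos(b2), np.sin(b2)])
            t1, _ = np.linalg.solve(np.column_stack((d1, -d2)), p2 - p1)
            return p1 + t1 * d1

        rng = np.random.default_rng(0)
        p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])  # two poses, 1 m baseline
        feature = np.array([3.0, 2.0])
        b1 = np.arctan2(feature[1] - p1[1], feature[0] - p1[0])  # noise-free bearings
        b2 = np.arctan2(feature[1] - p2[1], feature[0] - p2[0])

        sigma = np.deg2rad(0.5)  # assumed bearing noise
        pts = np.array([intersect_rays(p1, b1 + rng.normal(0, sigma),
                                       p2, b2 + rng.normal(0, sigma))
                        for _ in range(1000)])
        print(pts.mean(axis=0), pts.std(axis=0))  # feature estimate and its spread

    The spread of the triangulated samples grows rapidly as the baseline shrinks, which is why bearing-only initialization methods must decide when a feature's depth is well enough constrained to add it to the map.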

    Experimental Comparison of Techniques for Localization and Mapping Using A Bearing-Only Sensor

    We present a comparison of an extended Kalman filter and an adaptation of bundle adjustment from computer vision for mobile robot localization and mapping using a bearing-only sensor. We show results on synthetic and real examples and discuss some advantages and disadvantages of each technique. The comparison leads to a novel combination of the two techniques whose computational complexity is near that of Kalman filters and whose performance is near that of bundle adjustment on the examples shown.
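    As a rough illustration of the bundle-adjustment side of that trade-off, the sketch below runs Gauss-Newton on a single planar landmark given bearings from known poses; bundle adjustment re-linearizes this kind of least-squares problem at every iteration, whereas an EKF linearizes each measurement only once. This is a generic sketch with assumed names, not the code compared in the paper.

        # Gauss-Newton refinement of one landmark from bearing measurements (sketch).
        import numpy as np

        def refine_landmark(poses, bearings, x0, iters=10):
            x = x0.astype(float).copy()
            for _ in range(iters):
                J, r = [], []
                for p, b in zip(poses, bearings):
                    dx, dy = x - p
                    q = dx**2 + dy**2
                    pred = np.arctan2(dy, dx)
                    r.append(np.arctan2(np.sin(b - pred), np.cos(b - pred)))  # wrapped residual
                    J.append([-dy / q, dx / q])  # d(bearing)/d(landmark)
                J, r = np.array(J), np.array(r)
                x += np.linalg.solve(J.T @ J, J.T @ r)  # normal-equations step
            return x

        poses = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
        truth = np.array([4.0, 3.0])
        bearings = [np.arctan2(truth[1] - p[1], truth[0] - p[0]) for p in poses]
        print(refine_landmark(poses, bearings, x0=np.array([3.0, 2.0])))  # ~[4. 3.]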