    Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry

    This work proposes a visual odometry method that combines point and plane primitives extracted from a noisy depth camera. Depth measurement uncertainty is modelled and propagated from the extraction of geometric primitives through to the frame-to-frame motion estimation, where the pose is optimized by weighting the residuals of 3D point and plane matches according to their uncertainties. Results on an RGB-D dataset show that the proposed combination of points and planes performs well in poorly textured environments, where point-based odometry is bound to fail. Comment: Accepted to TAROS 201
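
    The core idea here, down-weighting residuals whose underlying depth measurements are noisier, can be sketched as a weighted nonlinear least-squares problem. The sketch below is an illustrative reconstruction, not the paper's code: the data are synthetic and scalar weights stand in for full covariance whitening of the point and plane residuals.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def transform(pose, pts):
            # pose = (rotation vector, translation), applied to Nx3 points
            return Rotation.from_rotvec(pose[:3]).apply(pts) + pose[3:]

        def residuals(pose, src_pts, dst_pts, pt_w, src_planes, dst_planes, pl_w):
            # Point residuals, whitened by per-point weights (inverse std-dev)
            r_pts = (pt_w[:, None] * (transform(pose, src_pts) - dst_pts)).ravel()
            # Plane residuals: a plane (n, d) maps to (R n, d + (R n) . t)
            R, t = Rotation.from_rotvec(pose[:3]), pose[3:]
            r_pl = []
            for (n_s, d_s), (n_d, d_d), w in zip(src_planes, dst_planes, pl_w):
                n_pred = R.apply(n_s)
                r_pl.extend(w * np.r_[n_pred - n_d, d_s + n_pred @ t - d_d])
            return np.concatenate([r_pts, r_pl])

        # Synthetic frame-to-frame problem with known ground-truth motion
        rng = np.random.default_rng(0)
        true_pose = np.array([0.05, -0.02, 0.03, 0.10, 0.00, -0.05])
        src_pts = rng.uniform(-2, 2, (50, 3))
        dst_pts = transform(true_pose, src_pts) + rng.normal(0, 0.01, (50, 3))
        pt_w = np.full(50, 1 / 0.01)
        src_planes = [(np.array([0.0, 0.0, 1.0]), 1.0), (np.array([1.0, 0.0, 0.0]), 0.5)]
        R_t = Rotation.from_rotvec(true_pose[:3])
        dst_planes = [(R_t.apply(n), d + R_t.apply(n) @ true_pose[3:]) for n, d in src_planes]
        pl_w = [1 / 0.005, 1 / 0.005]
        sol = least_squares(residuals, np.zeros(6),
                            args=(src_pts, dst_pts, pt_w, src_planes, dst_planes, pl_w))
        print("estimated pose (rotvec, t):", sol.x.round(3))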

    Concurrent Initialization for Bearing-Only SLAM

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The choice of sensors has a large impact on the algorithms used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted to embedded systems: they are light, cheap and power-efficient. Unlike range sensors, which provide range and angular information, a camera is a projective sensor that measures the bearing of image features. Therefore, depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: the Bearing-Only SLAM methods, which rely mainly on special feature-initialization techniques to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work, a novel and robust method called Concurrent Initialization is presented, inspired by combining the complementary advantages of the Undelayed and Delayed methods, the two most common approaches to the problem. The key is to use two kinds of feature representations concurrently, for both the undelayed and delayed stages of the estimation. Simulation results show that the proposed method surpasses the performance of previous schemes.
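
    A building block behind such undelayed/delayed schemes is the inverse-depth parameterization, which lets a bearing-only feature enter the filter immediately with a very uncertain depth. The snippet below is a minimal illustration of that ingredient under assumed prior values, not the paper's concurrent scheme itself; the function names and priors are my own.

        import numpy as np

        def inverse_depth_to_xyz(f):
            # f = (x0, y0, z0, theta, phi, rho): camera anchor, azimuth and
            # elevation of the observed ray, and inverse depth rho
            x0, y0, z0, theta, phi, rho = f
            m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                          -np.sin(phi),
                          np.cos(phi) * np.cos(theta)])
            return np.array([x0, y0, z0]) + m / rho

        def init_feature(cam_pos, theta, phi, rho0=0.1, sigma_rho=0.5):
            # Undelayed initialization: the unknown depth is encoded as a
            # wide Gaussian on rho, so the feature is usable from frame one
            f = np.r_[cam_pos, theta, phi, rho0]
            cov = np.diag(np.r_[1e-6 * np.ones(3), 1e-4, 1e-4, sigma_rho**2])
            return f, cov

        f, P = init_feature(np.zeros(3), theta=0.2, phi=-0.1)
        print("initial 3D guess:", inverse_depth_to_xyz(f))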

    Vision and SLAM on a highly dynamic mobile two-wheeled robot

    This thesis examines a sparse-feature-based monocular visual simultaneous localization and mapping (SLAM) approach with the intention of stabilizing a two-wheeled balancing robot. The first part introduces basics such as camera geometry, image processing and filtering. Building on this, the thesis treats the details of a monocular SLAM system and shows some techniques that keep the computational effort low enough for real-time operation. The last part deals with Andrew Davison's "SceneLib" library and how it can be used to obtain the camera state vector.
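
    For context, MonoSLAM-style systems such as SceneLib track a 13-element camera state (position, orientation quaternion, linear and angular velocity) under a constant-velocity motion model. The prediction step can be sketched as follows; this is my own simplification, not SceneLib's API.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def predict(state, dt):
            # state = [r(3), q(4, xyzw), v(3), w(3)]; constant-velocity model
            r, q, v, w = state[:3], state[3:7], state[7:10], state[10:13]
            r_new = r + v * dt
            # Compose the orientation with the rotation integrated from the
            # body angular rate over dt
            q_new = (Rotation.from_quat(q) * Rotation.from_rotvec(w * dt)).as_quat()
            return np.r_[r_new, q_new, v, w]

        state = np.r_[np.zeros(3), [0, 0, 0, 1], [0.1, 0, 0], [0, 0, 0.05]]
        for _ in range(10):
            state = predict(state, dt=1 / 30)
        print("camera position after 10 frames:", state[:3])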

    Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints

    This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently happen in outdoor environments due to the short detection range and sunlight interference. In depth-dropout conditions, only partial 5-degrees-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solutions, the scale-ambiguous position is cast into a directional constraint on the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solutions without delay, even under small-parallax motion. If a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with inertial outputs in an extended Kalman filter framework. Flight results from indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach.
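
    The directional constraint can be written compactly: with depth lost, vision still yields a unit translation direction, and the residual penalizes any component of the inertially integrated displacement perpendicular to it. A toy version of that residual is shown below (illustrative only, not the paper's filter equations).

        import numpy as np

        def direction_residual(delta_p_ins, t_dir_meas):
            # Zero exactly when the INS displacement is parallel to the
            # measured (unit-norm) translation direction; any perpendicular
            # component shows up in the cross product
            return np.cross(delta_p_ins, t_dir_meas)

        delta_p = np.array([1.02, 0.03, -0.01])   # displacement integrated from the IMU
        t_dir = np.array([1.0, 0.0, 0.0])         # scale-free direction from vision
        print("residual:", direction_residual(delta_p, t_dir))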

    Simultaneous Localization and Mapping (SLAM) on NAO

    Simultaneous Localization and Mapping (SLAM) is a navigation and mapping method used by autonomous robots and moving vehicles. SLAM is mainly concerned with the problem of building a map of an unknown environment while concurrently navigating through that environment using the map. Localization is of utmost importance to allow the robot to keep track of its position with respect to the environment, and commonly used odometry proves unreliable. SLAM has been proposed by previous research as a solution that provides more accurate localization and mapping on robots. This project implements a SLAM algorithm on the humanoid robot NAO by Aldebaran Robotics, using vision from the single camera attached to the robot to map the environment and localize NAO within it. The report details the implementation of the chosen algorithm, 1-Point RANSAC Inverse Depth EKF Monocular SLAM by Dr Javier Civera, on NAO. The algorithm is shown to perform well for smooth motions, but on the humanoid NAO the sudden changes in motion produce undesirable results. This study of SLAM will be useful, as the technique can be widely applied to allow mobile robots to map and navigate areas deemed unsafe for humans.
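
    The appeal of 1-point RANSAC is that the EKF prior already constrains the pose, so a single match suffices to hypothesize an update, and matches consistent with it are kept as low-innovation inliers. The toy below compresses that idea to a 1-D estimation problem with gross outliers; it is a didactic sketch, not the NAO implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        z = 2.0 + rng.normal(0, 0.05, 20)   # 20 noisy measurements of one state
        z[:5] += rng.uniform(1, 3, 5)       # 5 gross outliers (mismatches)

        best_inliers = np.zeros(len(z), dtype=bool)
        for _ in range(10):                 # few hypotheses needed with 1 point
            zi = z[rng.integers(len(z))]    # 1-point hypothesis
            inliers = np.abs(z - zi) < 0.2  # low-innovation consistency test
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        print("estimate from inlier set:", z[best_inliers].mean())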

    Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty

    This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments extracted from an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on Mixtures of Gaussians to denoise them and derive the depth uncertainty, which is then propagated throughout the visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are used to model the uncertainties of the feature parameters, and the pose is estimated by combining the three types of primitives based on their uncertainties. Performance evaluation on RGB-D sequences collected in this work and on two public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the proposed depth fusion framework and of combining the three feature types, particularly in scenes with low-textured surfaces, dynamic objects and missing depth measurements. Comment: Major update: more results, depth filter released as open source, 34 page
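
    The fusion step can be illustrated with a single-Gaussian simplification: per-pixel depth samples are combined by inverse-variance weighting, and the fused variance is the uncertainty handed to the later fitting stages. The paper uses Mixtures of Gaussians; the sketch below, with an assumed quadratic noise model, only shows the simpler Gaussian case.

        import numpy as np

        def fuse_depths(depths, variances):
            # Precision-weighted fusion of stacked per-pixel depth samples
            w = 1.0 / np.asarray(variances)
            var_fused = 1.0 / w.sum(axis=0)
            z_fused = var_fused * (w * np.asarray(depths)).sum(axis=0)
            return z_fused, var_fused

        # Three noisy depth maps of a 2x2 patch; variance grows with depth
        depths = [np.full((2, 2), 1.50), np.full((2, 2), 1.52), np.full((2, 2), 1.47)]
        variances = [0.001 * d**2 for d in depths]
        z, var = fuse_depths(depths, variances)
        print("fused depth:\n", z)
        print("fused variance:\n", var)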

    Visual SLAM for flying vehicles

    The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost, down-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate either with a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments. © 2008 IEEE
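
    PROSAC's difference from plain RANSAC is the sampling schedule: hypotheses are drawn from a progressively growing pool of the best-scoring matches, so good models tend to appear early. The toy below uses a 2-D translation model and a simplified growth schedule; it is a sketch of the idea, not the paper's variant.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100
        src = rng.uniform(0, 10, (n, 2))
        dst = src + np.array([1.0, -0.5])            # true inter-frame translation
        outliers = rng.random(n) < 0.3
        dst[outliers] += rng.uniform(-3, 3, (outliers.sum(), 2))
        # Matching score: inliers tend to score higher than outliers
        quality = np.where(outliers, 0.5 * rng.random(n), rng.random(n))

        order = np.argsort(-quality)                 # best-scoring matches first
        best_model, best_count = None, 0
        pool = 2                                     # start from the top matches only
        while pool <= n and best_count < 0.7 * n:
            i = order[rng.integers(pool)]            # sample within the current pool
            model = dst[i] - src[i]                  # 1-correspondence hypothesis
            count = (np.linalg.norm(dst - (src + model), axis=1) < 0.1).sum()
            if count > best_count:
                best_model, best_count = model, count
            pool += 1                                # progressively enlarge the pool
        print("estimated translation:", best_model, "inliers:", best_count)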