
    Combining Edge and One-Point RANSAC Algorithm to Estimate Visual Odometry

    In recent years, classical structure-from-motion-based SLAM has achieved significant results. Omnidirectional camera-based motion estimation has attracted researchers' interest due to the larger field of view. This paper proposes a method to estimate the 2D motion of a vehicle and to build a map using an EKF based on edge matching and one-point RANSAC. Azimuth rotation estimated by edge matching is used as pseudo-prior information when predicting the EKF state vector. To reduce the number of parameters required for motion estimation and reconstruction, the vehicle is assumed to move under the nonholonomic constraints of a car-like motion model. The experiments were carried out using an electric vehicle with an omnidirectional camera mounted on the roof. To evaluate the motion estimation, the vehicle positions were compared with GPS information and superimposed onto aerial images collected via the Google Maps API. The experimental results showed that an EKF-based method without the prior rotation information yields an error about 1.9 times larger than that of our proposed method.
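    The one-point parameterization works because, under the car-like constraint, the relative pose between frames reduces to a single yaw angle: the vehicle rotates by theta and translates along the chord of a circular arc. Below is a minimal Python sketch of that idea (assumed camera axes x right, y down, z forward, and hypothetical helper names; it illustrates the circular-motion model, not the authors' code):

        import numpy as np

        def essential_from_theta(theta):
            # Under planar circular motion the translation direction is the
            # chord of the arc, t ~ [sin(theta/2), 0, cos(theta/2)] (scale
            # unobservable), and E = [t]x R reduces to this sparse form.
            c2, s2 = np.cos(theta / 2.0), np.sin(theta / 2.0)
            return np.array([[0.0, -c2, 0.0],
                             [c2,  0.0,  s2],
                             [0.0,  s2, 0.0]])

        def theta_from_match(p1, p2):
            # Substituting the parameterized E into p2^T E p1 = 0 gives the
            # closed form tan(theta/2) = (x2*y1 - x1*y2) / (y2*z1 + y1*z2),
            # so a single correspondence yields a full motion hypothesis.
            return 2.0 * np.arctan2(p2[0] * p1[1] - p1[0] * p2[1],
                                    p2[1] * p1[2] + p1[1] * p2[2])

        def one_point_ransac(pts1, pts2, n_iters=100, thresh=1e-6):
            # pts1, pts2: (N, 3) matched unit bearing vectors in two frames.
            best = (0.0, np.zeros(len(pts1), dtype=bool))
            for _ in range(n_iters):
                i = np.random.randint(len(pts1))
                theta = theta_from_match(pts1[i], pts2[i])
                E = essential_from_theta(theta)
                # Squared algebraic epipolar residual for every match.
                res = np.einsum('ij,jk,ik->i', pts2, E, pts1) ** 2
                inliers = res < thresh
                if inliers.sum() > best[1].sum():
                    best = (theta, inliers)
            return best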

    Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments

    Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize plane or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
    Comment: International Conference on Intelligent Robots and Systems (IROS) 201
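    The "pop-up" step rests on simple geometry: once a plane hypothesis is available for an image region, every pixel in that region gets a depth by intersecting its viewing ray with the plane. A minimal sketch (hypothetical function name and assumed example intrinsics, not the paper's implementation):

        import numpy as np

        def plane_depth(u, v, K, n, d):
            # Depth along the ray of pixel (u, v) where it meets the plane
            # n^T X + d = 0 (n: unit normal in the camera frame). Returns
            # lambda such that X = lambda * K^{-1} [u, v, 1]^T lies on the
            # plane; undefined when the ray is parallel to the plane.
            ray = np.linalg.solve(K, np.array([u, v, 1.0]))
            return -d / (n @ ray)

        # Example: ground plane 1.5 m below a camera whose y axis points
        # down, i.e. n = [0, 1, 0] and d = -1.5 (assumed intrinsics).
        K = np.array([[525.0,   0.0, 319.5],
                      [  0.0, 525.0, 239.5],
                      [  0.0,   0.0,   1.0]])
        z = plane_depth(320.0, 400.0, K, np.array([0.0, 1.0, 0.0]), -1.5)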

    Robust Stereo Visual Odometry through a Probabilistic Combination of Points and Line Segments

    Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to find a large set of point features, or they may be poorly distributed over the image, so that the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry based on the combination of both point and line segment features that works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to combine the two types of features effectively, their associated errors are weighted according to their covariance matrices, computed by propagating the Gaussian distributions of the errors in the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it can still run in real time on a standard computer and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Project "PROMOVE: Advances in mobile robotics for promoting independent life of elders", funded by the Spanish Government and the European Regional Development Fund (ERDF) under contract DPI2014-55826-R.
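    The covariance weighting amounts to information-weighted normal equations in which each point or line segment contributes according to the inverse of its propagated covariance, so uncertain features are automatically down-weighted. A minimal sketch of one Gauss-Newton step under this scheme (hypothetical interface, not the authors' code):

        import numpy as np

        def weighted_gauss_newton_step(blocks):
            # blocks: iterable of (J, r, Sigma) per feature, where J is the
            # m x 6 Jacobian of the projection error w.r.t. the pose
            # increment, r the m-vector residual, and Sigma its covariance
            # from propagating the sensor measurement noise.
            H, b = np.zeros((6, 6)), np.zeros(6)
            for J, r, Sigma in blocks:
                W = np.linalg.inv(Sigma)      # information matrix
                H += J.T @ W @ J
                b += J.T @ W @ r
            return -np.linalg.solve(H, b)     # 6-DoF pose update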

    Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty

    This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments, extracted through an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on Mixtures of Gaussians to denoise them and derive the depth uncertainty, which is then propagated throughout the visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are used to model the uncertainties of the feature parameters, and pose is estimated by combining the three types of primitives based on their uncertainties. Performance evaluation on RGB-D sequences collected in this work and on two public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the proposed depth fusion framework and of combining the three feature types, particularly in scenes with low-textured surfaces, dynamic objects, and missing depth measurements.
    Comment: Major update: more results, depth filter released as open source, 34 page
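    The fusion framework is built on Mixtures of Gaussians; its single-component building block, fusing two Gaussian depth estimates of the same pixel by inverse-variance weighting, is shown below (a simplified sketch, not the released depth filter):

        def fuse_depth(mu1, var1, mu2, var2):
            # Product of two Gaussians N(mu1, var1) and N(mu2, var2): the
            # fused variance is smaller than either input, which is how
            # repeated noisy observations shrink the depth uncertainty that
            # is propagated through the odometry pipeline.
            var = 1.0 / (1.0 / var1 + 1.0 / var2)
            mu = var * (mu1 / var1 + mu2 / var2)
            return mu, var

        # Two observations of one pixel: ~2.03 m with variance ~0.028.
        mu, var = fuse_depth(2.00, 0.04, 2.10, 0.09)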

    Real-Time Indoor Localization using Visual and Inertial Odometry

    This project encompassed the design of a mobile, real-time localization device for use in an indoor environment. A system was designed and constructed using visual and inertial odometry methods to meet the project requirements. Stereoscopic image features were detected through a C++ Sobel filter implementation and matched between frames. An inertial measurement unit (IMU) provided raw acceleration and rotation readings, which were transformed into a global frame of reference. A Kalman filter produced motion estimates from the input data and transmitted the Kalman position state coordinates via a radio transceiver to a remote base station. This station used a graphical user interface to map the incoming coordinates.
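    A per-axis position/velocity filter captures the essence of this fusion: IMU acceleration drives the prediction and the vision-derived position corrects it. The sketch below uses hypothetical noise parameters and illustrates the idea rather than reproducing the project's code:

        import numpy as np

        class KalmanFilter1D:
            def __init__(self, q=0.1, r=0.05):
                self.x = np.zeros(2)      # state: [position, velocity]
                self.P = np.eye(2)        # state covariance
                self.q, self.r = q, r     # process / measurement noise

            def predict(self, accel, dt):
                # IMU acceleration (already rotated into the global frame)
                # acts as the control input of a constant-velocity model.
                F = np.array([[1.0, dt], [0.0, 1.0]])
                B = np.array([0.5 * dt * dt, dt])
                self.x = F @ self.x + B * accel
                self.P = F @ self.P @ F.T + self.q * np.outer(B, B)

            def update(self, pos_meas):
                # Correct with a position fix from the stereo pipeline.
                H = np.array([[1.0, 0.0]])
                S = H @ self.P @ H.T + self.r
                K = (self.P @ H.T) / S
                self.x = self.x + (K * (pos_meas - H @ self.x)).ravel()
                self.P = (np.eye(2) - K @ H) @ self.P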

    Visual SLAM for flying vehicles

    The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost down-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate either with a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments. © 2008 IEEE
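    PROSAC differs from plain RANSAC in where the minimal samples come from: correspondences are ranked by match quality, and samples are drawn from a progressively growing pool of the best-ranked ones, so good hypotheses tend to surface early. A simplified sketch of that sampling strategy (the original Chum and Matas paper derives a specific growth schedule; the linear growth here is an assumption for illustration):

        import random

        def prosac_samples(matches_by_quality, sample_size, n_iters):
            # matches_by_quality: matches sorted by descending match score.
            n = len(matches_by_quality)
            pool = sample_size        # start from the top-ranked matches
            for _ in range(n_iters):
                yield random.sample(matches_by_quality[:pool], sample_size)
                if pool < n:
                    pool += 1         # gradually approach uniform RANSAC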