
    A new feature parametrization for monocular SLAM using line features

    © 2014 Cambridge University Press. This paper presents a new monocular SLAM algorithm that uses straight lines extracted from images to represent the environment. A line is parametrized by two pairs of azimuth and elevation angles together with the two corresponding camera centres as anchors, making feature initialization relatively straightforward. There is no redundancy in the state vector, as this is a minimal representation. A bundle adjustment (BA) algorithm that minimizes the reprojection error of the line features is developed for solving the monocular SLAM problem with only line features. A new map joining algorithm, which can automatically optimize the relative scales of the local maps, is used to combine the local maps generated by BA. Results from both simulations and experimental datasets demonstrate the accuracy and consistency of the proposed BA and map joining algorithms.
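The two-angle-plus-anchor idea can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the helper names are invented, and the line is recovered here via the closest points of the two anchor rays, which is one plausible construction.

```python
import numpy as np

def ray_direction(azimuth, elevation):
    """Unit direction vector for one (azimuth, elevation) angle pair."""
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def line_from_parametrization(c1, az1, el1, c2, az2, el2):
    """Recover a 3D line from two anchor camera centres and two angle pairs.

    Each (centre, azimuth, elevation) triple defines a ray; here the line
    is represented by the two rays' mutually closest points (a sketch,
    not necessarily the paper's construction).
    """
    d1, d2 = ray_direction(az1, el1), ray_direction(az2, el2)
    w = c1 - c2
    # standard closest-point solution for lines c1 + t*d1 and c2 + s*d2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return c1 + t * d1, c2 + s * d2
```

When the two rays actually intersect, both returned points coincide with the intersection, giving a well-defined point on the reconstructed line.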

    StructVIO: Visual-Inertial Odometry with Structural Regularity of Man-made Environments

    We propose a novel visual-inertial odometry approach that adopts the structural regularity of man-made environments. Instead of using the Manhattan world assumption, we use the Atlanta world model to describe such regularity. An Atlanta world is a world that contains multiple local Manhattan worlds with different heading directions. Each local Manhattan world is detected on the fly, and its heading is gradually refined by the state estimator as new observations arrive. By fully exploiting structural lines aligned with each local Manhattan world, our visual-inertial odometry method becomes more accurate and robust, as well as much more flexible across different kinds of complex man-made environments. Extensive benchmark tests and real-world tests show that the proposed approach outperforms existing visual-inertial systems in large-scale man-made environments.
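A minimal sketch of the Atlanta-world idea, classifying an observed line's horizontal direction against several local Manhattan headings, might look like this. The function name and the 10-degree tolerance are illustrative assumptions, not details from the paper.

```python
import numpy as np

def assign_manhattan_world(line_heading, world_headings, tol=np.radians(10)):
    """Assign a horizontal line to the local Manhattan world whose axes
    (heading or heading + 90 deg) best match its direction.

    `world_headings` holds the heading angles (radians) of the local
    Manhattan worlds detected so far.  Returns (index, residual), or
    (None, None) when no world matches within `tol`.
    """
    best = (None, None)
    for i, h in enumerate(world_headings):
        # angular distance to the nearest of the world's two horizontal axes,
        # i.e. the residual of (line_heading - h) modulo 90 degrees
        r = abs((line_heading - h + np.pi / 4) % (np.pi / 2) - np.pi / 4)
        if r < tol and (best[1] is None or r < best[1]):
            best = (i, r)
    return best
```

Lines that match no world within tolerance would be the candidates for spawning a new local Manhattan world in an on-the-fly detector.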

    A Forward-Viewing Monocular Camera-Based SLAM System for Indoor Service Robots

    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Dongil Cho. This dissertation presents a new forward-viewing monocular vision-based simultaneous localization and mapping (SLAM) method. The method is developed to run in real time on a low-cost embedded system for indoor service robots. The developed system uses a cost-effective mono-camera as the primary sensor, with robot wheel encoders and a gyroscope as supplementary sensors. The proposed method is robust in various challenging indoor environments that contain low-textured areas, moving people, or changing scenes. In this work, vanishing point (VP) and line features are used as landmarks for SLAM. The orientation of the robot is estimated directly from the direction of the VP. The estimation models for the robot position and the line landmarks are then derived as simple linear equations. Using these models, the camera poses and landmark positions are efficiently corrected by a novel local map correction method. To achieve high accuracy during long-term exploration, a probabilistic loop detection procedure and a pose correction procedure are performed when the robot revisits previously mapped areas. The performance of the proposed method is demonstrated in various challenging environments through dataset-based experiments on a desktop computer and real-time experiments on a low-cost embedded system. The experimental environments include a real home-like setting and a dedicated space equipped with a Vicon motion-tracking system.
The proposed method is also tested using the RAWSEEDS benchmark dataset.
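The VP-based orientation step described above can be illustrated with a short sketch. The intrinsic-matrix back-projection and the axis conventions (camera z forward, x right, level camera) are assumptions for illustration, not the dissertation's exact model.

```python
import numpy as np

def heading_from_vanishing_point(vp_pixel, K):
    """Camera (robot) heading from the vanishing point of the dominant
    corridor direction.

    `vp_pixel` is the detected VP in image coordinates and `K` the camera
    intrinsic matrix.  Back-projecting the VP gives the world direction of
    the dominant Manhattan axis; its yaw is the heading up to a multiple
    of 90 degrees.  Assumes a level, forward-looking camera.
    """
    u, v = vp_pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # yaw of the ray about the vertical axis (camera z forward, x right)
    return np.arctan2(ray[0], ray[2])
```

Because the heading comes from a global scene direction rather than from frame-to-frame feature tracking, this estimate does not drift with distance travelled, which is what makes the subsequent position models linear.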

    Undelayed initialization of line segments in Monocular SLAM

    This paper presents 6-DOF monocular EKF-SLAM with undelayed initialization, using linear landmarks with extensible endpoints based on the Plücker parametrization. A careful analysis of the properties of Plücker coordinates, defined in projective space, permits their direct use for undelayed initialization. Immediately after a segment is detected in the image, a Plücker line is incorporated into the map. A single Gaussian PDF includes inside its 2-sigma region all possible lines given the observed segment, from arbitrarily close up to infinite range, and in any orientation. The lines converge to stable 3D configurations as the moving camera gathers observations from new viewpoints. The line's endpoints, maintained outside the map, are constantly retro-projected from the image onto the line's local reference frame, and an extend-only policy is defined to update them. We validate the method via Monte Carlo simulations and with real imagery data.
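For reference, the Plücker coordinates of a line through two points, the constraint they must satisfy, and a point-to-line distance can be written down compactly. This is the generic textbook construction, not the paper's undelayed-initialization machinery.

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates (d, m) of the line through 3D points p and q:
    direction d = q - p and moment m = p x q (equal to p x d).
    Every valid Plücker line satisfies the constraint d . m = 0."""
    d = q - p
    m = np.cross(p, q)
    return d, m

def point_line_distance(x, d, m):
    """Euclidean distance from point x to the Plücker line (d, m)."""
    return np.linalg.norm(np.cross(x, d) - m) / np.linalg.norm(d)
```

The six numbers (d, m) are homogeneous (any nonzero scaling is the same line), which is why Plücker lines live naturally in projective space and why a Gaussian over them can cover ranges from close-up to infinity.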

    Visual Navigation for Robots in Urban and Indoor Environments

    As a fundamental capability for mobile robots, navigation involves multiple tasks, including localization, mapping, motion planning, and obstacle avoidance. In unknown environments, a robot has to construct a map of the environment while simultaneously keeping track of its own location within the map. This is known as simultaneous localization and mapping (SLAM). SLAM is especially important for urban and indoor environments, since GPS signals are often unavailable there. Visual SLAM uses cameras as the primary sensor and is a highly attractive but challenging research topic. The major challenge lies in robustness to lighting variation and uneven feature distribution; another is building semantic maps composed of high-level landmarks. To meet these challenges, we investigate feature fusion approaches for visual SLAM. The basic rationale is that since urban and indoor environments contain various feature types, such as points and lines, combining these features should improve robustness, and high-level landmarks can be defined as, or derived from, such combinations. We design a novel data structure, the multilayer feature graph (MFG), to organize five types of features and their inner geometric relationships. Building upon a two-view-based MFG prototype, we extend MFG to image sequence-based mapping using an extended Kalman filter (EKF). We model and analyze how errors are generated and propagated through the construction of a two-view-based MFG, which enables us to treat each MFG as an observation in the EKF update step. We apply the MFG-EKF method to a building exterior mapping task and demonstrate its efficacy. A two-view-based MFG requires a sufficient baseline to be constructed successfully, which is not always feasible. Therefore, we further devise a multiple-view-based algorithm to construct the MFG as a global map.
Our proposed algorithm takes a video stream as input, initializes and iteratively updates the MFG from extracted key frames, and refines robot localization and MFG landmarks using local bundle adjustment. We show the advantage of our method by comparing it with state-of-the-art methods on multiple indoor and outdoor datasets. To avoid the scale ambiguity of monocular vision, we investigate the application of RGB-D sensing to SLAM. We propose an algorithm that fuses point and line features: we extract 3D points and lines from RGB-D data, analyze their measurement uncertainties, and compute camera motion using maximum likelihood estimation. We validate our method using both uncertainty analysis and physical experiments, where it outperforms its counterparts under both constant and varying lighting conditions. Besides visual SLAM, we also study specular object avoidance, which is a great challenge for range sensors. We propose a vision-based algorithm to detect planar mirrors: we derive geometric constraints for corresponding real-virtual features across images and employ RANSAC to develop a robust detection algorithm. Our algorithm achieves a detection accuracy of 91.0%.
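The robust-estimation pattern mentioned above can be illustrated with a generic RANSAC loop fitting a 2D line to points with outliers. This is a textbook sketch of the sampling-and-consensus idea, not the paper's mirror-specific geometric constraints.

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.05, rng=None):
    """Generic RANSAC: repeatedly fit a 2D line to a minimal sample of
    two points and keep the hypothesis with the largest inlier set.
    Returns a boolean inlier mask over `points` (an (N, 2) array)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        # unit normal of the line through p and q
        n = np.array([-(q[1] - p[1]), q[0] - p[0]])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate sample: identical points
        n /= norm
        # perpendicular distance of every point to the candidate line
        dist = np.abs((points - p) @ n)
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

In the mirror-detection setting, the minimal sample would instead be drawn from candidate real-virtual feature correspondences and the distance test replaced by the paper's geometric constraints, but the consensus loop is the same.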