
    PVI-DSO: Leveraging Planar Regularities for Direct Sparse Visual-Inertial Odometry

    Monocular Visual-Inertial Odometry (VIO) based on the direct method can leverage all available pixels in the image to estimate camera motion and reconstruct the environment. The denser map reconstruction provides more information about the environment, making it easier to extract structural and planar regularities. In this paper, we propose a monocular direct sparse visual-inertial odometry that exploits plane regularities (PVI-DSO). Our system detects coplanar information from 3D meshes generated from 3D point clouds and uses coplanar parameters to introduce coplanar constraints. To reduce computation and improve compactness, the plane-distance cost is used directly as prior information on the plane parameters. We conduct ablation experiments on public datasets and compare our system with other state-of-the-art algorithms. The experimental results verify that leveraging plane information can improve the accuracy of a direct-method VIO system.
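
    As a concrete illustration of the plane-distance idea described above, the following is a minimal sketch of a coplanarity cost built from signed point-to-plane distances. The (n, d) plane parameterization, the function names, and the weighting are assumptions for illustration, not the paper's actual formulation.

    import numpy as np

    def plane_distance_residual(points, normal, offset):
        # Signed point-to-plane distances for the plane n.x + d = 0 (n unit-norm).
        normal = normal / np.linalg.norm(normal)
        return points @ normal + offset

    def coplanarity_cost(points, normal, offset, weight=1.0):
        # Sum of squared plane-distance residuals; such a term could be added to
        # the VIO energy for landmarks assigned to the same plane (illustrative).
        r = plane_distance_residual(points, normal, offset)
        return weight * np.sum(r ** 2)

    # Toy usage: three nearly coplanar landmarks close to the plane z = 1.
    pts = np.array([[0.0, 0.0, 1.01], [1.0, 0.0, 0.99], [0.0, 1.0, 1.00]])
    print(coplanarity_cost(pts, normal=np.array([0.0, 0.0, 1.0]), offset=-1.0))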

    Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities

    Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud representation of the scene that does not model the topology of the environment. A 3D mesh instead offers a richer, yet lightweight, model. Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks triangulated by a VIO algorithm often results in a mesh that does not fit the real scene. In order to regularize the mesh, previous approaches decouple state estimation from the 3D mesh regularization step, and either limit the 3D mesh to the current frame or let the mesh grow indefinitely. We propose instead to tightly couple mesh regularization and state estimation by detecting and enforcing structural regularities in a novel factor-graph formulation. We also propose to incrementally build the mesh by restricting its extent to the time-horizon of the VIO optimization; the resulting 3D mesh covers a larger portion of the scene than a per-frame approach while its memory usage and computational complexity remain bounded. We show that our approach successfully regularizes the mesh, while improving localization accuracy, when structural regularities are present, and remains operational in scenes without regularities. Comment: 7 pages, 5 figures, ICRA accepted
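
    To make the bounded-mesh idea above concrete, here is a minimal sketch of restricting a landmark-indexed triangle mesh to the landmarks still inside the VIO sliding window. The data layout (triangles as triples of landmark ids) and the function name are illustrative assumptions, not the paper's implementation.

    def prune_mesh_to_horizon(triangles, active_landmark_ids):
        # Keep only faces whose three vertices are landmarks still inside the
        # sliding window, so mesh size stays bounded with the optimization.
        active = set(active_landmark_ids)
        return [tri for tri in triangles if all(v in active for v in tri)]

    # Toy usage: triangles indexed by landmark id; landmark 9 has left the window.
    mesh = [(1, 2, 3), (2, 3, 9), (4, 5, 6)]
    print(prune_mesh_to_horizon(mesh, active_landmark_ids=[1, 2, 3, 4, 5, 6]))
    # -> [(1, 2, 3), (4, 5, 6)]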

    SPINS: Structure Priors aided Inertial Navigation System

    Although Simultaneous Localization and Mapping (SLAM) has been an active research topic for decades, current state-of-the-art methods still suffer from instability or inaccuracy in many civilian environments, due to feature insufficiency or inherent estimation drift. To resolve these issues, we propose a navigation system combining SLAM and prior-map-based localization. Specifically, we consider the additional integration of line and plane features, which are ubiquitous and more structurally salient in civilian environments, into the SLAM to ensure feature sufficiency and localization robustness. More importantly, we incorporate general prior map information into the SLAM to restrain its drift and improve the accuracy. To avoid rigorous association between prior information and local observations, we parameterize the prior knowledge as low-dimensional structural priors defined as relative distances/angles between different geometric primitives. The localization is formulated as a graph-based optimization problem that contains sliding-window-based variables and factors, including IMU, heterogeneous features, and structure priors. We also derive the analytical expressions of the Jacobians of the different factors to avoid the overhead of automatic differentiation. To further alleviate the computational burden of incorporating structural prior factors, a selection mechanism based on the so-called information gain is adopted to incorporate only the most effective structure priors in the graph optimization. Finally, the proposed framework is extensively tested on synthetic data, public datasets, and, more importantly, real UAV flight data obtained from a building inspection task. The results show that the proposed scheme can effectively improve the accuracy and robustness of localization for autonomous robots in civilian applications. Comment: 14 pages, 14 figures
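
    As a concrete example of a relative-angle structural prior of the kind described above, the sketch below penalizes the deviation of the angle between two estimated plane normals from a prior angle (e.g. 0 or pi/2 for parallel or perpendicular walls). The parameterization and names are illustrative assumptions, not the paper's factor definition.

    import numpy as np

    def angle_prior_residual(normal_a, normal_b, prior_angle_rad):
        # Residual penalizing deviation of the angle between two plane normals
        # from a prior angle taken from the map (illustrative prior factor).
        na = normal_a / np.linalg.norm(normal_a)
        nb = normal_b / np.linalg.norm(normal_b)
        angle = np.arccos(np.clip(np.dot(na, nb), -1.0, 1.0))
        return angle - prior_angle_rad

    # Toy usage: two roughly perpendicular wall normals, prior of 90 degrees.
    print(angle_prior_residual(np.array([1.0, 0.02, 0.0]),
                               np.array([0.0, 1.0, 0.0]),
                               np.pi / 2))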

    Visual simultaneous localisation and mapping for sewer pipe networks leveraging cylindrical regularity

    This work proposes a novel visual Simultaneous Localisation and Mapping (vSLAM) approach for robots in sewer pipe networks. One problem of vSLAM in pipes is that the scale drifts and accuracy degrades. We propose the use of structural information to mitigate this problem via cylindrical regularity. The main novelty consists of an approach for cylinder detection that is more robust than previous methods in non-smooth sewer pipe environments. Cylindrical regularity is then incorporated into both local bundle adjustment and pose graph optimisation. The approach adopts a minimal cylinder representation with only five parameters, avoiding constraints during the optimisation in vSLAM. A further novelty is that the estimated cylinder is part of the scale drift estimation, which enables a correction to the translation estimate and further improves the accuracy. The approach, termed Cylindrical Regularity ORB-SLAM (CRORB), is benchmarked and compared to the leading visual SLAM algorithms ORB-SLAM2 and direct sparse odometry (DSO), as well as a vSLAM algorithm with cylindrical regularity developed for gas pipes, using real sewer pipe data and synthetic data generated with the Gazebo modelling software. The results demonstrate that CRORB improves substantially over the competitors, with a reduction of approximately 70% in error on real data.
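
    To illustrate how a cylindrical regularity can enter an optimisation as a residual, the sketch below measures how far reconstructed points deviate from a cylinder surface. It uses an over-parameterized (axis point, axis direction, radius) cylinder for readability; the paper's own minimal five-parameter representation and its use in scale-drift estimation are not reproduced here.

    import numpy as np

    def point_to_cylinder_residual(points, axis_point, axis_dir, radius):
        # Deviation of each point's radial distance to the axis from the radius,
        # i.e. its distance from the cylinder surface (illustrative residual).
        d = axis_dir / np.linalg.norm(axis_dir)
        rel = points - axis_point
        radial = rel - np.outer(rel @ d, d)   # component orthogonal to the axis
        return np.linalg.norm(radial, axis=1) - radius

    # Toy usage: points near a pipe of radius 0.5 m aligned with the x-axis.
    pts = np.array([[0.0, 0.5, 0.0], [1.0, 0.0, 0.52], [2.0, -0.49, 0.0]])
    print(point_to_cylinder_residual(pts, np.zeros(3),
                                     np.array([1.0, 0.0, 0.0]), 0.5))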

    PlaneSLAM: Plane-based LiDAR SLAM for Motion Planning in Structured 3D Environments

    LiDAR sensors are a powerful tool for robot simultaneous localization and mapping (SLAM) in unknown environments, but the raw point clouds they produce are dense, computationally expensive to store, and unsuited for direct use by downstream autonomy tasks, such as motion planning. For integration with motion planning, it is desirable for SLAM pipelines to generate lightweight geometric map representations. Such representations are also particularly well suited for man-made environments, which can often be viewed as a so-called "Manhattan world" built on a Cartesian grid. In this work we present a 3D LiDAR SLAM algorithm for Manhattan world environments which extracts planar features from point clouds to achieve lightweight, real-time localization and mapping. Our approach generates plane-based maps which occupy significantly less memory than their point cloud equivalents and are well suited to fast collision checking for motion planning. By leveraging the Manhattan world assumption, we target extraction of orthogonal planes to generate maps which are more structured and organized than those of existing plane-based LiDAR SLAM approaches. We demonstrate our approach in the high-fidelity AirSim simulator and in real-world experiments with a ground rover equipped with a Velodyne LiDAR. For both cases, we are able to generate high-quality maps and trajectory estimates at a rate matching the sensor rate of 10 Hz.
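
    As a small illustration of the Manhattan-world idea above, the sketch below snaps an extracted plane normal to the nearest signed coordinate axis, which is one simple way to enforce orthogonal plane orientations. The function and the threshold-free snapping rule are assumptions for illustration, not the paper's extraction pipeline.

    import numpy as np

    def snap_to_manhattan(normal):
        # Under a Manhattan-world assumption, snap an extracted plane normal to
        # the closest signed coordinate axis (illustrative regularization step).
        n = normal / np.linalg.norm(normal)
        axis = np.argmax(np.abs(n))
        snapped = np.zeros(3)
        snapped[axis] = np.sign(n[axis])
        return snapped

    # Toy usage: a noisy near-vertical normal snaps to the +z axis.
    print(snap_to_manhattan(np.array([0.05, -0.02, 0.98])))   # -> [0. 0. 1.]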

    RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented Reality in Dynamic Environments

    It is typically challenging for visual or visual-inertial odometry systems to handle dynamic scenes and pure rotation. In this work, we design a novel visual-inertial odometry (VIO) system called RD-VIO to handle both of these problems. First, we propose an IMU-PARSAC algorithm which can robustly detect and match keypoints in a two-stage process. In the first stage, landmarks are matched with new keypoints using visual and IMU measurements. We collect statistical information from this matching and use it to guide the intra-keypoint matching in the second stage. Second, to handle the problem of pure rotation, we detect the motion type and adapt the deferred-triangulation technique during the data-association process. Pure-rotational frames are turned into special subframes; when solving the visual-inertial bundle adjustment, they provide additional constraints on the pure-rotational motion. We evaluate the proposed VIO system on public datasets. Experiments show that the proposed RD-VIO has clear advantages over other methods in dynamic environments.
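
    As a concrete sketch of motion-type detection for the pure-rotation handling described above, the snippet below rotates previous-frame feature bearings by an IMU-predicted relative rotation and checks whether the residual parallax is negligible. The bearing-vector interface, the threshold, and the function name are illustrative assumptions, not RD-VIO's actual detector.

    import numpy as np

    def is_pure_rotation(bearings_prev, bearings_curr, R_prev_to_curr,
                         parallax_thresh_rad=0.5 * np.pi / 180.0):
        # Rotate previous unit bearing vectors by the (IMU-predicted) relative
        # rotation; if the mean angle to the current bearings is tiny, treat the
        # motion as (near) pure rotation (illustrative threshold).
        rotated = bearings_prev @ R_prev_to_curr.T
        cosines = np.clip(np.sum(rotated * bearings_curr, axis=1), -1.0, 1.0)
        return np.mean(np.arccos(cosines)) < parallax_thresh_rad

    # Toy usage: identical bearings and an identity relative rotation -> True.
    b = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.995]])
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    print(is_pure_rotation(b, b, np.eye(3)))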

    Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras

    This paper presents a visual SLAM system that utilizes point and line clouds for robust camera localization, together with an embedded piece-wise planar reconstruction (PPR) module, so that the system as a whole provides a structural map. Building a scale-consistent map in parallel with tracking, for example with a single camera, brings the challenge of reconstructing geometric primitives with scale ambiguity and further complicates the graph optimization of bundle adjustment (BA). We address these problems by proposing several run-time optimizations on the reconstructed lines and planes. The system is then extended with depth and stereo sensors based on the design of the monocular framework. The results show that our proposed SLAM tightly incorporates these semantic features to boost both front-end tracking and back-end optimization. We evaluate our system exhaustively on various datasets and open-source our code for the community (https://github.com/PeterFWS/Structure-PLP-SLAM). Comment: pre-print version; v2 adds supplementary materials; code open-source: https://github.com/PeterFWS/Structure-PLP-SLAM
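
    As a small illustration of a piece-wise planar reconstruction step of the kind described above, the sketch below selects the map points lying close to a candidate plane so they can be optimized jointly with it. The distance threshold and function name are assumptions for illustration, not the paper's PPR module.

    import numpy as np

    def assign_points_to_plane(points, normal, offset, dist_thresh=0.05):
        # Indices of map points within dist_thresh of the plane n.x + d = 0,
        # a simple stand-in for associating points with a planar segment.
        n = normal / np.linalg.norm(normal)
        dists = np.abs(points @ n + offset)
        return np.nonzero(dists < dist_thresh)[0]

    # Toy usage: two points lie near the plane z = 1, one does not.
    pts = np.array([[0.0, 0.0, 1.02], [1.0, 2.0, 0.99], [0.0, 0.0, 3.0]])
    print(assign_points_to_plane(pts, np.array([0.0, 0.0, 1.0]), -1.0))  # -> [0 1]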