778 research outputs found

    Pose Normalization of Indoor Mapping Datasets Partially Compliant with the Manhattan World Assumption

    In this paper, we present a novel pose normalization method for indoor mapping point clouds and triangle meshes that is robust against large fractions of the indoor mapping geometries deviating from an ideal Manhattan World structure. For building structures that contain multiple Manhattan World systems, the dominant Manhattan World structure, supported by the largest fraction of geometries, is determined and used for alignment. In a first step, a vertical alignment is conducted, orienting a chosen axis to be orthogonal to the horizontal floor and ceiling surfaces. Subsequently, a rotation around the resulting vertical axis is determined that aligns the dataset horizontally with the coordinate axes. The proposed method is evaluated quantitatively on several publicly available indoor mapping datasets. Our implementation of the proposed procedure, along with code for reproducing the evaluation, will be made available to the public upon acceptance for publication.
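    The two-step procedure described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes per-point unit normals, a roughly upright input scan, and illustrative thresholds.

```python
import numpy as np

def normalize_pose(points, normals):
    """Two-step pose normalization sketch: vertical, then horizontal alignment."""
    # Step 1: vertical alignment. Average the near-vertical normals
    # (floor/ceiling surfaces) to estimate the up-direction; assumes the
    # scan is coarsely upright so these normals lie near +/-Z.
    vert = np.abs(normals[:, 2]) > 0.9                     # assumed threshold
    up = (normals[vert] * np.sign(normals[vert, 2:3])).mean(axis=0)
    up /= np.linalg.norm(up)
    # Rodrigues rotation taking the estimated up-vector onto +Z.
    axis = np.cross(up, [0.0, 0.0, 1.0])
    s, c = np.linalg.norm(axis), up[2]
    if s < 1e-9:
        R_vert = np.eye(3)
    else:
        k = axis / s
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R_vert = np.eye(3) + s * K + (1.0 - c) * K @ K
    n_v = normals @ R_vert.T

    # Step 2: horizontal alignment. Histogram wall-normal azimuths modulo
    # 90 degrees so all four directions of the dominant Manhattan World
    # vote for the same bin; rotate that peak onto the X axis.
    walls = np.abs(n_v[:, 2]) < 0.1                        # assumed threshold
    az = np.arctan2(n_v[walls, 1], n_v[walls, 0]) % (np.pi / 2)
    hist, edges = np.histogram(az, bins=90, range=(0.0, np.pi / 2))
    theta = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    c2, s2 = np.cos(-theta), np.sin(-theta)
    R = np.array([[c2, -s2, 0.0], [s2, c2, 0.0], [0.0, 0.0, 1.0]]) @ R_vert
    return points @ R.T, R
```

    Because the dominant bin is chosen by a vote over all wall normals, geometries deviating from the dominant Manhattan World structure act as outliers and do not shift the result, which mirrors the robustness claim above.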

    Real-time manhattan world rotation estimation in 3D

    Drift of the rotation estimate is a well-known problem in visual odometry systems, as it is the main source of positioning inaccuracy. We propose three novel algorithms to estimate the full 3D rotation relative to the surrounding Manhattan World (MW) in as little as 20 ms, using surface normals derived from the depth channel of an RGB-D camera. Importantly, this rotation estimate acts as a structure compass which can be used to estimate the bias of an odometry system, such as an inertial measurement unit (IMU), and thus remove its angular drift. We evaluate the run-time as well as the accuracy of the proposed algorithms on ground-truth data. They achieve zero-drift rotation estimation with RMSEs below 3.4° by themselves and below 2.8° when integrated with an IMU in a standard extended Kalman filter (EKF). Additional qualitative results show the accuracy in a large-scale indoor environment as well as the ability to handle fast motion. Selected segmentations of scenes from the NYU depth dataset demonstrate the robustness of the inference algorithms to clutter and hint at the usefulness of the segmentation for further processing.
    Funding: United States. Office of Naval Research, Multidisciplinary University Research Initiative (Awards N00014-11-1-0688 and N00014-10-1-0936); National Science Foundation (U.S.) (Award IIS-1318392).
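    As a rough illustration of estimating a Manhattan World rotation from surface normals, the sketch below alternates between assigning each normal to the nearest of the six signed axes of the current frame and re-solving for the rotation by orthogonal Procrustes. It is a simple hard-assignment stand-in, not one of the three algorithms proposed in the paper.

```python
import numpy as np

def estimate_mw_rotation(normals, iters=10):
    """Alternate nearest-axis assignment and Procrustes refit (sketch)."""
    R = np.eye(3)                                   # MW frame -> camera frame
    canon = np.vstack([np.eye(3), -np.eye(3)])      # 6 signed canonical axes
    for _ in range(iters):
        # Assign each unit normal to the closest signed axis of the frame.
        labels = (normals @ np.hstack([R, -R])).argmax(axis=1)
        # Procrustes: the rotation maximizing sum_i n_i . (R c_i).
        M = normals.T @ canon[labels]
        U, _, Vt = np.linalg.svd(M)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R
```

    In an EKF integration like the one evaluated above, such a rotation estimate would serve as the drift-free orientation measurement against which the IMU bias is observed.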

    PlaneSLAM: Plane-based LiDAR SLAM for Motion Planning in Structured 3D Environments

    LiDAR sensors are a powerful tool for robot simultaneous localization and mapping (SLAM) in unknown environments, but the raw point clouds they produce are dense, computationally expensive to store, and unsuited for direct use by downstream autonomy tasks such as motion planning. For integration with motion planning, it is desirable for SLAM pipelines to generate lightweight geometric map representations. Such representations are also particularly well suited to man-made environments, which can often be viewed as a so-called "Manhattan world" built on a Cartesian grid. In this work we present a 3D LiDAR SLAM algorithm for Manhattan world environments which extracts planar features from point clouds to achieve lightweight, real-time localization and mapping. Our approach generates plane-based maps which occupy significantly less memory than their point cloud equivalents and are well suited to fast collision checking for motion planning. By leveraging the Manhattan world assumption, we target extraction of orthogonal planes to generate maps which are more structured and organized than those of existing plane-based LiDAR SLAM approaches. We demonstrate our approach in the high-fidelity AirSim simulator and in real-world experiments with a ground rover equipped with a Velodyne LiDAR. In both cases, we are able to generate high-quality maps and trajectory estimates at a rate matching the sensor rate of 10 Hz.
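    A lightweight plane feature of the kind described here might be extracted as in the following sketch: fit a plane to a cluster of LiDAR points by SVD, then snap its normal to the nearest coordinate axis under the Manhattan world assumption. The function name and snapping threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_manhattan_plane(points, snap_cos=0.95):
    """Least-squares plane fit with Manhattan-axis snapping (sketch)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the least-squares plane normal.
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    n = Vt[-1]
    # Snap to the closest of +/-X, +/-Y, +/-Z when sufficiently aligned,
    # yielding the orthogonal, organized maps targeted above.
    axis = np.abs(n).argmax()
    if np.abs(n[axis]) > snap_cos:
        sign = np.sign(n[axis])
        n = np.zeros(3)
        n[axis] = sign
    d = -n @ centroid                   # plane equation: n . x + d = 0
    return n, d
```

    Storing a handful of (n, d) pairs per region instead of raw points is what makes such maps cheap to store and fast to collision-check.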

    StructVIO: Visual-Inertial Odometry with Structural Regularity of Man-made Environments

    We propose a novel visual-inertial odometry approach that adopts the structural regularity of man-made environments. Instead of using the Manhattan world assumption, we use the Atlanta world model to describe such regularity. An Atlanta world contains multiple local Manhattan worlds with different heading directions. Each local Manhattan world is detected on the fly, and its heading is gradually refined by the state estimator as new observations arrive. By fully exploiting structural lines aligned with each local Manhattan world, our visual-inertial odometry method becomes more accurate and robust, as well as more flexible across different kinds of complex man-made environments. Extensive benchmark tests and real-world tests show that the proposed approach outperforms existing visual-inertial systems in large-scale man-made environments.
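    The Atlanta world bookkeeping described above can be illustrated with a toy sketch: keep a list of local Manhattan world headings, assign each horizontal structural-line direction to the nearest heading modulo 90 degrees, and refine that heading incrementally. The incremental average here is a stand-in assumption for the paper's state-estimator refinement.

```python
import numpy as np

class AtlantaWorld:
    """Toy registry of local Manhattan world headings (sketch)."""

    def __init__(self, tol_deg=10.0):
        self.tol = np.deg2rad(tol_deg)
        self.headings = []          # one heading per local Manhattan world
        self.counts = []

    def observe_line(self, azimuth):
        # Modulo 90 degrees, all four directions of one local Manhattan
        # world map to the same residual heading.
        a = azimuth % (np.pi / 2)
        for i, h in enumerate(self.headings):
            d = (a - h + np.pi / 4) % (np.pi / 2) - np.pi / 4   # wrapped diff
            if abs(d) < self.tol:
                # Refine the heading gradually as new observations arrive.
                self.counts[i] += 1
                self.headings[i] = (h + d / self.counts[i]) % (np.pi / 2)
                return i
        self.headings.append(a)     # a new local Manhattan world is detected
        self.counts.append(1)
        return len(self.headings) - 1
```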

    A Mixture of Manhattan Frames: Beyond the Manhattan World

    Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches to scene representation exploit this phenomenon via the somewhat restrictive assumption that every plane is perpendicular to one of the axes of a single coordinate system. Known as the Manhattan-World model, this assumption is widely used in computer vision and robotics. The complexity of many real-world scenes, however, necessitates a more flexible model. We propose a novel probabilistic model that describes the world as a mixture of Manhattan frames: each frame defines a different orthogonal coordinate system. This results in a more expressive model that still exploits the orthogonality constraints. We propose an adaptive Markov chain Monte Carlo sampling algorithm with Metropolis-Hastings split/merge moves that utilizes the geometry of the unit sphere. We demonstrate the versatility of our Mixture-of-Manhattan-Frames model by describing complex scenes using depth images of indoor scenes as well as aerial LiDAR measurements of an urban center. Additionally, we show that the model lends itself to focal-length calibration of depth cameras and to plane segmentation.
    Funding: United States. Office of Naval Research, Multidisciplinary University Research Initiative (Award N00014-11-1-0688); United States. Defense Advanced Research Projects Agency (Award FA8650-11-1-7154); Technion, Israel Institute of Technology (MIT Postdoctoral Fellowship Program).
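    The paper infers the mixture with an adaptive MCMC sampler using split/merge moves on the unit sphere; as a much simpler illustration, the sketch below runs a hard-assignment (k-means-like) analogue for a fixed number of frames: assign each normal to the best signed axis of the best Manhattan frame, then refit each frame by orthogonal Procrustes.

```python
import numpy as np

def fit_mmf(normals, rotations, iters=10):
    """Hard-assignment analogue of a Mixture of Manhattan Frames (sketch)."""
    canon = np.vstack([np.eye(3), -np.eye(3)])               # 6 signed axes
    for _ in range(iters):
        # scores[k, i, j]: alignment of normal i with axis j of frame k.
        scores = np.stack([(normals @ R) @ canon.T for R in rotations])
        assign = scores.max(axis=2).argmax(axis=0)           # frame per normal
        for k in range(len(rotations)):
            idx = assign == k
            if not idx.any():
                continue
            labels = scores[k, idx].argmax(axis=1)           # axis per normal
            U, _, Vt = np.linalg.svd(normals[idx].T @ canon[labels])
            rotations[k] = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return rotations
```

    With a single frame this reduces to a plain Manhattan World estimator; the mixture simply lets differently oriented parts of a scene each claim their own orthogonal coordinate system.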