
    Plane extraction for indoor place recognition

    In this paper, we present an image-based plane extraction method well suited for real-time operation. Our approach exploits the assumption that the surrounding scene is mainly composed of planes oriented in known directions. Planes are detected from a single image using a voting scheme that takes the vanishing lines into account. Candidate planes are then validated and merged with a region-growing approach to detect, in real time, the planes inside an unknown indoor environment. Using the related plane homographies, the perspective distortion can be removed, enabling standard place recognition algorithms to work in a viewpoint-invariant setup. Quantitative experiments performed on real-world images show the effectiveness of our approach compared with a very popular method.
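
    As an illustration of the rectification step described above, the following Python sketch (not the paper's implementation) warps a detected planar region to a fronto-parallel view using its homography; the function name, corner coordinates, and output size are hypothetical placeholders.

    import cv2
    import numpy as np

    def rectify_plane(image, plane_corners, out_size=(256, 256)):
        # Warp the quadrilateral plane_corners (4 points, ordered TL, TR, BR, BL)
        # to a fronto-parallel view, removing the perspective distortion.
        w, h = out_size
        dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
        H = cv2.getPerspectiveTransform(np.float32(plane_corners), dst)
        return cv2.warpPerspective(image, H, out_size)

    # Hypothetical usage with the corners of a wall segment detected in img:
    # rectified = rectify_plane(img, [(120, 80), (400, 60), (410, 300), (110, 320)])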

    Real-time manhattan world rotation estimation in 3D

    Drift of the rotation estimate is a well-known problem in visual odometry systems, as it is the main source of positioning inaccuracy. We propose three novel algorithms to estimate the full 3D rotation relative to the surrounding Manhattan World (MW) in as little as 20 ms, using surface normals derived from the depth channel of an RGB-D camera. Importantly, this rotation estimate acts as a structure compass which can be used to estimate the bias of an odometry system, such as an inertial measurement unit (IMU), and thus remove its angular drift. We evaluate the run-time as well as the accuracy of the proposed algorithms on ground-truth data. They achieve zero-drift rotation estimation with RMSEs below 3.4° by themselves and below 2.8° when integrated with an IMU in a standard extended Kalman filter (EKF). Additional qualitative results show the accuracy in a large-scale indoor environment as well as the ability to handle fast motion. Selected segmentations of scenes from the NYU depth dataset demonstrate the robustness of the inference algorithms to clutter and hint at the usefulness of the segmentation for further processing.
    United States. Office of Naval Research. Multidisciplinary University Research Initiative (Awards N00014-11-1-0688 and N00014-10-1-0936); National Science Foundation (U.S.) (Award IIS-1318392)
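
    As a rough illustration of how surface normals can be used to recover a Manhattan frame, here is a minimal Python sketch of one alignment iteration: each normal is assigned to the nearest signed Manhattan axis under the current estimate, and the rotation is refit with an orthogonal Procrustes step. This is a simplification under that assumption, not the paper's inference algorithms, and all names are illustrative.

    import numpy as np

    def manhattan_rotation_step(normals, R):
        # normals: (N, 3) unit surface normals in the camera frame.
        # R: (3, 3) rotation mapping Manhattan-world axes to the camera frame.
        canon = np.concatenate([np.eye(3), -np.eye(3)])   # signed MW axes (MW frame)
        cam_axes = canon @ R.T                            # same axes in the camera frame
        assign = np.argmax(normals @ cam_axes.T, axis=1)  # nearest signed axis per normal
        E = canon[assign]                                 # assigned MW-frame axes, (N, 3)
        # Orthogonal Procrustes: rotation best mapping the assigned axes onto the normals.
        U, _, Vt = np.linalg.svd(normals.T @ E)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return U @ D @ Vt

    # Iterating manhattan_rotation_step a few times refines R toward the Manhattan frame.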

    Unsupervised Visual Odometry and Action Integration for PointGoal Navigation in Indoor Environment

    PointGoal navigation in indoor environments is a fundamental task for personal robots: navigating to a specified point. Recent studies solved this PointGoal navigation task with a near-perfect success rate in photo-realistically simulated environments, under the assumptions of noiseless actuation and, most importantly, perfect localization with GPS and compass sensors. However, an accurate GPS signal is difficult to obtain in real indoor environments. To improve PointGoal navigation accuracy without a GPS signal, we use visual odometry (VO) and propose a novel action integration module (AIM) trained in an unsupervised manner. Specifically, unsupervised VO computes the relative pose of the agent from the re-projection error of two adjacent frames and then replaces the accurate GPS signal with path integration. The pseudo position estimated by VO is used to train action integration, which assists the agent in updating its internal perception of location and helps improve the navigation success rate. The training and inference process uses only RGB, depth, collision, and self-action information. Experiments show that the proposed system achieves satisfactory results and outperforms partially supervised learning algorithms on the popular Gibson dataset.
    Comment: 12 pages, 6 figures
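
    To illustrate the path-integration idea of standing in for the GPS and compass signal, here is a minimal Python sketch that accumulates per-step relative poses (such as a VO module might output) into a pseudo global pose; this is not the paper's AIM module, and the class and variable names are assumptions.

    import math

    class PathIntegrator:
        # Accumulates relative 2D poses (dx, dy, dtheta) given in the agent frame
        # into a pseudo global pose, replacing GPS and compass readings.
        def __init__(self):
            self.x = self.y = self.theta = 0.0

        def update(self, dx, dy, dtheta):
            # Rotate the body-frame translation into the world frame, then accumulate.
            c, s = math.cos(self.theta), math.sin(self.theta)
            self.x += c * dx - s * dy
            self.y += s * dx + c * dy
            self.theta = (self.theta + dtheta) % (2.0 * math.pi)
            return self.x, self.y, self.theta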

    Sigma-FP: Robot Mapping of 3D Floor Plans with an RGB-D Camera under Uncertainty

    This work presents Sigma-FP, a novel 3D reconstruction method to obtain the floor plan of a multi-room environment from a sequence of RGB-D images captured by a wheeled mobile robot. For each input image, the planar patches of visible walls are extracted and subsequently characterized by a multivariate Gaussian distribution in the convenient Plane Parameter Space. Then, accounting for the probabilistic nature of the robot localization, we transform and combine the planar patches from the camera frame into a 3D global model, where the planar patches include both the plane estimation uncertainty and the propagation of the robot pose uncertainty. Additionally, by processing depth data, we detect openings (doors and windows) in the walls, which are also incorporated into the 3D global model to provide a more realistic representation. Experimental results, in both real-world and synthetic environments, demonstrate that our method outperforms state-of-the-art methods in both time and accuracy, while relying only on the Atlanta world assumption.
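
    As a small illustration of combining probabilistic plane estimates, the Python sketch below fuses two Gaussian estimates of the same wall's parameters by precision-weighted averaging (the standard product-of-Gaussians update); the parameterization and names are assumptions and a simplification of the paper's treatment of the Plane Parameter Space.

    import numpy as np

    def fuse_plane_gaussians(mu_a, cov_a, mu_b, cov_b):
        # Product-of-Gaussians fusion of two plane-parameter estimates
        # N(mu_a, cov_a) and N(mu_b, cov_b), e.g. the same wall observed twice.
        info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
        cov = np.linalg.inv(info_a + info_b)
        mu = cov @ (info_a @ mu_a + info_b @ mu_b)
        return mu, cov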