
    ElasticFusion: real-time dense SLAM and light source estimation

    We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments and beyond, explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoor environments in real time as a user-handheld camera explores the scene. Absolutely no prior information about the scene or the number of light sources is required. By making a small set of simple assumptions about the appearance properties of the scene, our method can incrementally estimate both the quantity and location of multiple light sources in the environment in an online fashion. Our results demonstrate that our technique functions well in many different environments and lighting configurations. We show that this enables (a) more realistic augmented reality (AR) rendering; (b) a richer understanding of the scene beyond pure geometry; and (c) more accurate and robust photometric tracking.
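    As a rough illustration of the windowed surfel-based fusion step mentioned in the abstract, the sketch below updates one map surfel with an associated depth measurement using confidence-weighted averaging. It is a minimal stand-in for the actual ElasticFusion update (which also handles surfel radius, colour, and timestamps); all names are illustrative.

        import numpy as np

        def fuse_surfel(pos, normal, weight, meas_pos, meas_normal, meas_weight):
            # Confidence-weighted running average of position and normal; the
            # measurement weight is accumulated into the surfel confidence.
            w = weight + meas_weight
            new_pos = (weight * pos + meas_weight * meas_pos) / w
            new_normal = weight * normal + meas_weight * meas_normal
            new_normal /= np.linalg.norm(new_normal)
            return new_pos, new_normal, w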

    Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications

    The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its immediate environment in real time as triangle meshes and of localizing itself within these three-dimensional meshes simultaneously. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and the poses estimated for them by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
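    To give an idea of how such data can be turned into a map, the sketch below back-projects one ToF depth image into world coordinates using the pose reported by the tracking system. K (depth-camera intrinsics) and T_world_cam (4x4 camera-to-world pose) are assumed inputs; the names are illustrative and not taken from the HoloLens API.

        import numpy as np

        def depth_to_world_points(depth, K, T_world_cam):
            # Back-project every valid depth pixel (metres) through the intrinsics,
            # then transform the camera-frame points into the world frame.
            h, w = depth.shape
            us, vs = np.meshgrid(np.arange(w), np.arange(h))
            z = depth.ravel()
            valid = z > 0
            pix = np.vstack([us.ravel(), vs.ravel(), np.ones(h * w)])
            rays = np.linalg.inv(K) @ pix
            pts_cam = rays[:, valid] * z[valid]
            pts_hom = np.vstack([pts_cam, np.ones(pts_cam.shape[1])])
            return (T_world_cam @ pts_hom)[:3].T  # N x 3 world-frame points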

    Sigma-FP: Robot Mapping of 3D Floor Plans with an RGB-D Camera under Uncertainty

    This work presents Sigma-FP, a novel 3D reconstruction method to obtain the floor plan of a multi-room environment from a sequence of RGB-D images captured by a wheeled mobile robot. For each input image, the planar patches of visible walls are extracted and subsequently characterized by a multivariate Gaussian distribution in a convenient Plane Parameter Space. Then, accounting for the probabilistic nature of the robot localization, we transform and combine the planar patches from the camera frame into a 3D global model, where the planar patches carry both the plane estimation uncertainty and the propagated robot pose uncertainty. Additionally, by processing depth data, we detect openings (doors and windows) in the walls, which are also incorporated into the 3D global model to provide a more realistic representation. Experimental results, in both real-world and synthetic environments, demonstrate that our method outperforms state-of-the-art methods in both time and accuracy, while relying only on the Atlanta world assumption.
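    The key step of the abstract, pushing both the plane-estimation uncertainty and the robot-pose uncertainty into the global model, can be sketched with a generic first-order (numerical-Jacobian) propagation, as below. It uses the usual plane transform n_w = R n_c, d_w = d_c + n_w . t rather than the paper's exact Plane Parameter Space formulation; all names are illustrative.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def transform_plane(n_c, d_c, R, t):
            # A plane n_c . x = d_c in the camera frame becomes n_w . x = d_w in the world frame.
            n_w = R @ n_c
            return n_w, d_c + n_w @ t

        def propagate_plane_cov(n_c, d_c, cov_plane, R, t, cov_pose, eps=1e-6):
            # cov_plane: 4x4 covariance over (n_c, d_c); cov_pose: 6x6 covariance over a
            # local pose perturbation (rotation vector, translation). Returns the world-frame
            # plane and its 4x4 covariance via a numerically linearised transform.
            def f(plane, xi):
                dR = Rotation.from_rotvec(xi[:3]).as_matrix()
                n_w, d_w = transform_plane(plane[:3], plane[3], dR @ R, t + xi[3:])
                return np.append(n_w, d_w)

            x0 = np.append(n_c, d_c)
            y0 = f(x0, np.zeros(6))
            J_plane = np.zeros((4, 4))
            J_pose = np.zeros((4, 6))
            for i in range(4):
                dx = np.zeros(4)
                dx[i] = eps
                J_plane[:, i] = (f(x0 + dx, np.zeros(6)) - y0) / eps
            for i in range(6):
                dxi = np.zeros(6)
                dxi[i] = eps
                J_pose[:, i] = (f(x0, dxi) - y0) / eps
            cov_world = J_plane @ cov_plane @ J_plane.T + J_pose @ cov_pose @ J_pose.T
            return y0[:3], y0[3], cov_world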

    Perception-aware Path Planning

    In this paper, we give a double twist to the problem of planning under uncertainty. State-of-the-art planners seek to minimize the localization uncertainty by considering only the geometric structure of the scene. We argue that motion planning for vision-controlled robots should be perception aware, in that the robot should also favor texture-rich areas to minimize the localization uncertainty during a goal-reaching task. Thus, we describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric one, to compute the uncertainty of vision-based localization during path planning. To avoid the caveats of feature-based localization systems (i.e., dependence on feature type and user-defined thresholds), we use dense, direct methods. This allows us to compute the localization uncertainty directly from the intensity values of every pixel in the image. We also describe how to compute trajectories online, also considering scenarios with no prior knowledge about the map. The proposed framework is general and can easily be adapted to different robotic platforms and scenarios. The effectiveness of our approach is demonstrated with extensive experiments in both simulated and real-world environments using a vision-controlled micro aerial vehicle. Comment: 16 pages, 20 figures, revised version; conditionally accepted for IEEE Transactions on Robotics.
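    The core idea, scoring candidate views by how much photometric information (texture) they offer to a dense, direct localizer, can be approximated as below. The sum of squared image gradients serves as a simplified stand-in for the paper's full photometric-uncertainty propagation, and render_intensity is a hypothetical helper that predicts the intensity image seen from a candidate pose.

        import numpy as np

        def photometric_information(intensity):
            # Texture-richness proxy: total squared image gradient, which (up to depth
            # and geometry terms) drives the Fisher information of direct alignment.
            gy, gx = np.gradient(intensity.astype(np.float64))
            return float(np.sum(gx**2 + gy**2))

        def pick_most_informative_view(candidate_poses, render_intensity):
            # Prefer the candidate whose predicted view of the map is the most textured.
            return max(candidate_poses,
                       key=lambda pose: photometric_information(render_intensity(pose)))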