
    Large Scale 3D Mapping of Indoor Environments Using a Handheld RGBD Camera

    The goal of this research is to investigate the problem of reconstructing a 3D representation of an environment of arbitrary size using a handheld color and depth (RGBD) sensor. The focus of this dissertation is to examine four of the underlying subproblems of this system: camera tracking, loop closure, data storage, and integration.

    First, a system for 3D reconstruction of large indoor planar environments from data captured by an RGBD sensor mounted on a mobile robotic platform is presented, together with an algorithm for constructing nearly drift-free 3D occupancy grids of large indoor environments in an online manner. This approach combines data from an odometry sensor with the output of a visual registration algorithm, and it enforces a Manhattan-world constraint using factor graphs to produce an accurate online estimate of the trajectory of the mobile robotic platform. Experiments in environments of varying size and construction show that this method reduces rotational and translational drift significantly without performing any loop closing. In addition, the advantages and limitations of an octree data structure for representing a 3D environment are examined.

    Second, the problem of sensor tracking is studied, specifically the use of the KinectFusion algorithm to align two subsequent point clouds generated by an RGBD sensor. A method is proposed to overcome a significant limitation of the Iterative Closest Point (ICP) algorithm used in KinectFusion, namely its sole reliance on geometric information. The proposed method uses both geometric and color information in a direct manner, using all of the data to accurately estimate camera pose. Data association is performed by computing a warp between the two color images associated with the two RGBD point clouds using the Lucas-Kanade algorithm. A subsequent step then estimates the transformation between the point clouds using either a point-to-point or point-to-plane error metric. Scenarios in which each of these metrics fails are described, and a normal covariance test for automatically selecting between them is proposed. Together, Lucas-Kanade data association (LKDA) and covariance testing enable robust camera tracking through areas with few geometric features while retaining accuracy in environments where the existing ICP technique succeeds. Experimental results on several publicly available datasets demonstrate the improved performance both qualitatively and quantitatively.

    Third, the choice of state space for performing loop closure is revisited. Although a relative state space has been discounted by previous authors, it is shown that such a state space is actually extremely powerful, able to achieve recognizable results after just one iteration. The power behind the technique is that changing the orientation of one node affects the other nodes. At the same time, the approach, referred to as Pose Optimization using a Relative State Space (POReSS), is fast because, like the more popular incremental state space, the Jacobian never needs to be explicitly computed. However, while POReSS quickly computes a solution near the global optimum, it is not precise enough to perform the fine adjustments necessary to achieve acceptable results. A method is therefore proposed that augments POReSS with a fast variant of Gauss-Seidel, referred to as Graph-Seidel, operating on a global state space to allow the solution to settle closer to the global minimum. Through a set of experiments, it is shown that the combination of POReSS and Graph-Seidel is not only faster but also achieves a lower residual than other non-linear algebra techniques. Moreover, unlike the linear algebra-based techniques, this approach scales to very large graphs. In addition to revisiting the idea of a relative state space, the benefits of optimizing only the rotational components of a trajectory for loop closing are examined (rPOReSS). Finally, an incremental implementation of the rotational optimization is proposed (irPOReSS).
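    As a rough illustration of the metric-selection idea in this abstract, the sketch below thresholds the spread of the target cloud's surface normals; the criterion and threshold are assumptions for illustration, not the dissertation's exact test. The intuition: when normals are nearly parallel (a single flat wall), the point-to-plane metric leaves in-plane motion unconstrained, so the color-driven point-to-point metric is the safer choice.

```python
import numpy as np

def select_icp_metric(target_normals, spread_thresh=0.05):
    """Toy normal covariance test (hypothetical threshold and criterion).
    Returns which ICP error metric to use for this frame pair."""
    # Normalize, then measure how spread out the normals are: the trace of
    # their covariance is near zero when all normals point the same way.
    n = target_normals / np.linalg.norm(target_normals, axis=1, keepdims=True)
    spread = np.trace(np.cov(n.T))
    return "point-to-point" if spread < spread_thresh else "point-to-plane"

# Example: normals from a flat wall (all nearly parallel).
wall = np.tile([0.0, 0.0, 1.0], (500, 1)) + 1e-3 * np.random.randn(500, 3)
print(select_icp_metric(wall))  # -> point-to-point
```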

    Direct Sparse Mapping

    Photometric bundle adjustment (PBA) accurately estimates geometry from video. However, current PBA systems maintain only a temporary map and cannot handle scene reobservations. We present Direct Sparse Mapping (DSM), a full monocular visual simultaneous localization and mapping (SLAM) system based on PBA. Its persistent map handles reobservations, yielding the most accurate results to date on EuRoC for a direct method.
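    For context, the core quantity a PBA system minimizes is the photometric error between a point's intensity in a reference frame and at its reprojection in another frame. Below is a minimal sketch of one such residual; it is simplified for illustration (the published formulation additionally uses affine brightness correction, a robust norm, a multi-pixel residual pattern, and inverse-depth parameterization).

```python
import numpy as np

def photometric_residual(I_ref, I_tgt, p_ref, depth, K, T):
    """One simplified photometric residual.
    p_ref: pixel (u, v) in the reference frame; depth: its depth (m).
    K: 3x3 camera intrinsics; T: 4x4 reference-to-target pose."""
    u, v = p_ref
    # Back-project the pixel to a 3D point in the reference camera frame.
    X = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Transform into the target frame and reproject with the pinhole model.
    Xt = T[:3, :3] @ X + T[:3, 3]
    ut, vt, w = K @ Xt
    ut, vt = ut / w, vt / w
    # Residual: intensity difference (nearest-neighbor lookup here; real
    # systems use bilinear interpolation and check image bounds).
    return float(I_tgt[int(round(vt)), int(round(ut))]) - float(I_ref[v, u])
```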

    Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems

    This paper presents Kimera-Multi, the first multi-robot system that (i) is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures resulting from perceptual aliasing, (ii) is fully distributed and relies only on local (peer-to-peer) communication to achieve distributed localization and mapping, and (iii) builds a globally consistent metric-semantic 3D mesh model of the environment in real time, where faces of the mesh are annotated with semantic labels. Kimera-Multi is implemented by a team of robots equipped with visual-inertial sensors. Each robot builds a local trajectory estimate and a local mesh using Kimera. When communication is available, robots initiate a distributed place recognition and robust pose graph optimization protocol based on a novel distributed graduated non-convexity algorithm. The proposed protocol allows the robots to improve their local trajectory estimates by leveraging inter-robot loop closures while being robust to outliers. Finally, each robot uses its improved trajectory estimate to correct the local mesh using mesh deformation techniques. We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots. Both real and simulated experiments involve long trajectories (e.g., up to 800 meters per robot). The experiments show that Kimera-Multi (i) outperforms the state of the art in terms of robustness and accuracy, (ii) achieves estimation errors comparable to a centralized SLAM system while being fully distributed, (iii) is parsimonious in terms of communication bandwidth, (iv) produces accurate metric-semantic 3D meshes, and (v) is modular and can also be used for standard 3D reconstruction (i.e., without semantic labels) or for trajectory estimation (i.e., without reconstructing a 3D mesh). Accepted by IEEE Transactions on Robotics (18 pages, 15 figures).
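    To illustrate the graduated non-convexity (GNC) machinery the abstract refers to, the sketch below shows a generic truncated-least-squares weight update of the kind used for robust pose graph optimization; it is a single-robot illustration under stated assumptions, not Kimera-Multi's distributed protocol. The outer loop alternates between solving the pose graph with these weights on the loop-closure edges and recomputing the weights while annealing the convexity parameter mu upward.

```python
import numpy as np

def gnc_tls_weights(residuals, c_bar, mu):
    """Generic GNC weight update with a truncated least-squares kernel.
    residuals: per-loop-closure errors from the last pose-graph solve;
    c_bar: inlier threshold; mu > 0: convexity parameter, annealed so the
    surrogate cost morphs from convex toward true truncated LS."""
    r2, c2 = residuals**2, c_bar**2
    lo = mu / (mu + 1.0) * c2   # below this: certain inliers, weight 1
    hi = (mu + 1.0) / mu * c2   # above this: certain outliers, weight 0
    w = np.ones_like(r2)
    w[r2 >= hi] = 0.0
    mid = (r2 > lo) & (r2 < hi)
    # Intermediate residuals get a smoothly decaying weight in [0, 1].
    w[mid] = c_bar * np.sqrt(mu * (mu + 1.0)) / np.abs(residuals[mid]) - mu
    return w
```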

    Real-Time Accurate Visual SLAM with Place Recognition

    The Simultaneous Localization and Mapping (SLAM) problem consists of localizing a sensor within a map that is built online. SLAM technology makes it possible to localize a robot in an environment unknown to it by processing the information from its onboard sensors, and therefore without depending on external infrastructure. A map allows localization at all times without accumulating drift, unlike odometry, which integrates incremental motions. This kind of technology is critical for the navigation of service robots and autonomous vehicles, and for localizing the user in augmented and virtual reality applications.

    The main contribution of this thesis is ORB-SLAM, a feature-based monocular SLAM system that works in real time in small and large, indoor and outdoor environments. The system is robust to dynamic elements in the scene, can close loops and relocalize the camera even when the viewpoint has changed significantly, and includes a fully automatic initialization method. ORB-SLAM is currently the most complete, accurate, and reliable monocular SLAM solution using a camera as the only sensor. Being based on features and bundle adjustment, the system has demonstrated unprecedented accuracy and robustness on standard public sequences.

    Additionally, ORB-SLAM has been extended to reconstruct the environment semi-densely. Our solution decouples the semi-dense reconstruction from the estimation of the camera trajectory, resulting in a system that combines the accuracy and robustness of feature-based SLAM with the more complete reconstructions of direct methods. The monocular solution has also been extended to exploit the information from stereo cameras, RGB-D cameras, and inertial sensors, achieving higher accuracy than other state-of-the-art solutions. To contribute to the scientific community, we have released an open-source implementation of our SLAM solution for monocular, stereo, and RGB-D cameras, the first open-source solution able to work with these three camera types.

    Bibliography:
    R. Mur-Artal and J. D. Tardós. Fast Relocalisation and Loop Closing in Keyframe-Based SLAM. IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China, June 2014.
    R. Mur-Artal and J. D. Tardós. ORB-SLAM: Tracking and Mapping Recognizable Features. RSS Workshop on Multi VIew Geometry in RObotics (MVIGRO). Berkeley, USA, July 2014.
    R. Mur-Artal and J. D. Tardós. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM. Robotics: Science and Systems (RSS). Rome, Italy, July 2015.
    R. Mur-Artal, J. M. M. Montiel and J. D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, October 2015. (2015 IEEE Transactions on Robotics Best Paper Award).
    R. Mur-Artal and J. D. Tardós. Visual-Inertial Monocular SLAM with Map Reuse. IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 796-803, April 2017. (To be presented at ICRA 2017).
    R. Mur-Artal and J. D. Tardós. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. arXiv preprint arXiv:1610.06475, 2016. (Under review).
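    As a minimal illustration of the feature-based front end this thesis builds on, the snippet below extracts and matches ORB descriptors with OpenCV. The image paths are placeholders, and the full ORB-SLAM system adds keyframe selection, a covisibility graph, bag-of-words place recognition, and bundle adjustment on top of this step.

```python
import cv2

# ORB: the binary feature used throughout ORB-SLAM's tracking and mapping.
orb = cv2.ORB_create(nfeatures=1000)

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking keeps
# only mutually best matches, a cheap outlier filter.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches between the two frames")
```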

    Depth sensors in augmented reality solutions. Literature review

    The emergence of depth sensors has made it possible to track not only monocular cues but also the actual depth values of the environment. This is especially useful in augmented reality solutions, where the position and orientation (pose) of the observer must be accurately determined so that virtual objects can be inserted into the user's view through, for example, the screen of a tablet or augmented reality glasses (e.g., Google Glass). Although early 3D sensors were physically quite large, their size is decreasing, and eventually a 3D sensor could be embedded, for example, in augmented reality glasses. The wider subject area considered in this review is 3D SLAM (Simultaneous Localization and Mapping) methods that take advantage of the 3D information provided by modern RGB-D sensors such as the Microsoft Kinect, making a review of SLAM and 3D tracking in augmented reality a timely subject. We also examine the limitations and possibilities of different tracking methods, and how they should be improved to allow efficient integration into the augmented reality solutions of the future.
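    As background for the RGB-D methods surveyed, the sketch below back-projects a depth image into a 3D point cloud with the pinhole model, the basic step such systems perform before pose tracking. The intrinsics are Kinect-like placeholder values, not parameters from any specific sensor calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an H x W x 3 array of
    3D points in the camera frame using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Example with a synthetic flat depth map and Kinect-like intrinsics.
pts = depth_to_points(np.full((480, 640), 2.0), 525.0, 525.0, 319.5, 239.5)
print(pts.shape)  # (480, 640, 3)
```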