
    Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping

    Modern 3D laser-range scanners have a high data rate, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. To incorporate corrections when refining the alignment, the individual 3D scans in a local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph a continuous-time representation of the sensor trajectory makes it possible to correct measurements between scan poses. We evaluate our approach in multiple experiments with qualitative results, and quantify the map quality with an entropy-based measure. Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
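
    The abstract does not spell out the entropy-based measure; a widely used metric of this kind is the mean map entropy, which averages the differential entropy of a Gaussian fitted to each map point's local neighborhood. Below is a minimal Python sketch of that idea; the function name, the use of SciPy, and the neighborhood radius are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points: np.ndarray, radius: float = 0.3) -> float:
    """Average differential entropy of a Gaussian fitted to each point's
    local neighborhood; lower values indicate a crisper, better-aligned
    map. `radius` is an illustrative choice, not a value from the paper."""
    tree = cKDTree(points)
    entropies = []
    for p in points:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:                      # need a full-rank 3D covariance
            continue
        cov = np.cov(points[idx].T)           # 3x3 sample covariance
        det = np.linalg.det(2.0 * np.pi * np.e * cov)
        if det > 0.0:
            entropies.append(0.5 * np.log(det))
    return float(np.mean(entropies))
```

    A well-aligned map concentrates surface points into thin local distributions, giving tighter covariances and hence a lower mean entropy.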

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles as well as building maps of the environment and localizing therein. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGB-D cameras. Comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks, such as objects resting on horizontal support surfaces and being well separated. It achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data. The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders, and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real-time. The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data. For mapping with micro aerial vehicles, where the 3D input data is particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology, together with a surface-to-surface registration algorithm that robustly aligns the data. Optimization of the resulting graph of determined view poses then yields globally consistent 3D maps. For sequences of RGB-D data, this pipeline is extended to include additional subsampling steps and an initial alignment of the data in local windows of the pose graph. In both cases, comparative evaluations show a robust and fast alignment of the input data.
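
    To make the tabletop assumptions concrete, here is a minimal Python sketch of the pattern the abstract alludes to: RANSAC-fitting a near-horizontal support plane and treating the points above it as object candidates. All names and thresholds are illustrative assumptions, not the thesis' actual pipeline.

```python
import numpy as np

def find_support_plane(points, up=np.array([0.0, 0.0, 1.0]),
                       n_iters=200, dist_thresh=0.01, seed=0):
    """RANSAC fit of a near-horizontal support plane; returns (n, d) with
    n @ x + d = 0 and a boolean inlier mask. Thresholds are illustrative."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        s = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n = n / norm
        if n @ up < 0:                        # orient the normal upwards
            n = -n
        if n @ up < 0.9:                      # keep only near-horizontal fits
            continue
        d = -(n @ s[0])
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

def object_candidates(points, n, d, dist_thresh=0.01):
    """Points clearly above the support plane; Euclidean clustering of this
    subset (omitted here) then separates the individual objects."""
    return points[(points @ n + d) > dist_thresh]
```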

    Mesh-based 3D Textured Urban Mapping

    In the era of autonomous driving, urban mapping is a core step in letting vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade, building the map from the data of a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map, represented as a 3D mesh, from lidar data and images, and its texturing. Indeed, even though most surveying vehicles for mapping are equipped with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we combine the accuracy of the 3D lidar data with the dense appearance information carried by the images, estimating a visibility-consistent map from the lidar measurements and refining it photometrically with the acquired images. We evaluate the proposed framework on the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms from computer graphics. Comment: accepted at IROS 201
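
    One common building block of such mesh texturing is per-face best-view selection: each mesh face is assigned the posed camera image that observes it most frontally. The sketch below is a generic illustration under pinhole-camera assumptions, not the paper's actual photometric refinement; occlusion handling is deliberately omitted.

```python
import numpy as np

def best_view_per_face(centroids, normals, cams):
    """Assign each mesh face the posed image that sees it most frontally.
    `cams` is a list of (K, R, t, width, height) pinhole cameras with
    x_cam = R @ x_world + t. A depth-buffer occlusion pass is omitted."""
    choice = np.full(len(centroids), -1, dtype=int)   # -1: face untextured
    best = np.zeros(len(centroids))
    for ci, (K, R, t, w, h) in enumerate(cams):
        pc = centroids @ R.T + t                      # faces in camera frame
        rays = pc / np.linalg.norm(pc, axis=1, keepdims=True)
        # frontalness: face normal pointing back along the viewing ray
        score = -np.einsum('ij,ij->i', normals @ R.T, rays)
        with np.errstate(divide='ignore', invalid='ignore'):
            uv = pc @ K.T
            uv = uv[:, :2] / uv[:, 2:3]               # perspective division
            ok = (pc[:, 2] > 0.1) & (score > best) \
                 & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                 & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        choice[ok] = ci
        best[ok] = score[ok]
    return choice
```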

    Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of the image data, which often become sources of inaccuracy, as current 3D reconstruction pipelines do not help users assess the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback on quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline, combined with a multi-scale camera network approach, provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods. Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
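
    The Ground Sampling Distance reported back to the user has a simple pinhole-camera form: one pixel covers depth divided by the focal length in pixels on a fronto-parallel surface. A small sketch, with illustrative numbers that are not from the paper:

```python
def focal_px(focal_mm: float, sensor_width_mm: float, image_width_px: int) -> float:
    """Focal length in pixels from the lens and sensor specifications."""
    return focal_mm * image_width_px / sensor_width_mm

def gsd_m_per_px(depth_m: float, f_px: float) -> float:
    """Ground Sampling Distance for a pinhole camera viewing a
    fronto-parallel surface: metres on the surface per image pixel."""
    return depth_m / f_px

# Illustrative numbers (assumptions, not the paper's setup): a 4.5 mm lens
# on a 6.2 mm-wide sensor imaging 4000 px across, flown 10 m from a facade:
f = focal_px(4.5, 6.2, 4000)        # ~2903 px
print(gsd_m_per_px(10.0, f))        # ~0.0034 m/px, i.e. ~3.4 mm per pixel
```

    Flying closer or using a longer lens lowers the GSD (finer detail), which is exactly the kind of trade-off the online feedback lets unskilled users verify during acquisition.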

    LiDAR based multi-sensor fusion for localization, mapping, and tracking

    The development of fully autonomous driving vehicles has become a key focus for both industry and academia over the past decade, fostering significant progress in situational awareness abilities and sensor technology. Among various types of sensors, the LiDAR sensor has emerged as a pivotal component in many perception systems due to its long-range detection capabilities, precise 3D range information, and reliable performance in diverse environments. With advancements in LiDAR technology, more reliable and cost-effective sensors have shown great potential for improving situational awareness in widely used consumer products. By leveraging these novel LiDAR sensors, researchers now have a diverse set of powerful tools to effectively tackle the persistent challenges in localization, mapping, and tracking within existing perception systems.
This thesis explores LiDAR-based sensor fusion algorithms to address perception challenges in autonomous systems, with a primary focus on dense mapping and global localization using diverse LiDAR sensors. The research involves the integration of novel LiDARs, an IMU, and camera sensors to create a comprehensive dataset essential for developing advanced sensor fusion and general-purpose localization and mapping algorithms. Innovative methodologies for global localization across varied environments are introduced. These methodologies include a robust multi-modal LiDAR inertial odometry and a dense mapping framework, which enhance mapping precision and situational awareness. The study also integrates solid-state LiDARs with camera-based deep-learning techniques for object tracking, refining mapping accuracy in dynamic environments. These advancements significantly enhance the reliability and efficiency of autonomous systems in real-world scenarios. The thesis commences with an introduction to the innovative sensors and a data collection platform. It proceeds by presenting an open-source dataset designed for the evaluation of advanced SLAM algorithms, utilizing a unique ground-truth generation method. Subsequently, the study tackles two localization challenges, in forest and urban environments. Furthermore, it highlights the MM-LOAM dense mapping framework. Additionally, the research explores object-tracking tasks, employing both camera and LiDAR technologies for human and micro-UAV tracking.
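
    As a concrete flavor of LiDAR-inertial fusion, the sketch below shows the generic strapdown IMU prediction that typically seeds scan-to-map registration, whose result in turn corrects the drifting inertial estimate. This is a textbook formulation under simplified assumptions (no bias estimation), not the thesis' actual method.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])   # world-frame gravity, m/s^2

def imu_predict(R, p, v, acc, gyro, dt):
    """One strapdown dead-reckoning step. R, p, v: world-frame rotation
    matrix, position, and velocity; acc, gyro: body-frame accelerometer
    and gyroscope samples. Sensor biases are ignored in this sketch."""
    # integrate the gyro over dt via the Rodrigues formula
    w = gyro * dt
    angle = np.linalg.norm(w)
    if angle > 1e-12:
        k = w / angle
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    a_world = R @ acc + GRAVITY              # remove gravity in world frame
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    return R @ dR, p_new, v_new

# The predicted pose seeds scan-to-map registration (e.g. ICP or NDT);
# the registered pose then corrects the drifting IMU-only estimate.
```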