6 research outputs found

    CNN for IMU Assisted Odometry Estimation using Velodyne LiDAR

    We introduce a novel method for odometry estimation from 3D LiDAR scans using convolutional neural networks. The original sparse data are encoded into 2D matrices for training the proposed networks and for prediction. Our networks estimate translational motion parameters with significantly better precision than the state-of-the-art method LOAM, while achieving real-time performance. Together with IMU support, high-quality odometry estimation and LiDAR data registration are realized. Moreover, we propose alternative CNNs trained to predict rotational motion parameters, achieving results also comparable with the state of the art. The proposed method can replace wheel encoders in odometry estimation or supplement missing GPS data when the GNSS signal is absent (e.g. during indoor mapping). Our solution provides the real-time performance and precision needed for an online preview of the mapping results and verification of map completeness in real time.
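
    The encoding of a sparse 3D scan into a dense 2D matrix can be illustrated with a spherical range-image projection. The sketch below is a hypothetical minimal version: the image size (`h`, `w`) and the assumed vertical field of view are illustrative choices, not the paper's actual encoding.

    ```python
    import numpy as np

    def lidar_to_2d_matrix(points, h=64, w=360):
        """Encode a sparse 3D LiDAR scan (N x 3 array of x, y, z) into a
        dense 2D range image via spherical projection (illustrative only)."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.sqrt(x**2 + y**2 + z**2)                # range of each point
        yaw = np.arctan2(y, x)                         # horizontal angle
        pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))

        # Map angles to pixel coordinates; vertical FOV assumed [-25 deg, +3 deg]
        col = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
        row = ((np.radians(3) - pitch) / np.radians(28) * h).astype(int)
        row = np.clip(row, 0, h - 1)

        img = np.zeros((h, w), dtype=np.float32)
        img[row, col] = r                              # store range per cell
        return img

    scan = np.abs(np.random.randn(1000, 3)) * 10       # synthetic scan for illustration
    img = lidar_to_2d_matrix(scan)
    ```

    A matrix of this form (or a stack of them, e.g. range plus intensity channels) can then be fed to a standard 2D CNN.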

    Robust and Fast 3D Scan Alignment using Mutual Information

    This paper presents a mutual information (MI) based algorithm for estimating the full 6-degree-of-freedom (DOF) rigid-body transformation between two overlapping point clouds. We first divide the scene into a 3D voxel grid and define simple-to-compute features for each voxel in the scan. The two scans to be aligned are treated as collections of these features, and the MI between the voxelized features is maximized to obtain the correct alignment. We have implemented our method with various simple point cloud features (such as the number of points in a voxel and the variance of z-height in a voxel) and compared its performance with existing point-to-point and point-to-distribution registration methods. We show that our approach admits an efficient and fast parallel implementation on GPU, and we evaluate the robustness and speed of the proposed algorithm on two real-world datasets containing a variety of dynamic scenes from different environments.
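
    The voxelized-feature MI objective can be sketched as follows, using the number of points per voxel as the feature. The grid size, feature clamping, and histogram bin count here are assumptions for illustration; the paper also uses other features (e.g. variance of z-height) and a GPU implementation.

    ```python
    import numpy as np

    def voxel_feature(points, voxel=1.0, grid=20):
        """Quantized per-voxel feature: point count per voxel over a fixed
        grid (a simplified stand-in for the paper's features)."""
        idx = np.floor(points / voxel).astype(int) + grid // 2
        idx = np.clip(idx, 0, grid - 1)
        counts = np.zeros((grid, grid, grid), dtype=int)
        np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        return np.minimum(counts, 9)        # clamp feature values to 0..9

    def mutual_information(a, b, levels=10):
        """MI between two aligned voxel-feature grids, from a joint histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=levels,
                                     range=[[0, levels], [0, levels]])
        pxy = joint / joint.sum()
        px, py = pxy.sum(1), pxy.sum(0)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
    ```

    Registration then amounts to searching over candidate 6-DOF transformations of one scan and keeping the one whose transformed voxel features maximize this MI score.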

    Automatic Registration of Approximately Leveled Point Clouds of Urban Scenes

    Automatic matching of 3D models to omnidirectional images for urban augmented reality applications (Appariement automatique de modèles 3D à des images omnidirectionnelles pour des applications en réalité augmentée urbaine)

    One of the greatest challenges of augmented reality is to align real and virtual information perfectly, giving the illusion that the virtual information is an integral part of the real world. To do so, the user's position and orientation must be estimated precisely and, more difficult still, in real time. Augmenting outdoor scenes is particularly problematic because no technology is accurate enough to track the user's position at the level of quality required for engineering applications. To avoid this problem, we focused on augmenting omnidirectional panoramas taken at a fixed position. The goal of this project is to propose a robust and automatic initialization method for computing the pose of urban omnidirectional panoramas, so as to obtain a perfect alignment between the panoramas and the virtual information.

    Toward Mutual Information based Automatic Registration of 3D Point Clouds

    This paper reports a novel mutual information (MI) based algorithm for automatic registration of unstructured 3D point clouds built from co-registered 3D lidar and camera imagery. The proposed method provides a robust and principled framework for fusing the complementary information obtained from these two different sensing modalities. High-dimensional features are extracted from a training set of textured point clouds (scans), and hierarchical k-means clustering is used to quantize these features into a set of codewords. Using this codebook, any new scan can be represented as a collection of codewords. Under the correct rigid-body transformation aligning two overlapping scans, the MI between the codewords present in the scans is maximized. We apply a James-Stein-type shrinkage estimator to estimate the true MI from the marginal and joint histograms of the codewords extracted from the scans. Experimental results using scans obtained by a vehicle equipped with a 3D laser scanner and an omnidirectional camera validate the robustness of the proposed algorithm over a wide range of initial conditions. We also show that the proposed method works well with 3D data alone.
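
    The shrinkage step can be sketched as a regularization of the codeword histogram toward the uniform distribution before computing MI. The estimator below follows the Hausser-Strimmer form of a James-Stein-type shrinkage, but it is a simplified assumption for illustration, not the authors' exact implementation.

    ```python
    import numpy as np

    def shrinkage_probs(counts):
        """James-Stein-type shrinkage of histogram frequencies toward the
        uniform distribution (Hausser-Strimmer style; simplified sketch)."""
        counts = counts.astype(float).ravel()
        n = counts.sum()
        p_ml = counts / n                       # maximum-likelihood estimate
        target = 1.0 / counts.size              # shrinkage target: uniform
        var_ml = p_ml * (1 - p_ml) / (n - 1)    # variance of the ML estimate
        denom = ((target - p_ml) ** 2).sum()
        # Estimated shrinkage intensity, clamped to [0, 1]
        lam = 1.0 if denom == 0 else min(1.0, var_ml.sum() / denom)
        return lam * target + (1 - lam) * p_ml

    def shrinkage_mi(joint_counts):
        """MI of two codeword histograms from shrinkage-estimated joint probs."""
        pxy = shrinkage_probs(joint_counts).reshape(joint_counts.shape)
        px, py = pxy.sum(1), pxy.sum(0)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
    ```

    A sharply diagonal joint histogram (codewords co-occur consistently, i.e. the scans are aligned) scores a much higher MI than a flat one, which is what the registration search exploits.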