9 research outputs found

    DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration

    We present DeepICP, a novel end-to-end learning-based 3D point cloud registration framework that achieves registration accuracy comparable to prior state-of-the-art geometric methods. Unlike other keypoint-based methods, which usually require a RANSAC procedure, we use a set of deep neural network structures to establish an end-to-end trainable network. Our keypoint detector is trained through this end-to-end structure, enabling the system to avoid the interference of dynamic objects, leverage sufficiently salient features on stationary objects, and, as a result, achieve high robustness. Rather than searching for corresponding points among the existing points, our key contribution is to generate them from learned matching probabilities over a group of candidates, which boosts registration accuracy. Our loss function incorporates both local similarity and global geometric constraints to ensure that all of the above network designs converge in the right direction. We comprehensively validate the effectiveness of our approach on both the KITTI dataset and the Apollo-SouthBay dataset. The results demonstrate that our method achieves comparable or better performance than state-of-the-art geometry-based methods. Detailed ablation and visualization analyses are included to further illustrate the behavior and insights of the network. The low registration error and high robustness of our method make it attractive for the many applications that rely on point cloud registration. Comment: 10 pages, 6 figures, 3 tables; typos corrected, experimental results updated; accepted by ICCV 2019.
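    The core idea above, generating a corresponding point from matching probabilities over a group of candidates rather than picking an existing nearest neighbor, can be illustrated with a minimal sketch. This is not the authors' network: the candidate points, similarity scores, and softmax temperature below are hypothetical stand-ins for what DeepICP learns end to end.

    import numpy as np

    def generate_virtual_point(candidate_points, matching_scores, temperature=1.0):
        """Probability-weighted sum of candidates -> a 'virtual' corresponding point.

        candidate_points: (K, 3) candidate target points.
        matching_scores:  (K,) similarity scores (a learned matcher would supply
                          these; here they are placeholders).
        """
        weights = np.exp((matching_scores - matching_scores.max()) / temperature)
        weights /= weights.sum()           # softmax -> matching probabilities
        return weights @ candidate_points  # weighted combination, not an existing point

    # Toy usage: one source keypoint and five candidate target points.
    rng = np.random.default_rng(0)
    src = np.array([1.0, 2.0, 0.5])
    candidates = src + 0.1 * rng.standard_normal((5, 3))
    scores = -np.linalg.norm(candidates - src, axis=1)   # placeholder similarities
    print(generate_virtual_point(candidates, scores))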

    Vehicle localization by lidar point correlation improved by change detection

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. Since dynamic objects, such as cars, pedestrians, or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs at the same position in a certain number of measurements, we mark it, and every point it contains, as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, using only the static points. We also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data from a vertically aligned automotive LiDAR sensor. Because we want to use only static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and, further, to detect dynamic objects online. Localization is then performed by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a 13 km trajectory in Hannover, Germany.
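    A minimal sketch of the point-to-image correlation step described above, under stated assumptions: static scan points are projected into a reference grid map, and candidate (dx, dy) shifts are scored by how much occupied reference content they hit. The grid, resolution, and search window below are hypothetical; the paper's actual correlation and classification pipeline is more involved.

    import numpy as np

    def correlate_points_with_map(points_xy, ref_grid, origin, resolution,
                                  search=0.5, step=0.05):
        """Return the (dx, dy) shift that best aligns the scan with the reference grid.

        points_xy : (N, 2) static scan points in the current pose estimate's frame.
        ref_grid  : 2D occupancy/intensity image built from the static reference data.
        origin    : world coordinates (x0, y0) of ref_grid[0, 0].
        resolution: cell size in meters.
        """
        offsets = np.arange(-search, search + 1e-9, step)
        best_score, best_shift = -np.inf, (0.0, 0.0)
        for dx in offsets:
            for dy in offsets:
                cols = np.floor((points_xy[:, 0] + dx - origin[0]) / resolution).astype(int)
                rows = np.floor((points_xy[:, 1] + dy - origin[1]) / resolution).astype(int)
                valid = (rows >= 0) & (rows < ref_grid.shape[0]) & \
                        (cols >= 0) & (cols < ref_grid.shape[1])
                score = ref_grid[rows[valid], cols[valid]].sum()
                if score > best_score:
                    best_score, best_shift = score, (float(dx), float(dy))
        return best_shift

    # Toy usage: a 10 m x 10 m grid at 0.1 m resolution with one occupied "wall".
    grid = np.zeros((100, 100))
    grid[:, 50] = 1.0                                         # wall near x = 5.0 m
    scan = np.column_stack([np.full(20, 5.23), np.linspace(1.0, 9.0, 20)])
    print(correlate_points_with_map(scan, grid, origin=(0.0, 0.0), resolution=0.1))
    # x is corrected by about -0.2 m; y is unconstrained by a single wall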

    Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization

    In this paper we propose a novel semantic localization algorithm that exploits multiple sensors and has precision on the order of a few centimeters. Our approach does not require detailed knowledge about the appearance of the world, and our maps require orders of magnitude less storage than maps utilized by traditional geometry- and LiDAR intensity-based localizers. This is important as self-driving cars need to operate in large environments. Towards this goal, we formulate the problem in a Bayesian filtering framework, and exploit lanes, traffic signs, as well as vehicle dynamics to localize robustly with respect to a sparse semantic map. We validate the effectiveness of our method on a new highway dataset consisting of 312 km of roads. Our experiments show that the proposed approach is able to achieve 0.05 m lateral accuracy and 1.12 m longitudinal accuracy on average while taking up only 0.3% of the storage required by previous LiDAR intensity-based approaches. Comment: 8 pages, 4 figures, 4 tables; 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019).
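    The Bayesian filtering idea can be sketched in one dimension: maintain a discrete belief over the vehicle's lateral offset, spread it in the prediction step (standing in for the vehicle-dynamics model), and reweight it with a lane observation against the sparse map. Everything below (the grid, the noise values, the single lane measurement) is an illustrative assumption, not the paper's filter.

    import numpy as np

    offsets = np.linspace(-2.0, 2.0, 81)              # candidate lateral offsets [m]
    belief = np.full_like(offsets, 1.0 / len(offsets))

    def predict(belief, motion_noise=0.05):
        """Diffuse the belief to account for uncertainty in the dynamics step."""
        kernel = np.exp(-0.5 * (offsets / motion_noise) ** 2)
        kernel /= kernel.sum()
        spread = np.convolve(belief, kernel, mode="same")
        return spread / spread.sum()

    def update(belief, measured_offset, meas_noise=0.1):
        """Reweight each hypothesis by the likelihood of the observed lane offset."""
        likelihood = np.exp(-0.5 * ((offsets - measured_offset) / meas_noise) ** 2)
        posterior = belief * likelihood
        return posterior / posterior.sum()

    belief = predict(belief)                          # vehicle-dynamics step (toy)
    belief = update(belief, measured_offset=0.3)      # lane detection vs. sparse map
    print(offsets[np.argmax(belief)])                 # MAP lateral offset, ~0.3 m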

    Lidar scan feature for localization with highly precise 3-D map

    No full text

    Testování nové generace LiDARu (Testing the New Generation of LiDAR)

    The development of autonomous vehicles depends heavily on object detection technology. LiDAR, a remote sensing technology that uses laser beams to measure distances and generate precise 3D representations of objects and their surroundings, plays a critical role in this domain. This study focuses on utilizing LiDAR data from Next Generation datasets and comparing it with the nuScenes dataset. The main objective is to predict object classes and validate the accuracy of the proposed solution using these two distinct datasets. The test approach employed in this study aims to evaluate the effectiveness of the solution and to determine the optimal outcomes through rigorous evaluation of the results.
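    A small, hypothetical evaluation sketch for the comparison described above: predicted object classes are scored against ground-truth labels with per-class accuracy, which is one way to compare results across the two datasets. Dataset loading and the detector itself are out of scope; the labels below are toy values.

    from collections import Counter

    def per_class_accuracy(ground_truth, predictions):
        """Return {class: accuracy} for paired ground-truth/predicted labels."""
        totals, correct = Counter(), Counter()
        for gt, pred in zip(ground_truth, predictions):
            totals[gt] += 1
            if gt == pred:
                correct[gt] += 1
        return {cls: correct[cls] / totals[cls] for cls in totals}

    # Toy labels standing in for detector output on one of the datasets.
    gt_labels = ["car", "car", "pedestrian", "truck"]
    pred_labels = ["car", "truck", "pedestrian", "truck"]
    print(per_class_accuracy(gt_labels, pred_labels))
    # {'car': 0.5, 'pedestrian': 1.0, 'truck': 1.0}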

    High-Precision 3D Modeling of Urban Space Using a Mobile Mapping System and Aerial Surveying (モービルマッピングシステムと航空測量を用いた都市空間高精度3次元モデリング)

    Degree type: Doctorate (by coursework). Examination committee: (chief examiner) Prof. Kaoru Sezaki (The University of Tokyo); Prof. Hiroshi Esaki (The University of Tokyo); Prof. Takeshi Naemura (The University of Tokyo); Prof. Ryosuke Shibasaki (The University of Tokyo); Assoc. Prof. Shunsuke Kamijo (The University of Tokyo); Toru Asami (Advanced Telecommunications Research Institute International). The University of Tokyo (東京大学).