12 research outputs found

    Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping

    Full text link
    Modern 3D laser range scanners have high data rates, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in the local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory allows measurements to be corrected between scan poses. We evaluate our approach in multiple experiments with qualitative results, and we quantify map quality with an entropy-based measure.
    Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
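    The entropy-based quality measure is only named in the abstract; below is a minimal sketch of one common variant (mean map entropy over local Gaussian fits), where the function name, radius, and neighbour threshold are assumptions rather than the paper's definition:

```python
# Hypothetical sketch of a mean-map-entropy style quality measure for a point-cloud map.
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points: np.ndarray, radius: float = 0.3) -> float:
    """Average differential entropy of local Gaussian fits around each map point.

    points: (N, 3) array of map points. Lower values indicate crisper
    (better-aligned) maps; the radius is an illustrative choice.
    """
    tree = cKDTree(points)
    entropies = []
    for p in points:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:  # need enough neighbours for a stable covariance
            continue
        cov = np.cov(points[idx].T)
        det = np.linalg.det(cov)
        if det > 0:
            # Differential entropy of a 3D Gaussian: 0.5 * ln((2*pi*e)^3 * det(cov))
            entropies.append(0.5 * np.log((2 * np.pi * np.e) ** 3 * det))
    return float(np.mean(entropies)) if entropies else float("nan")
```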

    Picking Up Speed: Continuous-Time Lidar-Only Odometry using Doppler Velocity Measurements

    Full text link
    Frequency-Modulated Continuous-Wave (FMCW) lidar is a recently emerging technology that additionally enables per-return instantaneous relative radial velocity measurements via the Doppler effect. In this letter, we present the first continuous-time lidar-only odometry algorithm using these Doppler velocity measurements from an FMCW lidar to aid odometry in geometrically degenerate environments. We apply an existing continuous-time framework that efficiently estimates the vehicle trajectory using Gaussian process regression to compensate for motion distortion due to the scanning-while-moving nature of any mechanically actuated lidar (FMCW and non-FMCW). We evaluate our proposed algorithm on several real-world datasets, including publicly available ones and datasets we collected. Our algorithm outperforms the only existing method that also uses Doppler velocity measurements, and we study difficult conditions where including this extra information greatly improves performance. We additionally demonstrate state-of-the-art performance of lidar-only odometry with and without using Doppler velocity measurements in nominal conditions. Code for this project can be found at: https://github.com/utiasASRL/steam_icp
    Comment: Submitted to RA-
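    Each Doppler return constrains the sensor's velocity along one line of sight. A hedged sketch of the kind of per-return residual such an estimator can stack follows; all names and the sign convention are assumptions, and the actual method embeds this in a Gaussian-process continuous-time solver (see the linked steam_icp code):

```python
# Hypothetical sketch: a per-return Doppler residual constraining body velocity.
import numpy as np

def doppler_residual(point: np.ndarray, measured_radial_vel: float,
                     v_body: np.ndarray, omega_body: np.ndarray) -> float:
    """Measured minus predicted radial velocity for one lidar return.

    point: (3,) position of the return in the sensor frame.
    v_body, omega_body: (3,) linear and angular velocity of the sensor.
    Assumed sign convention: positive radial velocity = range increasing.
    """
    bearing = point / np.linalg.norm(point)  # unit line-of-sight vector
    # For a static world point, its apparent velocity in the sensor frame is
    # -(v + omega x p). With the point expressed at the sensor origin the
    # angular term is orthogonal to the bearing; it matters once the lidar
    # is offset from the body frame.
    v_point_rel = -(v_body + np.cross(omega_body, point))
    predicted = bearing @ v_point_rel
    return measured_radial_vel - predicted
```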

    LOAM: Lidar Odometry and Mapping in Real-time

    Full text link
    We propose a real-time method for odometry and mapping using range measurements from a 2-axis lidar moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation can cause mis-registration of the resulting point cloud. To date, coherent 3D maps have been built by offline batch methods, often using loop closure to correct for drift over time. Our method achieves both low drift and low computational complexity without the need for high-accuracy ranging or inertial measurements. The key idea in obtaining this level of performance is to divide the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, between two algorithms. One algorithm performs odometry at a high frequency but low fidelity to estimate the velocity of the lidar. The other runs at a frequency an order of magnitude lower for fine matching and registration of the point cloud. The combination of the two algorithms allows the method to map in real time. The method has been evaluated in a large set of experiments as well as on the KITTI odometry benchmark. The results indicate that the method can achieve accuracy at the level of state-of-the-art offline batch methods.
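    The published LOAM method selects its edge and planar features by a local smoothness score computed along each scan line. A minimal sketch of such a score is below; the window size and names are illustrative assumptions:

```python
# Sketch of a LOAM-style smoothness score for one scan line: edge points have
# high curvature, planar points low.
import numpy as np

def smoothness(ring: np.ndarray, half_window: int = 5) -> np.ndarray:
    """ring: (N, 3) points of a single lidar scan line, in firing order.
    Returns a per-point curvature score (points near the ends stay 0)."""
    n = len(ring)
    c = np.zeros(n)
    for i in range(half_window, n - half_window):
        neighbours = np.r_[ring[i - half_window:i], ring[i + 1:i + 1 + half_window]]
        # ||sum_j (X_i - X_j)|| normalised by neighbourhood size and range
        diff = len(neighbours) * ring[i] - neighbours.sum(axis=0)
        c[i] = np.linalg.norm(diff) / (len(neighbours) * np.linalg.norm(ring[i]))
    return c

# Points with the largest c are edge candidates; the smallest, planar candidates.
```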

    ๋„์‹ฌ๋„๋กœ์—์„œ ์ž์œจ์ฃผํ–‰์ฐจ๋Ÿ‰์˜ ๋ผ์ด๋‹ค ๊ธฐ๋ฐ˜ ๊ฐ•๊ฑดํ•œ ์œ„์น˜ ๋ฐ ์ž์„ธ ์ถ”์ •

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ๊ธฐ๊ณ„๊ณตํ•™๋ถ€, 2023. 2. ์ด๊ฒฝ์ˆ˜.This paper presents a method for tackling erroneous odometry estimation results from LiDAR-based simultaneous localization and mapping (SLAM) techniques on complex urban roads. Most SLAM techniques estimate sensor odometry through a comparison between measurements from the current and the previous step. As such, a static environment is generally more advantageous for SLAM systems. However, urban environments contain a significant number of dynamic objects, the point clouds of which can noticeably hinder the performance of SLAM systems. As a countermeasure, this paper proposes a 3D LiDAR SLAM system based on static LiDAR point clouds for use in dynamic outdoor urban environments. The proposed method is primarily composed of two parts, moving object detection and pose estimation through 3D LiDAR SLAM. First, moving objects in the vicinity of the ego-vehicle are detected from a referred algorithm based on a geometric model-free approach (GMFA) and a static obstacle map (STOM). GMFA works in conjunction with STOM to estimate the state of moving objects in real-time. The bounding boxes occupied by these moving objects are utilized to remove points corresponding to dynamic objects in the raw LiDAR point clouds. The remaining static points are applied to LiDAR SLAM. The second part of the proposed method describes odometry estimation through referred LiDAR SLAM, LeGO-LOAM. The LeGO-LOAM, a feature-based LiDAR SLAM framework, converts LiDAR point clouds into range images, from which edge and planar points are extracted as features. The range images are further utilized in a preprocessing stage to improve the computation efficiency of the overall algorithm. Additionally, a 6-DOF transformation is utilized, the model equation of which can be obtained by setting a residual to be the distance between an extracted feature of the current step and the corresponding feature geometry of the previous step. The equation is optimized through the Levenberg-Marquardt method. Furthermore, GMFA and LeGO-LOAM operate in parallel to resolve computational delays associated with GMFA. Actual vehicle tests were conducted on urban roads through a test vehicle equipped with a 32-channel 3D LiDAR and a real-time kinematics GPS (RTK GPS). Validations results have shown the proposed method to significantly decrease estimation errors related to moving feature points while securing target output frequency.๋ณธ ์—ฐ๊ตฌ๋Š” ๋ณต์žกํ•œ ๋„์‹ฌ ํ™˜๊ฒฝ์—์„œ ๋ผ์ด๋‹ค ๊ธฐ๋ฐ˜ ๋™์‹œ์  ์œ„์น˜ ์ถ”์ • ๋ฐ ๋งตํ•‘(Simultaneous localization and mapping, SLAM)์˜ ์ด๋™๋Ÿ‰ ์ถ”์ • ์˜ค๋ฅ˜๋ฅผ ๋ฐฉ์ง€ํ•˜๋Š” ๋ฐฉ๋ฒ•๋ก ์„ ์ œ์•ˆํ•œ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ SLAM์€ ์ด์ „ ์Šคํ…๊ณผ ํ˜„์žฌ ์Šคํ…์˜ ์„ผ์„œ ์ธก์ •์น˜๋ฅผ ๋น„๊ตํ•˜์—ฌ ์ž์ฐจ๋Ÿ‰์˜ ์ด๋™๋Ÿ‰์„ ์ถ”์ •ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ SLAM์—๋Š” ์ •์ ์ธ ํ™˜๊ฒฝ์ด ํ•„์ˆ˜์ ์ด๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์„ผ์„œ๋Š” ๋„์‹ฌํ™˜๊ฒฝ์—์„œ ๋™์ ์ธ ๋ฌผ์ฒด์— ์‰ฝ๊ฒŒ ๋…ธ์ถœ๋˜๊ณ  ๋™์  ๋ฌผ์ฒด๋กœ๋ถ€ํ„ฐ ์ถœ๋ ฅ๋˜๋Š” ๋ผ์ด๋‹ค ์ ๊ตฐ๋“ค์€ ์ด๋™๋Ÿ‰ ์ถ”์ • ์„ฑ๋Šฅ์„ ์ €ํ•˜์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. ์ด์—, ๋ณธ ์—ฐ๊ตฌ๋Š” ๋™์ ์ธ ๋„์‹ฌํ™˜๊ฒฝ์—์„œ ์ •์ ์ธ ์ ๊ตฐ์„ ๊ธฐ๋ฐ˜ํ•œ 3์ฐจ์› ๋ผ์ด๋‹ค SLAM ์‹œ์Šคํ…œ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์€ ์ด๋™ ๋ฌผ์ฒด ์ธ์ง€์™€ 3์ฐจ์› ๋ผ์ด๋‹ค SLAM์„ ํ†ตํ•œ ์œ„์น˜ ๋ฐ ์ž์„ธ ์ถ”์ •์œผ๋กœ ๊ตฌ์„ฑ๋œ๋‹ค. 
์šฐ์„ , ๊ธฐํ•˜ํ•™์  ๋ชจ๋ธ ํ”„๋ฆฌ ์ ‘๊ทผ๋ฒ•๊ณผ ์ •์ง€ ์žฅ์• ๋ฌผ ๋งต์˜ ์ƒํ˜ธ ๋ณด์™„์ ์ธ ๊ด€๊ณ„์— ๊ธฐ๋ฐ˜ํ•œ ์ฐธ๊ณ ๋œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•ด ์ž์ฐจ๋Ÿ‰ ์ฃผ๋ณ€์˜ ์ด๋™ ๋ฌผ์ฒด์˜ ๋™์  ์ƒํƒœ๋ฅผ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์ถ”์ •ํ•œ๋‹ค. ๊ทธ ํ›„, ์ถ”์ •๋œ ์ด๋™ ๋ฌผ์ฒด๊ฐ€ ์ฐจ์ง€ํ•˜๋Š” ๊ฒฝ๊ณ„์„ ์„ ์ด์šฉํ•˜์—ฌ ๋™์  ๋ฌผ์ฒด์— ํ•ด๋‹นํ•˜๋Š” ์ ๋“ค์„ ๊ธฐ์กด ๋ผ์ด๋‹ค ์ ๊ตฐ์—์„œ ์ œ๊ฑฐํ•˜๊ณ , ๊ฒฐ๊ณผ๋กœ ์–ป์€ ์ •์ ์ธ ๋ผ์ด๋‹ค ์ ๊ตฐ์€ ๋ผ์ด๋‹ค SLAM์— ์ž…๋ ฅ๋œ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์€ ๋ผ์ด๋‹ค SLAM์„ ํ†ตํ•ด ์ž์ฐจ๋Ÿ‰์˜ ์œ„์น˜ ๋ฐ ์ž์„ธ๋ฅผ ์ถ”์ •ํ•œ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋ณธ ์—ฐ๊ตฌ๋Š” ๋ผ์ด๋‹ค SLAM์˜ ํ”„๋ ˆ์ž„์›Œํฌ์ธ LeGO-LOAM์„ ์ฑ„ํƒํ•˜์˜€๋‹ค. ํŠน์ง•์  ๊ธฐ๋ฐ˜ SLAM์ธ LeGO-LOAM์€ ๋ผ์ด๋‹ค ์ ๊ตฐ์„ ๊ฑฐ๋ฆฌ ๊ธฐ๋ฐ˜ ์ด๋ฏธ์ง€๋กœ ๋ณ€ํ™˜์‹œ์ผœ ํŠน์ง•์ ์ธ ๋ชจ์„œ๋ฆฌ ์ ๊ณผ ํ‰๋ฉด ์ ์„ ์ถ”์ถœํ•œ๋‹ค. ๋˜ํ•œ ๊ฑฐ๋ฆฌ ๊ธฐ๋ฐ˜ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•œ ์ „์ฒ˜๋ฆฌ ๊ณผ์ •์„ ํ†ตํ•ด ๊ณ„์‚ฐ ํšจ์œจ์„ ๋†’์ธ๋‹ค. ์ถ”์ถœ๋œ ํ˜„์žฌ ์Šคํ…์˜ ํŠน์ง•์ ๊ณผ ์ด์— ๋Œ€์‘๋˜๋Š” ์ด์ „ ์Šคํ…์˜ ํŠน์ง•์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง„ ๊ธฐํ•˜ํ•™์  ๊ตฌ์กฐ์™€์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์ž”์ฐจ๋กœ ์„ค์ •ํ•˜์—ฌ 6 ์ž์œ ๋„ ๋ณ€ํ™˜์‹์— ๋Œ€ํ•œ ๋ชจ๋ธ ๋ฐฉ์ •์‹์„ ์–ป์„ ์ˆ˜ ์žˆ๋‹ค. ์ฐธ๊ณ ํ•œ LeGO-LOAM์€ ํ•ด๋‹น ๋ฐฉ์ •์‹์„ Levenberg-Marquardt ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด ์ตœ์ ํ™”๋ฅผ ์ˆ˜ํ–‰ํ•œ๋‹ค. ๋˜ํ•œ, ๋ณธ ์—ฐ๊ตฌ๋Š” ์ฐธ๊ณ ๋œ ์ธ์ง€ ๋ชจ๋“ˆ์˜ ์ฒ˜๋ฆฌ ์ง€์—ฐ ๋ฌธ์ œ๋ฅผ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ์ด๋™ ๋ฌผ์ฒด ์ธ์ง€ ๋ชจ๋“ˆ๊ณผ LeGO-LOAM์˜ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ ๊ตฌ์กฐ๋ฅผ ๊ณ ์•ˆํ•˜์˜€๋‹ค. ์‹คํ—˜์€ ๋„์‹ฌํ™˜๊ฒฝ์—์„œ 32์ฑ„๋„ 3์ฐจ์› ๋ผ์ด๋‹ค์™€ ๊ณ ์ •๋ฐ€ GPS๋ฅผ ์žฅ์ฐฉํ•œ ์‹คํ—˜์ฐจ๋Ÿ‰์œผ๋กœ ์ง„ํ–‰๋˜์—ˆ๋‹ค. ์„ฑ๋Šฅ ๊ฒ€์ฆ ๊ฒฐ๊ณผ, ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•์€ ๋ชฉํ‘œ ์ถœ๋ ฅ ์†๋„๋ฅผ ๋ณด์žฅํ•˜๋ฉด์„œ ์›€์ง์ด๋Š” ํŠน์ง•์ ์œผ๋กœ ์ธํ•œ ์ถ”์ • ์˜ค์ฐจ๋ฅผ ์œ ์˜๋ฏธํ•˜๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์—ˆ๋‹ค.Chapter 1. Introduction ๏ผ‘ 1.1. Research Motivation ๏ผ‘ 1.2. Previous Research ๏ผ“ 1.2.1. Moving Object Detection ๏ผ“ 1.2.2. SLAM ๏ผ” 1.3. Thesis Objective and Outline ๏ผ‘๏ผ“ Chapter 2. Methodology ๏ผ‘๏ผ• 2.1. Moving Object Detection & Rejection ๏ผ‘๏ผ• 2.1.1. Static Obstacle Map ๏ผ‘๏ผ• 2.1.2. Geometric Model-Free Approach ๏ผ‘๏ผ˜ 2.2. LiDAR SLAM ๏ผ’๏ผ’ 2.2.1. Segmentation ๏ผ’๏ผ’ 2.2.2. Feature Extraction ๏ผ’๏ผ“ 2.2.3. LiDAR Odometry and Mapping ๏ผ’๏ผ– 2.2.4. LiDAR SLAM with Static Point Cloud ๏ผ’๏ผ˜ Chapter 3. Experiments ๏ผ“๏ผ 3.1. Experimental Setup ๏ผ“๏ผ 3.2. Error Metrics ๏ผ“๏ผ’ 3.3. LiDAR SLAM using Static Point Cloud ๏ผ“๏ผ– Chapter 4. Conclusion ๏ผ”๏ผ” Bibliography ๏ผ”๏ผ•์„
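    The abstract describes culling dynamic points with the tracked objects' bounding boxes before running SLAM. A minimal sketch of that step, assuming axis-aligned boxes (the thesis tracker may well produce oriented boxes) and illustrative names:

```python
# Minimal sketch: remove lidar points inside detected moving-object boxes,
# leaving a static cloud for SLAM.
import numpy as np

def remove_dynamic_points(cloud: np.ndarray,
                          boxes: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """cloud: (N, 3) raw lidar points. boxes: list of (min_corner, max_corner)
    pairs, each (3,). Returns only the points outside every box."""
    keep = np.ones(len(cloud), dtype=bool)
    for lo, hi in boxes:
        inside = np.all((cloud >= lo) & (cloud <= hi), axis=1)
        keep &= ~inside
    return cloud[keep]
```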

    Continuous-Time Estimation of Attitude Using B-Splines on Lie Groups

    Full text link
    Peer reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/140656/1/1.g001149.pd

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Get PDF
    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, they fail to fully satisfy all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, the review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on the capability of online targetless calibration and systematic multi-modal sensor calibration.
    Funding: Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0
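    As a concrete instance of the target-based category the review discusses: given matched 3D target points observed by two rigidly mounted sensors, the extrinsic rotation and translation follow in closed form from the Kabsch algorithm. A minimal sketch, with all names illustrative:

```python
# Hypothetical sketch of target-based extrinsic calibration: align matched 3D
# target points from two rigidly mounted sensors (Kabsch algorithm).
import numpy as np

def extrinsics_from_correspondences(pts_a: np.ndarray, pts_b: np.ndarray):
    """pts_a, pts_b: (N, 3) matched target points in sensor A and B frames.
    Returns (R, t) such that pts_a ~= R @ pts_b + t."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_b - cb).T @ (pts_a - ca)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t
```

    In practice a method like this is only the final step; the harder parts the review covers are detecting the target and establishing the correspondences automatically across modalities.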

    Understanding a Dynamic World: Dynamic Motion Estimation for Autonomous Driving Using LIDAR

    Full text link
    In a society that is heavily reliant on personal transportation, autonomous vehicles present an increasingly intriguing technology. They have the potential to save lives, promote efficiency, and enable mobility. However, before this vision becomes a reality, there are a number of challenges that must be solved. One key challenge involves problems in dynamic motion estimation, as it is critical for an autonomous vehicle to have an understanding of the dynamics in its environment for it to operate safely on the road. Accordingly, this thesis presents several algorithms for dynamic motion estimation for autonomous vehicles. We focus on methods using light detection and ranging (LIDAR), a prevalent sensing modality used by autonomous vehicle platforms, due to its advantages over other sensors, such as cameras, including lighting invariance and fidelity of 3D geometric data. First, we propose a dynamic object tracking algorithm. The proposed method takes as input a stream of LIDAR data from a moving object collected by a multi-sensor platform. It generates an estimate of its trajectory over time and a point cloud model of its shape. We formulate the problem similarly to simultaneous localization and mapping (SLAM), allowing us to leverage existing techniques. Unlike prior work, we properly handle a stream of sensor measurements observed over time by deriving our algorithm using a continuous-time estimation framework. We evaluate our proposed method on a real-world dataset that we collect. Second, we present a method for scene flow estimation from a stream of LIDAR data. Inspired by optical flow and scene flow from the computer vision community, our framework can estimate dynamic motion in the scene without relying on segmentation and data association while still rivaling the results of state-of-the-art object tracking methods. We design our algorithms to exploit a graphics processing unit (GPU), enabling real-time performance. Third, we leverage deep learning tools to build a feature learning framework that allows us to train an encoding network to estimate features from a LIDAR occupancy grid. The learned feature space describes the geometric and semantic structure of any location observed by the LIDAR data. We formulate the training process so that distances in this learned feature space are meaningful in comparing the similarity of different locations. Accordingly, we demonstrate that using this feature space improves our estimate of the dynamic motion in the environment over time. In summary, this thesis presents three methods to aid in understanding a dynamic world for autonomous vehicle applications with LIDAR. These methods include a novel object tracking algorithm, a real-time scene flow estimation method, and a feature learning framework to aid in dynamic motion estimation. Furthermore, we demonstrate the performance of all our proposed methods on a collection of real-world datasets.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147587/1/aushani_1.pd
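    The feature-learning part of the thesis encodes LIDAR occupancy grids; a minimal sketch of the kind of bird's-eye occupancy grid such an encoder might consume follows, with the extent and resolution as illustrative assumptions:

```python
# Minimal sketch of a 2D bird's-eye lidar occupancy grid centred on the sensor.
import numpy as np

def occupancy_grid(points: np.ndarray, extent: float = 40.0,
                   cell: float = 0.25) -> np.ndarray:
    """points: (N, 3) lidar returns in the sensor frame.
    Returns an (H, W) float32 grid; 1.0 where any return falls in a cell."""
    n = int(2 * extent / cell)
    ij = np.floor((points[:, :2] + extent) / cell).astype(int)
    valid = np.all((ij >= 0) & (ij < n), axis=1)
    grid = np.zeros((n, n), dtype=np.float32)
    grid[ij[valid, 0], ij[valid, 1]] = 1.0
    return grid
```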