10 research outputs found

    Change of Scenery: Unsupervised LiDAR Change Detection for Mobile Robots

    Full text link
    This paper presents a fully unsupervised deep change detection approach for mobile robots with 3D LiDAR. In unstructured environments, it is infeasible to define a closed set of semantic classes; instead, semantic segmentation is reformulated as binary change detection. We develop a neural network, RangeNetCD, that uses an existing point-cloud map and a live LiDAR scan to detect scene changes with respect to the map. Using a novel loss function, existing point-cloud semantic segmentation networks can be trained to perform change detection without any labels or assumptions about local semantics. We demonstrate the performance of this approach on data from challenging terrains; mean intersection-over-union (mIoU) scores range between 67.4% and 82.2% depending on the amount of environmental structure, outperforming the geometric baseline used in all experiments. The neural network runs faster than 10 Hz and is integrated into a robot's autonomy stack to allow safe navigation around obstacles that intersect the planned path. In addition, a novel method for the rapid automated acquisition of per-point ground-truth labels is described: covering changed parts of the scene with retroreflective materials and applying a threshold filter to the intensity channel of the LiDAR allows for quantitative evaluation of the change detector. Comment: 7 pages (6 content, 1 references), 7 figures; submitted to the 2024 IEEE International Conference on Robotics and Automation (ICRA).
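    The retroreflective labeling idea reduces to a simple intensity threshold. Below is a minimal sketch, assuming the scan arrives as an N×4 NumPy array (x, y, z, intensity) with intensity normalized to [0, 1]; the array layout, threshold value, and function name are illustrative rather than taken from the paper.

```python
import numpy as np

def label_changes_by_intensity(scan: np.ndarray, intensity_threshold: float = 0.9) -> np.ndarray:
    """Per-point ground-truth change labels from retroreflective markers.

    Changed objects are assumed to be covered in retroreflective material,
    so their intensity returns sit well above those of ordinary surfaces.
    Thresholding the intensity channel then yields a binary label per point
    (1 = changed, 0 = unchanged).
    """
    intensities = scan[:, 3]  # scan: (N, 4) array of x, y, z, intensity
    return (intensities > intensity_threshold).astype(np.uint8)

# Toy usage: two ordinary points and one highly reflective point.
scan = np.array([
    [1.0, 0.5, 0.2, 0.12],
    [3.2, -1.1, 0.4, 0.20],
    [2.0, 0.0, 0.9, 0.97],   # retroreflective -> labeled as changed
])
print(label_changes_by_intensity(scan))  # [0 0 1]
```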

    RH-Map: Online Map Construction Framework of Dynamic Objects Removal Based on Region-wise Hash Map Structure

    Full text link
    Mobile robots navigating outdoor environments frequently encounter undesired traces left by dynamic objects, which manifest as obstacles on the map and impede accurate localization and effective navigation. To tackle this problem, we propose a novel map construction framework based on a 3D region-wise hash map structure (RH-Map), consisting of a front-end scan fresher and a back-end removal module, which realizes real-time map construction and online dynamic object removal (DOR). First, a two-layer 3D region-wise hash map structure for map management is proposed for effective online DOR. Then, in the scan fresher, region-wise ground plane estimation (R-GPE) is adopted to estimate and preserve ground information, and Scan-to-Map Removal (S2M-R) is proposed to discriminate and remove dynamic regions. Moreover, a lightweight back-end removal module that maintains keyframes is proposed for further DOR. As experimentally verified on SemanticKITTI, the proposed framework yields promising online DOR performance during map construction compared with state-of-the-art methods. We also validate the proposed framework in real-world environments.
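    The region-wise hashing idea can be illustrated with a toy structure (not RH-Map's actual two-layer design): points are bucketed under a coarse quantized key so that a whole region can be fetched or discarded in O(1) during dynamic object removal. The class name and region size below are assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

class RegionHashMap:
    """Toy region-wise hash map: points are bucketed by a coarse region key.

    The key is the integer-quantized (x, y, z) cell a point falls in; an
    entire region can be looked up or dropped in O(1), which is what makes
    region-level dynamic-object removal cheap.
    """

    def __init__(self, region_size: float = 2.0):
        self.region_size = region_size
        self.regions = defaultdict(list)  # (i, j, k) -> list of points

    def _key(self, point: np.ndarray) -> tuple:
        return tuple(np.floor(point[:3] / self.region_size).astype(int))

    def insert_scan(self, points: np.ndarray) -> None:
        for p in points:
            self.regions[self._key(p)].append(p)

    def remove_region(self, point: np.ndarray) -> None:
        # Drop every map point in the region containing `point`,
        # e.g. once that region has been classified as dynamic.
        self.regions.pop(self._key(point), None)

# Usage: insert a scan, then remove the region around a detected dynamic point.
rh = RegionHashMap(region_size=2.0)
rh.insert_scan(np.array([[0.5, 0.5, 0.0], [0.7, 0.4, 0.1], [5.0, 5.0, 0.0]]))
rh.remove_region(np.array([0.6, 0.5, 0.0]))
print(len(rh.regions))  # 1 region left
```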

    Dynablox: Real-time Detection of Diverse Dynamic Objects in Complex Environments

    Full text link
    Real-time detection of moving objects is an essential capability for robots acting autonomously in dynamic environments. We thus propose Dynablox, a novel online mapping-based approach for robust moving object detection in complex environments. The central idea is to incrementally estimate high-confidence free-space areas by modeling and accounting for sensing, state estimation, and mapping limitations during online robot operation. This spatio-temporally conservative free-space estimate enables robust detection of moving objects without any assumptions about the appearance of objects or environments, allowing deployment in complex scenes such as multi-storied buildings or staircases, and for diverse moving objects such as people carrying various items, doors swinging, or even balls rolling around. We thoroughly evaluate our approach on real-world datasets, achieving 86% IoU at 17 FPS in typical robotic settings. The method outperforms a recent appearance-based classifier and approaches the performance of offline methods. We demonstrate its generality on a novel dataset with rare moving objects in complex environments, and we make our efficient implementation and the novel dataset available as open source. Comment: Code released at https://github.com/ethz-asl/dynablo
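    The central mechanism, flagging points that appear inside space previously confirmed to be free, can be sketched with a plain voxel set in place of Dynablox's incrementally estimated confidence model; the voxel size and function name below are illustrative assumptions.

```python
import numpy as np

def detect_moving_points(scan: np.ndarray,
                         free_voxels: set,
                         voxel_size: float = 0.2) -> np.ndarray:
    """Flag points that land in voxels previously observed as free space.

    `free_voxels` holds integer (i, j, k) voxel indices that earlier scans
    have confirmed to be empty; a current point inside such a voxel can
    only be explained by something that moved into it.
    """
    keys = np.floor(scan[:, :3] / voxel_size).astype(int)
    return np.array([tuple(k) in free_voxels for k in keys])

# Usage: one voxel is known free; the point inside it is flagged as moving.
free_voxels = {(0, 0, 0)}
scan = np.array([[0.05, 0.05, 0.05],   # inside known free space -> moving
                 [3.00, 1.00, 0.00]])  # unobserved space -> not flagged
print(detect_moving_points(scan, free_voxels))  # [ True False]
```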

    S²MAT: Simultaneous and Self-Reinforced Mapping and Tracking in Dynamic Urban Scenarios

    Full text link
    Despite the increasing prevalence of robots in daily life, their navigation capabilities are still limited to environments with prior knowledge, such as a global map. To fully unlock the potential of robots, it is crucial to enable them to navigate large-scale, unknown, and changing unstructured scenarios. This requires the robot to construct an accurate static map in real time as it explores, while filtering out moving objects to preserve mapping accuracy and, if possible, achieving high-quality pedestrian tracking and collision avoidance. While existing methods can achieve the individual goals of spatial mapping or dynamic object detection and tracking, there has been limited research on effectively integrating these two tasks, which are in fact coupled and reciprocal. In this work, we propose S²MAT (Simultaneous and Self-Reinforced Mapping and Tracking), which integrates a front-end dynamic object detection and tracking module with a back-end static mapping module. S²MAT leverages the close and reciprocal interplay between these two modules to efficiently and effectively solve the open problem of simultaneous tracking and mapping in highly dynamic scenarios. We conducted extensive experiments using widely used datasets and simulations, providing both qualitative and quantitative results that demonstrate S²MAT's state-of-the-art performance in dynamic object detection, tracking, and high-quality static structure mapping. Additionally, we performed long-range robotic navigation in real-world urban scenarios spanning over 7 km, including challenging obstacles such as pedestrians and other traffic agents. The successful navigation provides a comprehensive test of S²MAT's robustness, scalability, efficiency, quality, and ability to benefit autonomous robots in wild scenarios without pre-built maps. Comment: homepage: https://sites.google.com/view/smat-na
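    The reciprocal coupling described above can be sketched as a loop in which the front end filters each scan against the current static map and the back end grows that map from the surviving points; the detector below is a deliberately trivial nearest-point placeholder used only to make the loop executable, not S²MAT's tracking module.

```python
import numpy as np

def detect_dynamic(scan: np.ndarray, static_map: np.ndarray) -> np.ndarray:
    """Placeholder front end: mark a point 'dynamic' if it lies far from
    every point already in the static map (stand-in for real detection/tracking)."""
    if static_map.size == 0:
        return np.zeros(len(scan), dtype=bool)
    dists = np.linalg.norm(scan[:, None, :] - static_map[None, :, :], axis=2)
    return dists.min(axis=1) > 1.0

def run_coupled_loop(scans):
    """Reciprocal loop: the tracker filters the scan, the mapper grows the
    static map, and the grown map sharpens the next round of detection."""
    static_map = np.empty((0, 3))
    for scan in scans:
        dynamic = detect_dynamic(scan, static_map)             # front end
        static_map = np.vstack([static_map, scan[~dynamic]])   # back end
    return static_map

# Usage: the second scan contains a far-away "moving" point that gets filtered.
scans = [np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]]),
         np.array([[0.05, 0.0, 0.0], [5.0, 5.0, 0.0]])]
print(run_coupled_loop(scans).shape)  # (3, 3): the outlier was not mapped
```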

    ๋„์‹ฌ๋„๋กœ์—์„œ ์ž์œจ์ฃผํ–‰์ฐจ๋Ÿ‰์˜ ๋ผ์ด๋‹ค ๊ธฐ๋ฐ˜ ๊ฐ•๊ฑดํ•œ ์œ„์น˜ ๋ฐ ์ž์„ธ ์ถ”์ •

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ๊ธฐ๊ณ„๊ณตํ•™๋ถ€, 2023. 2. ์ด๊ฒฝ์ˆ˜.This paper presents a method for tackling erroneous odometry estimation results from LiDAR-based simultaneous localization and mapping (SLAM) techniques on complex urban roads. Most SLAM techniques estimate sensor odometry through a comparison between measurements from the current and the previous step. As such, a static environment is generally more advantageous for SLAM systems. However, urban environments contain a significant number of dynamic objects, the point clouds of which can noticeably hinder the performance of SLAM systems. As a countermeasure, this paper proposes a 3D LiDAR SLAM system based on static LiDAR point clouds for use in dynamic outdoor urban environments. The proposed method is primarily composed of two parts, moving object detection and pose estimation through 3D LiDAR SLAM. First, moving objects in the vicinity of the ego-vehicle are detected from a referred algorithm based on a geometric model-free approach (GMFA) and a static obstacle map (STOM). GMFA works in conjunction with STOM to estimate the state of moving objects in real-time. The bounding boxes occupied by these moving objects are utilized to remove points corresponding to dynamic objects in the raw LiDAR point clouds. The remaining static points are applied to LiDAR SLAM. The second part of the proposed method describes odometry estimation through referred LiDAR SLAM, LeGO-LOAM. The LeGO-LOAM, a feature-based LiDAR SLAM framework, converts LiDAR point clouds into range images, from which edge and planar points are extracted as features. The range images are further utilized in a preprocessing stage to improve the computation efficiency of the overall algorithm. Additionally, a 6-DOF transformation is utilized, the model equation of which can be obtained by setting a residual to be the distance between an extracted feature of the current step and the corresponding feature geometry of the previous step. The equation is optimized through the Levenberg-Marquardt method. Furthermore, GMFA and LeGO-LOAM operate in parallel to resolve computational delays associated with GMFA. Actual vehicle tests were conducted on urban roads through a test vehicle equipped with a 32-channel 3D LiDAR and a real-time kinematics GPS (RTK GPS). Validations results have shown the proposed method to significantly decrease estimation errors related to moving feature points while securing target output frequency.๋ณธ ์—ฐ๊ตฌ๋Š” ๋ณต์žกํ•œ ๋„์‹ฌ ํ™˜๊ฒฝ์—์„œ ๋ผ์ด๋‹ค ๊ธฐ๋ฐ˜ ๋™์‹œ์  ์œ„์น˜ ์ถ”์ • ๋ฐ ๋งตํ•‘(Simultaneous localization and mapping, SLAM)์˜ ์ด๋™๋Ÿ‰ ์ถ”์ • ์˜ค๋ฅ˜๋ฅผ ๋ฐฉ์ง€ํ•˜๋Š” ๋ฐฉ๋ฒ•๋ก ์„ ์ œ์•ˆํ•œ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ SLAM์€ ์ด์ „ ์Šคํ…๊ณผ ํ˜„์žฌ ์Šคํ…์˜ ์„ผ์„œ ์ธก์ •์น˜๋ฅผ ๋น„๊ตํ•˜์—ฌ ์ž์ฐจ๋Ÿ‰์˜ ์ด๋™๋Ÿ‰์„ ์ถ”์ •ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ SLAM์—๋Š” ์ •์ ์ธ ํ™˜๊ฒฝ์ด ํ•„์ˆ˜์ ์ด๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์„ผ์„œ๋Š” ๋„์‹ฌํ™˜๊ฒฝ์—์„œ ๋™์ ์ธ ๋ฌผ์ฒด์— ์‰ฝ๊ฒŒ ๋…ธ์ถœ๋˜๊ณ  ๋™์  ๋ฌผ์ฒด๋กœ๋ถ€ํ„ฐ ์ถœ๋ ฅ๋˜๋Š” ๋ผ์ด๋‹ค ์ ๊ตฐ๋“ค์€ ์ด๋™๋Ÿ‰ ์ถ”์ • ์„ฑ๋Šฅ์„ ์ €ํ•˜์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. ์ด์—, ๋ณธ ์—ฐ๊ตฌ๋Š” ๋™์ ์ธ ๋„์‹ฌํ™˜๊ฒฝ์—์„œ ์ •์ ์ธ ์ ๊ตฐ์„ ๊ธฐ๋ฐ˜ํ•œ 3์ฐจ์› ๋ผ์ด๋‹ค SLAM ์‹œ์Šคํ…œ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์€ ์ด๋™ ๋ฌผ์ฒด ์ธ์ง€์™€ 3์ฐจ์› ๋ผ์ด๋‹ค SLAM์„ ํ†ตํ•œ ์œ„์น˜ ๋ฐ ์ž์„ธ ์ถ”์ •์œผ๋กœ ๊ตฌ์„ฑ๋œ๋‹ค. 
์šฐ์„ , ๊ธฐํ•˜ํ•™์  ๋ชจ๋ธ ํ”„๋ฆฌ ์ ‘๊ทผ๋ฒ•๊ณผ ์ •์ง€ ์žฅ์• ๋ฌผ ๋งต์˜ ์ƒํ˜ธ ๋ณด์™„์ ์ธ ๊ด€๊ณ„์— ๊ธฐ๋ฐ˜ํ•œ ์ฐธ๊ณ ๋œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•ด ์ž์ฐจ๋Ÿ‰ ์ฃผ๋ณ€์˜ ์ด๋™ ๋ฌผ์ฒด์˜ ๋™์  ์ƒํƒœ๋ฅผ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์ถ”์ •ํ•œ๋‹ค. ๊ทธ ํ›„, ์ถ”์ •๋œ ์ด๋™ ๋ฌผ์ฒด๊ฐ€ ์ฐจ์ง€ํ•˜๋Š” ๊ฒฝ๊ณ„์„ ์„ ์ด์šฉํ•˜์—ฌ ๋™์  ๋ฌผ์ฒด์— ํ•ด๋‹นํ•˜๋Š” ์ ๋“ค์„ ๊ธฐ์กด ๋ผ์ด๋‹ค ์ ๊ตฐ์—์„œ ์ œ๊ฑฐํ•˜๊ณ , ๊ฒฐ๊ณผ๋กœ ์–ป์€ ์ •์ ์ธ ๋ผ์ด๋‹ค ์ ๊ตฐ์€ ๋ผ์ด๋‹ค SLAM์— ์ž…๋ ฅ๋œ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋ก ์€ ๋ผ์ด๋‹ค SLAM์„ ํ†ตํ•ด ์ž์ฐจ๋Ÿ‰์˜ ์œ„์น˜ ๋ฐ ์ž์„ธ๋ฅผ ์ถ”์ •ํ•œ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋ณธ ์—ฐ๊ตฌ๋Š” ๋ผ์ด๋‹ค SLAM์˜ ํ”„๋ ˆ์ž„์›Œํฌ์ธ LeGO-LOAM์„ ์ฑ„ํƒํ•˜์˜€๋‹ค. ํŠน์ง•์  ๊ธฐ๋ฐ˜ SLAM์ธ LeGO-LOAM์€ ๋ผ์ด๋‹ค ์ ๊ตฐ์„ ๊ฑฐ๋ฆฌ ๊ธฐ๋ฐ˜ ์ด๋ฏธ์ง€๋กœ ๋ณ€ํ™˜์‹œ์ผœ ํŠน์ง•์ ์ธ ๋ชจ์„œ๋ฆฌ ์ ๊ณผ ํ‰๋ฉด ์ ์„ ์ถ”์ถœํ•œ๋‹ค. ๋˜ํ•œ ๊ฑฐ๋ฆฌ ๊ธฐ๋ฐ˜ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•œ ์ „์ฒ˜๋ฆฌ ๊ณผ์ •์„ ํ†ตํ•ด ๊ณ„์‚ฐ ํšจ์œจ์„ ๋†’์ธ๋‹ค. ์ถ”์ถœ๋œ ํ˜„์žฌ ์Šคํ…์˜ ํŠน์ง•์ ๊ณผ ์ด์— ๋Œ€์‘๋˜๋Š” ์ด์ „ ์Šคํ…์˜ ํŠน์ง•์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง„ ๊ธฐํ•˜ํ•™์  ๊ตฌ์กฐ์™€์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์ž”์ฐจ๋กœ ์„ค์ •ํ•˜์—ฌ 6 ์ž์œ ๋„ ๋ณ€ํ™˜์‹์— ๋Œ€ํ•œ ๋ชจ๋ธ ๋ฐฉ์ •์‹์„ ์–ป์„ ์ˆ˜ ์žˆ๋‹ค. ์ฐธ๊ณ ํ•œ LeGO-LOAM์€ ํ•ด๋‹น ๋ฐฉ์ •์‹์„ Levenberg-Marquardt ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด ์ตœ์ ํ™”๋ฅผ ์ˆ˜ํ–‰ํ•œ๋‹ค. ๋˜ํ•œ, ๋ณธ ์—ฐ๊ตฌ๋Š” ์ฐธ๊ณ ๋œ ์ธ์ง€ ๋ชจ๋“ˆ์˜ ์ฒ˜๋ฆฌ ์ง€์—ฐ ๋ฌธ์ œ๋ฅผ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ์ด๋™ ๋ฌผ์ฒด ์ธ์ง€ ๋ชจ๋“ˆ๊ณผ LeGO-LOAM์˜ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ ๊ตฌ์กฐ๋ฅผ ๊ณ ์•ˆํ•˜์˜€๋‹ค. ์‹คํ—˜์€ ๋„์‹ฌํ™˜๊ฒฝ์—์„œ 32์ฑ„๋„ 3์ฐจ์› ๋ผ์ด๋‹ค์™€ ๊ณ ์ •๋ฐ€ GPS๋ฅผ ์žฅ์ฐฉํ•œ ์‹คํ—˜์ฐจ๋Ÿ‰์œผ๋กœ ์ง„ํ–‰๋˜์—ˆ๋‹ค. ์„ฑ๋Šฅ ๊ฒ€์ฆ ๊ฒฐ๊ณผ, ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•์€ ๋ชฉํ‘œ ์ถœ๋ ฅ ์†๋„๋ฅผ ๋ณด์žฅํ•˜๋ฉด์„œ ์›€์ง์ด๋Š” ํŠน์ง•์ ์œผ๋กœ ์ธํ•œ ์ถ”์ • ์˜ค์ฐจ๋ฅผ ์œ ์˜๋ฏธํ•˜๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์—ˆ๋‹ค.Chapter 1. Introduction ๏ผ‘ 1.1. Research Motivation ๏ผ‘ 1.2. Previous Research ๏ผ“ 1.2.1. Moving Object Detection ๏ผ“ 1.2.2. SLAM ๏ผ” 1.3. Thesis Objective and Outline ๏ผ‘๏ผ“ Chapter 2. Methodology ๏ผ‘๏ผ• 2.1. Moving Object Detection & Rejection ๏ผ‘๏ผ• 2.1.1. Static Obstacle Map ๏ผ‘๏ผ• 2.1.2. Geometric Model-Free Approach ๏ผ‘๏ผ˜ 2.2. LiDAR SLAM ๏ผ’๏ผ’ 2.2.1. Segmentation ๏ผ’๏ผ’ 2.2.2. Feature Extraction ๏ผ’๏ผ“ 2.2.3. LiDAR Odometry and Mapping ๏ผ’๏ผ– 2.2.4. LiDAR SLAM with Static Point Cloud ๏ผ’๏ผ˜ Chapter 3. Experiments ๏ผ“๏ผ 3.1. Experimental Setup ๏ผ“๏ผ 3.2. Error Metrics ๏ผ“๏ผ’ 3.3. LiDAR SLAM using Static Point Cloud ๏ผ“๏ผ– Chapter 4. Conclusion ๏ผ”๏ผ” Bibliography ๏ผ”๏ผ•์„
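    The moving-object rejection step lends itself to a compact illustration: assuming the detector outputs axis-aligned bounding boxes, points inside any box are dropped and only the remaining static cloud is handed to the SLAM front end. The box format and function name below are assumptions for illustration.

```python
import numpy as np

def remove_points_in_boxes(points: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Drop LiDAR points inside any moving-object bounding box.

    points: (N, 3) x, y, z
    boxes:  (M, 6) x_min, y_min, z_min, x_max, y_max, z_max (axis-aligned)
    Returns only the points outside every box, i.e. the static point cloud
    that would be handed to the SLAM front end.
    """
    keep = np.ones(len(points), dtype=bool)
    for x0, y0, z0, x1, y1, z1 in boxes:
        inside = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
                  (points[:, 1] >= y0) & (points[:, 1] <= y1) &
                  (points[:, 2] >= z0) & (points[:, 2] <= z1))
        keep &= ~inside
    return points[keep]

# Usage: one pedestrian box removes the point that falls inside it.
points = np.array([[1.0, 0.0, 0.5], [10.0, 2.0, 0.5]])
boxes = np.array([[0.5, -0.5, 0.0, 1.5, 0.5, 2.0]])
print(remove_points_in_boxes(points, boxes))  # only the distant point remains
```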

    MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features

    Full text link
    Identifying moving objects is an essential capability for autonomous systems, as it provides critical information for pose estimation, navigation, collision avoidance, and static map construction. In this paper, we present MotionBEV, a fast and accurate framework for LiDAR moving object segmentation, which segments moving objects using appearance and motion features in the bird's eye view (BEV) domain. Our approach converts 3D LiDAR scans into a 2D polar BEV representation to improve computational efficiency. Specifically, we learn appearance features with a simplified PointNet and compute motion features from the height differences of consecutive frames of point clouds projected onto vertical columns in the polar BEV coordinate system. We employ a dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM) to adaptively fuse the spatio-temporal information from appearance and motion features. Our approach achieves state-of-the-art performance on the SemanticKITTI-MOS benchmark. Furthermore, to demonstrate the practical effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a solid-state LiDAR, which features non-repetitive scanning patterns and a small field of view.
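    The motion cue described here, height differences of consecutive frames pooled into polar BEV columns, reduces to a short computation. The sketch below assumes a fixed polar grid, max-height pooling, and ground-level filling for empty cells, which are illustrative simplifications rather than MotionBEV's exact configuration.

```python
import numpy as np

def polar_bev_max_height(points, n_rho=64, n_phi=64, rho_max=50.0):
    """Pool the maximum z of the points falling in each polar BEV cell.
    Empty cells are treated as ground level (z = 0) for simplicity."""
    rho = np.hypot(points[:, 0], points[:, 1])
    phi = np.arctan2(points[:, 1], points[:, 0])              # [-pi, pi)
    i = np.clip((rho / rho_max * n_rho).astype(int), 0, n_rho - 1)
    j = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)
    grid = np.zeros((n_rho, n_phi))
    np.maximum.at(grid, (i, j), points[:, 2])
    return grid

def motion_feature(prev_scan, curr_scan):
    """Per-cell height difference between consecutive frames: cells whose
    vertical profile changes are likely occupied by moving objects."""
    return polar_bev_max_height(curr_scan) - polar_bev_max_height(prev_scan)

# Toy usage: a "pedestrian" point moves between frames, the ground point does not.
prev_scan = np.array([[5.0, 0.0, 1.7], [20.0, 3.0, 0.2]])
curr_scan = np.array([[6.0, 0.5, 1.7], [20.0, 3.0, 0.2]])
print(np.count_nonzero(motion_feature(prev_scan, curr_scan)))  # 2 cells changed
```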

    The Peopleremover—Removing Dynamic Objects From 3-D Point Cloud Data by Traversing a Voxel Occupancy Grid

    No full text