
    FLAT2D: Fast localization from approximate transformation into 2D

    Many autonomous vehicles require precise localization within a prior map in order to support planning and to leverage semantic information in those maps (e.g., that the right lane is a turn-only lane). A popular approach in automotive systems is to localize against infrared intensity maps of the ground surface, but such maps are susceptible to failure when the surface is obscured by snow or when the road is repainted. An emerging alternative is to localize based on the 3D structure around the vehicle; these methods are robust to such changes, but the maps are costly both in terms of storage and the computational cost of matching. In this paper, we propose a fast method for localizing based on the 3D structure around the vehicle using a 2D representation. This representation retains many of the advantages of "full" matching in 3D, but comes with dramatically lower space and computational requirements. We also introduce a variation of Graph-SLAM tailored to support localization, allowing us to apply graph-based error-recovery techniques to our localization estimate. Finally, we present real-world localization results for both an indoor mobile robotic platform and an autonomous golf cart, demonstrating that autonomous vehicles do not need full 3D matching to localize accurately in the environment.
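    The core idea lends itself to a compact sketch: collapse the 3D scan into a 2D occupancy grid and score candidate offsets against a prior grid map. The Python below is a minimal illustration under assumed parameters (cell size, height threshold, translation-only search); the paper's actual representation and matcher are more elaborate.

```python
import numpy as np

def scan_to_grid(points, resolution=0.2, size=200):
    """Collapse a 3D scan (N x 3 array) into a 2D binary occupancy grid,
    keeping only structure above an assumed ground-clearance threshold."""
    grid = np.zeros((size, size), dtype=bool)
    pts = points[points[:, 2] > 0.3]                 # drop near-ground returns
    ij = np.floor(pts[:, :2] / resolution).astype(int) + size // 2
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 0], ij[ok, 1]] = True
    return grid

def score_pose(scan_grid, map_grid, dx, dy):
    """Overlap score between the scan grid and the prior map grid,
    shifted by an integer cell offset (dx, dy)."""
    shifted = np.roll(np.roll(map_grid, dx, axis=0), dy, axis=1)
    return np.logical_and(scan_grid, shifted).sum()

def localize(scan_grid, map_grid, search=5):
    """Brute-force search over a small window of 2D offsets; a real system
    would also search over heading and use a smarter optimizer."""
    best = max((score_pose(scan_grid, map_grid, dx, dy), dx, dy)
               for dx in range(-search, search + 1)
               for dy in range(-search, search + 1))
    return best[1], best[2]
```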

    AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming

    The combination of the aerial survey capabilities of Unmanned Aerial Vehicles with the targeted intervention abilities of agricultural Unmanned Ground Vehicles can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. The maps built using robots of different types show differences in size, resolution, and scale; the associated geolocation data may be inaccurate and biased; and the repetitiveness of both visual appearance and geometric structure in agricultural settings renders classical map-merging techniques ineffective. In this paper, we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation which includes a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large-displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed by exploiting only meaningful parts of the registered maps. We evaluate our system using real-world data for three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach along with the acquired datasets with this paper. (Comment: Published in IEEE Robotics and Automation Letters, 2019.)
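    As a rough illustration of the flow-based association step, the sketch below builds an excess-green (ExG) vegetation-index map and extracts sparse correspondences from dense optical flow. OpenCV's Farneback flow and a RANSAC-fitted similarity transform stand in for the paper's multimodal large-displacement flow, voting scheme, and non-rigid alignment; all thresholds and parameters are assumptions.

```python
import cv2
import numpy as np

def excess_green(bgr):
    """Excess-green vegetation index, ExG = 2G - R - B, scaled to uint8.
    A simple stand-in for the vegetation-index channel of the map."""
    b, g, r = [bgr[:, :, i].astype(np.float32) for i in range(3)]
    exg = 2 * g - r - b
    return cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def flow_correspondences(uav_map, ugv_map, step=20, min_mag=2.0):
    """Dense optical flow between the two vegetation-index maps, subsampled
    into sparse point-to-point correspondences (Farneback is illustrative)."""
    flow = cv2.calcOpticalFlowFarneback(uav_map, ugv_map, None,
                                        0.5, 4, 31, 5, 7, 1.5, 0)
    src, dst = [], []
    h, w = flow.shape[:2]
    for y in range(0, h, step):
        for x in range(0, w, step):
            dx, dy = flow[y, x]
            if np.hypot(dx, dy) >= min_mag:          # keep strong flows only
                src.append((x, y))
                dst.append((x + dx, y + dy))
    return np.float32(src), np.float32(dst)

# A robustly fitted similarity transform stands in for the paper's voting
# scheme plus non-rigid alignment:
#   M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
```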

    Long-Term Urban Vehicle Localization Using Pole Landmarks Extracted from 3-D Lidar Scans

    Due to their ubiquity and long-term stability, pole-like objects are well suited to serve as landmarks for vehicle localization in urban environments. In this work, we present a complete mapping and long-term localization system based on pole landmarks extracted from 3-D lidar data. Our approach features a novel pole detector, a mapping module, and an online localization module, each of which is described in detail, and for which we provide an open-source implementation at www.github.com/acschaefer/polex. In extensive experiments, we demonstrate that our method improves on the state of the art with respect to long-term reliability and accuracy: First, we demonstrate reliability by tasking the system with localizing a mobile robot over the course of 15 months in an urban area based on an initial map, confronting it with constantly varying routes, differing weather conditions, seasonal changes, and construction sites. Second, we show that the proposed approach clearly outperforms a recently published method in terms of accuracy. (Comment: 9 pages.)
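    A toy version of the two front-end steps, pole detection and map association, might look as follows; the grid-cell heuristics and the nearest-neighbor gate are illustrative assumptions, not the detector described in the paper or its open-source release.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_poles(points, resolution=0.2, min_hits=15, max_radius=0.4):
    """Very rough pole detector: bin the scan into 2D grid cells, then keep
    cells whose points are numerous, tall, and laterally compact.
    Thresholds are illustrative, not calibrated values."""
    cells = {}
    for p in points:                                  # points: N x 3 array
        key = (int(p[0] / resolution), int(p[1] / resolution))
        cells.setdefault(key, []).append(p)
    poles = []
    for pts in cells.values():
        pts = np.asarray(pts)
        tall = pts[:, 2].max() - pts[:, 2].min() > 1.0
        thin = np.linalg.norm(pts[:, :2] - pts[:, :2].mean(0),
                              axis=1).max() < max_radius
        if len(pts) >= min_hits and tall and thin:
            poles.append(pts[:, :2].mean(0))
    return np.asarray(poles)

def match_to_map(detected, pole_map, gate=1.0):
    """Nearest-neighbor association of detected poles to mapped poles;
    the matched pairs would feed a pose filter in the localization module."""
    tree = cKDTree(pole_map)
    dist, idx = tree.query(detected)
    keep = dist < gate
    return detected[keep], pole_map[idx[keep]]
```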

    ์ž์œจ์ฃผํ–‰์„ ์œ„ํ•œ ์ •์ง€ ์žฅ์• ๋ฌผ ๋งต๊ณผ GMFT ์œตํ•ฉ ๊ธฐ๋ฐ˜ ์ด๋™ ๋ฌผ์ฒด ํƒ์ง€ ๋ฐ ์ถ”์ 

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :๊ณต๊ณผ๋Œ€ํ•™ ๊ธฐ๊ณ„ํ•ญ๊ณต๊ณตํ•™๋ถ€,2019. 8. ์ด๊ฒฝ์ˆ˜.Based on the high accuracy of LiDAR sensor, detection and tracking of moving objects(DATMO) have been advanced as an important branch of perception for an autonomous vehicle. However, due to crowded road circumstances by various kind of vehicles and geographical features, it is necessary to reduce clustering fail case and decrease the computational burden. To overcome these difficulties, this paper proposed a novel approach by integrating DATMO and mapping algorithm. Since the DATMO and mapping are specialized to estimate moving object and static map respectively, these two algorithms can improve their estimation by using each others output. Whole perception algorithm is reconstructed using feedback loop structure includes DATMO and mapping algorithm. Moreover, mapping algorithm and DATMO are revised to innovative Bayesian rule-based Static Obstacle Map(SOM) and Geometric Model-Free Tracking(GMFT) to use each others output as the measurements of filtering process. The proposed study is evaluated via driving dataset collected by vehicles with RTK DGPS, RT-range and 2D LiDAR. Several typical clustering fail cases that had been observed in existing DATMO approach are reduced and code operation time over the whole perception process is decreased. Especially, estimation of moving vehicles state include position, velocity, and yaw angle show less error with references which are measured by RT-range.๋ผ์ด๋‹ค ์„ผ์„œ์˜ ์ธก์ • ์ •๋ฐ€์„ฑ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์—ฌ DATMO, ์ฆ‰ ์ด๋™ ๋ฌผ์ฒด ํƒ์ง€ ๋ฐ ์ถ”์ ์€ ์ž์œจ์ฃผํ–‰ ์ธ์ง€ ๋ถ„์•ผ์˜ ๋งค์šฐ ์ค‘์š”ํ•œ ์ฃผ์ œ๋กœ ๋ฐœ์ „๋˜์–ด ์™”๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ์ฐจ๋Ÿ‰์— ์˜ํ•ด ๋„๋กœ ์ƒํ™ฉ์ด ๋ณต์žกํ•œ ์  ๋ฐ ๋„๋กœ ํŠน์œ ์˜ ๋ณต์žกํ•œ ์ง€ํ˜•์  ํŠน์„ฑ ๋•Œ๋ฌธ์— ํด๋Ÿฌ์Šคํ„ฐ๋ง(Clustering)์˜ ์‹คํŒจ ์‚ฌ๋ก€๊ฐ€ ์ข…์ข… ๋ฐœ์ƒํ•  ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ณ„์‚ฐ ๋ถ€๋‹ด๋„ ์ฆ๊ฐ€ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด ์ด ๋…ผ๋ฌธ์—์„œ๋Š” DATMO ์•Œ๊ณ ๋ฆฌ์ฆ˜๊ณผ ๋งตํ•‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ†ตํ•ฉํ•˜์—ฌ ์ƒˆ๋กœ์šด ์ ‘๊ทผ๋ฒ•์„ ์ œ์‹œํ•˜์˜€๋‹ค. DATMO์™€ ๋งตํ•‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๊ฐ๊ฐ ์ด๋™ ๋ฌผ์ฒด์™€ ์ •์ง€ ๋ฌผ์ฒด์˜ ์ƒํƒœ๋ฅผ ์ถ”์ •ํ•˜๋Š”๋ฐ์— ํŠนํ™”๋˜์–ด์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์„œ๋กœ์˜ ์ถœ๋ ฅ์„ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”์ • ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. ์ „์ฒด ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ DATMO์™€ ๋งตํ•‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํฌํ•จํ•˜๋Š” ํ”ผ๋“œ๋ฐฑ ๋ฃจํ”„ ๊ตฌ์กฐ๋กœ ์žฌ๊ตฌ์„ฑ๋œ๋‹ค. ๋˜ํ•œ ๋‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๊ฐ๊ฐ Geometric Model-Free Tracking(GMFT)๊ณผ ๋ฒ ์ด์ง€์•ˆ ๋ฃฐ ๊ธฐ๋ฐ˜์˜ ํ˜์‹ ์ ์ธ Static Obstacle Map(SOM)์œผ๋กœ ์ˆ˜์ •๋˜์–ด ์„œ๋กœ์˜ ์ถœ๋ ฅ์„ ํ•„ํ„ฐ๋ง ํ”„๋กœ์„ธ์Šค์˜ ์ธก์ •๊ฐ’์œผ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. ์ด ์—ฐ๊ตฌ์—์„œ ์ œ์‹œํ•œ ํ†ตํ•ฉ ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ RTK DGPS์™€ RT Range ์žฅ๋น„, ๊ทธ๋ฆฌ๊ณ  2์ฐจ์› LiDAR๋ฅผ ์žฅ์ฐฉํ•œ ์ฐจ๋Ÿ‰์„ ์ด์šฉํ•˜์—ฌ ์ˆ˜์ง‘ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•ด ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜์˜€๋‹ค. ๊ธฐ์กด์˜ DATMO ์—ฐ๊ตฌ์—์„œ ๋ฐœ์ƒํ–ˆ๋˜ ๋ช‡ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ํด๋Ÿฌ์Šคํ„ฐ๋ง ์‹คํŒจ ์‚ฌ๋ก€๊ฐ€ ๊ฐ์†Œํ•˜์˜€๊ณ  ์ „์ฒด ํ†ตํ•ฉ ์ธ์ง€ ๊ณผ์ •์— ๋Œ€ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜ ์ž‘๋™ ์‹œ๊ฐ„์ด ๊ฐ์†Œํ•จ์„ ํ™•์ธํ•˜์˜€๋‹ค. 
ํŠนํžˆ, ์ด๋™ํ•˜๋Š” ๋ฌผ์ฒด์˜ ์œ„์น˜, ์†๋„, ๋ฐฉํ–ฅ์„ ์ถ”์ •ํ•œ ๊ฒฐ๊ณผ๋Š” RT Range ์žฅ๋น„๋กœ ์ธก์ •ํ•œ ์‹ค์ œ ๊ฐ’๊ณผ ๊ธฐ์กด ๋ฐฉ์‹ ๋Œ€๋น„ ๋”์šฑ ์ ์€ ์˜ค์ฐจ๋ฅผ ๋ณด์—ฌ์ฃผ์—ˆ๋‹ค.Chapter 1 Introduction 1 Chapter 2 Interaction of Mapping and DATMO 5 Chapter 3 Mapping โ€“ Static Obstacle Map 9 3.1 Prediction of SOM 11 3.2 Measurement update of SOM 14 Chapter 4 DATMO โ€“ Geometric Model-Free Tracking 16 4.1 Prediction of target state 18 4.2 Track management 19 4.3 Measurement update of target state 21 Chapter 5 Experimental Results 23 5.1 Vehicles and sensors configuration 24 5.2 Detection rate of moving object 27 5.3 State estimation accuracy of moving object 31 5.4 Code operation time 34 Chapter 6 Conclusion and Future Work 36 6.1 Conclusion 36 6.2 Future works 37 Bibliography 39 ์ดˆ ๋ก 43Maste