
    Extended Object Tracking: Introduction, Overview and Applications

    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates on how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, in which four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted. Comment: 30 pages, 19 figures
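The random matrix approach mentioned in the abstract models an extended object with a kinematic state plus a symmetric positive-definite "extent" matrix describing its elliptic shape. The sketch below illustrates only the basic idea with a simple exponential-forgetting update; the variable names and the blending rule are illustrative assumptions, not the equations of the surveyed papers.

```python
import numpy as np

def random_matrix_update(x, X, measurements, alpha=0.5):
    """Fuse a batch of point measurements from one extended object
    into a centroid estimate x and an extent matrix X (toy version)."""
    Z = np.asarray(measurements, dtype=float)
    z_bar = Z.mean(axis=0)                 # centroid of the measurement batch
    S = np.cov(Z, rowvar=False)            # scatter of the batch = shape evidence
    x_new = (1 - alpha) * x + alpha * z_bar    # blend centroid estimate
    X_new = (1 - alpha) * X + alpha * S        # blend extent estimate (stays SPD)
    return x_new, X_new

x0 = np.zeros(2)                            # prior centroid
X0 = np.eye(2)                              # prior extent (unit circle)
pts = np.array([[1.0, 0.0], [1.2, 0.4], [0.8, -0.4], [1.0, 0.2]])
x1, X1 = random_matrix_update(x0, X0, pts)
```

A single scan thus refines both where the object is and how large it appears, which is the distinguishing feature of extended (as opposed to point) object tracking.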

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those involving low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras starting from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
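The abstract describes an event camera's output as a stream of events, each encoding time, pixel location, and the sign of the brightness change. A common first processing step is to accumulate a short window of events into a signed 2-D histogram (an "event frame"). The snippet below is a generic sketch of that accumulation, not tied to any particular sensor API.

```python
import numpy as np

def events_to_frame(events, width, height):
    """Accumulate signed events (t, x, y, polarity) into a 2-D image.
    Positive polarity means brightness increased at that pixel."""
    frame = np.zeros((height, width))
    for t, x, y, p in events:
        frame[y, x] += 1 if p > 0 else -1   # signed accumulation per pixel
    return frame

# Toy stream: two ON events at pixel (2, 3), one OFF event at (5, 1).
stream = [(0.001, 2, 3, +1), (0.002, 2, 3, +1), (0.003, 5, 1, -1)]
frame = events_to_frame(stream, width=8, height=8)
```

Such frames let conventional frame-based algorithms be applied to event data, at the cost of discarding the microsecond timing the survey highlights; event-by-event methods avoid that trade-off.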

    ์ž์œจ์ฃผํ–‰์„ ์œ„ํ•œ ์ •์ง€ ์žฅ์• ๋ฌผ ๋งต๊ณผ GMFT ์œตํ•ฉ ๊ธฐ๋ฐ˜ ์ด๋™ ๋ฌผ์ฒด ํƒ์ง€ ๋ฐ ์ถ”์ 

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :๊ณต๊ณผ๋Œ€ํ•™ ๊ธฐ๊ณ„ํ•ญ๊ณต๊ณตํ•™๋ถ€,2019. 8. ์ด๊ฒฝ์ˆ˜.Based on the high accuracy of LiDAR sensor, detection and tracking of moving objects(DATMO) have been advanced as an important branch of perception for an autonomous vehicle. However, due to crowded road circumstances by various kind of vehicles and geographical features, it is necessary to reduce clustering fail case and decrease the computational burden. To overcome these difficulties, this paper proposed a novel approach by integrating DATMO and mapping algorithm. Since the DATMO and mapping are specialized to estimate moving object and static map respectively, these two algorithms can improve their estimation by using each others output. Whole perception algorithm is reconstructed using feedback loop structure includes DATMO and mapping algorithm. Moreover, mapping algorithm and DATMO are revised to innovative Bayesian rule-based Static Obstacle Map(SOM) and Geometric Model-Free Tracking(GMFT) to use each others output as the measurements of filtering process. The proposed study is evaluated via driving dataset collected by vehicles with RTK DGPS, RT-range and 2D LiDAR. Several typical clustering fail cases that had been observed in existing DATMO approach are reduced and code operation time over the whole perception process is decreased. Especially, estimation of moving vehicles state include position, velocity, and yaw angle show less error with references which are measured by RT-range.๋ผ์ด๋‹ค ์„ผ์„œ์˜ ์ธก์ • ์ •๋ฐ€์„ฑ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์—ฌ DATMO, ์ฆ‰ ์ด๋™ ๋ฌผ์ฒด ํƒ์ง€ ๋ฐ ์ถ”์ ์€ ์ž์œจ์ฃผํ–‰ ์ธ์ง€ ๋ถ„์•ผ์˜ ๋งค์šฐ ์ค‘์š”ํ•œ ์ฃผ์ œ๋กœ ๋ฐœ์ „๋˜์–ด ์™”๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ์ฐจ๋Ÿ‰์— ์˜ํ•ด ๋„๋กœ ์ƒํ™ฉ์ด ๋ณต์žกํ•œ ์  ๋ฐ ๋„๋กœ ํŠน์œ ์˜ ๋ณต์žกํ•œ ์ง€ํ˜•์  ํŠน์„ฑ ๋•Œ๋ฌธ์— ํด๋Ÿฌ์Šคํ„ฐ๋ง(Clustering)์˜ ์‹คํŒจ ์‚ฌ๋ก€๊ฐ€ ์ข…์ข… ๋ฐœ์ƒํ•  ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ณ„์‚ฐ ๋ถ€๋‹ด๋„ ์ฆ๊ฐ€ํ•œ๋‹ค. 
์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด ์ด ๋…ผ๋ฌธ์—์„œ๋Š” DATMO ์•Œ๊ณ ๋ฆฌ์ฆ˜๊ณผ ๋งตํ•‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ†ตํ•ฉํ•˜์—ฌ ์ƒˆ๋กœ์šด ์ ‘๊ทผ๋ฒ•์„ ์ œ์‹œํ•˜์˜€๋‹ค. DATMO์™€ ๋งตํ•‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๊ฐ๊ฐ ์ด๋™ ๋ฌผ์ฒด์™€ ์ •์ง€ ๋ฌผ์ฒด์˜ ์ƒํƒœ๋ฅผ ์ถ”์ •ํ•˜๋Š”๋ฐ์— ํŠนํ™”๋˜์–ด์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์„œ๋กœ์˜ ์ถœ๋ ฅ์„ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”์ • ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. ์ „์ฒด ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ DATMO์™€ ๋งตํ•‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํฌํ•จํ•˜๋Š” ํ”ผ๋“œ๋ฐฑ ๋ฃจํ”„ ๊ตฌ์กฐ๋กœ ์žฌ๊ตฌ์„ฑ๋œ๋‹ค. ๋˜ํ•œ ๋‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๊ฐ๊ฐ Geometric Model-Free Tracking(GMFT)๊ณผ ๋ฒ ์ด์ง€์•ˆ ๋ฃฐ ๊ธฐ๋ฐ˜์˜ ํ˜์‹ ์ ์ธ Static Obstacle Map(SOM)์œผ๋กœ ์ˆ˜์ •๋˜์–ด ์„œ๋กœ์˜ ์ถœ๋ ฅ์„ ํ•„ํ„ฐ๋ง ํ”„๋กœ์„ธ์Šค์˜ ์ธก์ •๊ฐ’์œผ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. ์ด ์—ฐ๊ตฌ์—์„œ ์ œ์‹œํ•œ ํ†ตํ•ฉ ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ RTK DGPS์™€ RT Range ์žฅ๋น„, ๊ทธ๋ฆฌ๊ณ  2์ฐจ์› LiDAR๋ฅผ ์žฅ์ฐฉํ•œ ์ฐจ๋Ÿ‰์„ ์ด์šฉํ•˜์—ฌ ์ˆ˜์ง‘ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•ด ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜์˜€๋‹ค. ๊ธฐ์กด์˜ DATMO ์—ฐ๊ตฌ์—์„œ ๋ฐœ์ƒํ–ˆ๋˜ ๋ช‡ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ํด๋Ÿฌ์Šคํ„ฐ๋ง ์‹คํŒจ ์‚ฌ๋ก€๊ฐ€ ๊ฐ์†Œํ•˜์˜€๊ณ  ์ „์ฒด ํ†ตํ•ฉ ์ธ์ง€ ๊ณผ์ •์— ๋Œ€ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜ ์ž‘๋™ ์‹œ๊ฐ„์ด ๊ฐ์†Œํ•จ์„ ํ™•์ธํ•˜์˜€๋‹ค. ํŠนํžˆ, ์ด๋™ํ•˜๋Š” ๋ฌผ์ฒด์˜ ์œ„์น˜, ์†๋„, ๋ฐฉํ–ฅ์„ ์ถ”์ •ํ•œ ๊ฒฐ๊ณผ๋Š” RT Range ์žฅ๋น„๋กœ ์ธก์ •ํ•œ ์‹ค์ œ ๊ฐ’๊ณผ ๊ธฐ์กด ๋ฐฉ์‹ ๋Œ€๋น„ ๋”์šฑ ์ ์€ ์˜ค์ฐจ๋ฅผ ๋ณด์—ฌ์ฃผ์—ˆ๋‹ค.Chapter 1 Introduction 1 Chapter 2 Interaction of Mapping and DATMO 5 Chapter 3 Mapping โ€“ Static Obstacle Map 9 3.1 Prediction of SOM 11 3.2 Measurement update of SOM 14 Chapter 4 DATMO โ€“ Geometric Model-Free Tracking 16 4.1 Prediction of target state 18 4.2 Track management 19 4.3 Measurement update of target state 21 Chapter 5 Experimental Results 23 5.1 Vehicles and sensors configuration 24 5.2 Detection rate of moving object 27 5.3 State estimation accuracy of moving object 31 5.4 Code operation time 34 Chapter 6 Conclusion and Future Work 36 6.1 Conclusion 36 6.2 Future works 37 Bibliography 39 ์ดˆ ๋ก 43Maste
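The thesis abstract describes a feedback loop in which the static obstacle map (SOM) and the moving-object tracker (GMFT) consume each other's output. The sketch below shows only that loop structure on toy point sets; the set-based "map" and the reappearance test standing in for the tracker are assumptions for illustration, not the thesis's Bayesian occupancy and filtering algorithms.

```python
def perception_step(scan, prev_scan, static_map):
    """One cycle of the SOM/GMFT feedback loop (toy version).
    Points are (x, y) tuples; scans are lists of points."""
    # SOM -> GMFT: points already explained by the static map skip
    # clustering, reducing failure cases and computation (per abstract).
    candidates = [p for p in scan if p not in static_map]
    # GMFT stand-in: a candidate not seen at the same place in the
    # previous scan is treated as a moving-object track.
    tracks = [p for p in candidates if p not in prev_scan]
    # GMFT -> SOM: tracked (moving) points are excluded from the map
    # update, so moving objects do not pollute the static map.
    static_map |= {p for p in candidates if p not in tracks}
    return tracks, static_map

som = set()
scan1 = [(0, 0), (5, 5)]
tracks, som = perception_step(scan1, [], som)
scan2 = [(0, 0), (6, 5)]                    # the object at (5, 5) moved
tracks, som = perception_step(scan2, scan1, som)
```

The mutual exchange is the point: each module's output becomes the other's measurement, which is what the abstract credits for the reduced clustering failures and runtime.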

    Robust Motion Segmentation from Pairwise Matches

    In this paper we address a classification problem that has not been considered before, namely motion segmentation given pairwise matches only. Our contribution to this unexplored task is a novel formulation of motion segmentation as a two-step process. First, motion segmentation is performed on image pairs independently. Second, we combine the independent pairwise segmentation results in a robust way into a final, globally consistent segmentation. Our approach is inspired by the success of averaging methods. We demonstrate in simulated as well as real experiments that our method is very effective in reducing errors in the pairwise motion segmentation and can cope with a large number of mismatches.
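The two-step scheme in the abstract can be sketched minimally: segment each image pair independently, then fuse the pairwise results into one consistent labeling. The fusion below is a simple per-point majority vote, which assumes the pairwise labels have already been brought into correspondence; the paper's averaging-based method is more sophisticated than this illustrative stand-in.

```python
from collections import Counter

def fuse_pairwise_segmentations(pairwise_labels):
    """pairwise_labels: one per-point label list per image pair,
    all referring to the same tracked points in the same order."""
    n_points = len(pairwise_labels[0])
    fused = []
    for i in range(n_points):
        # Robust step: each image pair votes on point i's motion label.
        votes = Counter(labels[i] for labels in pairwise_labels)
        fused.append(votes.most_common(1)[0][0])   # majority label wins
    return fused

# Three image pairs segment four points; the second pair mislabels
# the last point, and the vote corrects it.
pairs = [[0, 0, 1, 1],
         [0, 0, 1, 0],
         [0, 0, 1, 1]]
fused = fuse_pairwise_segmentations(pairs)
```

The example shows the claimed robustness in miniature: an error confined to one pair is outvoted in the global result.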

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques are additionally required to derive an accurate blur kernel. Considering the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness that is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. In spite of a certain level of progress, image deblurring, especially the blind case, is limited in its success by complex application conditions, which make the blur kernel hard to obtain and cause it to be spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, is also presented. Comment: 53 pages, 17 figures
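The shared objective stated in the abstract is commonly written as a convolutional forward model: the blurry observation is the latent sharp image convolved with a blur kernel (plus noise in practice). The 1-D toy below blurs a signal and recovers it by naive frequency-domain division; it works only because the example is noise-free and the kernel's spectrum has no zeros, which is exactly the ill-posedness the review says the five method families exist to handle. The signal and kernel here are illustrative choices, not from the paper.

```python
import numpy as np

n = 16
s = np.zeros(n); s[5] = 1.0            # latent sharp signal (an impulse)
k = np.zeros(n); k[:3] = 1.0 / 3.0     # length-3 box blur kernel

# Forward model b = k (*) s, as circular convolution via the FFT.
b = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(k)))

# Non-blind recovery: divide in the frequency domain. With noise, or
# with zeros in K, this step explodes -- hence regularized/Bayesian
# deblurring methods in practice.
K = np.fft.fft(k)
s_hat = np.real(np.fft.ifft(np.fft.fft(b) / K))
```

Blind deblurring must estimate `k` as well as `s` from `b` alone, which is why the abstract singles it out as the harder and less solved case.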