Extended Object Tracking: Introduction, Overview and Applications
This article provides an elaborate overview of current research in extended
object tracking. We provide a clear definition of the extended object tracking
problem and discuss how it is delimited from other types of object tracking. Next,
different aspects of extended object modelling are extensively discussed.
Subsequently, we give a tutorial introduction to two basic and widely used
extended object tracking approaches: the random matrix approach and the Kalman
filter-based approach for star-convex shapes. The next part treats the tracking
of multiple extended objects and elaborates on how the large number of feasible
association hypotheses can be tackled using both Random Finite Set (RFS) and
non-RFS multi-object trackers. The article concludes with a summary of current
applications, where four example applications involving camera, X-band radar,
light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are
highlighted.
Comment: 30 pages, 19 figures
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes and output a stream of events that encode
the time, location, and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as low-latency, high-speed, and high-dynamic-range settings. However, novel
methods are required to process the unconventional output of these sensors in
order to unlock their potential. This paper provides a comprehensive overview of
the emerging field of event-based vision, with a focus on the applications and
the algorithms developed to unlock the outstanding properties of event cameras.
We present event cameras from their working principle, the actual sensors that
are available, and the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
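The event encoding described in this abstract (time, pixel location, and sign of the brightness change) can be illustrated with a minimal sketch. The tuple layout and the simple event-count accumulation below are illustrative assumptions, not code from the survey:

```python
import numpy as np

def accumulate_events(events, width, height):
    """Accumulate a stream of (t, x, y, polarity) events into a signed
    event-count frame, a common simple event representation."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:
        # Each event only records the sign of the brightness change.
        frame[y, x] += 1 if polarity > 0 else -1
    return frame

# Example stream (timestamps in seconds): two positive brightness-change
# events at pixel (x=1, y=2) and one negative event at (x=0, y=0).
events = [(0.001, 1, 2, +1), (0.002, 1, 2, +1), (0.003, 0, 0, -1)]
frame = accumulate_events(events, width=4, height=4)
```

Note that no fixed frame rate is involved: the accumulation window is chosen by the consumer, which is what gives event-based pipelines their low latency.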
Detection and Tracking of Moving Objects Based on the Fusion of a Static Obstacle Map and GMFT for Autonomous Driving
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, August 2019. Advisor: Kyongsu Yi.
Building on the high measurement accuracy of LiDAR sensors, detection and tracking of moving objects (DATMO) has advanced as an important branch of perception for autonomous vehicles. However, in road environments crowded with various kinds of vehicles and geographical features, it is necessary to reduce clustering failure cases and the computational burden. To overcome these difficulties, this thesis proposes a novel approach that integrates DATMO with a mapping algorithm. Since DATMO and mapping are specialized to estimate moving objects and the static map, respectively, the two algorithms can improve their estimates by using each other's output. The whole perception algorithm is reconstructed as a feedback-loop structure that includes the DATMO and mapping algorithms. Moreover, the mapping algorithm and DATMO are revised into a Bayesian rule-based Static Obstacle Map (SOM) and Geometric Model-Free Tracking (GMFT), which use each other's output as measurements in their filtering processes. The proposed approach is evaluated on driving datasets collected by vehicles equipped with RTK DGPS, an RT-Range system, and 2D LiDAR. Several typical clustering failure cases observed in the existing DATMO approach are reduced, and the computation time of the whole perception process is decreased. In particular, the estimated states of moving vehicles, including position, velocity, and yaw angle, show lower error against references measured by RT-Range.
Chapter 1 Introduction
Chapter 2 Interaction of Mapping and DATMO
Chapter 3 Mapping - Static Obstacle Map
3.1 Prediction of SOM
3.2 Measurement update of SOM
Chapter 4 DATMO - Geometric Model-Free Tracking
4.1 Prediction of target state
4.2 Track management
4.3 Measurement update of target state
Chapter 5 Experimental Results
5.1 Vehicles and sensors configuration
5.2 Detection rate of moving object
5.3 State estimation accuracy of moving object
5.4 Code operation time
Chapter 6 Conclusion and Future Work
6.1 Conclusion
6.2 Future works
Bibliography
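A minimal sketch of how a Bayesian rule-based static obstacle map like the thesis's SOM might fuse per-cell LiDAR evidence, assuming a standard log-odds occupancy-grid update; the inverse-sensor-model probabilities here are illustrative assumptions, not values from the thesis:

```python
import math

# Log-odds evidence from an assumed inverse sensor model:
L_OCC = math.log(0.7 / 0.3)   # a LiDAR return hits the cell
L_FREE = math.log(0.3 / 0.7)  # the beam passes through the cell

def update_cell(log_odds, hit):
    """One Bayesian measurement update of a single grid cell in log-odds
    form; additive updates are the log-domain form of Bayes' rule."""
    return log_odds + (L_OCC if hit else L_FREE)

def occupancy_prob(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# Three consecutive hits drive a cell (prior p = 0.5) toward "occupied".
l = 0.0
for _ in range(3):
    l = update_cell(l, hit=True)
p = occupancy_prob(l)
```

In the feedback-loop structure described above, GMFT's moving-object estimates would gate which returns feed this update, so dynamic objects do not contaminate the static map.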
Robust Motion Segmentation from Pairwise Matches
In this paper we address a classification problem that has not been
considered before, namely motion segmentation given pairwise matches only. Our
contribution to this unexplored task is a novel formulation of motion
segmentation as a two-step process. First, motion segmentation is performed on
image pairs independently. Second, we robustly combine the independent pairwise
segmentation results into a final, globally consistent segmentation. Our
approach is inspired by the success of averaging methods. We demonstrate in
simulated as well as in real experiments that our method is very effective in
reducing the errors in the pairwise motion segmentation and can cope with a
large number of mismatches.
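The two-step idea can be sketched with a simple majority-vote merge of per-pair motion labels. This is an illustrative stand-in for the paper's more robust averaging-based combination, and all names below are hypothetical:

```python
from collections import Counter

def merge_pairwise_labels(pairwise):
    """Merge independent pairwise segmentations into one consensus
    labelling. `pairwise` is a list of dicts mapping track_id -> motion
    label; each dict comes from one image pair."""
    votes = {}
    for labels in pairwise:
        for track, label in labels.items():
            votes.setdefault(track, Counter())[label] += 1
    # Majority vote per track suppresses isolated pairwise errors.
    return {track: c.most_common(1)[0][0] for track, c in votes.items()}

# Three pairwise segmentations; the second pair mislabels track "b",
# but the consensus recovers the correct label.
pairwise = [
    {"a": 0, "b": 1},
    {"a": 0, "b": 0},  # outlier vote for "b"
    {"a": 0, "b": 1},
]
consensus = merge_pairwise_labels(pairwise)
```

A voting scheme like this tolerates errors in a minority of pairs, which is the same robustness property the abstract claims for its averaging-based combination.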
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle ill-posedness, a crucial issue in deblurring tasks, existing
methods can be grouped into five categories: Bayesian inference frameworks,
variational methods, sparse representation-based methods, homography-based
modeling, and region-based methods. Despite this progress, image deblurring,
especially the blind case, is limited in its success by complex application
conditions that make the blur kernel hard to obtain and often spatially
variant. We provide a holistic understanding of and deep insight into image
deblurring in this review. An analysis of the empirical evidence for
representative methods, practical issues, as well as a discussion of promising
future directions are also presented.
Comment: 53 pages, 17 figures
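As a concrete instance of the shared objective of inferring a latent sharp image from a blurry one, the sketch below shows classical non-blind Wiener deconvolution with a known, spatially invariant kernel. It is a minimal illustration of the non-blind case only, not one of the reviewed methods:

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-4):
    """Non-blind Wiener deconvolution in the frequency domain.
    `kernel` is zero-padded to the image shape; `noise_power` is the
    regularizer that tames ill-posedness near kernel zeros."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # conj(H) / (|H|^2 + K) approximates the inverse filter while
    # damping frequencies where the kernel (and hence the signal) is weak.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(G * W))

# Blur a unit impulse with a horizontal 3-tap box kernel (circular
# convolution via FFT), then restore it.
img = np.zeros((16, 16)); img[8, 8] = 1.0
kernel = np.zeros((16, 16)); kernel[0, :3] = 1.0 / 3.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
restored = wiener_deblur(blurred, kernel)
```

The `noise_power` term is what distinguishes this from naive inverse filtering: without it, dividing by near-zero kernel frequencies would amplify noise without bound, which is exactly the ill-posedness the review organizes its five categories around.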