Unmanned aerial vehicle video-based target tracking algorithm using sparse representation
Target tracking based on unmanned aerial vehicle
(UAV) video is a significant technique in intelligent urban
surveillance systems for smart city applications, such as smart
transportation, road traffic monitoring, and stolen-vehicle
inspection. In this paper, a vision-based target tracking algorithm
for locating UAV-captured targets, such as pedestrians and
vehicles, is proposed using sparse representation theory. First,
each target candidate is sparsely represented in the subspace
spanned by a joint dictionary. Then, the sparse representation
coefficients are further constrained by an L2 regularization term
that enforces temporal consistency. To cope with the partial occlusion
appearing in UAV videos, a Markov Random Field (MRF)-based
binary support vector with a contiguous occlusion constraint is
introduced into our sparse representation model. For long-term
tracking, a particle filter framework along with a dynamic
template update scheme is designed. Both qualitative and
quantitative experiments implemented on visible (Vis) and
infrared (IR) UAV videos show that the presented tracker
achieves higher precision and success rates than other
state-of-the-art trackers.
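The core of the model described above is solving a sparse coding problem with an added L2 temporal-consistency term. The following is a minimal sketch of that idea using ISTA (iterative soft thresholding); the function names, parameters, and solver choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(y, D, c_prev, lam1=0.1, lam2=0.1, n_iter=200):
    """Sketch of sparse coding with a temporal-consistency L2 term:
        minimize 0.5*||y - D c||^2 + lam1*||c||_1
                 + 0.5*lam2*||c - c_prev||^2
    where y is the target candidate, D the joint dictionary, and
    c_prev the coefficient vector from the previous frame.
    Solved here with plain ISTA (assumed; the paper's solver may differ)."""
    c = np.zeros(D.shape[1])
    # Step size from the Lipschitz constant of the smooth part.
    L = np.linalg.norm(D, 2) ** 2 + lam2
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y) + lam2 * (c - c_prev)
        c = soft_threshold(c - grad / L, lam1 / L)
    return c
```

In a tracking loop, each candidate's reconstruction error under its sparse code would then score the particle filter's hypotheses; that scoring step is omitted here.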
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
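As the abstract notes, an event camera outputs a stream of events encoding the time, location, and sign of brightness changes, and novel methods are needed to process this unconventional output. A minimal sketch of one common first step, accumulating events into a signed event frame, is shown below; the event tuple layout and function name are assumptions for illustration, and real pipelines also use representations such as time surfaces or voxel grids.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate events into a signed event frame.

    Each event is assumed to be a tuple (t, x, y, p): timestamp,
    pixel coordinates, and polarity (+1 for a brightness increase,
    -1 for a decrease). Positive events increment a pixel count,
    negative events decrement it.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame
```

Because events are asynchronous, the choice of how many events (or how long a time window) to accumulate per frame is itself a design parameter, which is one reason learning-based and spiking approaches that consume events directly are attractive.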