Adaptive visual tracking via multiple appearance models and multiple linear searches
This research is concerned with adaptive, probabilistic single-target tracking algorithms. Though visual tracking methods have seen significant improvement, the sustained ability to capture appearance changes and precisely locate the target during complex and unexpected motion remains an open problem. Three novel tracking mechanisms are proposed to address these challenges.
The first is a Particle Filter based Markov Chain Monte Carlo method with sampled appearances (MCMC-SA). This adapts to changes in target appearance by combining two popular generative models, templates and histograms, maintaining multiple instances of each in an appearance pool. The proposed tracker automatically switches between models in response to variations in target appearance, exploiting the strengths of each model component, and new models are added automatically as necessary.
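As a rough illustration of the appearance-pool idea, the sketch below scores a candidate patch against both kinds of stored model and keeps the best match. The similarity measures and all names are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative sketch of an appearance pool mixing template and histogram
# models; the tracker scores a candidate patch with whichever stored model
# matches it best, mimicking the automatic model switch.
import numpy as np

def template_score(patch, template):
    # Normalised cross-correlation between the patch and a stored template.
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float((p * t).mean())

def histogram_score(patch, hist, bins=16):
    # Bhattacharyya coefficient between the patch histogram and a stored
    # one; patch intensities are assumed to lie in [0, 1].
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    h = h / (h.sum() + 1e-8)
    return float(np.sum(np.sqrt(h * hist)))

def best_model_score(patch, pool):
    # Evaluate every model instance in the pool and keep the best.
    scores = [template_score(patch, m["data"]) if m["kind"] == "template"
              else histogram_score(patch, m["data"]) for m in pool]
    return max(scores)
```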
The second is a Particle Filter based Markov Chain Monte Carlo method with motion direction sampling, from which two variations are derived: motion sampling using a fixed direction given by the centroid of all detected features (FMCMC-C), and motion sampling using kernel density estimation of direction (FMCMC-S). This approach utilises sparse estimates of motion direction derived from local features detected on the target. The tracker captures complex target motions efficiently using only simple components.
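A minimal sketch of direction sampling in this spirit: sparse feature displacements between frames vote for a motion direction, a kernel density estimate (KDE) is formed over the resulting angles, and proposal moves are drawn from it. The wrapped-Gaussian kernel, the bandwidth, and the centroid-based reading of FMCMC-C are illustrative assumptions.

```python
import numpy as np

def motion_angles(displacements):
    # displacements: (N, 2) array of per-feature (dx, dy) between frames.
    return np.arctan2(displacements[:, 1], displacements[:, 0])

def fixed_direction(displacements):
    # FMCMC-C analogue (assumed): one direction from the mean feature motion.
    mean_dxdy = displacements.mean(axis=0)
    return np.arctan2(mean_dxdy[1], mean_dxdy[0])

def direction_density(theta, angles, bandwidth=0.3):
    # FMCMC-S analogue: KDE over directions with a wrapped Gaussian kernel.
    d = np.angle(np.exp(1j * (theta - angles)))  # wrap to (-pi, pi]
    return np.exp(-0.5 * (d / bandwidth) ** 2).mean()

def sample_direction(angles, bandwidth=0.3, rng=np.random):
    # Drawing a random kernel centre and perturbing it by the bandwidth
    # is equivalent to sampling from the KDE above.
    centre = rng.choice(angles)
    return centre + bandwidth * rng.standard_normal()
```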
The third tracking algorithm considered here combines the above methods to improve target localisation. This tracker comprises multiple motion and appearance models (FMCMC-MM) and automatically selects an appropriate motion and appearance model for tracking. The effectiveness of all three tracking algorithms is demonstrated using a variety of challenging video sequences. Results show that these methods considerably improve tracking performance when compared with state-of-the-art appearance-based tracking frameworks.
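One plausible reading of the automatic selection step, sketched below, is to score each (motion model, appearance model) pair on the current frame and track with the best combination. The `propose`/`likelihood` interface is a hypothetical assumption, not the paper's API.

```python
# Hypothetical model-selection sketch for a multi-model tracker.
def select_models(motion_models, appearance_models, frame, state):
    best = None
    for motion in motion_models:
        candidate = motion.propose(state)            # predicted target state
        for appearance in appearance_models:
            score = appearance.likelihood(frame, candidate)
            if best is None or score > best[0]:
                best = (score, motion, appearance, candidate)
    return best  # (score, chosen motion model, chosen appearance model, state)
```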
Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach
This paper proposes a probabilistic approach for the detection and tracking of particles in fluorescence time-lapse imaging. In the presence of very noisy, poor-quality data, particles and trajectories can be characterized by an a contrario model, which estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and later applied successfully in many image processing tasks, leads to algorithms that require neither a prior learning stage nor tedious parameter tuning, and are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art.
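To make the a contrario decision rule concrete, the sketch below computes a Number of False Alarms (NFA) for a candidate detection: the expected count of equally strong structures arising in pure noise, with the detection accepted only if that count stays below a threshold (typically 1). The binomial background model and the numbers are illustrative assumptions, not the paper's exact model.

```python
from math import comb

def binomial_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p): chance that noise alone produces
    # at least k bright pixels among the n pixels of a candidate region.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(k, n, p_noise, n_tests):
    # Expected number of false alarms over all tested candidate regions.
    return n_tests * binomial_tail(k, n, p_noise)

# A spot with 12 of 16 pixels above threshold, tested over 10^5 regions,
# against a background where a pixel exceeds the threshold with p = 0.1:
meaningful = nfa(12, 16, 0.1, 10**5) < 1.0  # True: noise cannot explain it
```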
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle to the actual sensors that are available and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
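The sketch below mirrors the event format described above, where each event carries a timestamp, pixel location, and polarity (the sign of the brightness change); accumulating events over a short window is one simple way to build a frame-like representation. The field names and the accumulation scheme are illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float       # timestamp in seconds (microsecond resolution in practice)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 brightness increase, -1 decrease

def accumulate(events, height, width, t0, t1):
    # Sum signed polarities per pixel over the window [t0, t1).
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y, e.x] += e.polarity
    return frame
```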
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning that takes such predictions into account, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.
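As a point of reference for the motion-modeling axis of such a taxonomy, the sketch below shows the simplest physics-based predictor, constant-velocity extrapolation of an observed 2D trajectory. It is purely illustrative; the sampling interval and names are assumptions, and the survey itself covers far richer, context-aware models.

```python
import numpy as np

def predict_constant_velocity(track, horizon, dt=0.4):
    # track: (T, 2) array of observed (x, y) positions sampled every dt.
    velocity = (track[-1] - track[-2]) / dt          # last observed velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt  # future time offsets
    return track[-1] + steps * velocity              # (horizon, 2) prediction
```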
A Deep-structured Conditional Random Field Model for Object Silhouette Tracking
In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets show that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.
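A minimal sketch of the temporal linkage the abstract describes: the previous frame's silhouette mask is warped along inter-frame optical flow to seed the next state layer. The nearest-neighbour warp and all names are assumptions; the model's actual inter-layer potentials are richer than this.

```python
import numpy as np

def propagate_silhouette(mask, flow):
    # mask: (H, W) binary silhouette at time t-1.
    # flow: (H, W, 2) optical flow (dx, dy) from frame t-1 to frame t.
    h, w = mask.shape
    warped = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    # Move each silhouette pixel along its flow vector (rounded to grid).
    nx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    ny = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    warped[ny, nx] = 1
    return warped  # serves as a temporal prior for the next state layer
```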