CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark
A long-term visual object tracking performance evaluation methodology and a
benchmark are proposed. Performance measures are designed by following a
long-term tracking definition to maximize the analysis probing strength. The
new measures outperform existing ones in interpretation potential and better
distinguish between different tracking behaviors. We show that these
measures generalize the short-term performance measures, thus linking the two
tracking problems. Furthermore, the new measures are highly robust to temporal
annotation sparsity and allow annotation of sequences hundreds of times longer
than in the current datasets without increasing manual annotation labor. A new
challenging dataset of carefully selected sequences with many target
disappearances is proposed. A new tracking taxonomy is proposed to position
trackers on the short-term/long-term spectrum. The benchmark contains an
extensive evaluation of the largest number of long-term trackers and a comparison
to state-of-the-art short-term trackers. We analyze the influence of tracking
architecture implementations on long-term performance and explore various
re-detection strategies as well as the influence of visual model update strategies
on long-term tracking drift. The methodology is integrated into the VOT toolkit
to automate experimental analysis and benchmarking and to facilitate future
development of long-term trackers.
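The precision/recall-style long-term measures described above can be sketched as follows. This is a simplified single-threshold version with illustrative names (the full methodology sweeps the confidence threshold and is implemented in the VOT toolkit); it is not the toolkit's code.

```python
# Sketch of long-term tracking precision, recall and F-score: precision is
# the overlap averaged over frames where the tracker reports the target,
# recall over frames where the target is actually visible. The threshold
# tau and all names are illustrative assumptions.

def longterm_f_score(overlaps, confidences, visible, tau=0.5):
    """overlaps[i]: IoU between prediction and ground truth in frame i
       confidences[i]: tracker-reported confidence in frame i
       visible[i]: True if the target is visible in the ground truth
       tau: confidence threshold above which the tracker reports the target."""
    reported = [c >= tau for c in confidences]
    # Precision: overlap averaged over frames where the tracker reports the target.
    rep_overlaps = [o for o, r in zip(overlaps, reported) if r]
    precision = sum(rep_overlaps) / len(rep_overlaps) if rep_overlaps else 0.0
    # Recall: overlap (zero when not reported) averaged over visible frames.
    vis_overlaps = [o if r else 0.0
                    for o, r, v in zip(overlaps, reported, visible) if v]
    recall = sum(vis_overlaps) / len(vis_overlaps) if vis_overlaps else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```

Because recall is averaged only over frames where the target is visible, a tracker is not penalized for (correctly) reporting absence during disappearances, which is what ties these measures to the long-term tracking definition.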
Utilization and experimental evaluation of occlusion aware kernel correlation filter tracker using RGB-D
Unlike deep learning, which requires large training datasets, correlation filter-based trackers such as the Kernelized Correlation Filter (KCF) exploit implicit properties of the tracked image patches (circulant matrices) to train in real time. Despite their practical application in tracking, a better theoretical, mathematical, and experimental understanding of the fundamentals of KCF is still needed. This thesis first details a working prototype of the tracker and investigates its effectiveness in real-time applications, with supporting visualizations. We further address some of the drawbacks of the tracker in cases of occlusion, scale change, object rotation, out-of-view targets and model drift with our novel RGB-D Kernel Correlation tracker. We also study the use of a particle filter to improve the tracker's accuracy. Our results are experimentally evaluated using a) a standard dataset and b) real-time data from a Microsoft Kinect V2 sensor. We believe this work will set the basis for better understanding the effectiveness of kernel-based correlation filter trackers and for further defining some of their possible advantages in tracking.
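The circulant-matrix trick mentioned above can be illustrated with a minimal single-channel linear correlation filter: ridge regression over all circular shifts of the training patch reduces to element-wise divisions of Fourier spectra. This is a bare sketch of the linear case, not the kernelized (KCF) tracker itself, and all names are illustrative.

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Learn a correlation filter in the Fourier domain.
    x: training patch, y: desired response (peak at the target centre),
    lam: ridge regularizer."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    # Closed-form ridge solution over all circular shifts of x.
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def detect(H, z):
    """Correlate the learned filter with a new patch; the peak of the
    response map gives the estimated target translation."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(z)))
```

Since the label y peaks at the target centre, running detect on a circularly shifted copy of the training patch moves the response peak by the same shift, which is how the tracker localizes frame-to-frame motion.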
Object Tracking by Reconstruction with View-Specific Discriminative Correlation Filters
Standard RGB-D trackers treat the target as an inherently 2D structure, which
makes modelling appearance changes related even to simple out-of-plane rotation
highly challenging. We address this limitation by proposing a novel long-term
RGB-D tracker - Object Tracking by Reconstruction (OTR). The tracker performs
online 3D target reconstruction to facilitate robust learning of a set of
view-specific discriminative correlation filters (DCFs). The 3D reconstruction
supports two performance-enhancing features: (i) generation of accurate spatial
support for constrained DCF learning from its 2D projection and (ii) point
cloud based estimation of 3D pose change for selection and storage of
view-specific DCFs which are used to robustly localize the target after
out-of-view rotation or heavy occlusion. Extensive evaluation of OTR on the
challenging Princeton RGB-D tracking and STC Benchmarks shows it outperforms
the state-of-the-art by a large margin.
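The storage and selection of view-specific filters can be sketched as below: each filter is tagged with the viewing direction under which it was learned, and the filter whose stored view is closest to the current estimated pose is selected. The class name, the angular threshold, and the use of unit view vectors are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

class ViewStore:
    """Illustrative store of view-specific filters keyed by viewing direction."""

    def __init__(self, angle_thresh_deg=30.0):
        self.views = []      # unit viewing-direction vectors
        self.filters = []    # filter learned under each view
        self.cos_thresh = np.cos(np.radians(angle_thresh_deg))

    def select(self, view_dir):
        """Return the filter of the nearest stored view, or None if the pose
        has rotated beyond the angular threshold of every stored view."""
        if not self.views:
            return None
        sims = [float(np.dot(v, view_dir)) for v in self.views]
        best = int(np.argmax(sims))
        return self.filters[best] if sims[best] >= self.cos_thresh else None

    def add(self, view_dir, filt):
        """Store a filter for a new view; in a tracker this would be called
        when select() returns None, i.e. after an out-of-plane rotation has
        exposed a previously unseen aspect of the target."""
        self.views.append(np.asarray(view_dir) / np.linalg.norm(view_dir))
        self.filters.append(filt)
```

Keeping one filter per coarse viewpoint is what lets such a tracker re-localize a target whose appearance has changed drastically since the last reliable frame.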
Novel Aggregated Solutions for Robust Visual Tracking in Traffic Scenarios
This work proposes novel approaches for object tracking in challenging scenarios such as severe occlusion, deteriorated vision and long-range multi-object re-identification. All of these solutions are based solely on image sequences captured by a monocular camera and do not require additional sensors. Experiments on standard benchmarks demonstrate improved state-of-the-art performance of these approaches. Because all the presented approaches are designed for efficiency, they run at real-time speed.
Self-correcting Bayesian target tracking
Visual tracking, a building block for many applications, faces challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking.
For a tracker in the framework, we embed an evaluation block to check the tracking quality and a correction block to avoid upcoming failures or to recover from failures. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). The self-correcting tracking behaves like a self-aware system in which parameters are tuned in the model, or different models are fused or selected in a piece-wise way, in order to deal with tracking challenges and failures. In the DBN model representation, the parameter tuning, fusion and model selection are based on evaluation and correction variables that correspond to the evaluation and correction blocks, respectively. Inference over the variables in the DBN model is used to explain the operation of self-correcting tracking. The specific contributions under the generic self-correcting framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below.
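The generic Track-Evaluate-Correct idea can be sketched as a simple control loop. The callables and the quality threshold below are illustrative placeholders, not components from the thesis; in the DBN formulation these steps correspond to inference over the evaluation and correction variables.

```python
def track_evaluate_correct(track, evaluate, correct, frames, thresh=0.5):
    """Generic Track-Evaluate-Correct loop. track, evaluate and correct are
    caller-supplied callables standing in for the tracker, the evaluation
    block and the correction block; thresh is an illustrative quality
    threshold, not a value from the thesis."""
    states = []
    for frame in frames:
        state = track(frame)                    # Track: propose a state
        if evaluate(frame, state) < thresh:     # Evaluate: score track quality
            state = correct(frame, state)       # Correct: avoid or recover
        states.append(state)
    return states
```

The point of the loop is that the tracker itself is untouched: evaluation and correction wrap around any existing Bayesian tracker.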
To improve the probabilistic tracking of an extended object with a set of model points, we use the Track-Evaluate-Correct framework to achieve self-correcting tracking. The framework combines the tracker with an online performance measure and a correction technique. We correlate model point trajectories to improve, online, the accuracy of a failed or uncertain tracker. A model point tracker gets assistance from neighbouring trackers whenever degradation in its performance is detected by the online performance measure. The correction of the model point state is based on correlation information from the states of the other trackers. Partial Least Squares regression is used to adaptively model the correlation of point tracker states from short windowed trajectories. Experimental results on data obtained from optical motion capture systems show the improvement in tracking performance of the proposed framework compared to the baseline tracker and other state-of-the-art trackers. The proposed framework allows appropriate re-initialisation of local trackers to recover from failures caused by clutter and missed detections in the motion capture data.
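The trajectory-correlation correction can be sketched as follows. For brevity, ordinary least squares stands in for the Partial Least Squares regression used in the thesis, and all names, shapes and the window length are illustrative assumptions.

```python
import numpy as np

def fit_correction(neighbour_traj, point_traj):
    """Fit a linear map from neighbour positions to the monitored point's
    position over a short trajectory window.
    neighbour_traj: (window, 2*n_neighbours) stacked x,y of the neighbours
    point_traj: (window, 2) x,y of the monitored point"""
    # Append a bias column, then solve the least-squares problem.
    A = np.hstack([neighbour_traj, np.ones((len(neighbour_traj), 1))])
    W, *_ = np.linalg.lstsq(A, point_traj, rcond=None)
    return W

def correct_point(W, neighbour_pos):
    """Predict (correct) the point's position from the current neighbour
    positions when its own tracker is flagged as failed."""
    return np.append(neighbour_pos, 1.0) @ W
```

This works because model points on the same rigid or articulated structure move in a strongly correlated way, so a short window of neighbour trajectories carries enough information to re-seat a failed point tracker.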
Finally, we propose a tracker-level fusion framework to obtain self-correcting tracking. The fusion framework combines trackers that address different tracking challenges to improve the overall performance. As a novelty of the proposed framework, we include an online performance measure that identifies the track quality level of each tracker to guide the fusion. The trackers in the framework assist each other through appropriate mixing of their prior states. Moreover, the track quality level is used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the trackers in the framework, and also compared to other state-of-the-art trackers. The online performance measure based appearance model update and the prior mixing between trackers allow the proposed framework to deal with tracking challenges.
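The quality-guided prior mixing can be sketched minimally as below: each tracker's state estimate is mixed into a shared prior in proportion to its track-quality level, so a drifting tracker is pulled toward the better-performing one. The quality measure itself is application-specific and is simply passed in here as a number per tracker; this is a sketch, not the thesis's formulation.

```python
def fuse_priors(states, qualities):
    """Mix tracker state estimates by track-quality weights.
    states: list of (x, y) estimates, one per tracker
    qualities: list of non-negative track-quality scores"""
    total = sum(qualities)
    if total == 0:
        # No reliable tracker: fall back to a plain average.
        weights = [1.0 / len(states)] * len(states)
    else:
        weights = [q / total for q in qualities]
    x = sum(w * s[0] for w, s in zip(weights, states))
    y = sum(w * s[1] for w, s in zip(weights, states))
    return (x, y)
```

Using the same quality scores to gate the appearance-model update (only updating from high-quality tracks) is what keeps a momentarily failing tracker from corrupting the shared target model.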