Good Features to Correlate for Visual Tracking
In recent years, correlation filters have shown impressive results for visual
object tracking. The features employed in this family of trackers
significantly affect tracking performance. The ultimate goal is to use robust
features that are invariant to any kind of appearance change of the object,
while predicting the object location as accurately as when no appearance
change occurs. With the emergence of deep learning based methods, the study of
learning features for specific tasks has accelerated. For instance,
discriminative visual tracking methods based on deep architectures have been
studied with promising performance. Nevertheless, correlation filter based
(CFB) trackers confine themselves to pre-trained networks that were trained
for the object classification problem. To this end, this manuscript formulates
the problem of learning deep fully convolutional features for CFB visual
tracking. To learn the proposed model, a novel and efficient backpropagation
algorithm is presented based on the loss function of the network. The proposed
learning framework makes the network model flexible for a custom design and
alleviates the dependency on networks trained for classification. Extensive
performance analysis shows the efficacy of the proposed custom design in the
CFB tracking framework. By fine-tuning the convolutional parts of a
state-of-the-art network and integrating this model into a CFB tracker, the
top performer of VOT2016, an 18% increase is achieved in expected average
overlap and tracking failures are decreased by 25%, while maintaining
superiority over state-of-the-art methods on the OTB-2013 and OTB-2015
tracking datasets.
Comment: Accepted version of IEEE Transactions on Image Processing
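For context, CFB trackers of this kind build on the classic ridge-regression
correlation filter solved in closed form in the Fourier domain; the paper's
contribution is backpropagating through such a loss into the feature network,
which is not reproduced here. Below is a minimal single-channel NumPy sketch
of the standard (MOSSE-style) formulation; all names are illustrative.

```python
import numpy as np

def train_filter(feat, target, lam=1e-2):
    # Closed-form ridge-regression (MOSSE-style) solution in the
    # Fourier domain: conj(H) = (G * conj(F)) / (F * conj(F) + lam).
    # feat:   2-D feature channel of the training patch
    # target: desired response, typically a Gaussian peaked on the object
    F = np.fft.fft2(feat)
    G = np.fft.fft2(target)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def localize(feat, H_conj):
    # Correlation response of a new search patch; the peak location
    # gives the object's translation relative to the patch centre.
    resp = np.real(np.fft.ifft2(np.fft.fft2(feat) * H_conj))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Because every step above (FFTs, element-wise products, the regularized
division) is differentiable, a loss on the response map can in principle be
backpropagated into the network that produced `feat`, which is the kind of
end-to-end feature learning the abstract describes.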
Visual object tracking performance measures revisited
Visual tracking evaluation suffers from a large variety of performance
measures and a lack of consensus about which measures should be used in
experiments. This makes cross-paper tracker comparison difficult. Furthermore,
since some measures may be less effective than others, tracking results may be
skewed or biased towards particular tracking aspects. In this paper we revisit
the popular performance measures and tracker performance visualizations and
analyze them theoretically and experimentally. We show that several measures
are equivalent in terms of the information they provide for tracker comparison
and, crucially, that some are more brittle than others. Based on our analysis
we narrow the set of potential measures down to two complementary ones,
describing accuracy and robustness, thus pushing towards homogenization of the
tracker evaluation methodology. These two measures can be intuitively
interpreted and visualized, and have been employed by the recent Visual Object
Tracking (VOT) challenges as the foundation of their evaluation methodology.
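For concreteness, the two surviving measures can be computed roughly as in the
sketch below: accuracy as the mean overlap on successfully tracked frames, and
robustness as the number of failures. This is a simplification of the actual
VOT protocol, which re-initializes the tracker after each failure and discards
burn-in frames; the box format and function names are assumptions.

```python
import numpy as np

def iou(a, b):
    # Boxes as (x, y, w, h); returns intersection-over-union.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def accuracy_robustness(pred_boxes, gt_boxes):
    # Accuracy: mean overlap on frames where tracking succeeded.
    # Robustness: number of failures (overlap drops to zero).
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    failures = sum(1 for o in overlaps if o == 0.0)
    tracked = [o for o in overlaps if o > 0.0]
    acc = float(np.mean(tracked)) if tracked else 0.0
    return acc, failures
```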
Beyond standard benchmarks: Parameterizing performance evaluation in visual object tracking
Object-to-camera motion produces a variety of apparent motion patterns that
significantly affect the performance of short-term visual trackers. Despite
being crucial for designing robust trackers, their influence is poorly
explored in standard benchmarks due to weakly defined, biased and overlapping
attribute annotations. In this paper we propose to go beyond pre-recorded
benchmarks with post-hoc annotations by presenting an approach that utilizes
omnidirectional videos to generate realistic, consistently annotated,
short-term tracking scenarios with exactly parameterized motion patterns. We
have created an evaluation system, constructed a fully annotated dataset of
omnidirectional videos, and built generators for typical motion patterns. We
provide an in-depth analysis of major tracking paradigms which is
complementary to the standard benchmarks and confirms the expressiveness of
our evaluation approach.
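A minimal sketch of the "exactly parameterized motion pattern" idea, assuming
an equirectangular omnidirectional frame and a purely horizontal pan: the
virtual-camera trajectory is generated from explicit parameters, so the
apparent motion in the rendered sequence is known exactly. The paper's actual
generator performs a proper spherical-to-perspective projection; this
simplified crop only conveys the parameterization, and all names are
hypothetical.

```python
import numpy as np

def pan_trajectory(n_frames, amplitude_deg, period):
    # Sinusoidal pan pattern with exact, user-set parameters.
    t = np.arange(n_frames)
    return amplitude_deg * np.sin(2 * np.pi * t / period)

def crop_view(equirect, pan_deg, fov_deg=60):
    # Very simplified: a horizontal crop of an equirectangular frame,
    # wrapping around the 360-degree seam.
    h, w = equirect.shape[:2]
    cx = int((pan_deg % 360) / 360 * w)
    half = int(fov_deg / 360 * w / 2)
    cols = np.arange(cx - half, cx + half) % w
    return equirect[:, cols]
```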
Understanding and Diagnosing Visual Tracking Systems
Several benchmark datasets for visual tracking research have been proposed in
recent years. Despite their usefulness, whether they are sufficient for
understanding and diagnosing the strengths and weaknesses of different
trackers remains questionable. To address this issue, we propose a framework
that breaks a tracker down into five constituent parts: motion model, feature
extractor, observation model, model updater, and ensemble post-processor. We
then conduct ablative experiments on each component to study how it affects
the overall result. Surprisingly, our findings contradict some common beliefs
in the visual tracking research community. We find that the feature extractor
plays the most important role in a tracker. On the other hand, although the
observation model is the focus of many studies, we find that it often brings
no significant improvement. Moreover, the motion model and model updater
contain many details that can affect the result. Also, the ensemble
post-processor can improve the result substantially when the constituent
trackers have high diversity. Based on our findings, we put together some very
elementary building blocks into a basic tracker that is competitive in
performance with state-of-the-art trackers. We believe our framework can
provide a solid baseline when conducting controlled experiments for visual
tracking research.
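The five-part decomposition naturally maps onto a pluggable interface, which
is what makes the component-wise ablation possible. Here is a minimal sketch
of such a skeleton; the class and method names are hypothetical, not the
authors' code.

```python
class Tracker:
    """Skeleton mirroring the five-part decomposition: each component
    is swappable so it can be ablated independently."""

    def __init__(self, motion_model, feature_extractor,
                 observation_model, model_updater, post_processor=None):
        self.motion = motion_model          # proposes candidate regions
        self.features = feature_extractor   # e.g. raw pixels, HOG, CNN
        self.observe = observation_model    # scores each candidate
        self.update = model_updater         # decides when/how to adapt
        self.post = post_processor          # optional ensemble fusion

    def track(self, frame, prev_box):
        # Score each motion-model proposal and keep the best one.
        candidates = self.motion(frame, prev_box)
        scores = [self.observe(self.features(frame, c)) for c in candidates]
        best = candidates[scores.index(max(scores))]
        self.update(frame, best)
        return self.post(best) if self.post else best
```

Fixing four components and swapping the fifth turns each of the paper's
ablations into a one-argument change to the constructor.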