Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Tracking
With efficient appearance learning models, Discriminative Correlation Filter
(DCF) has been proven to be very successful in recent video object tracking
benchmarks and competitions. However, the existing DCF paradigm suffers from
two major issues, i.e., spatial boundary effect and temporal filter
degradation. To mitigate these challenges, we propose a new DCF-based tracking
method. The key innovations of the proposed method include adaptive spatial
feature selection and temporally consistent constraints, with which the new
tracker enables joint spatial-temporal filter learning in a lower-dimensional
discriminative manifold. More specifically, we apply structured spatial
sparsity constraints to multi-channel filters. Consequently, the process of
learning spatial filters can be approximated by the lasso regularisation. To
encourage temporal consistency, the filter model is restricted to lie around
its historical value and updated locally to preserve the global structure in
the manifold. Last, a unified optimisation framework is proposed to jointly
select temporal consistency preserving spatial features and learn
discriminative filters with the augmented Lagrangian method. Qualitative and
quantitative evaluations have been conducted on a number of well-known
benchmarking datasets such as OTB2013, OTB50, OTB100, Temple-Colour, UAV123 and
VOT2018. The experimental results demonstrate the superiority of the proposed
method over the state-of-the-art approaches.
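The lasso-style spatial feature selection and temporal-consistency constraint described above can be sketched as follows. This is a minimal illustrative approximation in NumPy, assuming a soft-thresholding (proximal) step for the lasso and a simple blend toward the historical filter; the threshold `lam`, blending weight `mu`, and filter shapes are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the L1 norm: shrinks weak coefficients to zero,
    approximating lasso-regularised spatial feature selection."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def temporally_consistent_update(w_new, w_prev, mu):
    """Restrict the filter to lie near its historical value (temporal consistency)."""
    return (w_new + mu * w_prev) / (1.0 + mu)

rng = np.random.default_rng(0)
w = rng.normal(size=(5, 5))           # one spatial filter channel (toy size)
w_sparse = soft_threshold(w, 0.8)     # lasso step keeps only salient locations
w_prev = np.zeros((5, 5))             # historical filter (assumed zero here)
w_final = temporally_consistent_update(w_sparse, w_prev, mu=0.5)
```

In a full tracker the two steps would alternate inside an augmented-Lagrangian loop, as the abstract indicates; the sketch only shows the two proximal-style updates in isolation.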
An Accelerated Correlation Filter Tracker
Visual object tracking has witnessed continuous improvement in the
state-of-the-art with the development of efficient discriminative
correlation filters (DCF) and robust deep neural network features. Despite the
outstanding performance achieved by the above combination, existing advanced
trackers suffer from the burden of high computational complexity of the deep
feature extraction and online model learning. We propose an accelerated ADMM
optimisation method obtained by adding a momentum to the optimisation sequence
iterates, and by relaxing the impact of the error between DCF parameters and
their norm. The proposed optimisation method is applied to an innovative
formulation of the DCF design, which seeks the most discriminative spatially
regularised feature channels. A further speed up is achieved by an adaptive
initialisation of the filter optimisation process. The significantly faster
convergence of the DCF filter is demonstrated by establishing the equivalence
of the optimisation process with a continuous dynamical system for which the
convergence properties can readily be derived. The experimental results
obtained on several well-known benchmarking datasets demonstrate the efficiency
and robustness of the proposed ACFT method, with a tracking accuracy comparable
to the state-of-the-art trackers.
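The idea of accelerating the optimisation by adding momentum to the iterate sequence can be illustrated on a simple least-squares surrogate of a DCF-style objective. This is a sketch only, assuming a generic Nesterov-style extrapolation step; the toy problem, step size, and iteration count are illustrative choices, not the paper's ACFT formulation.

```python
import numpy as np

def accelerated_solve(X, y, step=0.01, iters=300):
    """Momentum-accelerated gradient iterates for min_w ||X w - y||^2."""
    w = np.zeros(X.shape[1])
    w_prev = w.copy()
    t = 1.0
    for _ in range(iters):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        # Momentum: extrapolate along the previous update direction
        z = w + ((t - 1) / t_next) * (w - w_prev)
        grad = X.T @ (X @ z - y)          # gradient at the extrapolated point
        w_prev, w = w, z - step * grad
        t = t_next
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = accelerated_solve(X, y)           # recovers w_true on this noiseless toy
```

The extrapolation step is what distinguishes the accelerated sequence from plain gradient descent: each iterate starts from a point pushed forward along the previous update, which is the same mechanism the abstract describes for speeding up the ADMM iterates.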
Remove Cosine Window from Correlation Filter-based Visual Trackers: When and How
Correlation filters (CFs) have been continuously advancing the
state-of-the-art tracking performance and have been extensively studied in the
recent few years. Most of the existing CF trackers adopt a cosine window to
spatially reweight the base image to alleviate boundary discontinuity. However,
the cosine window puts more emphasis on the central region of the base image and
risks contaminating negative training samples during model learning. On the
other hand, spatial regularization deployed in many recent CF trackers plays a
similar role as cosine window by enforcing spatial penalty on CF coefficients.
Therefore, in this paper we investigate the feasibility of removing the cosine
window from CF trackers with spatial regularization. When the cosine window is
simply removed, a CF with spatial regularization still suffers from a small degree of
boundary discontinuity. To tackle this issue, binary and Gaussian-shaped mask
functions are further introduced to eliminate boundary discontinuity while
reweighting the estimation error of each training sample; these masks can be
incorporated into multiple CF trackers with spatial regularization. In
comparison to the counterparts with cosine window, our methods are effective in
handling boundary discontinuity and sample contamination, thereby benefiting
tracking performance. Extensive experiments on three benchmarks show that our
methods perform favorably against the state-of-the-art trackers using either
handcrafted or deep CNN features. The code is publicly available at
https://github.com/lifeng9472/Removing_cosine_window_from_CF_trackers.
Comment: 13 pages, 7 figures, submitted to IEEE Transactions on Image Processing.
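The contrast between the cosine window and a Gaussian-shaped mask can be sketched as below. The window and mask shapes follow standard definitions (a Hann window; an isotropic Gaussian centred on the target region), but the sizes and the `sigma_frac` parameter are illustrative assumptions, not the paper's exact masks.

```python
import numpy as np

def cosine_window(h, w):
    """Standard separable Hann (cosine) window used by most CF trackers;
    it is exactly zero on the image border."""
    return np.outer(np.hanning(h), np.hanning(w))

def gaussian_mask(h, w, sigma_frac=0.4):
    """Gaussian-shaped mask centred on the target region, usable to
    reweight the per-pixel estimation error instead of the base image."""
    ys = np.arange(h) - (h - 1) / 2
    xs = np.arange(w) - (w - 1) / 2
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    sigma = sigma_frac * min(h, w)
    return np.exp(-(X**2 + Y**2) / (2 * sigma**2))

win = cosine_window(8, 8)    # multiplies the base image itself
mask = gaussian_mask(8, 8)   # weights the training error instead
```

The key practical difference the abstract exploits: the cosine window multiplies the training image (so border content is effectively erased and negative samples near the boundary are contaminated), whereas the mask reweights the estimation error, leaving the image content intact.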
UAV object tracking by correlation filter with adaptive appearance model
With the increasing availability of low-cost, commercially available unmanned aerial vehicles (UAVs), visual tracking using UAVs has become increasingly important owing to its many new applications, including automatic navigation, obstacle avoidance, traffic monitoring, and search and rescue. However, real-world aerial tracking poses many challenges caused by platform motion and image instability, such as aspect ratio change, viewpoint change, fast motion and scale variation. In this paper, an efficient object tracking method for UAV videos is proposed to tackle these challenges. We construct fused features to capture gradient information and color characteristics simultaneously. Furthermore, cellular automata are introduced to update the appearance template of the target accurately and sparsely. In particular, a high-confidence model updating strategy is developed according to a stability function. Systematic comparative evaluations performed on the popular UAV123 dataset show the efficiency of the proposed approach.
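A high-confidence model-updating strategy of the general kind mentioned above can be sketched with a peak-to-sidelobe-ratio-style stability score: the template is only refreshed when the correlation response is sharp and unimodal. The score, learning rate `lr`, and threshold `thresh` here are illustrative assumptions, not the paper's actual stability function.

```python
import numpy as np

def stability_score(response):
    """PSR-style confidence of a correlation response map: how far the
    peak stands above the mean of the remaining (sidelobe) values."""
    sidelobe = np.delete(response.ravel(), response.argmax())
    return (response.max() - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def update_template(template, observation, response, lr=0.02, thresh=5.0):
    """Linearly interpolate the appearance template, but only on frames
    where the response map is stable (high confidence)."""
    if stability_score(response) > thresh:
        return (1 - lr) * template + lr * observation
    return template  # skip the update on unreliable frames
```

Gating updates this way avoids drifting the appearance model during occlusion or motion blur, when the response map flattens and the stability score drops below the threshold.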