Robust feature representation plays a significant role in visual tracking.
However, it remains a challenging issue, since many factors may affect
tracking performance. Existing methods that combine different features with
fixed, equal weights can hardly resolve these issues, because the statistical
properties of the features differ across scenarios and attributes. In this
paper, by exploiting the internal
relationship among these features, we develop a robust method to construct a
more stable feature representation. More specifically, we utilize a co-training
paradigm to formulate the intrinsic complementary information of the
multi-feature templates into an efficient correlation filter framework. We test
our approach on challenging sequences with illumination variation, scale
variation, deformation, etc. Experimental results demonstrate that the proposed
method performs favorably against state-of-the-art methods.
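To make the contrast between fixed and adaptive feature weighting concrete, below is a minimal NumPy sketch of fusing per-feature correlation-filter response maps. The function names, the frequency-domain correlation, and the weighting scheme are illustrative assumptions for exposition only; they are not the paper's co-training formulation.

```python
import numpy as np

def correlation_response(feature, template):
    """Circular cross-correlation of one feature channel with its filter
    template in the Fourier domain (the standard correlation-filter trick)."""
    F = np.fft.fft2(feature)
    H = np.fft.fft2(template)
    return np.real(np.fft.ifft2(F * np.conj(H)))

def fused_response(features, templates, weights=None):
    """Combine per-feature response maps into a single map.

    With weights=None this reduces to the fixed, equal weighting the abstract
    criticises; passing adaptive weights (e.g. derived from each feature's
    reliability) mimics the complementary fusion the paper argues for. The
    weighting here is an illustrative assumption, not the authors' method.
    """
    responses = [correlation_response(f, t) for f, t in zip(features, templates)]
    if weights is None:
        weights = np.full(len(responses), 1.0 / len(responses))
    return sum(w * r for w, r in zip(weights, responses))

# Toy usage: two 64x64 feature channels (e.g. gradient- and colour-like maps).
rng = np.random.default_rng(0)
feats = [rng.standard_normal((64, 64)) for _ in range(2)]
tmpls = [rng.standard_normal((64, 64)) for _ in range(2)]
resp_equal = fused_response(feats, tmpls)                       # fixed equal weights
resp_adapt = fused_response(feats, tmpls, np.array([0.7, 0.3])) # adaptive weights
peak = np.unravel_index(np.argmax(resp_adapt), resp_adapt.shape)  # predicted shift
```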