Marrying Tracking with ELM: A Metric Constraint Guided Multiple Feature Fusion Method
Object tracking is an important problem in computer vision and surveillance systems. Existing models mainly exploit a single-view feature (e.g., color, texture, or shape) to solve the problem, and therefore fail to describe objects comprehensively. In this paper, we approach the problem from a multi-view perspective, leveraging the complementary and latent information across views, so as to be robust to partial occlusion and background clutter, especially when nearby objects are similar to the target, while also addressing tracking drift.
However, a multi-view fusion strategy inevitably makes tracking inefficient. To this end, we propose to marry ELM (extreme learning machine) to multi-view fusion, training the global hidden-layer output weights so as to effectively exploit the local information from each view.
Following this principle, we propose a novel method to select the optimal sample as the target object, which avoids the tracking drift caused by noisy samples. Our method is evaluated on 12 challenging image sequences with different attributes, including illumination, occlusion, and deformation, and demonstrates better effectiveness and robustness than several state-of-the-art methods.

Comment: arXiv admin note: substantial text overlap with arXiv:1807.1021
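The training step both abstracts build on is the closed-form ELM solve: random, fixed input weights followed by a regularized least-squares fit of the output weights. A minimal sketch (names, shapes, and the ridge parameter are illustrative assumptions, not the papers' code):

```python
import numpy as np

# Hedged sketch of plain ELM training: the input weights are drawn at random
# and never updated; only the hidden-to-output weights are solved in closed
# form, which is why ELM trains quickly.
rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50, C=1.0):
    """beta = (H^T H + I/C)^{-1} H^T T  (ridge-regularized least squares)."""
    Win = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Win + b)                           # hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return Win, b, beta

def elm_predict(X, Win, b, beta):
    return np.tanh(X @ Win + b) @ beta
```

Because only the linear solve depends on the data, retraining per frame stays cheap, which is the efficiency property the abstracts rely on.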
Robust Tracking via Weighted Online Extreme Learning Machine
Tracking methods based on the extreme learning machine (ELM) are efficient and effective. ELM randomly generates the input weights and biases of the hidden layer, and then computes the output weights by reducing the iterative solution to a system of linear equations. ELM therefore offers better classification performance and faster training than other discriminative models used in tracking. However, the original ELM often suffers from an imbalanced classification distribution: the few target samples lead to under-fitting, while the many background samples lead to over-fitting. Worse still, this reduces the robustness of tracking under conditions such as occlusion and illumination change. To
address these problems, we present a robust tracking algorithm in this paper. First, we introduce into the original ELM a local weight matrix, created dynamically from the data distribution at the current frame, to balance the empirical and structural risk and to fully learn the target object, thereby enhancing classification performance. Second, we extend the method to incremental learning, keeping tracking real-time and efficient. Finally, a forgetting factor strengthens robustness to changes in the classification distribution over time. Meanwhile, we propose a novel optimization method to select the optimal sample as the target object, which avoids the tracking drift caused by noisy samples. Our tracking method can thus fully learn both the target object and the background. It is evaluated on 20 challenging image sequences with different attributes, including illumination, occlusion, and deformation, and achieves better effectiveness and robustness than several state-of-the-art methods.
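The two main ingredients of the second abstract — a per-sample weight matrix that counteracts class imbalance, and an incremental update with a forgetting factor — can be sketched as follows. The function names, shapes, and the inverse-class-frequency weighting scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of a weighted ELM plus an RLS-style incremental update with
# a forgetting factor, in the spirit of the abstract.
rng = np.random.default_rng(0)

def weighted_elm_train(X, T, y, n_hidden=40, C=1.0):
    """Weighted ELM: beta = (H^T W H + I/C)^{-1} H^T W T.

    The diagonal weight matrix W gives each sample a weight inversely
    proportional to its class size, so the few target samples are not
    drowned out by the many background samples.
    """
    Win = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Win + b)
    counts = {c: np.sum(y == c) for c in np.unique(y)}
    w = np.array([1.0 / counts[c] for c in y])      # per-sample weights
    HtW = H.T * w                                   # H^T diag(w)
    beta = np.linalg.solve(HtW @ H + np.eye(n_hidden) / C, HtW @ T)
    return Win, b, beta

def oselm_update(P, beta, H_k, T_k, lam=1.0):
    """One chunk update with forgetting factor lam (0 < lam <= 1).

    P is the current inverse covariance (H^T H + I/C)^{-1}; lam < 1 decays
    the influence of old frames as the classification distribution drifts.
    """
    S = lam * np.eye(H_k.shape[0]) + H_k @ P @ H_k.T
    K = np.linalg.solve(S, H_k @ P).T               # gain: P H_k^T S^{-1}
    P = (P - K @ H_k @ P) / lam
    beta = beta + K @ (T_k - H_k @ beta)
    return P, beta
```

With lam = 1 the chunk updates reproduce the batch ridge solution exactly; with lam < 1 old samples are gradually forgotten, which is the role the abstract assigns to the forgetting factor.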