3 research outputs found
Extending Multi-Object Tracking systems to better exploit appearance and 3D information
Tracking multiple objects in real time is essential for a variety of
real-world applications, with the self-driving industry at the forefront. This
work involves exploiting temporally varying appearance and motion information
for tracking. Siamese networks have recently become highly successful at
appearance-based single-object tracking, and Recurrent Neural Networks have
started to dominate both motion- and appearance-based tracking. Our work focuses
on combining Siamese networks and RNNs to exploit appearance and motion
information respectively to build a joint system capable of real time
multi-object tracking. We further explore heuristics based constraints for
tracking in the Birds Eye View Space for efficiently exploiting 3D information
as a constrained optimization problem for track prediction.Comment: 7 page
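The abstract's core idea of matching tracks by appearance with a Siamese network can be illustrated with a minimal sketch. This is not the paper's implementation: the embedding is a stand-in for a learned Siamese branch, the candidate regions would in practice come from the motion model (the RNN mentioned above), and all function names here are hypothetical.

```python
import numpy as np

def embed(patch):
    # Stand-in for a shared Siamese embedding network: flatten the patch
    # and L2-normalize so the dot product below is a cosine similarity.
    v = np.asarray(patch, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def match_score(template_patch, candidate_patch):
    # Both inputs pass through the SAME embedding (the defining property
    # of a Siamese network); the score is their cosine similarity.
    return float(embed(template_patch) @ embed(candidate_patch))

def best_candidate(template_patch, candidate_patches):
    # Assign the track to the candidate with the highest appearance score.
    scores = [match_score(template_patch, c) for c in candidate_patches]
    return int(np.argmax(scores)), scores
```

In a full multi-object tracker, these per-pair scores would feed an assignment step (e.g. Hungarian matching) across all tracks and detections.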
Beyond Background-Aware Correlation Filters: Adaptive Context Modeling by Hand-Crafted and Deep RGB Features for Visual Tracking
In recent years, background-aware correlation filters have attracted
considerable research interest in visual target tracking. However, these methods
cannot suitably model the target appearance due to the exploitation of
hand-crafted features. On the other hand, the recent deep learning-based visual
tracking methods have provided a competitive performance along with extensive
computations. In this paper, an adaptive background-aware correlation
filter-based tracker is proposed that effectively models the target appearance
by using either the histogram of oriented gradients (HOG) or convolutional
neural network (CNN) feature maps. The proposed method exploits the fast 2D
non-maximum suppression (NMS) algorithm and the semantic information comparison
to detect challenging situations. When the HOG-based response map is not
reliable, or the context region has a low semantic similarity with prior
regions, the proposed method constructs the CNN context model to improve the
target region estimation. Furthermore, the rejection option allows the proposed
method to update the CNN context model only on valid regions. Comprehensive
experimental results demonstrate that the proposed adaptive method clearly
outperforms state-of-the-art methods in the accuracy and robustness of visual
target tracking on the OTB-50, OTB-100, TC-128, UAV-123, and VOT-2015 datasets.
Comment: To appear in Multimedia Tools and Applications, Springer, 202
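The fast 2D non-maximum suppression mentioned in the abstract can be sketched as a greedy peak-picking pass over a correlation response map. This is an illustrative version, not the paper's specific algorithm; the `window` and `thresh` parameters are assumptions.

```python
import numpy as np

def nms_2d(response, window=3, thresh=0.5):
    # Greedy 2D non-maximum suppression over a response map: repeatedly
    # take the highest remaining peak above `thresh`, then suppress a
    # (2*window+1) x (2*window+1) neighborhood around it.
    r = response.astype(float).copy()
    peaks = []
    while True:
        idx = np.unravel_index(np.argmax(r), r.shape)
        if r[idx] < thresh:
            break
        peaks.append(idx)
        y, x = idx
        y0, y1 = max(0, y - window), min(r.shape[0], y + window + 1)
        x0, x1 = max(0, x - window), min(r.shape[1], x + window + 1)
        r[y0:y1, x0:x1] = -np.inf  # suppress the neighborhood
    return peaks
```

In the paper's setting, a map yielding no confident peak (or several ambiguous ones) would flag a challenging situation and trigger the CNN context model.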
AAA: Adaptive Aggregation of Arbitrary Online Trackers with Theoretical Performance Guarantee
For visual object tracking, it is difficult to realize a single online tracker
that handles the huge variations in target appearance across image
sequences. This paper proposes an online tracking method that adaptively
aggregates arbitrary multiple online trackers. The performance of the proposed
method is theoretically guaranteed to be comparable to that of the best tracker
for any image sequence, even though the best expert is unknown during tracking.
Experiments over a wide range of benchmark datasets and
aggregated trackers demonstrate that the proposed method can achieve
state-of-the-art performance. The code is available at
https://github.com/songheony/AAA-journal
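Guarantees of the kind stated in this abstract come from the prediction-with-expert-advice literature, where an exponential-weights (Hedge-style) update keeps the aggregator's regret close to that of the best expert in hindsight. The sketch below shows that generic scheme, not AAA's actual algorithm; the loss matrix, learning rate `eta`, and argmax selection rule are all illustrative assumptions.

```python
import numpy as np

def hedge_aggregate(expert_losses, eta=0.5):
    # Exponential-weights (Hedge) aggregation: at each step, follow the
    # currently highest-weighted expert, then downweight every expert by
    # exp(-eta * loss). expert_losses has shape (n_steps, n_experts).
    n_steps, n_experts = expert_losses.shape
    w = np.ones(n_experts)
    choices = []
    for t in range(n_steps):
        choices.append(int(np.argmax(w)))   # follow the leading expert
        w = w * np.exp(-eta * expert_losses[t])
        w = w / w.sum()                      # normalize for stability
    return choices, w
```

The theoretical appeal is that the cumulative loss of such a scheme is provably within a sublinear regret term of the best fixed expert, even though that expert is only identifiable after the fact.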