Simple Online and Realtime Tracking with a Deep Association Metric
Simple Online and Realtime Tracking (SORT) is a pragmatic approach to
multiple object tracking with a focus on simple, effective algorithms. In this
paper, we integrate appearance information to improve the performance of SORT.
Due to this extension we are able to track objects through longer periods of
occlusions, effectively reducing the number of identity switches. In spirit of
the original framework we place much of the computational complexity into an
offline pre-training stage where we learn a deep association metric on a
large-scale person re-identification dataset. During online application, we
establish measurement-to-track associations using nearest neighbor queries in
visual appearance space. Experimental evaluation shows that our extensions
reduce the number of identity switches by 45%, achieving overall competitive
performance at high frame rates. Comment: 5 pages, 1 figure
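The measurement-to-track association described above can be sketched in a few lines: detections and tracks are compared by cosine distance between their appearance embeddings, and pairs below a gating threshold are matched greedily. This is a minimal illustration, not the paper's implementation; the embedding vectors, the `max_distance` gate, and the greedy (rather than Hungarian) matching are all simplifying assumptions.

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two appearance embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def associate(tracks, detections, max_distance=0.2):
    """Greedy nearest-neighbour matching of detections to tracks.

    tracks, detections: dicts mapping id -> embedding vector.
    Returns a list of (track_id, detection_id) matches whose
    appearance distance passes the gate.
    """
    pairs = sorted(
        ((cosine_distance(t_emb, d_emb), t_id, d_id)
         for t_id, t_emb in tracks.items()
         for d_id, d_emb in detections.items()),
        key=lambda p: p[0],
    )
    matches, used_t, used_d = [], set(), set()
    for dist, t_id, d_id in pairs:
        if dist <= max_distance and t_id not in used_t and d_id not in used_d:
            matches.append((t_id, d_id))
            used_t.add(t_id)
            used_d.add(d_id)
    return matches
```

In the full tracker this appearance cost is typically combined with a motion gate before matching; here only the appearance term is shown.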
Next Generation Multitarget Trackers: Random Finite Set Methods vs Transformer-based Deep Learning
Multitarget Tracking (MTT) is the problem of tracking the states of an unknown number of objects using noisy measurements, with important applications to autonomous driving, surveillance, robotics, and others. In the model-based Bayesian setting, there are conjugate priors that enable us to express the multi-object posterior in closed form, which could theoretically provide Bayes-optimal estimates. However, the posterior involves a super-exponential growth of the number of hypotheses over time, forcing state-of-the-art methods to resort to approximations to remain tractable, which can impact their performance in complex scenarios. Model-free methods based on deep learning provide an attractive alternative, as they can, in principle, learn the optimal filter from data, but to the best of our knowledge they have never been compared to current state-of-the-art Bayesian filters, especially not in contexts where accurate models are available. In this paper, we propose a high-performing deep-learning method for MTT based on the Transformer architecture and compare it to two state-of-the-art Bayesian filters, in a setting where we assume the correct model is provided. Although this gives an edge to the model-based filters, it also allows us to generate unlimited training data. We show that the proposed model outperforms state-of-the-art Bayesian filters in complex scenarios, while matching their performance in simpler cases, which validates the applicability of deep learning also in the model-based regime. The code for all our implementations is made available at https://github.com/JulianoLagana/MT3
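The key enabler for training in the model-based regime is that a known motion and measurement model can generate unlimited labelled data. The sketch below is a hypothetical, heavily simplified 1-D simulator (not the paper's setup): constant-velocity targets appear at random, are detected with some probability, and false alarms are mixed in, yielding (measurements, ground truth) pairs of the kind a learned filter could train on. All parameter names and values here are illustrative assumptions.

```python
import random

def simulate_scene(n_steps=10, birth_prob=0.3, detect_prob=0.9,
                   max_clutter=2, meas_noise=0.1, seed=0):
    """Simulate 1-D constant-velocity targets with noisy measurements.

    Returns per-step measurement lists and the corresponding ground-truth
    target positions, usable as (input, label) training pairs.
    """
    rng = random.Random(seed)
    targets = []  # each target is [position, velocity]
    measurements, truth = [], []
    for _ in range(n_steps):
        # Random target births.
        if rng.random() < birth_prob:
            targets.append([rng.uniform(0.0, 10.0), rng.uniform(-1.0, 1.0)])
        # Constant-velocity motion model.
        for t in targets:
            t[0] += t[1]
        step_truth = [t[0] for t in targets]
        # Noisy detections, each missed with probability 1 - detect_prob.
        step_meas = [t[0] + rng.gauss(0.0, meas_noise)
                     for t in targets if rng.random() < detect_prob]
        # Uniform false alarms (clutter).
        step_meas += [rng.uniform(0.0, 10.0)
                      for _ in range(rng.randint(0, max_clutter))]
        rng.shuffle(step_meas)  # the filter must not know the ordering
        measurements.append(step_meas)
        truth.append(step_truth)
    return measurements, truth
```

Because every sampled scene comes with exact ground truth, a data-driven tracker can be trained end-to-end on as many scenes as needed, which is the advantage the abstract refers to.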