Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking
Abstract—Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.
MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking
Standardized benchmarks have been crucial in pushing the performance of
computer vision algorithms, especially since the advent of deep learning.
Although leaderboards should not be over-claimed, they often provide the most
objective measure of performance and are therefore important guides for
research. We present MOTChallenge, a benchmark for single-camera Multiple
Object Tracking (MOT) launched in late 2014, to collect existing and new data,
and create a framework for the standardized evaluation of multiple object
tracking methods. The benchmark is focused on multiple people tracking, since
pedestrians are by far the most studied object in the tracking community, with
applications ranging from robot navigation to self-driving cars. This paper
collects the first three releases of the benchmark: (i) MOT15, along with
numerous state-of-the-art results that were submitted in the last years, (ii)
MOT16, which contains new challenging videos, and (iii) MOT17, which extends
MOT16 sequences with more precise labels and evaluates tracking performance on
three different object detectors. The second and third releases not only offer
a significant increase in the number of labeled boxes but also provide labels
for multiple object classes besides pedestrians, as well as the visibility
level of every single object of interest. We finally provide a
categorization of state-of-the-art trackers and a broad error analysis. This
will help newcomers understand the related work and research trends in the MOT
community, and hopefully shed some light on potential future research
directions.
Comment: Accepted at IJC
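MOTChallenge's headline summary metric, MOTA, folds three error types into a single score. A minimal sketch of the formula, assuming the error counts have already been produced by a per-frame matching of tracker output to ground truth:

```python
def mota(false_negatives, false_positives, id_switches, num_gt_boxes):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT.

    Counts are accumulated over the whole sequence; a perfect tracker
    scores 1.0, and the score can go negative when errors exceed the
    number of ground-truth boxes.
    """
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_boxes

# Illustrative numbers: 120 misses, 80 false alarms, 10 identity
# switches over 1000 annotated boxes.
score = mota(120, 80, 10, 1000)
```

The benchmark reports further metrics (e.g. identity-preservation scores) alongside MOTA; this sketch covers only the headline accuracy term.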
Joint Detection and Tracking in Videos with Identification Features
Recent works have shown that combining object detection and tracking tasks,
in the case of video data, results in higher performance for both tasks, but
they require a high frame rate to perform well. This assumption is often
violated in real-world applications, where models run on embedded devices,
often at only a few frames per second.
Videos at low frame rates suffer from large object displacements. Here,
re-identification features can help match detections of objects with large
displacements, but current joint detection and re-identification
formulations degrade detector performance, as the two are conflicting tasks.
In real-world applications, keeping separate detector and re-id models is
often not feasible, as both the memory and runtime effectively double.
Towards robust long-term tracking applicable to reduced-computational-power
devices, we propose the first joint optimization of detection, tracking and
re-identification features for videos. Notably, our joint optimization
maintains the detector performance, a typical multi-task challenge. At
inference time, we leverage detections for tracking (tracking-by-detection)
when the objects are visible, detectable and slowly moving in the image. We
leverage instead re-identification features to match objects which disappeared
(e.g. due to occlusion) for several frames or were not tracked due to fast
motion (or low-frame-rate videos). Our proposed method reaches
state-of-the-art performance on MOT; it ranks 1st among online trackers in
the UA-DETRAC'18 tracking challenge, and 3rd overall.
Comment: Accepted at Image and Vision Computing Journal
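The inference-time logic described above, tracking-by-detection while objects are visible and slowly moving, with re-identification features as a fallback for large displacements, can be sketched as follows. All names, thresholds, and the cosine-similarity fallback are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(track, detections, embeddings, iou_thresh=0.5, sim_thresh=0.6):
    """Match one track to a detection index, or None if no match.

    1. Spatial matching (tracking-by-detection) for visible, slowly
       moving objects.
    2. Re-id embedding similarity as a fallback for objects that
       reappeared after occlusion or jumped due to a low frame rate.
    """
    ious = [iou(track["box"], d) for d in detections]
    best = int(np.argmax(ious))
    if ious[best] >= iou_thresh:
        return best
    # Fallback: cosine similarity on unit-norm re-id embeddings.
    sims = [float(np.dot(track["emb"], e)) for e in embeddings]
    best = int(np.argmax(sims))
    return best if sims[best] >= sim_thresh else None
```

The point of the joint formulation in the paper is that the detector, tracker, and the `emb` features above come from one network rather than two separate models.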
Novel data association methods for online multiple human tracking
PhD Thesis
Video-based multiple human tracking has played a crucial role in many
applications such as intelligent video surveillance, human behavior
analysis, and health-care systems. The detection-based tracking framework
has become the dominant paradigm in this research field, and the major task
is to accurately perform the data association between detections across the
frames. However, online multiple human tracking, which relies merely on the
detections given up to the present time for the data association, becomes
more challenging with noisy detections, missed detections, and occlusions.
To address these challenging problems, three novel data association methods
for online multiple human tracking are presented in this thesis:
online group-structured dictionary learning, enhanced detection reliability,
and multi-level cooperative fusion.
The first proposed method aims to address the noisy detections and
occlusions. In this method, sequential Monte Carlo probability hypothesis
density (SMC-PHD) filtering is the core element for accomplishing the
tracking task, where the measurements are produced by the detection-based
tracking framework. To enhance the measurement model, a novel adaptive
gating strategy is developed to aid the classification of measurements. In
addition, online group-structured dictionary learning with a maximum voting
method is proposed to robustly estimate the target birth intensity. It
enables the new-born targets in the tracking process to be accurately
initialized from noisy sensor measurements. To improve the adaptability of
the group-structured dictionary to target appearance changes, the
simultaneous codeword optimization (SimCO) algorithm is employed for the
dictionary update.
The second proposed method relates to accurate measurement selection of
detections, further refining the noisy detections prior to the tracking
pipeline. In order to achieve more reliable measurements in the Gaussian
mixture (GM)-PHD filtering process, a global-to-local enhanced confidence
rescoring strategy is proposed by exploiting the classification power of a
mask region-convolutional neural network (R-CNN). Then, an improved pruning
algorithm, namely soft-aggregated non-maximal suppression (Soft-ANMS), is
devised to further enhance the selection step. In addition, to avoid the
misuse of ambiguous measurements in the tracking process, person
re-identification (ReID) features driven by convolutional neural networks
(CNNs) are integrated to model the target appearances.
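For context, Soft-ANMS builds on the soft non-maximal suppression idea, in which overlapping detections have their scores decayed rather than being deleted outright. A minimal sketch of that base algorithm with a Gaussian penalty; the aggregation step that distinguishes Soft-ANMS is not reproduced here, and the thresholds are illustrative:

```python
import math

def box_iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.1):
    """Soft-NMS: decay scores of boxes overlapping the current best
    detection, dropping a box only when its score falls below the
    threshold. Returns indices in selection order."""
    scores = [float(s) for s in scores]
    keep, idxs = [], list(range(len(boxes)))
    while idxs:
        m = max(idxs, key=lambda i: scores[i])  # highest remaining score
        keep.append(m)
        idxs.remove(m)
        for i in idxs:
            o = box_iou(boxes[m], boxes[i])
            scores[i] *= math.exp(-o * o / sigma)  # Gaussian penalty
        idxs = [i for i in idxs if scores[i] >= score_thresh]
    return keep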
The third proposed method focuses on addressing the issues of missed
detections and occlusions. This method integrates two human detectors with
different characteristics (full-body and body-parts) in the GM-PHD filter,
and investigates their complementary benefits for tracking multiple targets.
For each detector domain, a novel discriminative correlation matching (DCM)
model for integration in the feature-level fusion is proposed, and together
with spatio-temporal information is used to reduce the ambiguous identity
associations in the GM-PHD filter. Moreover, a robust fusion center is
proposed within the decision-level fusion to mitigate the sensitivity to
missed detections in the fusion process, thereby improving the fusion
performance and tracking consistency.
The effectiveness of these proposed methods is investigated using the
MOTChallenge benchmark, which is a framework for the standardized evaluation
of multiple object tracking methods. Detailed evaluations on challenging
video datasets, as well as comparisons with recent state-of-the-art
techniques, confirm the improved multiple human tracking performance.
Tracking by Prediction: A Deep Generative Model for Multi-Person Localisation and Tracking
Current multi-person localisation and tracking systems have an
over-reliance on the use of appearance models for target re-identification,
and almost no approaches employ a complete deep learning solution for both
objectives. We present a novel, complete deep learning framework for
multi-person localisation and tracking. In this context we first introduce
a lightweight sequential Generative Adversarial Network architecture for
person localisation, which overcomes issues related to occlusions and noisy
detections, typically found in a multi-person environment. In the proposed
tracking framework we build upon recent advances in pedestrian trajectory
prediction approaches and propose a novel data association scheme based on
predicted trajectories. This removes the need for computationally expensive
person re-identification systems based on appearance features and generates
human-like trajectories with minimal fragmentation. The proposed method is
evaluated on multiple public benchmarks, including both static and dynamic
cameras, and achieves outstanding performance, especially among other
recently proposed deep neural network based approaches.
Comment: To appear in IEEE Winter Conference on Applications of Computer
Vision (WACV), 201
Single to multiple target, multiple type visual tracking
Visual tracking is a key task in applications such as intelligent surveillance, human-computer interaction (HCI), human-robot interaction (HRI), augmented reality (AR), driver assistance systems, and medical applications. In this thesis, we make three main novel contributions for target tracking in video sequences.
First, we develop a long-term model-free single target tracking by learning discriminative correlation filters and an online classifier that can track a target of interest in both sparse and crowded scenes. In this case, we learn two different correlation filters, translation and scale correlation filters, using different visual features. We also include a re-detection module that can re-initialize the tracker in case of tracking failures due to long-term occlusions.
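The discriminative correlation filters mentioned above are typically learned in closed form in the Fourier domain. A minimal single-channel, single-frame sketch in the style of MOSSE, under the assumption of element-wise division with a small regulariser λ; the thesis's translation and scale filters use multi-channel features and online updates not shown here:

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-3):
    """Closed-form correlation filter in the Fourier domain:
    H = (G * conj(F)) / (F * conj(F) + lam), element-wise,
    where F, G are the FFTs of the patch and the desired response."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlate a new patch with the learned filter; the location of
    the response peak gives the target's translation."""
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * H))

# Desired response: an ideal peak at the target centre (a Gaussian
# centred on the target is the common choice in practice).
rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
g = np.zeros((32, 32))
g[16, 16] = 1.0
H = train_filter(patch, g)
peak = np.unravel_index(np.argmax(respond(H, patch)), (32, 32))
```

Scale estimation in such trackers is handled analogously with a second filter run over a pyramid of resampled patches.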
Second, a multiple target, multiple type filtering algorithm is developed using Random Finite Set (RFS) theory. In particular, we extend the standard Probability Hypothesis Density (PHD) filter to multiple types of targets, each with distinct detection properties, to develop multiple target, multiple type filtering, the N-type PHD filter, where N ≥ 2, for handling confusions that can occur among target types at the measurement level. This method takes into account not only background false positives (clutter), but also confusions between target detections, which are in general different in character from background clutter. Then, under the assumptions of Gaussianity and linearity, we extend the Gaussian mixture (GM) implementation of the standard PHD filter to the proposed N-type PHD filter, termed the N-type GM-PHD filter.
Third, we apply this N-type GM-PHD filter to real video sequences by integrating object detectors’ information into this filter for two scenarios. In the first scenario, a tri-GM-PHD filter is applied to real video sequences containing three types of multiple targets in the same scene, two football teams and a referee, using separate but confused detections. In the second scenario, we use a dual GM-PHD filter for tracking pedestrians and vehicles in the same scene handling their detectors’ confusions. For both cases, Munkres’s variant of the Hungarian assignment algorithm is used to associate tracked target identities between frames.
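Munkres's variant of the Hungarian algorithm, used here to associate identities between frames, finds the minimum-cost one-to-one assignment of tracked identities to new detections. A brute-force stand-in that yields the same result for the small target counts in these scenes (real implementations use the polynomial-time Munkres algorithm); the cost values are hypothetical pairwise distances:

```python
from itertools import permutations

def assign(cost):
    """Minimum-cost one-to-one assignment of n tracks to n detections.

    cost[i][j] is the dissimilarity between track i and detection j.
    Returns (assignment, total_cost), where assignment[i] is the
    detection index matched to track i. O(n!) - a placeholder for the
    Munkres/Hungarian algorithm, which runs in O(n^3).
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# Hypothetical distances between 3 tracked identities and 3 detections.
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.7, 0.3]]
matching, total = assign(cost)
```

In the tracking loop, unmatched detections spawn new tracks and tracks left unmatched for several frames are terminated.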
We make extensive evaluations of these developed algorithms and find that our methods outperform their corresponding state-of-the-art approaches by a large margin.