A Multi-task Deep Network for Person Re-identification
Person re-identification (ReID) focuses on identifying people across
different scenes in video surveillance, which is usually formulated as a binary
classification task or a ranking task in current person ReID approaches. In
this paper, we take both tasks into account and propose a multi-task deep
network (MTDnet) that makes use of their respective advantages and jointly
optimizes the two tasks for person ReID. To the best of our knowledge, we
are the first to integrate both tasks in one network to solve person ReID.
We show that our proposed architecture significantly boosts the performance.
Furthermore, deep architectures in general require sufficient training data,
a requirement that is usually not met in person ReID. To cope with this situation,
we further extend the MTDnet and propose a cross-domain architecture that is
capable of using an auxiliary set to assist training on small target sets. In
the experiments, our approach outperforms most existing person ReID
algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS
and PRID2011, which clearly demonstrates the effectiveness of the proposed
approach. Comment: Accepted by AAAI201
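The combination of a binary same/different classification task with a ranking task can be sketched as a weighted multi-task objective. The following is a minimal NumPy illustration, not MTDnet's actual formulation; the loss forms, the margin, and the weighting alpha are assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Ranking term: pull anchor toward positive, push it away from negative."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def binary_cls_loss(logits, labels):
    """Classification term: same/different-person prediction (log loss)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    return -(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps)).mean()

def multi_task_loss(logits, labels, anchor, positive, negative, alpha=0.5):
    """Weighted sum of the two task losses, optimized jointly."""
    return alpha * binary_cls_loss(logits, labels) + \
           (1 - alpha) * triplet_loss(anchor, positive, negative)
```

In a real system both terms would be computed from the outputs of a shared deep network, so minimizing the combined loss trains one feature extractor for both tasks.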
Occluded Person Re-identification
Person re-identification (re-id) suffers from a serious occlusion problem
when applied to crowded public places. In this paper, we propose to retrieve a
full-body person image by using a person image with occlusions. This differs
significantly from the conventional person re-id problem where it is assumed
that person images are detected without any occlusion. We thus call this new
problem the occluded person re-identification. To address this new problem,
we propose a novel Attention Framework of Person Body (AFPB) based on deep
learning, consisting of 1) an Occlusion Simulator (OS) which automatically
generates artificial occlusions for full-body person images, and 2) multi-task
losses that force the neural network not only to discriminate a person's
identity but also to determine whether a sample is from the occluded data
distribution or the full-body data distribution. Experiments on a new occluded
person re-id dataset and three existing benchmarks modified to include
full-body person images and occluded person images show the superiority of the
proposed method. Comment: 6 pages, 7 figures, IEEE International Conference on Multimedia and
Expo 201
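An occlusion simulator of the kind described (the OS component) can be sketched as a function that pastes a random rectangle over a full-body image and labels the result as coming from the occluded distribution. The patch-size range and fill value below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def simulate_occlusion(image, rng, min_frac=0.2, max_frac=0.5, fill=0.0):
    """Paste a random rectangle over an H x W x C float image to mimic
    occlusion by objects or other pedestrians in crowded scenes."""
    h, w = image.shape[:2]
    # sample an occluder whose sides cover min_frac..max_frac of the image
    oh = int(h * rng.uniform(min_frac, max_frac))
    ow = int(w * rng.uniform(min_frac, max_frac))
    top = rng.integers(0, h - oh + 1)
    left = rng.integers(0, w - ow + 1)
    occluded = image.copy()
    occluded[top:top + oh, left:left + ow] = fill
    # label 1 = occluded data distribution, 0 = full-body data distribution
    return occluded, 1
```

Pairs of (original, 0) and (occluded, 1) samples would then supply the second multi-task loss that discriminates the two data distributions.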
Re-identification and semantic retrieval of pedestrians in video surveillance scenarios
Person re-identification consists of recognizing individuals across different sensors of a camera
network. Whereas clothing appearance cues are widely used, other modalities could
be exploited as additional information sources, like anthropometric measures and gait. In
this work we investigate whether the re-identification accuracy of clothing appearance descriptors
can be improved by fusing them with anthropometric measures extracted from
depth data, using RGB-D sensors, in unconstrained settings. We also propose a dissimilarity-based
framework for building and fusing multi-modal descriptors of pedestrian images for
re-identification tasks, as an alternative to the widely used score-level fusion. The experimental
evaluation is carried out on two data sets including RGB-D data, one of which is a
novel, publicly available data set that we acquired using Kinect sensors.
In this dissertation we also consider a related task, named semantic retrieval of pedestrians
in video surveillance scenarios, which consists of searching images of individuals using
a textual description of clothing appearance as a query, given by a Boolean combination of
predefined attributes. This can be useful in applications like forensic video analysis, where
the query can be obtained from an eyewitness report. We propose a general method for implementing
semantic retrieval as an extension of a given re-identification system that uses any
multiple part-multiple component appearance descriptor. Additionally, we investigate
deep learning techniques to improve both the accuracy of attribute detectors and their generalization
capabilities. Finally, we experimentally evaluate our methods on several benchmark
datasets originally built for re-identification tasks.
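Semantic retrieval with a Boolean combination of predefined attributes can be illustrated with a small recursive query evaluator over detected attribute sets. The attribute names and the nested-tuple query encoding below are hypothetical; the dissertation's actual query language may differ:

```python
def matches_query(attributes, query):
    """Evaluate a Boolean combination of predefined attributes against
    the detected attribute set of one pedestrian image."""
    if isinstance(query, str):                   # leaf: a single attribute
        return query in attributes
    op, *operands = query
    if op == "AND":
        return all(matches_query(attributes, q) for q in operands)
    if op == "OR":
        return any(matches_query(attributes, q) for q in operands)
    if op == "NOT":
        return not matches_query(attributes, operands[0])
    raise ValueError(f"unknown operator: {op}")

def semantic_retrieval(gallery, query):
    """Return ids of gallery images whose detected attributes satisfy the query."""
    return [img_id for img_id, attrs in gallery.items()
            if matches_query(attrs, query)]
```

In practice the attribute sets would come from the per-part attribute detectors of the underlying re-identification descriptor rather than from manual annotation.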
Deep Representation Learning for Vehicle Re-Identification
With the widespread use of surveillance cameras in cities and on motorways, computer vision based intelligent systems are becoming a standard in the industry. Vehicle related problems such as Automatic License Plate Recognition have been addressed by computer vision systems, albeit in controlled settings (e.g. cameras installed at toll gates). With research data becoming freely available in the last few years, surveillance footage analysis for vehicle related problems is being studied with a computer vision focus. In this thesis, vision-based approaches for the problem of vehicle re-identification are investigated and original approaches are presented for various challenges of the problem.

Computer vision based systems have advanced considerably in the last decade due to rapid improvements in machine learning with the advent of deep learning and convolutional neural networks (CNNs). At the core of the paradigm shift that deep learning has brought to machine learning is feature learning by multiple stacked neural network layers. Compared to traditional machine learning methods that rely on hand-crafted feature extraction and shallow model learning, deep neural networks can learn hierarchical feature representations as input data are transformed from low-level to high-level representations through consecutive neural network layers. Furthermore, machine learning tasks are trained in an end-to-end fashion that integrates feature extraction and learning into a combined framework using neural networks.

This thesis focuses on visual feature learning with deep convolutional neural networks for the vehicle re-identification problem. The problem of re-identification has attracted attention from the computer vision community, especially in the person re-identification domain, whereas vehicle re-identification is relatively understudied. Re-identification is the problem of matching identities of subjects in images.
The images come from non-overlapping viewing angles captured at varying locations, illuminations, etc. Compared to person re-identification, vehicle re-identification is particularly challenging, as vehicles are manufactured to have the same visual appearance and shape, which makes different instances visually indistinguishable. This thesis investigates solutions for the aforementioned challenges and makes the following contributions, improving the accuracy and robustness of recent approaches:
(1) Exploring the man-made nature of vehicles, that is, their hierarchical categories such as type (e.g. sedan, SUV) and model (e.g. Audi-2011-A4), and their usefulness in identity matching when pairwise identity labelling is not present.
(2) A new vehicle re-identification benchmark, Vehicle Re-Identification in Context (VRIC), is introduced to enable the design and evaluation of vehicle re-id methods under conditions that more closely reflect real-world applications than existing benchmarks. VRIC is uniquely characterised by unconstrained vehicle images in low resolution, from wide field of view traffic scene videos exhibiting variations of illumination, motion blur, and occlusion.
(3) We evaluate the advantages of Multi-Scale Visual Representation (MSVR) in multi-scale cross-camera matching by training a multi-branch CNN model for vehicle re-identification, enabled by the availability of low resolution images in VRIC. Experimental results indicate that this approach is useful in real-world settings where image resolution is low and varies across cameras.
(4) With Multi-Task Mutual Learning (MTML) we propose a multi-modal representation learning approach, e.g. using orientation as well as identity labels in training. We utilise deep convolutional neural networks with multiple branches to facilitate the learning of multi-modal and multi-scale deep features that increase re-identification performance, as well as orientation-invariant feature learning.
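The idea of supervising shared features with both identity and orientation labels, as in MTML, can be sketched as two classification heads over one feature vector. This is a toy NumPy illustration with linear heads; the thesis uses multi-branch CNNs, and the weighting beta is an assumption:

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-likelihood of the true class."""
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def mtml_loss(features, w_id, w_orient, id_labels, orient_labels, beta=0.5):
    """Shared features feed two linear heads, identity and orientation;
    joint supervision encourages orientation-aware identity features."""
    id_logits = features @ w_id
    orient_logits = features @ w_orient
    return beta * cross_entropy(id_logits, id_labels) + \
           (1 - beta) * cross_entropy(orient_logits, orient_labels)
```

Because both heads backpropagate into the same feature extractor, gradients from the orientation task regularise the identity features toward orientation invariance.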
Visible-Infrared Person Re-Identification Using Privileged Intermediate Information
Visible-infrared person re-identification (ReID) aims to recognize the same
person of interest across a network of RGB and IR cameras. Some deep learning
(DL) models have directly incorporated both modalities to discriminate persons
in a joint representation space. However, this cross-modal ReID problem remains
challenging due to the large domain shift in data distributions between RGB and
IR modalities. This paper introduces a novel approach for creating an
intermediate virtual domain that acts as a bridge between the two main domains
(i.e., RGB and IR modalities) during training. This intermediate domain is
considered as privileged information (PI) that is unavailable at test time, and
allows formulating this cross-modal matching task as a problem in learning
under privileged information (LUPI). We devised a new method to generate images
between visible and infrared domains that provide additional information to
train a deep ReID model through an intermediate domain adaptation. In
particular, by employing color-free and multi-step triplet loss objectives
during training, our method provides common feature representation spaces that
are robust to large visible-infrared domain shifts. Experimental results on
challenging visible-infrared ReID datasets indicate that our proposed approach
consistently improves matching accuracy, without any computational overhead at
test time. The code is available at:
\href{https://github.com/alehdaghi/Cross-Modal-Re-ID-via-LUPI}{https://github.com/alehdaghi/Cross-Modal-Re-ID-via-LUPI}
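A simple stand-in for the color-free intermediate domain is a grayscale rendering of the visible image, and the multi-step triplet idea can be sketched as consecutive triplet terms stepping from visible to intermediate to infrared features. Both the generator and the loss below are rough assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def color_free_intermediate(rgb):
    """Map an H x W x 3 RGB image to a color-free (grayscale) image replicated
    over three channels, a crude proxy for the intermediate virtual domain."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return np.repeat(gray[..., None], 3, axis=2)

def triplet(anchor, positive, negative, margin=0.3):
    """Standard margin-based triplet term over batches of feature vectors."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def multi_step_triplet(vis, mid, ir, mid_neg, ir_neg, margin=0.3):
    """Two-step objective: pull visible features toward the intermediate
    domain, then the intermediate toward infrared, crossing the gap in steps."""
    return triplet(vis, mid, mid_neg, margin) + triplet(mid, ir, ir_neg, margin)
```

Since the intermediate images are only used to shape the training objective, they qualify as privileged information: nothing extra is computed at test time.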