Multi-scale Deep Learning Architectures for Person Re-identification
Person Re-identification (re-id) aims to match people across non-overlapping
camera views in a public space. It is a challenging problem because many people
captured in surveillance videos wear similar clothes. Consequently, the
differences in their appearance are often subtle and only detectable at the
right locations and scales. Existing re-id models, particularly the recently
proposed deep learning based ones, match people at a single scale. In contrast,
in this paper, a novel multi-scale deep learning model is proposed. Our model
is able to learn deep discriminative feature representations at different
scales and automatically determine the most suitable scales for matching. The
importance of different spatial locations for extracting discriminative
features is also learned explicitly. Experiments are carried out to demonstrate
that the proposed model outperforms the state of the art on a number of
benchmarks.
Comment: 9 pages, 3 figures, accepted by ICCV 201
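The multi-scale matching idea can be sketched roughly as follows. This is a toy illustration, not the paper's architecture: `extract_features` uses plain subsampling in place of deep CNN branches, and the fixed `scale_weights` stand in for the scale importances the model learns automatically.

```python
import numpy as np

def extract_features(image, scale):
    """Toy per-scale feature extractor: subsample the image at the given
    scale and L2-normalise. Stands in for a deep branch at one resolution."""
    h, w = image.shape
    rows = max(1, int(h * scale))
    cols = max(1, int(w * scale))
    ys = np.linspace(0, h - 1, rows).astype(int)
    xs = np.linspace(0, w - 1, cols).astype(int)
    feat = image[np.ix_(ys, xs)].ravel().astype(float)
    return feat / (np.linalg.norm(feat) + 1e-8)

def multiscale_distance(img_a, img_b, scales, scale_weights):
    """Weighted sum of per-scale distances; the weights play the role of
    the learned 'most suitable scales for matching'."""
    w = np.asarray(scale_weights, dtype=float)
    w = w / w.sum()  # normalise the (here hand-set) scale weights
    dists = [np.linalg.norm(extract_features(img_a, s) - extract_features(img_b, s))
             for s in scales]
    return float(np.dot(w, dists))

# Usage: a slightly perturbed copy of an image should be closer than an
# unrelated image under the combined multi-scale distance.
rng = np.random.default_rng(0)
a = rng.random((64, 32))
b = a + 0.01 * rng.random((64, 32))   # same person, subtle appearance change
c = rng.random((64, 32))              # different person
scales, weights = [0.25, 0.5, 1.0], [0.2, 0.3, 0.5]
d_same = multiscale_distance(a, b, scales, weights)
d_diff = multiscale_distance(a, c, scales, weights)
```

In the real model the per-scale branches and the scale weights would be trained jointly; here they only demonstrate how evidence from several scales is fused into one matching score.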
Person Re-identification with Correspondence Structure Learning
This paper addresses the problem of handling spatial misalignments due to
camera-view changes or human-pose variations in person re-identification. We
first introduce a boosting-based approach to learn a correspondence structure
which indicates the patch-wise matching probabilities between images from a
target camera pair. The learned correspondence structure can not only capture
the spatial correspondence pattern between cameras but also handle the
viewpoint or human-pose variation in individual images. We further introduce a
global-based matching process. It integrates a global matching constraint over
the learned correspondence structure to exclude cross-view misalignments during
the image patch matching process, hence achieving a more reliable matching
score between images. Experimental results on various datasets demonstrate the
effectiveness of our approach.
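The patch-wise matching with a correspondence structure can be sketched as below. This is a minimal illustration under simplifying assumptions: patch features are plain means rather than learned descriptors, the correspondence matrix `P` is supplied instead of boosted from data, and the global constraint is approximated by keeping only the best correspondence per probe patch.

```python
import numpy as np

def patch_features(image, rows, cols):
    """Split an image into a rows x cols grid and return one mean-value
    feature per patch (toy stand-in for real patch descriptors)."""
    feats = []
    for r in np.array_split(np.arange(image.shape[0]), rows):
        for c in np.array_split(np.arange(image.shape[1]), cols):
            feats.append(image[np.ix_(r, c)].mean())
    return np.array(feats)

def matching_score(feat_p, feat_q, P):
    """Correspondence-weighted matching score.
    P[i, j] is the (here given, not learned) probability that patch i in
    camera A corresponds to patch j in camera B. Keeping only the best
    correspondence per probe patch is a crude global constraint that
    suppresses inconsistent cross-view patch matches."""
    sim = np.exp(-np.abs(feat_p[:, None] - feat_q[None, :]))  # in (0, 1]
    weighted = P * sim
    return float(weighted.max(axis=1).sum())

# Usage: with an identity correspondence structure, an image matched
# against itself scores higher than against an unrelated image.
rng = np.random.default_rng(1)
a = rng.random((60, 30))
b = rng.random((60, 30))
fa = patch_features(a, 6, 3)   # 18 patches
fb = patch_features(b, 6, 3)
P = np.eye(18)                 # assume aligned views: patch i maps to patch i
score_same = matching_score(fa, fa, P)
score_diff = matching_score(fa, fb, P)
```

A learned `P` would instead concentrate probability on spatially shifted patch pairs, which is how the method absorbs viewpoint and pose misalignment.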
Occluded Person Re-identification
Person re-identification (re-id) suffers from a serious occlusion problem
when applied to crowded public places. In this paper, we propose to retrieve a
full-body person image by using a person image with occlusions. This differs
significantly from the conventional person re-id problem where it is assumed
that person images are detected without any occlusion. We thus call this new
problem occluded person re-identification. To address this new problem,
we propose a novel Attention Framework of Person Body (AFPB) based on deep
learning, consisting of 1) an Occlusion Simulator (OS) which automatically
generates artificial occlusions for full-body person images, and 2) multi-task
losses that force the neural network not only to discriminate a person's
identity but also to determine whether a sample is from the occluded data
distribution or the full-body data distribution. Experiments on a new occluded
person re-id dataset and three existing benchmarks modified to include
full-body person images and occluded person images show the superiority of the
proposed method.
Comment: 6 pages, 7 figures, IEEE International Conference on Multimedia and
Expo 201
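The Occlusion Simulator component can be sketched as follows. This is a guess at the general idea, not the paper's implementation: it pastes a random constant-valued rectangle onto a full-body image and emits the binary occluded-vs-full-body label used by the auxiliary task; the size range and fill value are assumptions.

```python
import numpy as np

def simulate_occlusion(image, rng, min_frac=0.2, max_frac=0.5, fill=0.0):
    """Toy Occlusion Simulator: mask a random rectangle of a full-body
    person image. Returns the occluded image and the label 1, marking it
    as a sample from the occluded data distribution for the multi-task
    losses described in the abstract."""
    occluded = image.copy()
    h, w = image.shape[:2]
    oh = int(rng.integers(int(min_frac * h), int(max_frac * h) + 1))
    ow = int(rng.integers(int(min_frac * w), int(max_frac * w) + 1))
    top = int(rng.integers(0, h - oh + 1))
    left = int(rng.integers(0, w - ow + 1))
    occluded[top:top + oh, left:left + ow] = fill
    return occluded, 1

# Usage: augment a full-body image (label 0) with a synthetic occluded
# counterpart (label 1); both would feed the identity + occlusion losses.
rng = np.random.default_rng(0)
full_body = np.ones((128, 64))
occ, label = simulate_occlusion(full_body, rng)
```

Pairing each real full-body image with such synthetic occlusions is what lets the network see both distributions at training time without collecting extra occluded data.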