Beyond Intra-modality: A Survey of Heterogeneous Person Re-identification
An efficient and effective person re-identification (ReID) system relieves
users from tedious manual video inspection and accelerates video analysis.
Recently, driven by the explosive demands of practical applications,
considerable research effort has been dedicated to heterogeneous person
re-identification (Hetero-ReID). In this paper, we provide a comprehensive
review of state-of-the-art Hetero-ReID methods that address the challenge of
inter-modality discrepancies. According to the application scenario, we
classify the methods into four categories -- low-resolution, infrared, sketch,
and text. We begin with an introduction to ReID and compare the Homogeneous
ReID (Homo-ReID) and Hetero-ReID tasks. Then, we describe and
compare existing datasets for performing evaluations, and survey the models
that have been widely employed in Hetero-ReID. We also summarize and compare
the representative approaches from two perspectives, i.e., the application
scenario and the learning pipeline. We conclude with a discussion of some
future research directions. Follow-up updates are available at:
https://github.com/lightChaserX/Awesome-Hetero-reID
Comment: Accepted by IJCAI 2020. Project url: https://github.com/lightChaserX/Awesome-Hetero-reID
Visible-Infrared Person Re-Identification Using Privileged Intermediate Information
Visible-infrared person re-identification (ReID) aims to recognize the same
person of interest across a network of RGB and IR cameras. Some deep learning
(DL) models have directly incorporated both modalities to discriminate persons
in a joint representation space. However, this cross-modal ReID problem remains
challenging due to the large domain shift in data distributions between RGB and
IR modalities. This paper introduces a novel approach for creating an
intermediate virtual domain that acts as a bridge between the two main domains
(i.e., RGB and IR modalities) during training. This intermediate domain is
considered as privileged information (PI) that is unavailable at test time, and
allows formulating this cross-modal matching task as a problem in learning
under privileged information (LUPI). We devised a new method to generate images
between visible and infrared domains that provide additional information to
train a deep ReID model through an intermediate domain adaptation. In
particular, by employing color-free and multi-step triplet loss objectives
during training, our method provides common feature representation spaces that
are robust to large visible-infrared domain shifts. Experimental results on
challenging visible-infrared ReID datasets indicate that our proposed approach
consistently improves matching accuracy, without any computational overhead at
test time. The code is available at:
https://github.com/alehdaghi/Cross-Modal-Re-ID-via-LUPI
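As a rough illustration of the kind of objective the abstract describes, here is a minimal sketch of a step-wise (multi-step) triplet loss that uses intermediate-domain embeddings to bridge RGB and IR; function names, the margin value, and the exact decomposition are assumptions for illustration, not the authors' released implementation:

```python
# Sketch only: a "multi-step" triplet loss that bridges the RGB->IR gap via an
# intermediate domain Z (privileged information available only at training).
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard margin-based triplet loss on Euclidean distances."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def multi_step_triplet_loss(feat_v, feat_z, feat_i, feat_neg, margin=0.3):
    """Pull visible (V) features toward intermediate (Z) ones, and Z toward
    infrared (I), so each step spans a smaller domain gap than V -> I
    directly. feat_neg holds embeddings of a different identity."""
    return (triplet_loss(feat_v, feat_z, feat_neg, margin)
            + triplet_loss(feat_z, feat_i, feat_neg, margin))

# Dummy usage: a batch of 8 identities with 512-D embeddings per domain.
v, z, i, n = (torch.randn(8, 512) for _ in range(4))
print(multi_step_triplet_loss(v, z, i, n))
```

Because the intermediate embeddings appear only in the training loss, they can be dropped at test time, which matches the LUPI setting of training-only privileged information.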
Learning Modal-Invariant and Temporal-Memory for Video-based Visible-Infrared Person Re-Identification
Thanks to cross-modal retrieval techniques, visible-infrared (RGB-IR) person
re-identification (Re-ID) can be achieved by projecting the two modalities into
a common space, enabling person Re-ID in 24-hour surveillance systems. However,
in terms of probe-to-gallery matching, almost all existing RGB-IR cross-modal
person Re-ID methods focus on image-to-image matching, while video-to-video
matching, which contains much richer spatial and temporal information, remains
under-explored. In this paper, we primarily study video-based cross-modal
person Re-ID. To support this task, a video-based RGB-IR dataset is
constructed, in which 927 valid identities with 463,259 frames and 21,863
tracklets captured by 12 RGB/IR cameras are collected. Based on our constructed
dataset, we show that performance improves as the number of frames in a
tracklet increases, demonstrating the significance of video-to-video matching
in RGB-IR person Re-ID. We further propose a novel method that not only
projects the two modalities into a modal-invariant subspace, but also extracts
temporal memory for motion-invariant representations. Thanks to these two
strategies, much better results are achieved on our video-based cross-modal
person Re-ID dataset. The code and dataset are released at:
https://github.com/VCMproject233/MITML
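For intuition only, the following minimal sketch combines the two ingredients the abstract names: a projection shared by both modalities (toward a modal-invariant subspace) and a recurrent temporal module that aggregates a tracklet into one descriptor. All module names and dimensions are assumptions, not the released MITML code:

```python
# Sketch only: shared projection for modal invariance + GRU temporal memory.
import torch
import torch.nn as nn

class ModalInvariantTemporalNet(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=512):
        super().__init__()
        # One projection for both modalities: sharing weights encourages a
        # modal-invariant subspace.
        self.project = nn.Linear(feat_dim, embed_dim)
        # Temporal memory over the frames of a tracklet.
        self.temporal = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim), RGB or IR alike.
        x = self.project(frame_feats)   # (B, T, embed_dim)
        _, h = self.temporal(x)         # h: (1, B, embed_dim), last state
        return h.squeeze(0)             # one descriptor per tracklet

# Dummy usage: match RGB tracklets against IR tracklets with one network.
net = ModalInvariantTemporalNet()
rgb_clips = torch.randn(4, 8, 2048)   # 4 tracklets x 8 frames x 2048-D
ir_clips = torch.randn(4, 8, 2048)
print(torch.cosine_similarity(net(rgb_clips), net(ir_clips)))
```

Aggregating over more frames gives the temporal module more evidence per tracklet, which is consistent with the abstract's finding that performance grows with tracklet length.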