Invariance Matters: Exemplar Memory for Domain Adaptive Person Re-identification
This paper considers the domain adaptive person re-identification (re-ID)
problem: learning a re-ID model from a labeled source domain and an unlabeled
target domain. Conventional methods mainly aim to reduce the feature
distribution gap between the source and target domains. However, these studies
largely neglect the intra-domain variations in the target domain, which
contain critical factors influencing testing performance on the target domain.
In this work, we comprehensively investigate the intra-domain variations of
the target domain and propose to generalize the re-ID model w.r.t. three types
of underlying invariance, i.e., exemplar-invariance, camera-invariance and
neighborhood-invariance. To achieve this goal, an exemplar memory is introduced
to store features of the target domain and accommodate the three invariance
properties. The memory allows us to enforce the invariance constraints over
the global training batch without significantly increasing the computation
cost. Experiments demonstrate that the three invariance properties and the
proposed memory are indispensable for an effective domain adaptation system. Results
on three re-ID domains show that our domain adaptation accuracy outperforms the
state of the art by a large margin. Code is available at:
https://github.com/zhunzhong07/ECN
Comment: To appear in CVPR 201
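The exemplar memory described in this abstract, a per-image feature bank whose entries are refreshed during training and whose similarity scores drive the invariance losses, can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea under assumed details (the momentum update rule, the temperature value, and all names are assumptions, not the paper's implementation):

```python
import numpy as np

class ExemplarMemory:
    """Sketch of an exemplar memory: one feature slot per target-domain
    image, updated by a momentum moving average. Illustrative only;
    hyperparameter values are assumptions."""

    def __init__(self, num_exemplars, feat_dim, momentum=0.5, temperature=0.05):
        self.mem = np.zeros((num_exemplars, feat_dim), dtype=np.float32)
        self.momentum = momentum
        self.temperature = temperature

    def update(self, indices, feats):
        # Momentum update of the stored exemplar features, then re-normalize
        # so that dot products below behave like cosine similarities.
        self.mem[indices] = (self.momentum * self.mem[indices]
                             + (1 - self.momentum) * feats)
        norms = np.linalg.norm(self.mem[indices], axis=1, keepdims=True) + 1e-12
        self.mem[indices] /= norms

    def similarity_scores(self, feats):
        # Temperature-scaled softmax over similarities to every stored
        # exemplar; such scores can back the exemplar-, camera- and
        # neighborhood-invariance constraints over the whole target set.
        logits = feats @ self.mem.T / self.temperature
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        exp = np.exp(logits)
        return exp / exp.sum(axis=1, keepdims=True)
```

Because the memory holds one slot per target image, a loss computed from `similarity_scores` touches the whole target set at the cost of one matrix product per batch, which is the "global training batch without significantly increasing computation cost" property the abstract highlights.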
Temporal Continuity Based Unsupervised Learning for Person Re-Identification
Person re-identification (re-id) aims to match the same person from images
taken across multiple cameras. Most existing person re-id methods generally
require a large amount of identity-labeled data to serve as a discriminative
guide for representation learning. The difficulty of manually collecting
identity-labeled data leads to poor adaptability in practical scenarios. To
overcome this problem, we propose an unsupervised center-based clustering
approach capable of progressively learning and exploiting the underlying re-id
discriminative information from temporal continuity within a camera. We call
our framework Temporal Continuity based Unsupervised Learning (TCUL).
Specifically, TCUL simultaneously performs center-based clustering of the
unlabeled (target) dataset and fine-tunes a convolutional neural network (CNN)
pre-trained on an unrelated labeled (source) dataset to enhance the
discriminative capability of the CNN for the target dataset. Furthermore, it
exploits the temporally continuous nature of images within a camera, jointly
with the spatial similarity of feature maps across cameras, to generate
reliable pseudo-labels for training a re-identification model. As training
progresses, the number of reliable samples grows adaptively, which in turn
boosts the representation ability of the CNN. Extensive experiments on three
large-scale person re-id benchmark datasets are conducted to compare our
framework with state-of-the-art techniques, demonstrating the superiority of
TCUL over existing methods.
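The progressive pseudo-labeling idea in this abstract (trust only the samples closest to their cluster centers, and let that trusted set grow as training goes on) can be sketched as below. This is an illustrative simplification, not TCUL's actual algorithm; in particular, TCUL additionally uses within-camera temporal continuity, which is omitted here, and all names are assumptions:

```python
import numpy as np

def reliable_pseudo_labels(feats, centers, keep_ratio):
    """Assign each unlabeled feature to its nearest cluster center and keep
    only the `keep_ratio` fraction of samples closest to their centers as
    reliable training samples. Increasing `keep_ratio` across epochs mimics
    the adaptively growing reliable set described in the abstract."""
    # Pairwise distances: (num_samples, num_centers).
    dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)             # pseudo-label = nearest center
    min_dists = dists.min(axis=1)
    k = max(1, int(keep_ratio * len(feats)))  # how many samples to trust
    reliable = np.argsort(min_dists)[:k]      # indices of confident samples
    return labels, reliable
```

A training loop would fine-tune the CNN only on `feats[reliable]` with `labels[reliable]`, re-extract features, re-cluster, and raise `keep_ratio`, so the reliable set and the representation improve together.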
Camera-aware Proxies for Unsupervised Person Re-Identification
This paper tackles the purely unsupervised person re-identification (Re-ID)
problem that requires no annotations. Some previous methods adopt clustering
techniques to generate pseudo labels and use the produced labels to train Re-ID
models progressively. These methods are relatively simple but effective.
However, most clustering-based methods take each cluster as a pseudo identity
class, neglecting the large intra-ID variance caused mainly by the change of
camera views. To address this issue, we propose to split each cluster into
multiple proxies, each of which represents the instances coming from the same
camera. These camera-aware proxies enable us to deal with the large intra-ID
variance and generate more reliable pseudo labels for learning. Based on the
camera-aware proxies, we design both intra- and inter-camera contrastive
learning components for our Re-ID model to effectively learn the ID
discrimination ability within and across cameras. Meanwhile, a proxy-balanced
sampling strategy is also designed, which facilitates our learning further.
Extensive experiments on three large-scale Re-ID datasets show that our
proposed approach outperforms most unsupervised methods by a significant
margin. In particular, on the challenging MSMT17 dataset, we gain Rank-1 and
mAP improvements over the second-best method. Code is available at:
https://github.com/Terminator8758/CAP-master
Comment: Accepted to AAAI 2021
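The core construction in this abstract, splitting each pseudo-ID cluster into one proxy per camera and representing each proxy by a feature centroid, can be sketched as follows. This is a minimal illustration of the grouping step only (the intra-/inter-camera contrastive losses and proxy-balanced sampling are not shown), and the function and variable names are assumptions:

```python
import numpy as np
from collections import defaultdict

def camera_aware_proxies(feats, cluster_ids, cam_ids):
    """Split each pseudo-identity cluster into per-camera proxies and
    represent each proxy by the mean feature of its instances."""
    groups = defaultdict(list)
    for i, (c, cam) in enumerate(zip(cluster_ids, cam_ids)):
        groups[(c, cam)].append(i)        # one proxy per (cluster, camera)
    proxy_keys = sorted(groups)           # stable ordering of proxies
    proxy_feats = np.stack([feats[groups[k]].mean(axis=0) for k in proxy_keys])
    return proxy_keys, proxy_feats
```

With proxies keyed by (cluster, camera), an intra-camera loss can contrast an instance against the proxies from its own camera, while an inter-camera loss pulls it toward the same-cluster proxies from other cameras, which matches the two contrastive components the abstract describes.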