Person Transfer GAN to Bridge Domain Gap for Person Re-Identification
Although the performance of person Re-Identification (ReID) has been
significantly boosted, many challenging issues in real scenarios have not been
fully investigated, e.g., the complex scenes and lighting variations, viewpoint
and pose changes, and the large number of identities in a camera network. To
facilitate the research towards conquering those issues, this paper contributes
a new dataset called MSMT17 with many important features, e.g., 1) the raw
videos are taken by a 15-camera network deployed in both indoor and outdoor
scenes, 2) the videos cover a long period of time and present complex lighting
variations, and 3) it contains currently the largest number of annotated
identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe
that a domain gap commonly exists between datasets, which essentially causes
a severe performance drop when training and testing on different datasets. As a
result, the available training data cannot be effectively leveraged for new
testing domains. To relieve the expensive cost of annotating new training
samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to
bridge the domain gap. Comprehensive experiments show that the domain gap could
be substantially narrowed down by the proposed PTGAN. Comment: 10 pages, 9 figures; accepted at CVPR 2018
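As a rough illustration of the kind of objective such a person transfer GAN might optimize, the following is a minimal sketch (not the authors' PTGAN implementation) combining a least-squares adversarial loss with an identity-preserving foreground term; the generator G, discriminator D, and foreground mask fg_mask are hypothetical placeholders.

import torch

# Minimal sketch of a style-transfer GAN objective with an identity-preserving
# term, in the spirit of "person transfer" across camera domains. This is an
# illustrative assumption, not the authors' PTGAN code; G, D, and fg_mask are
# hypothetical placeholders supplied by the reader.
def transfer_losses(G, D, src_img, fg_mask, lambda_id=10.0):
    """G: source-to-target style generator; D: target-style discriminator;
    src_img: [B,3,H,W] source images; fg_mask: [B,1,H,W] person foreground in [0,1]."""
    fake_tgt = G(src_img)
    # Adversarial term (least-squares GAN): fakes should be scored as real.
    adv = torch.mean((D(fake_tgt) - 1.0) ** 2)
    # Identity-preserving term: keep the person's foreground appearance intact
    # while the background/style is free to change.
    idp = torch.mean(fg_mask * torch.abs(fake_tgt - src_img))
    return adv + lambda_id * idp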
CANU-ReID: A Conditional Adversarial Network for Unsupervised person Re-IDentification
Unsupervised person re-ID is the task of identifying people on a target data
set for which the ID labels are unavailable during training. In this paper, we
propose to unify two trends in unsupervised person re-ID: clustering &
fine-tuning and adversarial learning. On one side, clustering groups training
images into pseudo-ID labels, and uses them to fine-tune the feature extractor.
On the other side, adversarial learning is used, inspired by domain adaptation,
to match distributions from different domains. Since target data is distributed
across different camera viewpoints, we propose to model each camera as an
independent domain, and aim to learn domain-independent features.
Since straightforward adversarial learning yields negative transfer, we
introduce a conditioning vector to mitigate this undesirable effect. In our
framework, the centroid of the cluster to which the visual sample belongs is
used as conditioning vector of our conditional adversarial network, where the
vector is permutation invariant (clusters ordering does not matter) and its
size is independent of the number of clusters. To our knowledge, we are the
first to propose the use of conditional adversarial networks for unsupervised
person re-ID. We evaluate the proposed architecture on top of two
state-of-the-art clustering-based unsupervised person re-identification (re-ID)
methods on four different experimental settings with three different data sets
and set the new state-of-the-art performance on all four of them. Our code and
model will be made publicly available at
https://team.inria.fr/perception/canu-reid/
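A minimal sketch of the conditioning idea described above, assuming a gradient-reversal-based camera discriminator that receives the image feature concatenated with its pseudo-ID cluster centroid; the layer sizes and the specific adversarial formulation are assumptions, not the authors' released code.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Gradient reversal: identity in the forward pass, negated (scaled) gradient
    # in the backward pass, which pushes the backbone towards camera-invariant features.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class ConditionalCameraDiscriminator(nn.Module):
    def __init__(self, feat_dim=2048, num_cameras=6):
        super().__init__()
        # The feature is concatenated with its cluster centroid (same dimension),
        # so the conditioning is permutation-invariant w.r.t. cluster ordering and
        # its size does not depend on the number of clusters.
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_cameras),
        )

    def forward(self, feat, centroid, lam=1.0):
        feat = GradReverse.apply(feat, lam)
        return self.net(torch.cat([feat, centroid], dim=1))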
Cross Domain Residual Transfer Learning for Person Re-identification
This paper presents a novel way to transfer model weights from one domain to another using a residual learning framework instead of direct fine-tuning. It also argues for hybrid models that use learned (deep) features and statistical metric learning for multi-shot person re-identification when training sets are small. This is in contrast to popular end-to-end neural network based models or models that use hand-crafted features with adaptive matching models (neural nets or statistical metrics). Our experiments demonstrate that a hybrid model with residual transfer learning can yield significantly better re-identification performance than an end-to-end model when the training set is small. On the iLIDS-VID [42] and PRID [15] datasets, we achieve rank-1 recognition rates of 89.8% and 95%, respectively, which is a significant improvement over the state of the art.
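A minimal sketch of the residual-transfer idea described above, under the assumption that the source-trained backbone is frozen and a small residual branch learns the target-domain correction; the backbone, layer sizes, and downstream metric learning are placeholders, not the paper's exact model.

import torch
import torch.nn as nn

class ResidualTransfer(nn.Module):
    # Instead of directly fine-tuning source weights, learn a residual correction:
    # target feature = source feature + residual(source feature).
    def __init__(self, backbone: nn.Module, feat_dim=2048, hidden=512):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # keep source-domain weights fixed
            p.requires_grad = False
        self.residual = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, x):
        f = self.backbone(x)
        return f + self.residual(f)            # identity path + learned correction

In line with the hybrid design argued for in the abstract, the resulting features would then feed a statistical metric learning stage rather than an end-to-end classifier.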
Domain Adaptive Attention Model for Unsupervised Cross-Domain Person Re-Identification
Person re-identification (Re-ID) across multiple datasets is a challenging
yet important task due to the possibly large distinctions between different
datasets and the lack of training samples in practical applications. This work
proposes a novel unsupervised domain adaptation framework which transfers
discriminative representations from the labeled source domain (dataset) to the
unlabeled target domain (dataset). We propose to formulate the domain adaptation
task as a one-class classification problem with a novel domain similarity
loss. Given the feature map of any image from a backbone network, a novel
domain adaptive attention model (DAAM) first automatically learns to separate
the feature map of an image to a domain-shared feature (DSH) map and a
domain-specific feature (DSP) map simultaneously. Specifically, a residual
attention mechanism is designed to model the DSP feature map and avoid negative
transfer. Then, a DSH branch and a DSP branch are introduced to learn DSH and
DSP feature maps respectively. To reduce the domain divergence caused by the
source and target datasets being collected in different environments, we
project the DSH feature maps from different domains onto a new nominal domain,
and a novel domain similarity loss is proposed based on one-class
classification. In addition, a novel unsupervised person Re-ID loss is proposed
to make full use of unlabeled target data. Extensive experiments on the
Market-1501 and DukeMTMC-reID benchmarks demonstrate state-of-the-art
performance of the proposed method. Code will be released to facilitate further
studies on the cross-domain person re-identification task.
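A minimal sketch of the feature split described above, assuming a sigmoid attention gate that carves the backbone feature map into a domain-specific residual part and a domain-shared remainder; the exact attention design, and which branch the gate models, are assumptions rather than the authors' code.

import torch
import torch.nn as nn

class DomainAttentionSplit(nn.Module):
    # Splits a backbone feature map into a domain-specific (DSP) part, modelled by
    # a residual attention gate, and a domain-shared (DSH) remainder.
    def __init__(self, channels=2048):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                      # per-location, per-channel gate in [0, 1]
        )

    def forward(self, feat_map):
        a = self.attn(feat_map)                # attention on the domain-specific content
        dsp = a * feat_map                     # domain-specific feature map
        dsh = feat_map - dsp                   # domain-shared feature map (residual form)
        return dsh, dsp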
Color Prompting for Data-Free Continual Unsupervised Domain Adaptive Person Re-Identification
Unsupervised domain adaptive person re-identification (Re-ID) methods
alleviate the burden of data annotation through generating pseudo supervision
messages. However, real-world Re-ID systems, with continuously accumulating
data streams, simultaneously demand more robust adaptation and anti-forgetting
capabilities. Methods based on image rehearsal address the forgetting issue
with limited extra storage but carry the risk of privacy leakage. In this work,
we propose a Color Prompting (CoP) method for data-free continual unsupervised
domain adaptive person Re-ID. Specifically, we employ a lightweight prompter
network to fit the color distribution of the current task together with Re-ID
training. Then for the incoming new tasks, the learned color distribution
serves as color style transfer guidance to transfer the images into past
styles. CoP achieves accurate color style recovery for past tasks with adequate
data diversity, leading to superior anti-forgetting effects compared with image
rehearsal methods. Moreover, CoP demonstrates strong generalization performance
for fast adaptation into new domains, given only a small amount of unlabeled
images. Extensive experiments demonstrate that after the continual training
pipeline the proposed CoP achieves 6.7% and 8.1% average rank-1 improvements
over the replay method on seen and unseen domains, respectively. The source
code for this work is publicly available at
https://github.com/vimar-gu/ColorPromptReID
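A minimal sketch of the color style transfer step described above, assuming that stored per-channel color statistics from a past task are used to restyle current images by simple statistics matching; the actual CoP prompter is a learned network, so this only illustrates the underlying idea.

import torch

def match_color_stats(img, past_mean, past_std, eps=1e-6):
    """Restyle a batch of images toward a stored past color distribution.
    img: [B,3,H,W] in [0,1]; past_mean, past_std: [3] per-channel statistics
    recorded on a previous task (hypothetical inputs for illustration)."""
    mean = img.mean(dim=(0, 2, 3))
    std = img.std(dim=(0, 2, 3))
    # Normalize the current batch, then re-apply the past task's color statistics.
    normed = (img - mean.view(1, 3, 1, 1)) / (std.view(1, 3, 1, 1) + eps)
    return normed * past_std.view(1, 3, 1, 1) + past_mean.view(1, 3, 1, 1)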