Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting
For person re-identification, existing deep networks often focus on
representation learning. However, without transfer learning, the learned model
is fixed once trained and cannot adapt to various unseen scenarios.
In this paper, beyond representation learning, we consider how to formulate
person image matching directly in deep feature maps. We treat image matching as
finding local correspondences in feature maps, and construct query-adaptive
convolution kernels on the fly to achieve local matching. In this way, the
matching process and results are interpretable, and this explicit matching is
more generalizable than representation features to unseen scenarios, such as
unknown misalignments, pose or viewpoint changes. To facilitate end-to-end
training of this architecture, we further build a class memory module to cache
feature maps of the most recent samples of each class, so as to compute image
matching losses for metric learning. Through direct cross-dataset evaluation,
the proposed Query-Adaptive Convolution (QAConv) method gains large
improvements over popular learning methods (about 10%+ mAP), and achieves
comparable results to many transfer learning methods. Besides, a model-free
temporal co-occurrence based score weighting method called TLift is proposed,
which further improves performance, achieving state-of-the-art
results in cross-dataset person re-identification. Code is available at
https://github.com/ShengcaiLiao/QAConv
Comment: This is the ECCV 2020 version, including the appendix.
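The local-matching idea in the abstract can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's architecture: query feature-map locations are treated as 1x1 convolution kernels applied to the gallery feature map, and the best response per location is aggregated into a similarity score; the function name `qaconv_similarity` is hypothetical.

```python
import numpy as np

def qaconv_similarity(query_fmap, gallery_fmap):
    """Illustrative sketch of query-adaptive convolution matching.

    query_fmap, gallery_fmap: (H, W, C) feature maps with L2-normalized
    channel vectors. Each query location acts as a 1x1 convolution kernel
    slid over the gallery map; the strongest local correspondence per
    location is kept and averaged into a single matching score.
    """
    H, W, C = query_fmap.shape
    q = query_fmap.reshape(-1, C)       # (HW, C): query locations as kernels
    g = gallery_fmap.reshape(-1, C)     # (HW, C): gallery locations
    # Applying every 1x1 query kernel at every gallery location reduces
    # to a matrix of cosine similarities between locations.
    responses = q @ g.T                 # (HW, HW)
    # Best gallery match for each query location, and vice versa,
    # so the score is symmetric in the two images.
    best_q2g = responses.max(axis=1)
    best_g2q = responses.max(axis=0)
    return 0.5 * (best_q2g.mean() + best_g2q.mean())
```

Because the per-location correspondences (`responses.argmax`) can be read off directly, the matching is interpretable in the sense the abstract describes: one can visualize which local regions of the two images were matched.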
CANU-ReID: A Conditional Adversarial Network for Unsupervised Person Re-Identification
Unsupervised person re-ID is the task of identifying people on a target data
set for which the ID labels are unavailable during training. In this paper, we
propose to unify two trends in unsupervised person re-ID: clustering &
fine-tuning and adversarial learning. On one side, clustering groups training
images into pseudo-ID labels, and uses them to fine-tune the feature extractor.
On the other side, adversarial learning is used, inspired by domain adaptation,
to match distributions from different domains. Since target data is distributed
across different camera viewpoints, we propose to model each camera as an
independent domain, and aim to learn domain-independent features.
Since straightforward adversarial learning yields negative transfer, we
introduce a conditioning vector to mitigate this undesirable effect. In our
framework, the centroid of the cluster to which the visual sample belongs is
used as conditioning vector of our conditional adversarial network, where the
vector is permutation invariant (clusters ordering does not matter) and its
size is independent of the number of clusters. To our knowledge, we are the
first to propose the use of conditional adversarial networks for unsupervised
person re-ID. We evaluate the proposed architecture on top of two
state-of-the-art clustering-based unsupervised person re-identification (re-ID)
methods on four different experimental settings with three different data sets
and set the new state-of-the-art performance on all four of them. Our code and
model will be made publicly available at
https://team.inria.fr/perception/canu-reid/
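The conditioning scheme described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: `condition_on_centroid` is a hypothetical helper that appends each sample's pseudo-label cluster centroid to its feature, forming the input the camera discriminator would condition on.

```python
import numpy as np

def condition_on_centroid(features, labels):
    """Illustrative sketch of the centroid conditioning vector.

    features: (N, D) sample features; labels: (N,) pseudo-ID cluster labels.
    Each sample is paired with the centroid of its own cluster, so the
    conditioning is invariant to cluster ordering (permuting label ids
    leaves each sample's centroid unchanged) and its size is D, independent
    of the number of clusters.
    """
    centroids = {lab: features[labels == lab].mean(axis=0)
                 for lab in np.unique(labels)}
    cond = np.stack([centroids[lab] for lab in labels])  # (N, D)
    # Concatenated [feature | centroid] would feed the conditional
    # camera discriminator in an adversarial setup.
    return np.concatenate([features, cond], axis=1)      # (N, 2D)
```

The design point is that the discriminator sees which cluster a sample came from without receiving a cluster index, which is what keeps the conditioning permutation-invariant and fixed-size as the abstract claims.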