Unsupervised learning of generative topic saliency for person re-identification
© 2014. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
Existing approaches to person re-identification (re-id) are dominated by supervised learning methods that focus on learning optimal similarity distance metrics. However, supervised models require a large number of manually labelled pairs of person images for every pair of camera views, which limits their ability to scale to large camera networks. To overcome this problem, this paper proposes a novel unsupervised re-id modelling approach based on generative probabilistic topic modelling. Given abundant unlabelled data, our topic model learns simultaneously to (1) discover localised person foreground saliency (salient image patches) that is more informative for re-id matching, and (2) remove the busy background clutter surrounding a person. Extensive experiments demonstrate that the proposed model outperforms existing unsupervised re-id methods with significantly lower model complexity, while retaining re-id accuracy comparable to state-of-the-art supervised methods without any need for pairwise labelled training data.
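The core intuition of saliency-based matching can be sketched in a few lines. The snippet below is a hedged illustration, not the paper's actual topic model: it assumes patch descriptors have already been quantized into visual words, and it approximates "salient foreground" as visual words that are rare across the gallery (words appearing in nearly every image are treated as background clutter). All array sizes and the IDF-style scoring are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_patches, vocab = 40, 60, 100
# toy visual-word index per patch (in practice: quantized colour/texture descriptors)
words = rng.integers(0, vocab, size=(n_images, n_patches))

# document frequency: in how many images does each visual word occur?
df = np.zeros(vocab)
for img in words:
    df[np.unique(img)] += 1

# saliency of a patch ~ inverse document frequency of its visual word:
# words shared by almost every image (background) score near zero,
# distinctive foreground words score high
idf = np.log(n_images / np.maximum(df, 1.0))
saliency = idf[words]          # (n_images, n_patches) patch saliency map
saliency /= saliency.max()     # normalise to [0, 1]
```

A full generative topic model would replace the IDF heuristic with per-topic posteriors, but the output is the same shape: a per-patch weight map that down-weights background clutter during matching.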
Unsupervised Person Re-identification by Soft Multilabel Learning
Although unsupervised person re-identification (RE-ID) has drawn increasing
research attention due to its potential to address the scalability problem of
supervised RE-ID models, it is very challenging to learn discriminative
information in the absence of pairwise labels across disjoint camera views. To
overcome this problem, we propose a deep model for the soft multilabel learning
for unsupervised RE-ID. The idea is to learn a soft multilabel (real-valued
label likelihood vector) for each unlabeled person by comparing (and
representing) the unlabeled person with a set of known reference persons from
an auxiliary domain. We propose the soft multilabel-guided hard negative mining
to learn a discriminative embedding for the unlabeled target domain by
exploring the similarity consistency of the visual features and the soft
multilabels of unlabeled target pairs. Since most target pairs are cross-view
pairs, we develop the cross-view consistent soft multilabel learning to achieve
the learning goal that the soft multilabels are consistently good across
different camera views. To enable efficient soft multilabel learning, we
introduce the reference agent learning to represent each reference person by a
reference agent in a joint embedding. We evaluate our unified deep model on
Market-1501 and DukeMTMC-reID. Our model outperforms the state-of-the-art
unsupervised RE-ID methods by clear margins. Code is available at
https://github.com/KovenYu/MAR. Comment: CVPR19, ora
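The soft-multilabel idea described above can be sketched compactly. The code below is an illustrative assumption, not the released MAR implementation: it represents each reference person by a learned agent vector, computes a soft multilabel as a softmax over agent similarities, and flags a "hard negative" pair as one whose visual features agree while their soft multilabels do not. The feature dimension, thresholds, and random embeddings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_refs, n_targets = 128, 10, 6

# reference agents: one embedding per known auxiliary-domain person (assumed learned)
agents = rng.normal(size=(n_refs, d))
feats = rng.normal(size=(n_targets, d))  # unlabeled target-domain features

def soft_multilabel(f, agents):
    # real-valued label-likelihood vector: softmax over similarities to agents
    sims = agents @ f
    e = np.exp(sims - sims.max())        # shift for numerical stability
    return e / e.sum()

labels = np.stack([soft_multilabel(f, agents) for f in feats])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# soft multilabel-guided hard negative mining: a pair that looks similar in
# feature space but whose label vectors disagree is a likely false match
i, j = 0, 1
feat_sim = cosine(feats[i], feats[j])
label_agree = labels[i] @ labels[j]      # inner-product agreement of label vectors
is_hard_negative = (feat_sim > 0.5) and (label_agree < 0.1)  # thresholds illustrative
```

Each row of `labels` sums to one, so the soft multilabel can be read as a distribution over reference persons; cross-view consistency then amounts to requiring this distribution to be stable for the same person across cameras.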