Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification
Many unsupervised domain adaptive (UDA) person re-identification (ReID)
approaches combine clustering-based pseudo-label prediction with feature
fine-tuning. However, because of the domain gap, the pseudo-labels are not
always reliable; noisy or incorrect labels mislead feature representation
learning and degrade performance. In this paper, we
propose to estimate and exploit the credibility of the assigned pseudo-label of
each sample to alleviate the influence of noisy labels, by suppressing the
contribution of noisy samples. We build our baseline framework using the mean
teacher method together with an additional contrastive loss. We observe that a
sample assigned a wrong pseudo-label by clustering generally exhibits weaker
consistency between the outputs of the mean teacher model and the student
model. Based on this finding, we propose to exploit the uncertainty (measured
by consistency levels) to evaluate the reliability of the pseudo-label of a
sample and incorporate the uncertainty to re-weight its contribution within
various ReID losses, including the identity (ID) classification loss per
sample, the triplet loss, and the contrastive loss. Our uncertainty-guided
optimization brings significant improvement and achieves the state-of-the-art
performance on benchmark datasets.

Comment: 9 pages. Accepted to the 35th AAAI Conference on Artificial Intelligence (AAAI 2021).
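The reweighting idea described in the abstract can be sketched as follows. This is a minimal illustrative version, not the paper's exact formulation: it assumes the per-sample uncertainty is the KL divergence between the teacher's and student's class predictions, mapped to a weight via exp(-u), and all function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sample_uncertainty(teacher_logits, student_logits):
    """Per-sample uncertainty as KL(teacher || student) over class predictions.

    A sample whose pseudo-label is likely wrong tends to show weaker
    teacher-student consistency, i.e. a larger KL value (assumption:
    KL is used here as one concrete consistency measure).
    """
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)

def uncertainty_weighted_ce(student_logits, teacher_logits, pseudo_labels):
    """ID classification loss with per-sample uncertainty weights.

    Each sample's cross-entropy term is scaled by exp(-uncertainty), so
    inconsistent (likely mislabeled) samples contribute less to the loss.
    """
    u = sample_uncertainty(teacher_logits, student_logits)
    w = np.exp(-u)                      # weight in (0, 1]; 1 when fully consistent
    q = softmax(student_logits)
    ce = -np.log(q[np.arange(len(pseudo_labels)), pseudo_labels] + 1e-12)
    return np.sum(w * ce) / np.sum(w)   # weighted mean over the batch
```

The same exp(-u) weights could, in principle, also scale the per-sample terms of the triplet and contrastive losses mentioned in the abstract.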