Joint Generative and Contrastive Learning for Unsupervised Person Re-identification
Recent self-supervised contrastive learning provides an effective approach
for unsupervised person re-identification (ReID) by learning invariance from
different views (transformed versions) of an input. In this paper, we
incorporate a Generative Adversarial Network (GAN) and a contrastive learning
module into one joint training framework. While the GAN provides online data
augmentation for contrastive learning, the contrastive module learns
view-invariant features for generation. In this context, we propose a
mesh-based view generator. Specifically, mesh projections serve as references
towards generating novel views of a person. In addition, we propose a
view-invariant loss to facilitate contrastive learning between original and
generated views. Deviating from previous GAN-based unsupervised ReID methods
involving domain adaptation, we do not rely on a labeled source dataset, which
makes our method more flexible. Extensive experimental results show that our
method significantly outperforms state-of-the-art methods under both fully
unsupervised and unsupervised domain adaptive settings on several large-scale
ReID datasets.
Comment: CVPR 2021. Source code: https://github.com/chenhao2345/GC
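The view-invariant loss described above pairs each original feature with its generated view. A minimal sketch of such a loss, here in InfoNCE style with cosine similarities, is shown below; the function name, temperature value, and toy features are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def view_invariant_loss(orig, gen, temperature=0.1):
    """InfoNCE-style loss: each original feature should match its own
    generated view against all other generated views in the batch.
    Illustrative sketch only; hyperparameters are hypothetical."""
    # L2-normalize both feature sets so dot products are cosine similarities
    orig = orig / np.linalg.norm(orig, axis=1, keepdims=True)
    gen = gen / np.linalg.norm(gen, axis=1, keepdims=True)
    # Pairwise similarity logits, scaled by temperature: shape (N, N)
    logits = orig @ gen.T / temperature
    # Positive pairs lie on the diagonal; cross-entropy over each row
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 32))
# "Generated" views: small perturbations of the originals
noisy = feats + 0.05 * rng.normal(size=(8, 32))
loss_matched = view_invariant_loss(feats, noisy)
loss_random = view_invariant_loss(feats, rng.normal(size=(8, 32)))
```

As expected for a contrastive objective, the loss is lower when generated views stay close to their originals than when they are unrelated.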
GeT: Generative Target Structure Debiasing for Domain Adaptation
Domain adaptation (DA) aims to transfer knowledge from a fully labeled source
to a scarcely labeled or totally unlabeled target under domain shift. Recently,
semi-supervised learning-based (SSL) techniques that leverage pseudo labeling
have been increasingly used in DA. Despite the competitive performance, these
pseudo labeling methods rely heavily on the source domain to generate pseudo
labels for the target domain and therefore still suffer considerably from
source data bias. Moreover, class distribution bias in the target domain is
also often ignored in pseudo label generation, leading to further
deterioration of performance. In this paper, we propose GeT, which learns an
unbiased target embedding distribution with high-quality pseudo labels.
Specifically, we formulate an online target generative classifier to induce the
target distribution into distinctive Gaussian components weighted by their
class priors to mitigate source data bias and enhance target class
discriminability. We further propose a structure similarity regularization
framework to alleviate target class distribution bias and further improve
target class discriminability. Experimental results show that our proposed GeT
is effective and achieves consistent improvements under various DA settings
with and without class distribution bias. Our code is available at:
https://lulusindazc.github.io/getproject/.
Comment: Accepted by ICCV202
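The online target generative classifier described above models each class as a Gaussian component weighted by its class prior. A minimal sketch of pseudo labeling with such a classifier is shown below; the isotropic (identity) covariance, function name, and toy data are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def pseudo_labels(features, means, log_priors):
    """Assign pseudo labels from a generative classifier where each
    class c is an isotropic Gaussian N(mu_c, I) weighted by an
    estimated class prior. Illustrative sketch only."""
    # Squared Euclidean distance from each sample to each class mean: (N, C)
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    # Log posterior up to a constant: log prior - 0.5 * ||x - mu_c||^2
    log_post = log_priors[None, :] - 0.5 * d2
    return log_post.argmax(axis=1)

rng = np.random.default_rng(1)
# Two well-separated toy classes in 2-D
means = np.array([[0.0, 0.0], [4.0, 4.0]])
x = np.vstack([rng.normal(m, 0.5, size=(10, 2)) for m in means])
labels = pseudo_labels(x, means, np.log(np.array([0.5, 0.5])))
```

Reweighting the components by estimated target class priors (rather than assuming a uniform prior as here) is what lets such a classifier counteract class distribution bias in the pseudo labels.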
Semi-Supervised and Unsupervised Deep Visual Learning: A Survey
State-of-the-art deep learning models are often trained with a large amount of costly labeled training data. However, requiring exhaustive manual annotations may degrade the model's generalizability in the limited-label regime. Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data. Recent progress in these paradigms has indicated the strong benefits of leveraging unlabeled data to improve model generalization and provide better model initialization. In this survey, we review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective. To offer a holistic understanding of the state-of-the-art in these areas, we propose a unified taxonomy. We categorize existing representative SSL and UL methods with comprehensive and insightful analysis to highlight their design rationales in different learning scenarios and applications in different computer vision tasks. Lastly, we discuss the emerging trends and open challenges in SSL and UL to shed light on future critical research directions.