4,710 research outputs found
Joint Generative and Contrastive Learning for Unsupervised Person Re-identification
Recent self-supervised contrastive learning provides an effective approach
for unsupervised person re-identification (ReID) by learning invariance from
different views (transformed versions) of an input. In this paper, we
incorporate a Generative Adversarial Network (GAN) and a contrastive learning
module into one joint training framework. While the GAN provides online data
augmentation for contrastive learning, the contrastive module learns
view-invariant features for generation. In this context, we propose a
mesh-based view generator. Specifically, mesh projections serve as references
towards generating novel views of a person. In addition, we propose a
view-invariant loss to facilitate contrastive learning between original and
generated views. Deviating from previous GAN-based unsupervised ReID methods
involving domain adaptation, we do not rely on a labeled source dataset, which
makes our method more flexible. Extensive experimental results show that our
method significantly outperforms state-of-the-art methods under both fully
unsupervised and unsupervised domain adaptation settings on several large-scale
ReID datasets. Comment: CVPR 2021. Source code: https://github.com/chenhao2345/GC
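The view-invariant loss described above pulls the features of an original image toward the features of its GAN-generated views. As a rough illustration, here is a minimal PyTorch sketch of one plausible InfoNCE-style formulation; the function name, temperature, and pairing scheme are assumptions for illustration, not details from the paper:

```python
import torch
import torch.nn.functional as F

def view_invariant_loss(orig_feats, gen_feats, temperature=0.07):
    """InfoNCE-style loss between features of original images and their
    GAN-generated novel views (illustrative, not the paper's exact loss).
    orig_feats, gen_feats: (N, D) feature batches from the same encoder."""
    orig = F.normalize(orig_feats, dim=1)
    gen = F.normalize(gen_feats, dim=1)
    # Similarity of every original feature to every generated feature.
    logits = orig @ gen.t() / temperature  # (N, N)
    # The generated view of sample i is the positive for original i;
    # all other generated views in the batch act as negatives.
    targets = torch.arange(orig.size(0), device=orig.device)
    return F.cross_entropy(logits, targets)
```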
Fast Adaptation of Deep Learning Vision Applications with Limited Data for Edge Devices
Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Computer Science and Engineering, College of Engineering, 2022.2.
The remarkable success of deep learning-based methods is mainly achieved with large amounts of labeled data. Compared to conventional machine learning methods, deep learning methods can learn high-quality models from very large datasets. However, high-quality labeled data are expensive to obtain, and preparing a large dataset is sometimes impossible due to privacy concerns. Furthermore, humans show outstanding generalization performance without huge amounts of labeled data.
Edge devices have limited computational capability compared to servers. In particular, it is challenging to run training on edge devices. However, training on the edge device is desirable when considering the domain-shift problem and privacy concerns. In this dissertation, I consider the adaptation process as the counterpart of conventional training for edge devices with low computational capability.
Conventional classification assumes that training data and test data are drawn from the same distribution and that the training dataset is large. Unsupervised domain adaptation addresses the case where training data and test data are drawn from different distributions; the task is to label target-domain data using already existing labeled data and models. Few-shot learning assumes a small training dataset; the task is to classify new data based on only a few labeled examples. I present 1) co-optimization of backbone network and parameter selection in unsupervised domain adaptation for edge devices and 2) augmenting few-shot learning with supervised contrastive learning. Both methods target the low-labeled-data regime but assume different scenarios.
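To make the few-shot setting concrete, the sketch below samples one N-way K-shot episode of the kind used in few-shot benchmarks; the function and dataset layout are illustrative assumptions, not code from the dissertation:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=15):
    """Sample an N-way K-shot episode: n_way classes, k_shot labeled
    support examples per class, and n_query query examples per class
    to classify. `dataset` is a list of (example, label) pairs."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        examples = random.sample(by_class[c], k_shot + n_query)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query
```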
The first method boosts unsupervised domain adaptation by co-optimizing the backbone network and parameter selection for edge devices. Pre-trained ImageNet models are crucial when dealing with small datasets such as the Office datasets. By using an unsupervised domain adaptation algorithm that does not update the feature extractor, large and powerful pre-trained ImageNet models can be used to boost accuracy, and we report state-of-the-art accuracy with this method. Moreover, we conduct an experiment using small and lightweight pre-trained ImageNet models for edge devices. Co-optimization is performed to reduce the total latency using predictor-guided evolutionary search. We also consider pre-extraction of source features, and we evaluate more realistic edge-device scenarios such as smaller target-domain data and object detection. Lastly, we conduct an experiment that utilizes intermediate-domain data to reduce the algorithm latency further.
We achieve 5.99x and 9.06x latency reductions on the Office31 and Office-Home datasets, respectively.
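The co-optimization relies on cheap predictors to score candidate (backbone, algorithm-parameter) pairs instead of training each one. A generic sketch of such a predictor-guided evolutionary search follows; the predictors, mutation operator, and selection scheme are assumed inputs, not the dissertation's actual implementation:

```python
import random

def evolutionary_search(init_pop, mutate, acc_predictor, lat_predictor,
                        latency_budget, generations=30, pop_size=50):
    """Predictor-guided evolutionary search over candidate
    (backbone config, algorithm parameters) pairs: keep the most
    accurate candidates whose predicted latency fits the budget,
    then mutate them to form the next generation."""
    pop = list(init_pop)
    for _ in range(generations):
        feasible = [c for c in pop if lat_predictor(c) <= latency_budget]
        feasible.sort(key=acc_predictor, reverse=True)
        # Fall back to the raw population if nothing fits the budget yet.
        parents = feasible[: pop_size // 2] or pop[: pop_size // 2]
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    # Prefer feasible candidates, then higher predicted accuracy.
    return max(pop, key=lambda c: (lat_predictor(c) <= latency_budget,
                                   acc_predictor(c)))
```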
The second method augments few-shot learning with supervised contrastive learning. We cannot use a pre-trained ImageNet model in the few-shot learning benchmark scenario because the benchmarks provide a base dataset for training the feature extractor from scratch. Instead, we strengthen the feature extractor with a supervised contrastive learning method. Combining supervised contrastive learning with information maximization and a prototype estimation technique, we report state-of-the-art accuracy with the method. Then, we translate the accuracy gain into a total runtime reduction by changing the feature extractor and applying early stopping. We achieve a 3.87x latency reduction in the transductive 5-way 5-shot learning scenario.
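For reference, a compact PyTorch sketch of a supervised contrastive loss in the style of Khosla et al. (2020) is shown below; the temperature and exact averaging are illustrative assumptions rather than the dissertation's precise formulation:

```python
import torch
import torch.nn.functional as F

def supcon_loss(feats, labels, temperature=0.1):
    """Supervised contrastive loss: features sharing a label are pulled
    together, all other features in the batch are pushed apart.
    feats: (N, D); labels: (N,)."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature  # (N, N) pairwise similarities
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim.masked_fill_(self_mask, float('-inf'))  # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Average log-probability over each anchor's positives.
    pos_sum = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    loss = -pos_sum / pos_mask.sum(dim=1).clamp(min=1)
    return loss[pos_mask.any(dim=1)].mean()  # anchors with >= 1 positive
```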
Our approach can be summarized as boosting accuracy followed by reducing latency. We first upgrade the feature extractor, either with a more advanced pre-trained ImageNet model or with supervised contrastive learning, to achieve state-of-the-art accuracy. Then, we optimize the method end to end with evolutionary search or early stopping to reduce latency. This two-stage approach of accuracy boosting followed by latency reduction is sufficient to achieve fast adaptation of deep learning vision applications with limited data on edge devices.
1. Introduction
2. Background
2.1 Dataset Size for Vision Applications
2.2 ImageNet Pre-trained Models
2.3 Augmentation Methods for ImageNet
2.4 Contrastive Learning
3. Problem Definitions and Solutions Overview
3.1 Problem Definitions
3.1.1 Unsupervised Domain Adaptation
3.1.2 Few-shot Learning
3.2 Solutions Overview
3.2.1 Co-optimization of Backbone Network and Parameter Selection in Unsupervised Domain Adaptation for Edge Device
3.2.2 Augmenting Few-Shot Learning with Supervised Contrastive Learning
4. Co-optimization of Backbone Network and Parameter Selection in Unsupervised Domain Adaptation for Edge Device
4.1 Introduction
4.2 Related Works
4.3 Methodology
4.3.1 Examining an Unsupervised Domain Adaptation Method
4.3.2 Boosting Accuracy with Pre-Trained ImageNet Models
4.3.3 Boosting Accuracy for Edge Device
4.3.4 Co-optimization of Backbone Network and Parameter Selection
4.4 Experiments
4.4.1 ImageNet and Unsupervised Domain Adaptation Accuracy
4.4.2 Accuracy with Once-For-All Network
4.4.3 Comparison with State-of-the-Art Results
4.4.4 Co-optimization for Edge Device
4.4.5 Pre-extraction of Source Feature
4.4.6 Results for Small Target Data Scenario
4.4.7 Results for Object Detection
4.4.8 Results for Classifier Fitting Using Intermediate Domain
4.4.9 Summary
4.5 Conclusion
5. Augmenting Few-Shot Learning with Supervised Contrastive Learning
5.1 Introduction
5.2 Related Works
5.3 Methodology
5.3.1 Examining a Few-shot Learning Method
5.3.2 Augmenting Few-shot Learning with Supervised Contrastive Learning
5.4 Experiments
5.4.1 Comparison to the State-of-the-Art
5.4.2 Ablation Study
5.4.3 Domain-Shift
5.4.4 Increasing the Number of Ways
5.4.5 Runtime Analysis
5.4.6 Limitations
5.5 Conclusion
6. Conclusion
MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation
Domain shift has been a long-standing issue for medical image segmentation.
Recently, unsupervised domain adaptation (UDA) methods have achieved promising
cross-modality segmentation performance by distilling knowledge from a
label-rich source domain to a target domain without labels. In this work, we
propose a multi-scale self-ensembling based UDA framework for automatic
segmentation of two key brain structures, i.e., the Vestibular Schwannoma (VS)
and the Cochlea, on high-resolution T2 images. First, a segmentation-enhanced
contrastive unpaired image translation module is designed for image-level
domain adaptation from source T1 to target T2. Next, multi-scale deep
supervision and consistency regularization are introduced to a mean teacher
network for self-ensemble learning to further close the domain gap.
Furthermore, self-training and intensity augmentation techniques are utilized
to mitigate label scarcity and boost cross-modality segmentation performance.
Our method demonstrates promising segmentation performance, with mean Dice
scores of 83.8% and 81.4% and average symmetric surface distances (ASSD) of
0.55 mm and 0.26 mm for the VS and Cochlea, respectively, in the validation
phase of the crossMoDA 2022 challenge. Comment: Accepted by BrainLes MICCAI
proceedings (5th-place solution for the MICCAI 2022 Cross-Modality Domain
Adaptation (crossMoDA) Challenge).
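The mean-teacher component described above maintains a teacher network as an exponential moving average (EMA) of the student and penalizes disagreement between their predictions. A minimal, generic PyTorch sketch of these two pieces follows; the decay value and MSE-based consistency are common choices assumed here, not necessarily the paper's exact multi-scale variant:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, ema_decay=0.99):
    """Mean-teacher update: teacher weights track an exponential
    moving average of the student weights."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(ema_decay).add_(s, alpha=1.0 - ema_decay)

def consistency_loss(student_logits, teacher_logits):
    """Consistency regularization: the student's prediction on a
    perturbed input should match the teacher's prediction."""
    return F.mse_loss(student_logits.softmax(dim=1),
                      teacher_logits.softmax(dim=1))
```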
Unsupervised Adaptation of Polyp Segmentation Models via Coarse-to-Fine Self-Supervision
Unsupervised Domain Adaptation (UDA) has attracted a surge of interest over
the past decade but remains difficult to deploy in real-world applications.
Considering the privacy-preservation issues and security concerns, in this
work, we study a practical problem of Source-Free Domain Adaptation (SFDA),
which eliminates the reliance on annotated source data. Current SFDA methods
focus on extracting domain knowledge from the source-trained model but neglect
the intrinsic structure of the target domain. Moreover, they typically utilize
pseudo labels for self-training in the target domain, but suffer from the
notorious error accumulation problem. To address these issues, we propose a new
SFDA framework, called Region-to-Pixel Adaptation Network (RPANet), which
learns the region-level and pixel-level discriminative representations through
coarse-to-fine self-supervision. The proposed RPANet consists of two modules,
Foreground-aware Contrastive Learning (FCL) and Confidence-Calibrated
Pseudo-Labeling (CCPL), which explicitly address the key challenges of "how to
distinguish" and "how to refine". To be specific, FCL introduces a
supervised contrastive learning paradigm in the region level to contrast
different region centroids across different target images, which efficiently
involves all pseudo labels while remaining robust to noisy samples. CCPL designs a novel
fusion strategy to reduce the overconfidence problem of pseudo labels by fusing
two different target predictions without introducing any additional network
modules. Extensive experiments on three cross-domain polyp segmentation tasks
reveal that RPANet significantly outperforms state-of-the-art SFDA and UDA
methods without access to source data, demonstrating the potential of SFDA in
medical applications. Comment: Accepted by IPMI 202
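CCPL's core idea, fusing two different target predictions to temper overconfident pseudo labels, can be illustrated with a simple sketch; plain averaging and a fixed confidence threshold stand in here for the paper's actual fusion strategy:

```python
import torch

def fuse_pseudo_labels(pred_a, pred_b, threshold=0.8):
    """Fuse two target-domain probability maps of shape (N, C, H, W)
    and keep pseudo labels only where the fused confidence is high;
    uncertain pixels get the ignore index -1."""
    fused = 0.5 * (pred_a + pred_b)        # average the two predictions
    confidence, labels = fused.max(dim=1)  # per-pixel confidence / class
    labels[confidence < threshold] = -1    # drop low-confidence pixels
    return labels
```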
- β¦