854 research outputs found
Adversarial Domain Adaptation with Domain Mixup
Recent works on domain adaptation reveal the effectiveness of adversarial
learning in bridging the discrepancy between source and target domains. However,
two common limitations exist in current adversarial-learning-based methods.
First, samples from the two domains alone are not sufficient to ensure
domain invariance over most of the latent space. Second, the domain
discriminator in these methods can only judge samples as real or fake under the
guidance of hard labels, whereas it is more reasonable to use soft scores to
evaluate the generated images or features, i.e., to fully utilize the
inter-domain information. In this paper, we present adversarial domain
adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a
more continuous latent space and guides the domain discriminator in judging
a sample's relative difference from the source and target domains. Domain mixup
is conducted jointly at the pixel and feature levels to improve the robustness
of the models. Extensive experiments show that the proposed approach achieves
superior performance on tasks with various degrees of domain shift and data
complexity.
Comment: Accepted as oral presentation at the 34th AAAI Conference on Artificial Intelligence, 2020
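The mixup operation at the heart of the abstract above can be sketched in a few lines. This is a generic illustration, not the paper's exact implementation: `alpha` and the use of the mixing ratio as a soft domain label are standard mixup conventions assumed here.

```python
import random

def domain_mixup(x_source, x_target, alpha=2.0):
    """Convexly combine one source and one target sample (pixel- or
    feature-level vectors). The mixing ratio lam is drawn from a
    Beta(alpha, alpha) distribution, as is standard for mixup, and can
    double as a soft domain label for the discriminator."""
    lam = random.betavariate(alpha, alpha)
    mixed = [lam * a + (1.0 - lam) * b for a, b in zip(x_source, x_target)]
    return mixed, lam
```

Because `lam` lies in [0, 1], each mixed coordinate stays between the corresponding source and target values, populating the latent space between the two domains.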
Unsupervised Domain Adaptation for COVID-19 Information Service with Contrastive Adversarial Domain Mixup
In the real-world application of COVID-19 misinformation detection, a
fundamental challenge is the lack of labeled COVID-19 data to enable
supervised end-to-end training of models, especially at the early stage of
the pandemic. To address this challenge, we propose an unsupervised domain
adaptation framework using contrastive learning and adversarial domain mixup to
transfer the knowledge from an existing source data domain to the target
COVID-19 data domain. In particular, to bridge the gap between the source
domain and the target domain, our method reduces a radial basis function (RBF)
based discrepancy between these two domains. Moreover, we leverage the power of
domain adversarial examples to establish an intermediate domain mixup, where
the latent representations of the input text from both domains can be mixed
during the training process. Extensive experiments on multiple real-world
datasets suggest that our method can effectively adapt misinformation detection
systems to the unseen COVID-19 target domain with significant improvements
compared to state-of-the-art baselines.
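One common RBF-based discrepancy between two domains is the maximum mean discrepancy (MMD) under a Gaussian kernel. The sketch below assumes that interpretation of the abstract's "RBF-based discrepancy"; the paper's exact estimator may differ.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two feature vectors.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    two sample sets under an RBF kernel; it is zero when the two sets'
    kernel mean embeddings coincide, and grows as the domains diverge."""
    def k_sum(xs, ys):
        return sum(rbf_kernel(x, y, gamma) for x in xs for y in ys)
    m, n = len(source), len(target)
    return (k_sum(source, source) / m ** 2
            + k_sum(target, target) / n ** 2
            - 2.0 * k_sum(source, target) / (m * n))
```

Minimizing such a term over the feature extractor pulls the source and target representations toward a shared distribution.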
Semi-Supervised Learning by Augmented Distribution Alignment
In this work, we propose a simple yet effective semi-supervised learning
approach called Augmented Distribution Alignment. We reveal that an essential
sampling bias exists in semi-supervised learning due to the limited number of
labeled samples, which often leads to a considerable empirical distribution
mismatch between the labeled and unlabeled data. To address this, we propose to
align the empirical distributions of labeled and unlabeled data to alleviate
the bias. On one hand, we adopt an adversarial training strategy to minimize
the distribution distance between labeled and unlabeled data, inspired by
domain adaptation methods. On the other hand, to deal with the small sample size
issue of labeled data, we also propose a simple interpolation strategy to
generate pseudo training samples. Those two strategies can be easily
implemented into existing deep neural networks. We demonstrate the
effectiveness of our proposed approach on the benchmark SVHN and CIFAR10
datasets. Our code is available at \url{https://github.com/qinenergy/adanet}.
Comment: To appear in ICCV 2019
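The "simple interpolation strategy to generate pseudo training samples" described above can be sketched as interpolating both the inputs and their one-hot labels; the function name and Beta-distributed mixing ratio are assumptions for illustration, not the paper's exact recipe.

```python
import random

def interpolate_pair(x1, y1, x2, y2, alpha=1.0):
    """Generate one pseudo training sample from two labeled examples by
    taking the same convex combination of the inputs and of their
    one-hot label vectors, enlarging a small labeled set."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

Since the label mix uses the same ratio as the input mix, the pseudo label remains a valid probability vector whenever the two inputs carry one-hot labels.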
Deep Domain Fusion for Adaptive Image Classification
Endowing machines with the ability to understand digital images is a critical task for a host of high-impact applications, including pathology detection in radiographic imaging, autonomous vehicles, and assistive technology for the visually impaired. Computer vision systems rely on large corpora of annotated data to train task-specific visual recognition models. Despite significant advances made over the past decade, the fact remains that collecting and annotating the data needed to successfully train a model is a prohibitively expensive endeavor. Moreover, these models are prone to rapid performance degradation when applied to data sampled from a different domain. Recent works in the development of deep adaptation networks seek to overcome these challenges by facilitating transfer learning between source and target domains. In parallel, the unification of dominant semi-supervised learning techniques has illustrated unprecedented potential for utilizing unlabeled data to train classification models despite discouragingly meager sets of annotated data.
In this thesis, a novel domain adaptation algorithm -- Domain Adaptive Fusion (DAF) -- is proposed, which encourages a domain-invariant linear relationship between the pixel space of different domains and the prediction space while training under a domain adversarial signal. The thoughtful combination of key components from unsupervised domain adaptation and semi-supervised learning enables DAF to effectively bridge the gap between source and target domains. Experiments performed on computer vision benchmark datasets for domain adaptation endorse the efficacy of this hybrid approach, outperforming all of the baseline architectures on most of the transfer tasks.
Dissertation/Thesis. Masters Thesis, Computer Science, 201
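A "domain-invariant linear relationship between the pixel-space and the prediction-space" is commonly enforced as a mixup consistency penalty: the prediction on a mixed input should equal the same mix of the individual predictions. The loss form below is a sketch of that idea under assumed names, not the thesis's exact objective.

```python
def mixup_consistency_loss(f, x1, x2, lam):
    """Squared deviation between the classifier's prediction on a mixed
    input and the matching linear mix of its predictions on the two
    originals; f maps a feature vector to a prediction vector."""
    x_mix = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    p_mix = f(x_mix)
    p_lin = [lam * a + (1.0 - lam) * b for a, b in zip(f(x1), f(x2))]
    return sum((p - q) ** 2 for p, q in zip(p_mix, p_lin))
```

Any classifier that is already linear between the two inputs incurs zero penalty, so minimizing this term pushes the model toward the desired linear behavior across domains.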
Transfer Learning with Optimal Transportation and Frequency Mixup for EEG-based Motor Imagery Recognition
Peer reviewed. Publisher PDF