MeGA-CDA: Memory Guided Attention for Category-Aware Unsupervised Domain Adaptive Object Detection
Existing approaches for unsupervised domain adaptive object detection perform
feature alignment via adversarial training. While these methods achieve
reasonable improvements in performance, they typically perform
category-agnostic domain alignment, thereby resulting in negative transfer of
features. To overcome this issue, in this work, we attempt to incorporate
category information into the domain adaptation process by proposing Memory
Guided Attention for Category-Aware Domain Adaptation (MeGA-CDA). The proposed
method consists of employing category-wise discriminators to ensure
category-aware feature alignment for learning domain-invariant discriminative
features. However, since the category information is not available for the
target samples, we propose to generate memory-guided category-specific
attention maps which are then used to route the features appropriately to the
corresponding category discriminator. The proposed method is evaluated on
several benchmark datasets and is shown to outperform existing approaches.
Comment: Accepted to CVPR 202
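The memory-guided routing described above can be sketched in a few lines: category-specific attention maps, obtained by comparing features against per-category memory items, weight the features passed to each category discriminator. A minimal NumPy sketch under assumed shapes; the memory contents, dimensions, and softmax-over-categories attention are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_categories, d = 3, 8
memory = rng.normal(size=(num_categories, d))  # one memory item per category (assumed)
features = rng.normal(size=(5, d))             # unlabeled target features

# Attention map: similarity of each feature to each category memory,
# normalized over categories with a softmax
logits = features @ memory.T
logits -= logits.max(axis=1, keepdims=True)    # numerical stability
att = np.exp(logits)
att /= att.sum(axis=1, keepdims=True)

# Route features: the category-k discriminator sees features weighted
# by the k-th attention map
routed = [att[:, k:k + 1] * features for k in range(num_categories)]
```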
Deep Subdomain Adaptation Network for Image Classification
For a target task where labeled data is unavailable, domain adaptation can
transfer a learner from a different source domain. Previous deep domain
adaptation methods mainly learn a global domain shift, i.e., align the global
source and target distributions without considering the relationships between
two subdomains within the same category of different domains, leading to
unsatisfactory transfer learning performance because fine-grained information
is not captured. Recently, researchers have paid increasing attention to
Subdomain Adaptation, which focuses on accurately aligning the distributions
of the relevant subdomains. However, most such methods are adversarial,
involve several loss functions, and converge slowly. Motivated by this, we present
Deep Subdomain Adaptation Network (DSAN) which learns a transfer network by
aligning the relevant subdomain distributions of domain-specific layer
activations across different domains based on a local maximum mean discrepancy
(LMMD). Our DSAN is simple but effective: it does not need adversarial
training and converges quickly. The adaptation can be achieved easily with most
feed-forward network models by extending them with LMMD loss, which can be
trained efficiently via back-propagation. Experiments demonstrate that DSAN can
achieve remarkable results on both object recognition tasks and digit
classification tasks. Our code will be available at:
https://github.com/easezyc/deep-transfer-learning
Comment: published on TNNL
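The LMMD loss at the heart of DSAN can be sketched as a class-weighted MMD: source samples carry one-hot label weights, target samples carry soft pseudo-label weights, and a per-class MMD estimate is summed over classes. The single fixed-bandwidth RBF kernel below is a simplifying assumption for illustration.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # RBF kernel matrix between the rows of a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lmmd(xs, ys, xt, yt_prob, num_classes, gamma=1.0):
    """Local MMD sketch: per-class weighted MMD summed over classes.
    ys holds hard source labels; yt_prob holds soft target pseudo-labels."""
    ws = np.eye(num_classes)[ys]                           # one-hot source weights
    ws = ws / np.maximum(ws.sum(0, keepdims=True), 1e-8)   # normalize per class
    wt = yt_prob / np.maximum(yt_prob.sum(0, keepdims=True), 1e-8)
    Kss, Ktt, Kst = rbf(xs, xs, gamma), rbf(xt, xt, gamma), rbf(xs, xt, gamma)
    loss = 0.0
    for c in range(num_classes):
        loss += (ws[:, c] @ Kss @ ws[:, c]
                 + wt[:, c] @ Ktt @ wt[:, c]
                 - 2.0 * ws[:, c] @ Kst @ wt[:, c])
    return float(loss)
```

With identical source and target samples and matching labels the loss vanishes, and it grows as the class-conditional distributions drift apart.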
Deep Cocktail Network: Multi-source Unsupervised Domain Adaptation with Category Shift
Unsupervised domain adaptation (UDA) conventionally assumes labeled source
samples coming from a single underlying source distribution. Whereas in
practical scenario, labeled data are typically collected from diverse sources.
The multiple sources are different not only from the target but also from each
other; thus, the domain adapters should not be modeled in the same way. Moreover,
those sources may not completely share their categories, which further brings a
new transfer challenge called category shift. In this paper, we propose a deep
cocktail network (DCTN) to battle the domain and category shifts among multiple
sources. Motivated by the theoretical results in \cite{mansour2009domain},
which show that the target distribution can be represented as a weighted
combination of the source distributions, multi-source unsupervised domain
adaptation via DCTN is performed in two alternating steps: i) It deploys multi-way adversarial
learning to minimize the discrepancy between the target and each of the
multiple source domains, which also obtains the source-specific perplexity
scores to denote the possibilities that a target sample belongs to different
source domains. ii) The multi-source category classifiers are integrated with
the perplexity scores to classify target samples, and the pseudo-labeled target
samples together with source samples are utilized to update the multi-source
category classifier and the feature extractor. We evaluate DCTN on three domain
adaptation benchmarks, which clearly demonstrate the superiority of our framework.
Comment: Accepted for publication in Conference on Computer Vision and Pattern Recognition (CVPR), 201
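Step ii) above, integrating the multi-source category classifiers with the perplexity scores, amounts to a weighted mixture of per-source predictions; the interface below is hypothetical.

```python
import numpy as np

def dctn_predict(per_source_logits, perplexity_scores):
    """Combine per-source category predictions for one target sample,
    weighted by (hypothetical) source-specific perplexity scores.
    per_source_logits: (num_sources, num_classes); perplexity_scores: (num_sources,)."""
    z = per_source_logits - per_source_logits.max(axis=1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)        # per-source softmax
    w = perplexity_scores / perplexity_scores.sum()  # normalize weights over sources
    return w @ probs                                 # mixture over source classifiers
```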
Domain-Symmetric Networks for Adversarial Domain Adaptation
Unsupervised domain adaptation aims to learn a classifier for unlabeled
samples in the target domain, given training data of labeled samples in the
source domain. Impressive progress has been made recently by learning
invariant features via domain-adversarial training of deep networks. In spite
of the recent progress, domain adaptation is still limited in achieving the
invariance of feature distributions at a finer category level. To this end, we
propose in this paper a new domain adaptation method called Domain-Symmetric
Networks (SymNets). The proposed SymNet builds on a symmetric design of
source and target task classifiers, on top of which we also construct an
additional classifier that shares its layer neurons with both. To train the
SymNet, we propose a novel adversarial learning objective whose key design is
based on a two-level domain confusion scheme, where the category-level
confusion loss improves over the domain-level one by driving the learning of
intermediate network features to be invariant at the corresponding categories
of the two domains. Both domain discrimination and domain confusion are
implemented based on the constructed additional classifier. Since target
samples are unlabeled, we also propose a scheme of cross-domain training to
help learn the target classifier. Careful ablation studies show the efficacy of
our proposed method. In particular, based on commonly used base networks, our
SymNets achieve the new state of the art on three benchmark domain adaptation
datasets.
Comment: CVPR 2019 camera ready. Codes are available at:
https://github.com/YBZh/SymNet
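The two-level confusion scheme can be illustrated on a single labeled source sample: the constructed classifier has 2C outputs (C source-task neurons followed by C target-task neurons). A domain-level confusion term spreads probability mass evenly over the two halves, while the category-level term concentrates it on the same category in both halves. This is a simplified sketch; SymNets' actual objectives differ in detail.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def two_level_confusion(logits_2c, y, C):
    """Sketch of two-level domain confusion for a source sample with label y.
    logits_2c has 2*C entries: source-classifier then target-classifier neurons."""
    p = softmax(logits_2c)
    # Domain-level: push equal mass onto the source half and the target half
    domain_conf = -0.5 * (np.log(p[:C].sum()) + np.log(p[C:].sum()))
    # Category-level: push mass onto category y in *both* halves
    category_conf = -0.5 * (np.log(p[y]) + np.log(p[C + y]))
    return domain_conf, category_conf
```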
Progressive Feature Alignment for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) transfers knowledge from a label-rich
source domain to a fully-unlabeled target domain. To tackle this task, recent
approaches resort to discriminative domain transfer in virtue of pseudo-labels
to enforce the class-level distribution alignment across the source and target
domains. These methods, however, are vulnerable to the error accumulation and
thus incapable of preserving cross-domain category consistency, as the
pseudo-labeling accuracy is not guaranteed explicitly. In this paper, we
propose the Progressive Feature Alignment Network (PFAN) to align the
discriminative features across domains progressively and effectively, by
exploiting the intra-class variation in the target domain. To be specific, we
first develop an Easy-to-Hard Transfer Strategy (EHTS) and an Adaptive
Prototype Alignment (APA) step to train our model iteratively and alternately.
Moreover, upon observing that good domain adaptation usually requires a
non-saturated source classifier, we consider a simple yet efficient way to
retard the convergence of the source classification loss by further
introducing a temperature variable into the softmax function. The
extensive experimental results reveal that the proposed PFAN exceeds the
state-of-the-art performance on three UDA datasets.
Comment: Accepted by CVPR 201
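The temperature trick mentioned above can be made concrete: dividing the logits by a temperature T > 1 before the softmax softens the output distribution, which in turn slows the convergence of the source classification loss. A minimal sketch:

```python
import numpy as np

def tempered_softmax(logits, T=2.0):
    """Softmax with temperature T; T > 1 flattens the distribution,
    retarding convergence of the classification loss."""
    z = (logits - logits.max()) / T   # max-subtraction for numerical stability
    e = np.exp(z)
    return e / e.sum()
```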
Unsupervised Open Domain Recognition by Semantic Discrepancy Minimization
We address the unsupervised open domain recognition (UODR) problem, where the
categories in the labeled source domain S are only a subset of those in the
unlabeled target domain T. The task is to correctly classify all samples in T, including
known and unknown categories. UODR is challenging due to the domain
discrepancy, which becomes even harder to bridge when a large number of unknown
categories exist in T. Moreover, the classification rules propagated by a
graph convolutional network (GCN) may be distracted by unknown categories and
lack generalization capability. To measure the domain discrepancy under the
asymmetric label spaces of S and T, we propose Semantic-Guided Matching Discrepancy (SGMD), which
first employs instance matching between S and T, and then the discrepancy is
measured by a weighted feature distance between matched instances. We further
design a limited balance constraint to achieve a more balanced classification
output on known and unknown categories. We develop Unsupervised Open Domain
Transfer Network (UODTN), which learns both the backbone classification network
and GCN jointly by reducing the SGMD, enforcing the limited balance constraint
and minimizing the classification loss on S. UODTN better preserves the
semantic structure and enforces the consistency between the learned domain
invariant visual features and the semantic embeddings. Experimental results
show superiority of our method on recognizing images of both known and unknown
categories.
Comment: Accepted to CVPR 2019, 10 pages, 4 figures
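SGMD as summarized above — instance matching between S and T followed by a weighted feature distance over the matches — can be sketched as follows; nearest-neighbor matching and the per-instance semantic weights are illustrative stand-ins for the paper's components.

```python
import numpy as np

def sgmd(feat_s, feat_t, sem_weight):
    """Semantic-Guided Matching Discrepancy sketch: match each target
    instance to its nearest source instance, then average a semantically
    weighted feature distance over the matched pairs."""
    d = np.linalg.norm(feat_t[:, None, :] - feat_s[None, :, :], axis=-1)
    match = d.argmin(axis=1)                                   # nearest-source match
    return float((sem_weight * d[np.arange(len(feat_t)), match]).mean())
```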
Transfer Adaptation Learning: A Decade Survey
The world we see is ever-changing and it always changes with people, things,
and the environment. A domain refers to the state of the world at a certain
moment. A research problem is characterized as transfer adaptation
learning (TAL) when it needs knowledge correspondence between different
moments/domains. Conventional machine learning aims to find a model with the
minimum expected risk on test data by minimizing the regularized empirical risk
on the training data, which, however, supposes that the training and test data
share a similar joint probability distribution. TAL aims to build models that
can perform tasks in a target domain by learning knowledge from a semantically
related but distributionally different source domain. It is an energetic
research field of increasing influence and importance, with a rapidly growing
publication trend. This paper surveys the advances of TAL methodologies in the
past decade, and discusses the technical challenges and essential problems of
TAL with deep insights and new perspectives. Broader solutions of
transfer adaptation learning created by researchers are identified, i.e.,
instance re-weighting adaptation, feature adaptation, classifier adaptation,
deep network adaptation, and adversarial adaptation, which go beyond the early
semi-supervised and unsupervised split. The survey helps researchers rapidly
but comprehensively understand the research foundation, research status,
theoretical limitations, future challenges, and under-studied issues
(universality, interpretability, and credibility) that must be addressed to
move the field toward universal representation and safe applications in
open-world scenarios.
Comment: 26 pages, 4 figures
Adversarial Transfer Learning for Cross-domain Visual Recognition
In many practical visual recognition scenarios, feature distribution in the
source domain is generally different from that of the target domain, which
results in the emergence of general cross-domain visual recognition problems.
To address the problems of visual domain mismatch, we propose a novel
semi-supervised adversarial transfer learning approach, which is called Coupled
adversarial transfer Domain Adaptation (CatDA), for distribution alignment
between two domains. The proposed CatDA approach is inspired by cycleGAN, but
leveraging multiple shallow multilayer perceptrons (MLPs) instead of deep
networks. Specifically, our CatDA comprises two symmetric, slim sub-networks,
which together form the coupled adversarial learning framework. With this
symmetry between the two generators, input data from the source/target domain
can be fed into the MLP network for target/source domain generation,
supervised by two coupled, confrontation-oriented discriminators.
Notably, in order to avoid the critical flaw of an overly high-capacity
feature extraction function during domain-adversarial training, a
domain-specific loss and a domain knowledge fidelity loss are proposed in each
generator, so that the effectiveness of the proposed transfer network is
guaranteed. Additionally,
the essential difference from cycleGAN is that our method aims to generate
domain-agnostic and aligned features for domain adaptation and transfer
learning rather than to synthesize realistic images. We show experimentally
on a number of benchmark datasets that the proposed approach achieves
competitive performance against state-of-the-art domain adaptation and
transfer learning approaches.
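The coupled, cycleGAN-inspired design with shallow MLPs can be caricatured in a few lines: two slim generators map features across domains, and cycling back to the source side yields a reconstruction error in the spirit of the knowledge-fidelity loss. All dimensions and weights below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Two symmetric one-layer MLP "generators" (slim, per the coupled design;
# random weights stand in for trained ones)
W_s2t = rng.normal(size=(d, d)) * 0.1
W_t2s = rng.normal(size=(d, d)) * 0.1

def gen(x, W):
    return np.tanh(x @ W)            # shallow MLP generator

x_s = rng.normal(size=(2, d))        # source-domain features
fake_t = gen(x_s, W_s2t)             # source -> target-style features
recon_s = gen(fake_t, W_t2s)         # cycled back to the source side
cycle_loss = float(((recon_s - x_s) ** 2).mean())  # fidelity-style penalty
```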
Adversarial Domain Adaptation Being Aware of Class Relationships
Adversarial training is a useful approach to promote the learning of
transferable representations across the source and target domains, which has
been widely applied for domain adaptation (DA) tasks based on deep neural
networks. Until very recently, existing adversarial domain adaptation (ADA)
methods ignored the useful information in the label space, which is an
important factor accounting for the complicated data distributions associated
with different semantic classes. In particular, the inter-class semantic
relationships have rarely been considered and discussed in the current work on
transfer learning. In this paper, we propose a novel relationship-aware
adversarial domain adaptation (RADA) algorithm, which first utilizes a single
multi-class domain discriminator to enforce the learning of inter-class
dependency structure during domain-adversarial training and then aligns this
structure with the inter-class dependencies that are characterized from
training the label predictor on source domain. Specifically, we impose a
regularization term to penalize the structure discrepancy between the
inter-class dependencies respectively estimated from domain discriminator and
label predictor. Through this alignment, our proposed method makes the
adversarial domain adaptation aware of the class relationships. Empirical
studies show that the incorporation of class relationships significantly
improves the performance on benchmark datasets.
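One plausible instantiation of the proposed regularization term: estimate an inter-class dependency matrix from both the multi-class domain discriminator and the label predictor (here, cosine similarities between class weight vectors, an assumption for illustration) and penalize their squared Frobenius discrepancy.

```python
import numpy as np

def class_dependency(weights):
    # Inter-class dependency as cosine similarity between class weight vectors
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return w @ w.T

def rada_regularizer(disc_weights, pred_weights):
    """Penalize the structure discrepancy between the dependency matrices
    estimated from the domain discriminator and the label predictor."""
    diff = class_dependency(disc_weights) - class_dependency(pred_weights)
    return float((diff ** 2).sum())
```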
Domain Adversarial Reinforcement Learning for Partial Domain Adaptation
Partial domain adaptation aims to transfer knowledge from a label-rich source
domain to a label-scarce target domain, relaxing the fully-shared label-space
assumption across different domains. In this more general and practical
scenario, a major challenge is how to select source instances in the shared
classes across different domains for positive transfer. To address this issue,
we propose a Domain Adversarial Reinforcement Learning (DARL) framework to
automatically select source instances in the shared classes for circumventing
negative transfer as well as to simultaneously learn transferable features
between domains by reducing the domain shift. Specifically, in this framework,
we employ deep Q-learning to learn policies for an agent to make selection
decisions by approximating the action-value function. Moreover, domain
adversarial learning is introduced to learn domain-invariant features for the
selected source instances by the agent and the target instances, and also to
determine rewards for the agent based on how relevant the selected source
instances are to the target domain. Experiments on several benchmark datasets
demonstrate the superior performance of our DARL method over existing
state-of-the-art methods for partial domain adaptation.
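The selection mechanism can be caricatured with tabular Q-learning in place of DARL's deep Q-network: each source instance is a state, the agent selects or discards it, and a relevance-based reward steers it toward instances likely to transfer positively. The rewards and dynamics here are toy assumptions.

```python
import numpy as np

# Tabular Q-learning toy: state = source-instance index,
# actions = {0: discard, 1: select}; reward stands in for target relevance.
rng = np.random.default_rng(0)
num_instances, alpha, gamma = 4, 0.5, 0.9
relevance = np.array([1.0, -1.0, 1.0, -1.0])   # assumed per-instance rewards
Q = np.zeros((num_instances, 2))

for _ in range(500):
    s = rng.integers(num_instances)
    a = rng.integers(2)                        # explore uniformly at random
    r = relevance[s] if a == 1 else 0.0        # reward only when selecting
    s2 = (s + 1) % num_instances               # move to the next instance
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

selected = np.flatnonzero(Q[:, 1] > Q[:, 0])   # agent keeps relevant instances
```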