87 research outputs found
SODA: Detecting Covid-19 in Chest X-rays with Semi-supervised Open Set Domain Adaptation
Due to the shortage of COVID-19 viral testing kits and the long waiting time,
radiology imaging is used to complement the screening process and triage
patients into different risk levels. Deep learning based methods have taken an
active role in automatically detecting COVID-19 disease in chest x-ray images,
as witnessed in many recent works in early 2020. Most of these works first
train a Convolutional Neural Network (CNN) on an existing large-scale chest
x-ray image dataset and then fine-tune it with a COVID-19 dataset at a much
smaller scale. However, direct transfer across datasets from different domains
may lead to poor CNN performance due to two issues: the large domain shift
present in biomedical imaging datasets and the extremely small scale of the
COVID-19 chest x-ray dataset. In an attempt to address these two important
issues, we formulate the problem of COVID-19 chest x-ray image classification
in a semi-supervised open set domain adaptation setting and propose a novel
domain adaptation method, Semi-supervised Open set Domain Adversarial network
(SODA). SODA is able to align the data distributions across different domains
in a general domain space and also in a common subspace of source and target
data. In our experiments, SODA achieves a leading classification performance
compared with recent state-of-the-art models in separating COVID-19 from common
pneumonia. We also present initial results showing that SODA can produce better
pathology localizations in the chest x-rays.
Comment: BIOKDD 2020: 19th International Workshop on Data Mining in Bioinformatics
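To make the adversarial alignment concrete, here is a minimal PyTorch sketch of the gradient reversal mechanism that domain-adversarial networks such as SODA build on; the tensor shapes, layer sizes, and two-layer heads are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 256), nn.ReLU())
domain_discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

x_src = torch.randn(8, 1, 224, 224)  # source chest x-rays (dummy tensors)
x_tgt = torch.randn(8, 1, 224, 224)  # target COVID-19 x-rays (dummy tensors)

feats = feature_extractor(torch.cat([x_src, x_tgt]))
domain_logits = domain_discriminator(GradReverse.apply(feats, 1.0))
domain_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
adv_loss = nn.CrossEntropyLoss()(domain_logits, domain_labels)
adv_loss.backward()  # the discriminator learns the domains; the extractor un-learns them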
Selective Transfer with Reinforced Transfer Network for Partial Domain Adaptation
One crucial aspect of partial domain adaptation (PDA) is how to select the
relevant source samples in the shared classes for knowledge transfer. Previous
PDA methods tackle this problem by re-weighting the source samples based on
their high-level information (deep features). However, owing to the domain shift
between the source and target domains, using only deep features for sample
selection is unreliable. We argue that it is more reasonable to additionally
exploit pixel-level information for the PDA problem, as the appearance
difference between outlier source classes and target classes is significantly
large. In this paper, we propose a reinforced transfer network (RTNet), which
utilizes both high-level and pixel-level information for the PDA problem. Our RTNet
is composed of a reinforced data selector (RDS) based on reinforcement learning
(RL), which filters out the outlier source samples, and a domain adaptation
model which minimizes the domain discrepancy in the shared label space.
Specifically, in the RDS, we design a novel reward based on the reconstruction
errors of selected source samples on the target generator, which introduces
pixel-level information to guide the learning of the RDS. In addition, we develop a
state representation containing high-level information, which is used by the RDS for sample
selection. The proposed RDS is a general module, which can be easily integrated
into existing DA models to make them fit the PDA situation. Extensive
experiments indicate that RTNet can achieve state-of-the-art performance for
PDA tasks on several benchmark datasets.
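The reward idea can be sketched in a few lines of PyTorch: source samples that a target-trained generator reconstructs well look target-like at the pixel level and earn a higher reward. The autoencoder below and the median-based selection are illustrative stand-ins, not RTNet's actual generator or its RL policy.

import torch
import torch.nn as nn

# Toy "target generator": an autoencoder assumed to be pre-trained on target images.
target_generator = nn.Sequential(nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 32 * 32))

def selection_reward(source_batch: torch.Tensor) -> torch.Tensor:
    """Per-sample reward = negative pixel-level reconstruction error."""
    flat = source_batch.flatten(1)
    recon = target_generator(flat)
    recon_error = ((recon - flat) ** 2).mean(dim=1)  # MSE per sample
    return -recon_error  # low error on the target generator -> high reward

source_batch = torch.randn(16, 1, 32, 32)
rewards = selection_reward(source_batch)
keep = rewards > rewards.median()  # the learned RL selector replaces this heuristic
print(f"kept {int(keep.sum())} of {len(rewards)} source samples")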
TWINs: Two Weighted Inconsistency-reduced Networks for Partial Domain Adaptation
Unsupervised domain adaptation aims to transfer the
knowledge of a label-rich domain (source domain) to a label-scarce domain
(target domain). Matching feature distributions between different domains is a
widely applied method for the aforementioned task. However, the method does not
perform well when classes in the two domains are not identical. Specifically,
when the classes of the target correspond to a subset of those of the source,
target samples can be incorrectly aligned with the classes that exist only in
the source. This problem setting is termed partial domain adaptation (PDA).
In this study, we propose a novel method called Two Weighted
Inconsistency-reduced Networks (TWINs) for PDA. We utilize two classification
networks to estimate the ratio of target samples in each class, with which the
classification loss is weighted to adapt to the classes present in the target
domain. Furthermore, to extract discriminative features for the target, we
propose to minimize the divergence between domains measured by the classifiers'
inconsistency on target samples. We empirically demonstrate that reducing the
inconsistency between the two networks is effective for PDA and that our method
outperforms existing methods by a large margin on several datasets.
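Both ingredients fit in a short PyTorch sketch: per-class target ratios estimated from the two classifiers' averaged softmax outputs weight the source classification loss, and an L1 inconsistency term compares the classifiers on target samples. The shapes, the L1 choice, and the max-normalization are assumptions for illustration.

import torch
import torch.nn.functional as F

num_classes = 10
logits1_tgt = torch.randn(32, num_classes)  # classifier 1 on a target batch (dummy)
logits2_tgt = torch.randn(32, num_classes)  # classifier 2 on the same batch (dummy)
p1, p2 = F.softmax(logits1_tgt, dim=1), F.softmax(logits2_tgt, dim=1)

# (1) Estimated ratio of target samples per class; classes absent from the
# target get small weights, so they barely contribute to the source loss.
class_weights = ((p1 + p2) / 2).mean(dim=0)          # shape: (num_classes,)
class_weights = class_weights / class_weights.max()  # normalize to [0, 1]

logits1_src = torch.randn(32, num_classes)
labels_src = torch.randint(0, num_classes, (32,))
weighted_cls_loss = F.cross_entropy(logits1_src, labels_src, weight=class_weights)

# (2) Inconsistency between the two classifiers on target samples (L1 distance).
inconsistency_loss = (p1 - p2).abs().mean()
total_loss = weighted_cls_loss + inconsistency_loss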
Learning to Transfer Examples for Partial Domain Adaptation
Domain adaptation is critical for learning in new and unseen environments.
With domain adversarial training, deep networks can learn disentangled and
transferable features that effectively diminish the dataset shift between the
source and target domains for knowledge transfer. In the era of Big Data, the
ready availability of large-scale labeled datasets has stimulated wide interest
in partial domain adaptation (PDA), which transfers a recognizer from a labeled
large domain to an unlabeled small domain. It extends standard domain
adaptation to the scenario where target labels are only a subset of source
labels. Under the condition that target labels are unknown, the key challenge
of PDA is how to transfer relevant examples in the shared classes to promote
positive transfer, and ignore irrelevant ones in the specific classes to
mitigate negative transfer. In this work, we propose a unified approach to PDA,
Example Transfer Network (ETN), which jointly learns domain-invariant
representations across the source and target domains, and a progressive
weighting scheme that quantifies the transferability of source examples while
controlling their importance to the learning task in the target domain. A
thorough evaluation on several benchmark datasets shows that our approach
achieves state-of-the-art results for partial domain adaptation tasks.
Comment: CVPR 2019 accepted
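In the spirit of that weighting scheme, the sketch below scores each source example with an auxiliary discriminator and uses the detached, normalized score to down-weight likely outlier-class examples in the classification loss; the discriminator and the normalization are assumptions, not ETN's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 256, 31
aux_discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())

src_feats = torch.randn(32, feat_dim)       # source features (dummy)
src_logits = torch.randn(32, num_classes)   # source classifier outputs (dummy)
src_labels = torch.randint(0, num_classes, (32,))

# Transferability: how target-like each source example looks to the discriminator.
w = aux_discriminator(src_feats).squeeze(1).detach()
w = w / w.mean()  # keep the overall loss scale stable

per_example_loss = F.cross_entropy(src_logits, src_labels, reduction="none")
weighted_src_loss = (w * per_example_loss).mean()  # outlier classes contribute less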
Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation
Partial domain adaptation (PDA) has attracted increasing attention as it deals with
a realistic and challenging problem in which the source domain label space
subsumes that of the target domain. Most conventional domain adaptation (DA) efforts
concentrate on learning domain-invariant features to mitigate the distribution
disparity across domains. For PDA, however, it is crucial to explicitly alleviate
the negative influence caused by the irrelevant source domain categories.
In this work, we propose an Adaptively-Accumulated Knowledge Transfer framework
(AKT) to align the relevant categories across two domains for effective
domain adaptation. Specifically, an adaptively-accumulated mechanism is
explored to gradually pick out the most confident target samples and their
corresponding source categories, promoting positive transfer with more
knowledge across two domains. Moreover, a dual distinct classifier architecture
consisting of a prototype classifier and a multilayer perceptron classifier is
built to capture intrinsic data distribution knowledge across domains from
various perspectives. By maximizing the inter-class center-wise discrepancy and
the intra-class sample-wise compactness, the proposed model is able
to obtain more domain-invariant and task-specific discriminative
representations of the shared-category data. Comprehensive experiments on
several partial domain adaptation benchmarks demonstrate the effectiveness of
our proposed model compared with state-of-the-art PDA methods.
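Two of the described pieces admit a compact sketch: a prototype classifier scoring target features by cosine similarity to class prototypes, and a confidence filter that accumulates the most confident target samples together with the source categories they vote for. The temperature and threshold below are illustrative assumptions.

import torch
import torch.nn.functional as F

num_classes, feat_dim = 20, 128
prototypes = torch.randn(num_classes, feat_dim)  # class centers, e.g. from source data
tgt_feats = torch.randn(64, feat_dim)            # target features (dummy)

# Prototype classifier: cosine similarity to each class prototype.
sims = F.normalize(tgt_feats, dim=1) @ F.normalize(prototypes, dim=1).T
probs = F.softmax(sims / 0.1, dim=1)  # temperature 0.1 is an assumption

conf, pred = probs.max(dim=1)
threshold = 0.8                       # would be adapted over training
confident = conf > threshold
shared_categories = pred[confident].unique()  # source classes the target votes for
print(f"{int(confident.sum())} confident samples, "
      f"{len(shared_categories)} candidate shared classes")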
Class Conditional Alignment for Partial Domain Adaptation
Adversarial adaptation models have demonstrated significant progress towards
transferring knowledge from a labeled source dataset to an unlabeled target
dataset. Partial domain adaptation (PDA) investigates the scenarios in which
the source domain is large and diverse, and the target label space is a subset
of the source label space. The main purpose of PDA is to identify the shared
classes between the domains and promote learning transferable knowledge from
these classes. In this paper, we propose a multi-class adversarial architecture
for PDA. The proposed approach jointly aligns the marginal and
class-conditional distributions in the shared label space by minimaxing a novel
multi-class adversarial loss function. Furthermore, we incorporate effective
regularization terms to encourage selecting the most relevant subset of source
domain classes. In the absence of target labels, the proposed approach is able
to effectively learn domain-invariant feature representations, which in turn
can enhance the classification performance in the target domain. Comprehensive
experiments on three benchmark datasets, Office-31, Office-Home, and
Caltech-Office, corroborate the effectiveness of the proposed approach in
addressing different partial transfer learning tasks.
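One common way to realize a multi-class adversarial loss, sketched here as an assumption rather than the paper's exact formulation, is a discriminator with 2K outputs: class k from the source gets label k and (pseudo-labeled) class k from the target gets label K + k, so fooling the discriminator aligns the distributions per class rather than only marginally.

import torch
import torch.nn as nn
import torch.nn.functional as F

K, feat_dim = 10, 256
discriminator = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                              nn.Linear(128, 2 * K))

src_feats, tgt_feats = torch.randn(32, feat_dim), torch.randn(32, feat_dim)
src_labels = torch.randint(0, K, (32,))
tgt_pseudo = torch.randint(0, K, (32,))  # pseudo labels stand in for unknown target labels

logits = discriminator(torch.cat([src_feats, tgt_feats]))
dom_cls_labels = torch.cat([src_labels, tgt_pseudo + K])
d_loss = F.cross_entropy(logits, dom_cls_labels)  # the discriminator minimizes this
g_loss = -d_loss  # the feature extractor maximizes it (the minimax game)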
Adversarial Domain Adaptation Being Aware of Class Relationships
Adversarial training is a useful approach to promote the learning of
transferable representations across the source and target domains, which has
been widely applied for domain adaptation (DA) tasks based on deep neural
networks. Until very recently, existing adversarial domain adaptation (ADA)
methods have ignored the useful information in the label space, which is an
important factor accounting for the complicated data distributions associated
with different semantic classes. In particular, the inter-class semantic
relationships have rarely been considered or discussed in current work on
transfer learning. In this paper, we propose a novel relationship-aware
adversarial domain adaptation (RADA) algorithm, which first utilizes a single
multi-class domain discriminator to enforce the learning of inter-class
dependency structure during domain-adversarial training and then aligns this
structure with the inter-class dependencies characterized by
training the label predictor on the source domain. Specifically, we impose a
regularization term to penalize the structural discrepancy between the
inter-class dependencies respectively estimated from the domain discriminator and
the label predictor. Through this alignment, our proposed method makes the
adversarial domain adaptation aware of the class relationships. Empirical
studies show that incorporating class relationships significantly
improves performance on benchmark datasets.
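The regularizer can be pictured with a short sketch: build an inter-class dependency matrix from each head's soft predictions and penalize how far apart the two structures are. Using prediction co-activations and a squared Frobenius-norm penalty is one reasonable instantiation assumed here, not necessarily RADA's exact estimator.

import torch
import torch.nn.functional as F

num_classes = 12
disc_logits = torch.randn(64, num_classes)  # multi-class domain discriminator head
pred_logits = torch.randn(64, num_classes)  # label predictor head (same batch)

def class_dependency(logits: torch.Tensor) -> torch.Tensor:
    """Class-by-class co-activation structure of a batch of soft predictions."""
    p = F.softmax(logits, dim=1)
    m = p.T @ p          # (num_classes, num_classes) co-activation matrix
    return m / m.norm()  # scale-free comparison between the two heads

structure_gap = (class_dependency(disc_logits)
                 - class_dependency(pred_logits)).pow(2).sum()  # squared Frobenius norm
regularizer = 0.1 * structure_gap  # the regularization weight is an assumption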
Tackling Partial Domain Adaptation with Self-Supervision
Domain adaptation approaches have shown promising results in reducing the
marginal distribution difference among visual domains. They allow training
reliable models that work over datasets of a different nature (photos, paintings,
etc.), but they still struggle when the domains do not share an identical label
space. In the partial domain adaptation setting, where the target covers only a
subset of the source classes, it is challenging to reduce the domain gap
without incurring negative transfer. Many solutions simply retain standard
domain adaptation techniques and add heuristic sample weighting strategies.
In this work we show how the self-supervisory signal obtained from the spatial
co-location of patches can be used to define a side task that supports
adaptation regardless of the exact label sharing condition across domains. We
build on a recent work that introduced a jigsaw puzzle task for domain
generalization: we describe how to reformulate this approach for partial domain
adaptation and we show how it boosts existing adaptive solutions when combined
with them. The experimental results obtained on three datasets support the
effectiveness of our approach.
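The side task itself is simple to sketch: split each image into a 2x2 grid of patches, shuffle them by one of a fixed set of permutations, and train an auxiliary head to recognize which permutation was applied. The grid size, permutation set, and stand-in head below are illustrative choices, not the paper's configuration.

import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

perms = list(itertools.permutations(range(4)))  # all 24 orderings of 4 patches

def jigsaw(img: torch.Tensor, perm_idx: int) -> torch.Tensor:
    """Reassemble a (C, H, W) image with its 2x2 patches permuted."""
    c, h, w = img.shape
    patches = [img[:, i * h // 2:(i + 1) * h // 2, j * w // 2:(j + 1) * w // 2]
               for i in range(2) for j in range(2)]
    p = [patches[k] for k in perms[perm_idx]]
    return torch.cat([torch.cat(p[:2], dim=2), torch.cat(p[2:], dim=2)], dim=1)

img = torch.randn(3, 64, 64)
perm_idx = torch.randint(0, len(perms), (1,)).item()
shuffled = jigsaw(img, perm_idx)

# Auxiliary permutation classifier on top of shared features (stand-in network).
aux_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, len(perms)))
jigsaw_loss = F.cross_entropy(aux_head(shuffled.unsqueeze(0)),
                              torch.tensor([perm_idx]))  # self-supervised signal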
Trained Model Fusion for Object Detection using Gating Network
The major approaches to transfer learning in computer vision have tried to
adapt the source domain to the target domain one-to-one. However, this scenario
is difficult to apply to real applications such as video surveillance systems.
Because those systems have many cameras installed at each location, each regarded as
a source domain, it is difficult to identify the proper source domain. In this
paper, we introduce a new transfer learning scenario that has various source
domains and one target domain, assuming video surveillance system integration.
Also, we propose a novel method for automatically producing a high-accuracy
model by fusing models trained on various source domains. In particular, we
show how to apply a gating network to fuse source domains for object detection
tasks, which is a new approach. We demonstrate the effectiveness of our method
through experiments on traffic surveillance datasets.
Comment: Accepted to ACPR 201
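As a hedged illustration of gating-based fusion, simplified from detection to classification outputs, a small gate looks at the input and produces softmax weights over the source-trained models, whose predictions are then mixed; every architecture below is a stand-in, not the paper's detector setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

num_models, num_classes, in_dim = 3, 5, 100
source_models = nn.ModuleList(
    [nn.Linear(in_dim, num_classes) for _ in range(num_models)]
)  # stand-ins for detectors trained on different source domains (cameras)
gate = nn.Linear(in_dim, num_models)

x = torch.randn(8, in_dim)           # target-domain inputs (dummy)
weights = F.softmax(gate(x), dim=1)  # (8, num_models) mixture weights per input
preds = torch.stack([m(x) for m in source_models], dim=1)  # (8, num_models, num_classes)
fused = (weights.unsqueeze(-1) * preds).sum(dim=1)  # gated fusion of the trained models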
Domain Alignment with Triplets
Deep domain adaptation methods can reduce the distribution discrepancy by
learning domain-invariant embeddings. However, these methods focus only on
aligning the whole data distributions, without considering the class-level
relations among source and target images. Thus, a target embedding of a bird
might be aligned to source embeddings of an airplane. This semantic
misalignment can directly degrade the classifier performance on the target
dataset. To alleviate this problem, we present a similarity constrained
alignment (SCA) method for unsupervised domain adaptation. When aligning the
distributions in the embedding space, SCA enforces a similarity-preserving
constraint to maintain class-level relations among the source and target
images, i.e., if a source image and a target image are of the same class label,
their corresponding embeddings are supposed to be aligned nearby, and vice
versa. In the absence of target labels, we assign pseudo labels to target
images. Given labeled source images and pseudo-labeled target images, the
similarity-preserving constraint can be implemented by minimizing the triplet
loss. With the joint supervision of the domain alignment loss and the
similarity-preserving constraint, we train a network to obtain domain-invariant
embeddings with two critical characteristics: intra-class compactness and
inter-class separability. Extensive experiments conducted on two datasets
clearly demonstrate the effectiveness of SCA.
Comment: 10 pages; this version is not fully edited and will be updated soon
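The similarity-preserving constraint reduces to a standard triplet loss once pseudo labels are assigned, as in this minimal sketch: each source anchor is pulled toward a target sample of the same (pseudo) class and pushed from one of a different class. The margin and the pairing strategy are illustrative assumptions.

import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

feat_dim = 128
src_emb = torch.randn(64, feat_dim)       # labeled source embeddings (dummy)
tgt_emb = torch.randn(64, feat_dim)       # target embeddings (dummy)
src_labels = torch.randint(0, 10, (64,))
tgt_pseudo = torch.randint(0, 10, (64,))  # pseudo labels for the target images

losses = []
for i in range(len(src_emb)):
    pos = (tgt_pseudo == src_labels[i]).nonzero(as_tuple=True)[0]
    neg = (tgt_pseudo != src_labels[i]).nonzero(as_tuple=True)[0]
    if len(pos) and len(neg):  # need one of each to form a triplet
        losses.append(triplet(src_emb[i:i + 1],
                              tgt_emb[pos[:1]],   # same class: align nearby
                              tgt_emb[neg[:1]]))  # different class: push apart
similarity_loss = torch.stack(losses).mean()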