Open Set Domain Adaptation by Backpropagation
Numerous algorithms have been proposed for transferring knowledge from a
label-rich domain (source) to a label-scarce domain (target). Almost all of
them are proposed for a closed-set scenario, where the source and the target
domain completely share the classes of their samples. We call the shared classes
the "known classes." In practice, however, when samples in the target domain
are unlabeled, we cannot know whether the domains share their classes. A
target domain can contain samples of classes that are not present in the source
domain. We call such classes the "unknown classes," and algorithms that work
well in this open set situation are of great practical value. However, most
existing distribution matching methods for domain adaptation do not work well
in this setting because unknown target samples should not be aligned with the
source.
In this paper, we propose a method for an open set domain adaptation scenario
which utilizes adversarial training. A classifier is trained to draw a boundary
between the source and the target samples, whereas a feature generator is
trained to push target samples away from the boundary. Thus, we give the
feature generator two options for each target sample: aligning it with known
source samples, or rejecting it as an unknown target sample. This approach
allows extracting features that separate unknown target samples from known
target samples. Our method was extensively evaluated in domain adaptation
settings and outperformed other methods by a large margin in most settings.
Comment: Accepted by ECCV 2018
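The adversarial objective described above can be illustrated with a minimal numpy sketch (function names and the step size are assumed, not taken from the paper): the classifier outputs an unknown-class probability p for a target sample and is trained to hold p at a fixed boundary t = 0.5, while the feature generator receives the opposite gradient and pushes p away from the boundary, toward 0 (align with known source classes) or toward 1 (reject as unknown).

```python
import numpy as np

def boundary_bce(p, t=0.5):
    """Binary cross-entropy between p = P(unknown | x) and the boundary t."""
    return -t * np.log(p) - (1.0 - t) * np.log(1.0 - p)

def grad_wrt_p(p, t=0.5):
    """d/dp of the boundary loss; the generator sees this gradient negated."""
    return -t / p + (1.0 - t) / (1.0 - p)

# Classifier minimizes the loss (pulls p toward the boundary 0.5);
# generator maximizes it (pushes p toward whichever side it starts on).
p = 0.7            # current unknown probability for one target sample
step = 0.1         # illustrative step size (assumed)
p_cls = p - step * grad_wrt_p(p)   # classifier update: moves toward 0.5
p_gen = p + step * grad_wrt_p(p)   # generator update: moves toward 1.0
```

A single gradient step moves `p_cls` toward the boundary and `p_gen` away from it, which is the two-way choice (align vs. reject) the abstract describes.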
Open Set Domain Adaptation using Optimal Transport
We present a 2-step optimal transport approach that performs a mapping from a
source distribution to a target distribution. Here, the target has the
particularity to present new classes not present in the source domain. The
first step of the approach aims at rejecting the samples issued from these new
classes using an optimal transport plan. The second step solves the target
(class ratio) shift still as an optimal transport problem. We develop a dual
approach to solve the optimization problem involved at each step and we prove
that our results outperform recent state-of-the-art performances. We further
apply the approach to the setting where the source and target distributions
present both a label-shift and an increasing covariate (features) shift to show
its robustness.
Comment: Accepted at ECML-PKDD 2020, Acknowledgements added
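A toy sketch of the rejection step, under assumptions of my own (uniform marginals, entropic regularization via plain Sinkhorn iterations; the paper's exact formulation differs): compute a transport plan between source and target samples, then flag target samples that can only receive mass at high cost as candidates for the new, source-absent classes.

```python
import numpy as np

def sinkhorn(cost, reg=0.05, n_iter=500):
    """Entropic OT plan between uniform marginals via Sinkhorn iterations."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform marginals
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                # transport plan

rng = np.random.default_rng(0)
source = rng.normal(0.0, 0.3, size=(20, 2))               # known class only
target = np.vstack([rng.normal(0.0, 0.3, size=(15, 2)),   # known class
                    rng.normal(5.0, 0.3, size=(5, 2))])   # unseen "new" class
cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
cost = cost / cost.max()                  # normalize for numerical stability
plan = sinkhorn(cost)

# Average cost of the mass each target sample receives; samples reachable
# only at high transport cost are flagged as new-class candidates.
wcost = (plan * cost).sum(axis=0) / plan.sum(axis=0)
reject = wcost > wcost.mean()
```

On this toy data the five far-away target samples are the ones flagged; a real implementation would tune the regularization and the rejection threshold rather than use the mean.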
Learning Factorized Representations for Open-set Domain Adaptation
Domain adaptation for visual recognition has undergone great progress in the
past few years. Nevertheless, most existing methods work in the so-called
closed-set scenario, assuming that the classes depicted by the target images
are exactly the same as those of the source domain. In this paper, we tackle
the more challenging, yet more realistic case of open-set domain adaptation,
where new, unknown classes can be present in the target data. While, in the
unsupervised scenario, one cannot expect to be able to identify each specific
new class, we aim to automatically detect which samples belong to these new
classes and discard them from the recognition process. To this end, we rely on
the intuition that the source and target samples depicting the known classes
can be generated by a shared subspace, whereas the target samples from unknown
classes come from a different, private subspace. We therefore introduce a
framework that factorizes the data into shared and private parts, while
encouraging the shared representation to be discriminative. Our experiments on
standard benchmarks evidence that our approach significantly outperforms the
state-of-the-art in open-set domain adaptation.
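The shared/private intuition can be sketched in a heavily simplified form (this is not the paper's learned factorization): fit a low-dimensional "shared" subspace on source samples with PCA, then treat target samples that reconstruct poorly from that subspace as coming from a private, unknown-class subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 10))            # 2-D shared subspace in R^10
source = rng.normal(size=(100, 2)) @ basis  # known classes live in the subspace

# PCA via SVD: the top-2 right singular vectors span the shared subspace.
mean = source.mean(axis=0)
_, _, Vt = np.linalg.svd(source - mean, full_matrices=False)
W = Vt[:2]                                  # orthonormal shared basis

def recon_error(x):
    """Residual norm after projecting onto the shared subspace."""
    z = (x - mean) @ W.T
    return np.linalg.norm((x - mean) - z @ W, axis=1)

known = rng.normal(size=(5, 2)) @ basis     # target samples, known classes
unknown = rng.normal(size=(5, 10))          # target samples off the subspace
```

Known-class target samples reconstruct almost exactly, while samples drawn from private directions leave a large residual, so thresholding the residual separates the two groups.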
Positive-unlabeled learning for open set domain adaptation
Open Set Domain Adaptation (OSDA) focuses on bridging the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting target classes that are not present in the source as unknown. The challenges of this task are closely related to those of Positive-Unlabeled (PU) learning, where it is essential to discriminate between positive (known) and negative (unknown) class samples in the unlabeled target data. With this newly discovered connection, we leverage the theoretical framework of PU learning for OSDA and, at the same time, we extend PU learning to tackle uneven data distributions. Our method combines domain adversarial learning with a new non-negative risk estimator for PU learning based on self-supervised sample reconstruction. With experiments on digit recognition and object classification, we validate our risk estimator and demonstrate that our approach reduces the domain gap without suffering from negative transfer.
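The non-negative risk estimator mentioned above can be sketched in its classical PU form (Kiryo-style clipping; the function names, the sigmoid surrogate loss, and the toy scores are assumptions, and the paper additionally builds on self-supervised reconstruction): given a class prior pi, the negative part of the risk is clipped at zero so the empirical risk cannot drift negative through overfitting.

```python
import numpy as np

def sigmoid_loss(scores, y):
    """Surrogate loss l(z, y) = sigmoid(-y * z) for labels y in {+1, -1}."""
    return 1.0 / (1.0 + np.exp(y * scores))

def nn_pu_risk(scores_p, scores_u, pi):
    """Non-negative PU risk: pi * R_p^+ + max(0, R_u^- - pi * R_p^-)."""
    r_p_pos = sigmoid_loss(scores_p, +1).mean()   # positives as positive
    r_p_neg = sigmoid_loss(scores_p, -1).mean()   # positives as negative
    r_u_neg = sigmoid_loss(scores_u, -1).mean()   # unlabeled as negative
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

scores_p = np.array([2.0, 1.5, 3.0])    # classifier scores on positives (toy)
scores_u = np.array([0.5, -1.0, -2.0])  # scores on unlabeled samples (toy)
risk = nn_pu_risk(scores_p, scores_u, pi=0.4)
```

Without the `max(0, ...)` clamp, a flexible model can push the estimated negative risk below zero on finite samples; the clamp keeps the estimator usable as a training objective.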