Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation
Partial domain adaptation, which assumes that the unknown target label space
is a subset of the source label space, has attracted much attention in computer
vision. Despite recent progress, existing methods often suffer from three key
problems: negative transfer, a lack of discriminability, and a lack of domain
invariance in the latent space. To alleviate these issues, we develop a novel
'Select, Label, and Mix' (SLM) framework that aims to learn discriminative invariant
feature representations for partial domain adaptation. First, we present a
simple yet efficient "select" module that automatically filters out the outlier
source samples to avoid negative transfer while aligning distributions across
both domains. Second, the "label" module iteratively trains the classifier
using both the labeled source domain data and the generated pseudo-labels for
the target domain to enhance the discriminability of the latent space. Finally,
the "mix" module utilizes domain mixup regularization jointly with the other
two modules to explore more intrinsic structures across domains leading to a
domain-invariant latent space for partial domain adaptation. Extensive
experiments on several benchmark datasets demonstrate the superiority of our
proposed framework over state-of-the-art methods.
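As a rough illustration of the domain mixup regularization used by the "mix" module, the sketch below convexly combines a source batch with a target batch using a Beta-sampled coefficient. The alpha value, batch shapes, and function name are assumptions for illustration; this is generic mixup, not the authors' exact module.

```python
import numpy as np

def domain_mixup(x_src, x_tgt, alpha=0.2, rng=None):
    """Convexly combine a source batch and a target batch with a Beta-sampled
    mixing coefficient (generic domain mixup, not the exact 'mix' module)."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)                # mixing coefficient in (0, 1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt   # interpolated inputs or features
    return x_mix, lam

# usage: mix equally sized source and target batches
x_src = np.random.rand(16, 2048).astype(np.float32)
x_tgt = np.random.rand(16, 2048).astype(np.float32)
x_mix, lam = domain_mixup(x_src, x_tgt)
```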
On Fine-tuned Deep Features for Unsupervised Domain Adaptation
Prior feature transformation based approaches to Unsupervised Domain Adaptation (UDA) employ the deep features extracted by pre-trained deep models without fine-tuning them on the specific source or target domain data for a particular domain adaptation task. In contrast, end-to-end learning based approaches optimise the pre-trained backbones and the customised adaptation modules simultaneously to learn domain-invariant features for UDA. In this work, we explore the potential of combining fine-tuned features and feature transformation based UDA methods for improved domain adaptation performance. Specifically, we integrate the prevalent progressive pseudo-labelling techniques into the fine-tuning framework to extract fine-tuned features which are subsequently used in a state-of-the-art feature transformation based domain adaptation method, SPL (Selective Pseudo-Labeling). Thorough experiments with multiple deep models including ResNet-50/101 and DeiT-small/base are conducted to demonstrate that the combination of fine-tuned features and SPL can achieve state-of-the-art performance on several benchmark datasets.
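To make the progressive pseudo-labelling idea concrete, here is a minimal sketch that keeps only the most confident target predictions in each round and lets that set grow over rounds. The confidence-ranking rule, the keep-fraction schedule, and all names are illustrative assumptions; the snippet does not reproduce the exact SPL selection criterion.

```python
import numpy as np

def progressive_pseudo_labels(probs, keep_fraction):
    """Keep only the most confident target predictions in this round.

    probs: (n_target, n_classes) softmax outputs from the current model.
    keep_fraction: fraction of target samples to pseudo-label (grows per round).
    Generic illustration only, not the exact SPL selection rule.
    """
    conf = probs.max(axis=1)                 # confidence of the top prediction
    labels = probs.argmax(axis=1)            # hard pseudo-labels
    k = max(1, int(keep_fraction * len(conf)))
    keep = np.argsort(-conf)[:k]             # indices of the top-k confident samples
    return keep, labels[keep]

# usage: enlarge the pseudo-labelled set over successive fine-tuning rounds
probs = np.random.dirichlet(np.ones(31), size=1000)   # dummy predictions, 31 classes
for round_idx, frac in enumerate([0.2, 0.4, 0.6, 0.8, 1.0]):
    idx, pl = progressive_pseudo_labels(probs, frac)
    # ...fine-tune on labelled source data plus (idx, pl), then re-predict probs
```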
Chapter 30 Turkey’s external differentiated integration with the EU in the field of migration governance
This chapter investigates and unravels the extent and drivers of Turkey’s external differentiated integration with the EU in the field of border management. While Turkey’s EU accession negotiations remain in a state of coma, there is a continuing need for policy convergence and alignment in areas of common interest such as migration governance. With a view to combating irregular migration, the EU has placed the export of its border management norms and rules at the center of its dialogue with Turkey. Thus, EU–Turkey relations in the field of border management represent an appealing case to study policy convergence between the EU and Turkey outside the accession scheme and to examine the ever-evolving phenomenon of external differentiated integration from both policy-centered and theory-directed angles. The chapter first conceptualizes external differentiated integration and introduces the five explanatory factors that have been recurrently used by the literature to explain the variance in (external) differentiation: politicization, extent of mutual interdependence, asymmetry of interdependence, incentives, and domestic conditions. It then critically assesses the effect of these prevailing drivers of differentiation on the three central issue areas concerning the EU–Turkey dialogue on the border regime: the implementation of Integrated Border Management (IBM), Turkey’s operational cooperation with FRONTEX, and the March 2016 EU–Turkey Statement.
Trust your Good Friends: Source-free Domain Adaptation by Reciprocal Neighborhood Clustering
Domain adaptation (DA) aims to alleviate the domain shift between source
domain and target domain. Most DA methods require access to the source data,
but often that is not possible (e.g. due to data privacy or intellectual
property). In this paper, we address the challenging source-free domain
adaptation (SFDA) problem, where the source pretrained model is adapted to the
target domain in the absence of source data. Our method is based on the
observation that target data, which might not align with the source domain
classifier, still forms clear clusters. We capture this intrinsic structure by
defining local affinity of the target data, and encourage label consistency
among data with high local affinity. We observe that higher affinity should be
assigned to reciprocal neighbors. To aggregate information with more context,
we consider expanded neighborhoods with small affinity values. Furthermore, we
consider the density around each target sample, which can alleviate the
negative impact of potential outliers. In the experimental results we verify
that the inherent structure of the target features is an important source of
information for domain adaptation. We demonstrate that this local structure can
be efficiently captured by considering the local neighbors, the reciprocal
neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art
performance on several 2D image and 3D point cloud recognition datasets.
Comment: Accepted by IEEE TPAMI; extended version of conference paper arXiv:2110.0420
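The reciprocal-neighbor idea can be sketched as follows: compute k-nearest neighbors in feature space and mark pairs that appear in each other's neighbor lists. The cosine similarity, the value of k, and the function name are assumptions, and the snippet omits the paper's affinity weighting, expanded neighborhoods, and density term.

```python
import numpy as np

def reciprocal_neighbors(features, k=5):
    """Find k-nearest neighbors and mark reciprocal pairs (i is a k-NN of j
    and j is a k-NN of i). Illustrative sketch, not the full objective."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                   # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)                  # exclude self-matches
    knn = np.argsort(-sim, axis=1)[:, :k]           # indices of k nearest neighbors
    is_nn = np.zeros_like(sim, dtype=bool)
    rows = np.repeat(np.arange(len(f)), k)
    is_nn[rows, knn.ravel()] = True                 # one-directional neighbor mask
    reciprocal = is_nn & is_nn.T                    # mutual (reciprocal) neighbors
    return knn, reciprocal

feat = np.random.rand(100, 256).astype(np.float32)
knn, recip = reciprocal_neighbors(feat, k=5)
```

Label consistency would then be encouraged among samples connected by the reciprocal mask, with expanded neighborhoods given smaller affinity weights.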
A Survey on Negative Transfer
Transfer learning (TL) tries to utilize data or knowledge from one or more
source domains to facilitate the learning in a target domain. It is
particularly useful when the target domain has few or no labeled data, due to
annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of
TL is not always guaranteed. Negative transfer (NT), i.e., the case where source domain
data/knowledge reduce learning performance in the target domain, has
been a long-standing and challenging problem in TL. Various approaches to
handle NT have been proposed in the literature. However, this field lacks a
systematic survey on the formalization of NT, its factors, and the algorithms
that handle NT. This paper proposes to fill this gap. First, the definition of
negative transfer is considered and a taxonomy of its factors is discussed.
Then, nearly fifty representative approaches for handling NT are categorized and
reviewed, from four perspectives: secure transfer, domain similarity
estimation, distant transfer and negative transfer mitigation. NT in related
fields, e.g., multi-task learning, lifelong learning, and adversarial attacks,
are also discussed.
HoMM: Higher-order Moment Matching for Unsupervised Domain Adaptation
Minimizing the discrepancy of feature distributions between different domains
is one of the most promising directions in unsupervised domain adaptation. From
the perspective of distribution matching, most existing discrepancy-based
methods are designed to match the second-order or lower statistics, which
however, have limited expression of statistical characteristic for non-Gaussian
distributions. In this work, we explore the benefits of using higher-order
statistics (mainly referring to third-order and fourth-order statistics) for domain
matching. We propose a Higher-order Moment Matching (HoMM) method, and further
extend the HoMM into reproducing kernel Hilbert spaces (RKHS). In particular,
our proposed HoMM can perform arbitrary-order moment tensor matching; we show
that the first-order HoMM is equivalent to Maximum Mean Discrepancy (MMD) and
the second-order HoMM is equivalent to Correlation Alignment (CORAL). Moreover,
the third-order and the fourth-order moment tensor matching are expected to
perform comprehensive domain alignment as higher-order statistics can
approximate more complex, non-Gaussian distributions. In addition, we exploit
the pseudo-labeled target samples to learn discriminative representations in
the target domain, which further improves the transfer performance. Extensive
experiments are conducted, showing that our proposed HoMM consistently
outperforms the existing moment matching methods by a large margin. Codes are
available at \url{https://github.com/chenchao666/HoMM-Master}.
Comment: Accepted by AAAI-2020; code is available at https://github.com/chenchao666/HoMM-Master
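As a simplified view of moment matching, the sketch below penalizes the gap between per-dimension third-order central moments of source and target features. The actual HoMM matches full higher-order moment tensors (optionally in RKHS), so the loss form, the order argument, and the names here are illustrative assumptions only.

```python
import numpy as np

def moment_matching_loss(src_feat, tgt_feat, order=3):
    """Simplified per-dimension moment matching: penalize the gap between the
    order-th central moments of source and target activations. The real HoMM
    matches full higher-order moment tensors; this is only an illustration."""
    def central_moment(x, p):
        return ((x - x.mean(axis=0)) ** p).mean(axis=0)
    diff = central_moment(src_feat, order) - central_moment(tgt_feat, order)
    return float((diff ** 2).sum())

# usage: compare source features with shifted/scaled target features
src = np.random.randn(64, 128)
tgt = np.random.randn(64, 128) * 1.5 + 0.3
loss = moment_matching_loss(src, tgt, order=3)
```

With order=1 this reduces to matching feature means (in the spirit of MMD), and order=2 to matching second-order statistics (in the spirit of CORAL), which mirrors the equivalences stated in the abstract at a per-dimension level.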