Partial Transfer Learning with Selective Adversarial Networks
Adversarial learning has been successfully embedded into deep networks to learn transferable features that reduce the distribution discrepancy between the source and target domains. Existing domain adversarial networks, however, assume a fully shared label space across domains. In the presence of big data, there is strong motivation to transfer both classification and representation models from existing large domains to unknown small domains. This paper introduces partial transfer learning, which relaxes the shared label space assumption: the target label space is only required to be a subspace of the source label space. Previous methods typically match the whole source domain to the target domain, which makes them prone to negative transfer in the partial transfer setting. We present the Selective Adversarial Network (SAN), which simultaneously circumvents negative transfer by selecting out the outlier source classes and promotes positive transfer by maximally matching the data distributions in the shared label space. Experiments demonstrate that our models exceed state-of-the-art results for partial transfer learning tasks on several benchmark datasets.
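As a rough illustration of the selection mechanism (not the authors' exact architecture, which uses one domain discriminator per source class), the sketch below weights a single domain-adversarial loss by class-level transfer weights estimated from the target predictions. feat_net, clf_head, and domain_disc are hypothetical PyTorch modules, and the adversarial min-max (e.g., via gradient reversal) is omitted:

import torch
import torch.nn.functional as F

def san_style_losses(feat_net, clf_head, domain_disc, x_src, y_src, x_tgt):
    f_src, f_tgt = feat_net(x_src), feat_net(x_tgt)

    # Supervised classification loss on labeled source data.
    cls_loss = F.cross_entropy(clf_head(f_src), y_src)

    # Average target class-posteriors act as class-level transfer weights:
    # source classes that rarely appear in target predictions are
    # down-weighted, i.e. "selected out" as likely outlier classes.
    with torch.no_grad():
        class_w = F.softmax(clf_head(f_tgt), dim=1).mean(dim=0)
        class_w = class_w / class_w.max()

    # Domain-adversarial loss, with each source sample weighted by the
    # transfer weight of its class (source labeled 1, target labeled 0).
    d_src = domain_disc(f_src).squeeze(1)
    d_tgt = domain_disc(f_tgt).squeeze(1)
    per_src = F.binary_cross_entropy_with_logits(
        d_src, torch.ones_like(d_src), reduction='none')
    adv_loss = (class_w[y_src] * per_src).mean() + \
        F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))

    return cls_loss, adv_loss

Outlier source classes thus contribute little to the distribution matching, while classes in the shared label space dominate the alignment.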
Lautum Regularization for Semi-supervised Transfer Learning
Transfer learning is a very important tool in deep learning, as it allows propagating information from one "source dataset" to another "target dataset", especially when the latter contains only a small number of training examples. Yet, discrepancies between the underlying distributions of the source and target data are commonplace and are known to have a substantial impact on algorithm performance. In this work we suggest a novel information-theoretic approach for analyzing the performance of deep neural networks in the context of transfer learning. We focus on the task of semi-supervised transfer learning, in which unlabeled samples from the target dataset are available during training on the source dataset. Our theory suggests that one may improve the transferability of a deep neural network by imposing a Lautum-information-based regularization that relates the network weights to the target data. We demonstrate the effectiveness of the proposed approach in various transfer learning experiments.
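Lautum information is the reverse-direction counterpart of mutual information, L(X; Y) = D(P_X P_Y || P_XY). The regularizer in the paper relates the network weights to the target data; the sketch below only shows a simple Gaussian plug-in estimator of Lautum information between two batched variables that such a regularizer could build on (an illustrative assumption, not the paper's exact derivation):

import torch

def gaussian_lautum(x, y, eps=1e-4):
    # Rough Gaussian estimate of L(X; Y) = KL(P_X P_Y || P_XY) from
    # batches x: (n, dx) and y: (n, dy), treated as jointly Gaussian.
    n, dx = x.shape
    d = dx + y.shape[1]
    z = torch.cat([x - x.mean(0, keepdim=True),
                   y - y.mean(0, keepdim=True)], dim=1)

    # Joint covariance, with a small ridge for numerical stability.
    joint = z.T @ z / max(n - 1, 1) + eps * torch.eye(d, device=x.device)

    # Product-of-marginals covariance: same blocks, zero cross-covariance.
    prod = joint.clone()
    prod[:dx, dx:] = 0.0
    prod[dx:, :dx] = 0.0

    # Closed-form KL divergence between the two zero-mean Gaussians.
    tr_term = torch.trace(torch.linalg.solve(joint, prod))
    logdet_gap = torch.linalg.slogdet(joint)[1] - torch.linalg.slogdet(prod)[1]
    return 0.5 * (tr_term - d + logdet_gap)

A regularized objective would then add a term such as lam * gaussian_lautum(...) to the usual source-domain training loss, with the arguments chosen so that the penalty couples the network to the target data.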
Efficient Deep Reinforcement Learning via Adaptive Policy Transfer
Transfer Learning (TL) has shown great potential to accelerate Reinforcement Learning (RL) by leveraging prior knowledge from previously learned policies of relevant tasks. Existing transfer approaches either explicitly compute the similarity between tasks or select appropriate source policies to provide guided exploration for the target task. However, how to directly optimize the target policy by adaptively utilizing knowledge from appropriate source policies, without explicitly measuring task similarity, has remained an open problem. In this paper, we propose a novel Policy Transfer Framework (PTF) that takes advantage of this idea to accelerate RL. Our framework learns when and which source policy is best to reuse for the target policy, and when to terminate that reuse, by modeling multi-policy transfer as an option learning problem. PTF can easily be combined with existing deep RL approaches. Experimental results show that it significantly accelerates the learning process and surpasses state-of-the-art policy transfer methods in terms of learning efficiency and final performance
in both discrete and continuous action spaces.

Comment: Accepted by IJCAI 2020
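As a minimal sketch of the reuse idea, assuming discrete actions and ignoring PTF's learned termination conditions: a hypothetical option network scores frozen source policies per state, and the resulting mixture adds an imitation term to the standard actor loss. All names here are placeholders rather than the paper's implementation:

import torch
import torch.nn.functional as F

def ptf_style_actor_loss(target_pi, source_pis, option_net, states,
                         base_actor_loss, beta=0.5):
    # The option network scores each of the k source policies per state.
    option_probs = F.softmax(option_net(states), dim=1)      # (batch, k)

    # Cross-entropy between the target policy and each frozen source
    # policy, weighted by how useful the option network deems that source.
    log_p_tgt = F.log_softmax(target_pi(states), dim=1)      # (batch, A)
    imitation = states.new_zeros(())
    for k, src in enumerate(source_pis):
        with torch.no_grad():
            p_src = F.softmax(src(states), dim=1)
        ce = -(p_src * log_p_tgt).sum(dim=1)                 # (batch,)
        imitation = imitation + (option_probs[:, k] * ce).mean()

    # Standard RL actor loss plus the weighted imitation bonus.
    return base_actor_loss + beta * imitation

In the full framework, learning when to stop reusing a source policy is handled by the option's termination condition, which this sketch omits.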
Constrained Deep Transfer Feature Learning and its Applications
Feature learning with deep models has achieved impressive results for both data representation and classification in various vision tasks. Deep feature learning, however, typically requires a large amount of training data, which may not be available in some application domains. Transfer learning is one approach to alleviating this problem: data is transferred from a data-rich source domain to a data-scarce target domain. Existing transfer learning methods typically perform one-shot transfer and often ignore the specific properties that the transferred data must satisfy. To address these issues, we introduce a constrained deep transfer feature learning method that performs transfer learning and feature learning simultaneously, carrying out the transfer iteratively in a progressively improving feature space in order to better narrow the gap between the target and source domains and thus transfer data effectively from one to the other. Furthermore, we propose to exploit target domain knowledge, incorporating it as a constraint during transfer to ensure that the transferred data satisfies certain properties of the target domain. To demonstrate the effectiveness of the proposed method, we apply it to thermal feature learning for eye detection by transferring from the visible domain, and, as a second application, to cross-view facial expression recognition. The experimental results demonstrate the effectiveness of the proposed method for both applications.

Comment: International Conference on Computer Vision and Pattern Recognition, 201
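One hedged way to picture the constraint idea: a transfer mapping G carries source-domain features toward the target feature space, while a penalty enforces prior knowledge about what valid target data looks like. The single step below uses hypothetical names and a deliberately simple matching objective; the method itself alternates such transfer with re-learning the feature space:

import torch.nn.functional as F

def constrained_transfer_step(G, f_src, f_tgt_real, constraint_fn, lam=1.0):
    # Map source features into the (current) target feature space.
    f_transferred = G(f_src)

    # Match first-order statistics of transferred and real target features
    # (a crude stand-in for narrowing the domain gap).
    match_loss = F.mse_loss(f_transferred.mean(0), f_tgt_real.mean(0))

    # Prior-knowledge constraint: penalize transferred features that
    # violate known target-domain properties (supplied by the caller).
    constraint_loss = constraint_fn(f_transferred)

    return match_loss + lam * constraint_loss

Iterating this step as the feature space improves corresponds to the progressive scheme described above.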
