Instance-based Deep Transfer Learning
Deep transfer learning has recently attracted significant research interest.
It makes use of pre-trained models that are learned from a source domain, and
utilizes these models for the tasks in a target domain. Model-based deep
transfer learning is probably the most frequently used method. However, very
little research work has been devoted to enhancing deep transfer learning by
focusing on the influence of data. In this paper, we propose an instance-based
approach to improve deep transfer learning in a target domain. Specifically, we
choose a pre-trained model from a source domain and apply this model to
estimate the influence of training samples in a target domain. Then we optimize
the training data of the target domain by removing the training samples that
will lower the performance of the pre-trained model. We later either fine-tune
the pre-trained model with the optimized training data in the target domain, or
build a new model which is initialized partially based on the pre-trained
model, and fine-tune it with the optimized training data in the target domain.
Using this approach, transfer learning can help deep learning models to capture
more useful features. Extensive experiments demonstrate the effectiveness of
our approach on boosting the quality of deep learning models for some common
computer vision tasks, such as image classification.
Comment: Accepted to WACV 2019. This is a preprint version
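The instance-based procedure described in the abstract (score target-domain samples with the pre-trained source model, drop the harmful ones, then fine-tune) can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the influence estimate is simplified to per-sample loss under the source model, and the "pre-trained model", data, and thresholds are all synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" source-domain model: fixed logistic-regression weights.
w = np.array([1.0, -1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_loss(w, X, y):
    # Cross-entropy loss of the pre-trained model on each target sample.
    p = sigmoid(X @ w)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Synthetic target-domain data; the first 10 samples are label-noised.
X = rng.normal(size=(100, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
y[:10] = 1 - y[:10]  # corrupted samples that would hurt fine-tuning

# Step 1: score every target sample with the pre-trained model.
losses = per_sample_loss(w, X, y)

# Step 2: "optimize" the training set by removing the samples the source
# model finds most harmful (here: the 10% with the highest loss).
keep = losses < np.quantile(losses, 0.9)
X_opt, y_opt = X[keep], y[keep]

# Step 3: fine-tune the pre-trained weights on the optimized training data.
def fine_tune(w, X, y, lr=0.1, steps=200):
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

w_ft = fine_tune(w, X_opt, y_opt)
acc = float(((sigmoid(X @ w_ft) > 0.5) == y).mean())
```

In this toy setup the filtered set consists mostly of correctly labeled samples, so fine-tuning proceeds from cleaner data than the raw target set would provide.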
Grounding Language for Transfer in Deep Reinforcement Learning
In this paper, we explore the utilization of natural language to drive
transfer for reinforcement learning (RL). Despite the widespread application
of deep RL techniques, learning generalized policy representations that work
across domains remains a challenging problem. We demonstrate that textual
descriptions of environments provide a compact intermediate channel to
facilitate effective policy transfer. Specifically, by learning to ground the
meaning of text to the dynamics of the environment such as transitions and
rewards, an autonomous agent can effectively bootstrap policy learning on a new
domain given its description. We employ a model-based RL approach consisting of
a differentiable planning module, a model-free component and a factorized state
representation to effectively use entity descriptions. Our model outperforms
prior work on both transfer and multi-task scenarios in a variety of different
environments. For instance, we achieve up to 14% and 11.5% absolute improvement
over previously existing models in terms of average and initial rewards,
respectively.
Comment: JAIR 201
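As a loose illustration of grounding entity descriptions to environment quantities (here only rewards, not the paper's full planning architecture), the sketch below fits a ridge regression from bag-of-words description vectors to rewards observed in a source domain, then bootstraps reward estimates for unseen target-domain entities from their text alone. The vocabulary, entities, and rewards are all invented for the example.

```python
import numpy as np

# Toy vocabulary for textual entity descriptions.
vocab = ["enemy", "deadly", "friendly", "heals", "blocks", "wall", "moves"]

def bow(description):
    # Bag-of-words vector for an entity description.
    words = description.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

# Source domain: entity description -> reward observed on contact.
source = [
    ("deadly enemy moves", -1.0),
    ("friendly healer heals", +1.0),
    ("wall blocks", 0.0),
]
X = np.stack([bow(d) for d, _ in source])
y = np.array([r for _, r in source])

# Ground text to rewards with ridge regression (regularized least squares).
w = np.linalg.solve(X.T @ X + 0.1 * np.eye(len(vocab)), X.T @ y)

# New domain: bootstrap reward estimates from descriptions alone,
# before the agent has interacted with these entities at all.
new_entities = ["deadly enemy", "heals friendly"]
preds = {d: float(bow(d) @ w) for d in new_entities}
```

Because the regression ties words ("deadly", "heals") to reward signs, descriptions of never-seen entities yield usable priors, which is the intuition behind bootstrapping policy learning on a new domain from its description.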