A Meta-Reinforcement Learning Approach to Optimize Parameters and Hyper-parameters Simultaneously.
In the last few years, we have witnessed a resurgence of interest in neural networks. State-of-the-art deep neural network architectures are, however, challenging to design from scratch and require computationally costly empirical evaluations. Hence, much research effort has been dedicated to the effective utilisation and adaptation of previously proposed architectures, either through transfer learning or by modifying the original architecture. The ultimate goal of designing a network architecture is to achieve the best possible accuracy for a given task or group of related tasks. Although there have been some efforts to automate the network architecture design process, most existing solutions are still very computationally intensive. This work presents a framework that automatically finds a good set of hyper-parameters yielding reasonably good accuracy, while being less computationally expensive than existing approaches. The idea is to frame hyper-parameter selection and tuning as a reinforcement learning problem, so that the parameters of a meta-learner, an RNN, and the hyper-parameters of the target network are tuned simultaneously. The meta-learner is updated via a policy network and generates a tuple of hyper-parameters that is used by the target network. That network is trained on the given task for a number of steps and produces a validation accuracy whose delta is used as the reward. The reward, along with the state of the network, comprising statistics of the network's final-layer output and training loss, is fed back to the meta-learner, which in turn generates a tuned tuple of hyper-parameters for the next time step. The effectiveness of a recommended tuple can therefore be tested very quickly, rather than waiting for the network to converge. This approach produces accuracy close to that of state-of-the-art methods and is found to be comparatively less computationally intensive.
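As a rough illustration of the loop this abstract describes, a minimal sketch is given below. It is not the authors' implementation: the RNN meta-learner is simplified to a softmax bandit over three candidate hyper-parameter tuples, and the target-network training is mocked by a toy accuracy curve; all names, tuples, and constants are illustrative assumptions. What it does preserve is the core idea: the meta-learner proposes a hyper-parameter tuple, the target trains briefly, and the delta in validation accuracy is fed back as the reward.

```python
import math
import random

# Candidate (learning rate, batch size) tuples; illustrative only.
CANDIDATES = [(0.1, 32), (0.01, 64), (0.001, 128)]
GAIN = {0.1: 0.01, 0.01: 0.05, 0.001: 0.03}  # mock per-step accuracy gains

def train_for_a_few_steps(accuracy, lr):
    """Stand-in for briefly training the target network: accuracy
    moves toward 1.0 at a rate set by the chosen learning rate."""
    return accuracy + GAIN[lr] * (1.0 - accuracy)

prefs = [0.0] * len(CANDIDATES)  # meta-learner's preference per tuple

def sample_tuple():
    """Softmax-sample a tuple index, mimicking the policy's output."""
    weights = [math.exp(p) for p in prefs]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

random.seed(0)
accuracy = 0.5
for _ in range(200):
    i = sample_tuple()
    lr, _batch = CANDIDATES[i]
    new_accuracy = train_for_a_few_steps(accuracy, lr)
    reward = new_accuracy - accuracy      # delta of validation accuracy
    prefs[i] += 5.0 * reward              # REINFORCE-style preference update
    accuracy = new_accuracy

best = CANDIDATES[max(range(len(prefs)), key=prefs.__getitem__)]
```

Because the reward is the accuracy *delta* after only a few training steps, each proposed tuple is scored quickly rather than after full convergence, which is the source of the claimed compute savings.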
Gradient-free Policy Architecture Search and Adaptation
We develop a method for policy architecture search and adaptation via
gradient-free optimization which can learn to perform autonomous driving tasks.
By learning from both demonstration and environmental reward we develop a model
that can learn with relatively few early catastrophic failures. We first learn
an architecture of appropriate complexity to perceive aspects of world state
relevant to the expert demonstration, and then mitigate the effect of
domain-shift during deployment by adapting a policy demonstrated in a source
domain to rewards obtained in a target environment. We show that our approach
allows safer learning than baseline methods, offering a reduced cumulative
crash metric over the agent's lifetime as it learns to drive in a realistic
simulated environment.
Comment: Accepted in Conference on Robot Learning, 201
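The gradient-free adaptation step in this abstract can be sketched with a (1+1) evolution strategy on a toy task. This is not the paper's method or environment: the "driving" task below is a hypothetical one-dimensional lane-keeping problem, the policy is a single gain warm-started near a demonstrated value, and candidates are kept only if they improve the environment reward, so no gradients are ever computed.

```python
import random

def rollout_reward(gain, steps=20):
    """Negative cumulative squared lateral offset under the policy
    steer = gain * offset, with simple linear dynamics (toy model)."""
    offset, reward = 1.0, 0.0
    for _ in range(steps):
        offset += 0.5 * (gain * offset)
        reward -= offset * offset
    return reward

random.seed(0)
# Warm start near a demonstrated gain (stand-in for learning from
# demonstration), then adapt on reward alone.
gain = -1.0
best_r = rollout_reward(gain)
for _ in range(300):
    candidate = gain + random.gauss(0.0, 0.1)
    r = rollout_reward(candidate)
    if r > best_r:                 # keep improvements only: gradient-free
        gain, best_r = candidate, r
```

In this toy model the offset decays fastest at gain = -2, so the search drifts from the demonstrated -1 toward that optimum; rejecting any candidate that lowers reward is a crude analogue of the reduced-catastrophic-failure property the abstract emphasises.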
Meta Reinforcement Learning with Latent Variable Gaussian Processes
Learning from small data sets is critical in many practical applications
where data collection is time consuming or expensive, e.g., robotics, animal
experiments or drug design. Meta learning is one way to increase the data
efficiency of learning algorithms by generalizing learned concepts from a set
of training tasks to unseen, but related, tasks. Often, this relationship
between tasks is hard-coded or relies in some other way on human expertise. In
this paper, we frame meta learning as a hierarchical latent variable model and
infer the relationship between tasks automatically from data. We apply our
framework in a model-based reinforcement learning setting and show that our
meta-learning model effectively generalizes to novel tasks by identifying how
new tasks relate to prior ones from minimal data. This results in up to a 60%
reduction in the average interaction time needed to solve tasks compared to
strong baselines.
Comment: 11 pages, 7 figure
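The latent-variable idea in this abstract can be illustrated with a drastically simplified sketch. This is not the paper's GP model: here tasks are assumed to share the linear form y = a·x and differ only in a hidden slope a, which plays the role of the per-task latent variable inferred from a handful of observations (the linear family, Gaussian prior, and all constants are assumptions for illustration).

```python
def infer_latent(xs, ys, prior_mean=0.0, prior_prec=1.0, noise_prec=10.0):
    """MAP estimate of the task latent a from a few (x, y) observations,
    combining a Gaussian prior on a with a Gaussian observation model."""
    num = prior_prec * prior_mean + noise_prec * sum(x * y for x, y in zip(xs, ys))
    den = prior_prec + noise_prec * sum(x * x for x in xs)
    return num / den

def predict(a, x):
    return a * x

# Meta-training over many tasks would shape the prior; here it is fixed,
# and we show fast adaptation to a novel task (true slope 2.0) from just
# two observations.
xs, ys = [1.0, 2.0], [2.0, 4.0]
a_hat = infer_latent(xs, ys)
```

Inferring how a new task relates to prior ones then reduces to inferring its latent from minimal data, after which predictions come almost for free, which is the mechanism behind the reported reduction in interaction time.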