Faster Reinforcement Learning Using Active Simulators
In this work, we propose several online methods to build a \emph{learning
curriculum} from a given set of target-task-specific training tasks in order to
speed up reinforcement learning (RL). These methods can decrease the total
training time needed by an RL agent compared to training on the target task
from scratch. Unlike traditional transfer learning, we consider creating a
sequence from several training tasks in order to provide the most benefit in
terms of reducing the total time to train.
Our methods utilize the learning trajectory of the agent on the curriculum
tasks seen so far to decide which tasks to train on next. An attractive feature
of our methods is that they are weakly coupled to the choice of the RL
algorithm as well as the transfer learning method. Further, when there is
domain information available, our methods can incorporate such knowledge to
further speed up the learning. We experimentally show that these methods can be
used to obtain suitable learning curricula that reduce the overall training
time on two different domains.

Comment: 12 pages and 4 figures. More experiments added to the previous version.
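The core idea of using the agent's learning trajectory to pick the next curriculum task can be illustrated with a toy sketch. This is not the paper's actual algorithm: the tasks, their saturating reward model, and the greedy "most recent progress" rule are all assumptions made for the example.

```python
# Toy curriculum selector: train next on whichever task showed the most
# recent learning progress. Task dynamics are purely illustrative.
def make_task(difficulty):
    steps = {"n": 0}
    def train_step():
        steps["n"] += 1
        # Reward grows with training and saturates; harder tasks improve
        # more slowly (a stand-in for a real RL training update).
        return min(1.0, steps["n"] / (10.0 * difficulty))
    return train_step

tasks = {"easy": make_task(1), "medium": make_task(2), "hard": make_task(4)}
last_reward = {name: 0.0 for name in tasks}
progress = {name: 1.0 for name in tasks}  # optimistic init: try each task once

for _ in range(60):
    # Greedy choice over recent progress, a crude proxy for using the
    # agent's learning trajectory on the curriculum tasks seen so far.
    name = max(progress, key=progress.get)
    reward = tasks[name]()
    progress[name] = reward - last_reward[name]
    last_reward[name] = reward
```

Under this toy model the selector naturally drains the easy task first, then moves to harder ones as progress on mastered tasks drops to zero, which is the qualitative behavior a learning curriculum is meant to produce.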
Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey
Reinforcement learning (RL) is a popular paradigm for addressing sequential
decision tasks in which the agent has only limited environmental feedback.
Despite many advances over the past three decades, learning in many domains
still requires a large amount of interaction with the environment, which can be
prohibitively expensive in realistic scenarios. To address this problem,
transfer learning has been applied to reinforcement learning such that
experience gained in one task can be leveraged when starting to learn the next,
harder task. More recently, several lines of research have explored how tasks,
or data samples themselves, can be sequenced into a curriculum for the purpose
of learning a problem that may otherwise be too difficult to learn from
scratch. In this article, we present a framework for curriculum learning (CL)
in reinforcement learning, and use it to survey and classify existing CL
methods in terms of their assumptions, capabilities, and goals. Finally, we use
our framework to find open problems and suggest directions for future RL
curriculum learning research.