Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning
Many problems in sequential decision making and stochastic control often have
natural multiscale structure: sub-tasks are assembled together to accomplish
complex goals. Systematically inferring and leveraging hierarchical structure,
particularly beyond a single level of abstraction, has remained a longstanding
challenge. We describe a fast multiscale procedure for repeatedly compressing,
or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of
sub-problems at different scales is automatically determined. Coarsened MDPs
are themselves independent, deterministic MDPs, and may be solved using
existing algorithms. The multiscale representation delivered by this procedure
decouples sub-tasks from each other and can lead to substantial improvements in
convergence rates both locally within sub-problems and globally across
sub-problems, yielding significant computational savings. A second fundamental
aspect of this work is that these multiscale decompositions yield new transfer
opportunities across different problems, where solutions of sub-tasks at
different levels of the hierarchy may be amenable to transfer to new problems.
Localized transfer of policies and potential operators at arbitrary scales is
emphasized. Finally, we demonstrate compression and transfer in a collection of
illustrative domains, including examples involving discrete and continuous
state spaces. Comment: 86 pages, 15 figures
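As a rough illustration of the kind of coarsening described in this abstract, the sketch below aggregates an MDP over a fixed state partition under a given policy. The function name, the partition input, and the policy-averaging rule are illustrative assumptions; this is plain state aggregation, not the paper's multiscale homogenization procedure.

```python
import numpy as np

def coarsen_mdp(P, R, partition, pi):
    """Aggregate an MDP into a coarse chain over state clusters.

    P: (S, A, S) transition tensor; R: (S, A) reward matrix;
    partition: list of arrays of state indices, one per cluster;
    pi: length-S array of action indices used to average within clusters.
    Illustrative sketch only -- not the homogenization step of the paper.
    """
    K = len(partition)
    P_c = np.zeros((K, K))   # cluster-to-cluster transition probabilities
    R_c = np.zeros(K)        # average one-step reward per cluster
    for i, block in enumerate(partition):
        for s in block:
            a = pi[s]
            R_c[i] += R[s, a] / len(block)
            for j, other in enumerate(partition):
                # Probability mass flowing from state s into cluster j under action a.
                P_c[i, j] += P[s, a, other].sum() / len(block)
    return P_c, R_c
```

The coarse pair (P_c, R_c) can then be treated as a smaller problem in its own right and solved with any standard MDP solver, which is the spirit of the hierarchy the abstract describes.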
Sequential Transfer in Reinforcement Learning with a Generative Model
We are interested in how to design reinforcement learning agents that
provably reduce the sample complexity for learning new tasks by transferring
knowledge from previously-solved ones. The availability of solutions to related
problems poses a fundamental trade-off: whether to seek policies that are
expected to achieve high (yet sub-optimal) performance in the new task
immediately or whether to seek information to quickly identify an optimal
solution, potentially at the cost of poor initial behavior. In this work, we
focus on the second objective when the agent has access to a generative model
of state-action pairs. First, given a set of solved tasks containing an
approximation of the target one, we design an algorithm that quickly identifies
an accurate solution by seeking the state-action pairs that are most
informative for this purpose. We derive PAC bounds on its sample complexity
which clearly demonstrate the benefits of using this kind of prior knowledge.
Then, we show how to learn these approximate tasks sequentially by reducing our
transfer setting to a hidden Markov model and employing spectral methods to
recover its parameters. Finally, we empirically verify our theoretical findings
in simple simulated domains. Comment: ICML 202
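To make the "most informative state-action pairs" idea concrete, here is a toy identification loop: candidate reward models from previously solved tasks are eliminated by querying the target's generative model where they disagree most. The elimination rule, tolerance, and function names are assumptions for illustration, not the paper's PAC algorithm.

```python
import numpy as np

def identify_task(candidates, sample_model, n_queries=100, tol=0.1):
    """Toy sketch: pick the source solution consistent with the target task.

    candidates: list of (S, A) reward estimates from previously solved tasks;
    sample_model(s, a): draws (reward, next_state) from the target's
    generative model. Hypothetical names and rule, for illustration only.
    """
    S, A = candidates[0].shape
    active = list(range(len(candidates)))
    for _ in range(n_queries):
        if len(active) <= 1:
            break
        # Query the state-action pair where surviving candidates disagree most.
        rewards = np.stack([candidates[i] for i in active])
        s, a = np.unravel_index(np.argmax(rewards.max(0) - rewards.min(0)), (S, A))
        r, _ = sample_model(s, a)
        # Keep only candidates whose prediction matches the observed reward.
        active = [i for i in active if abs(candidates[i][s, a] - r) <= tol]
    return active
```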
Transfer from Multiple MDPs
Transfer reinforcement learning (RL) methods leverage the experience
collected on a set of source tasks to speed up RL algorithms. A simple and
effective approach is to transfer samples from source tasks and include them
into the training set used to solve a given target task. In this paper, we
investigate the theoretical properties of this transfer method and we introduce
novel algorithms adapting the transfer process on the basis of the similarity
between source and target tasks. Finally, we report illustrative experimental
results in a continuous chain problem. Comment: 201
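The sample-transfer idea in this abstract can be sketched directly: pool source and target transitions into one training set and run fitted Q-iteration on it. The code below shows only this naive pooling, without the similarity-based selection the paper analyzes; the regressor choice and function names are assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fqi_with_transfer(target_samples, source_samples, n_iter=50, gamma=0.99):
    """Fitted Q-iteration on a training set augmented with source samples.

    Each sample is (s, a, r, s_next) with s, s_next feature vectors and a a
    discrete action index. Naive pooling sketch, not the paper's algorithm.
    """
    data = list(target_samples) + list(source_samples)
    actions = sorted({a for _, a, _, _ in data})
    X = np.array([np.append(s, a) for s, a, _, _ in data])
    rewards = np.array([r for _, _, r, _ in data])
    q = ExtraTreesRegressor(n_estimators=50)
    q.fit(X, rewards)  # initialize Q with immediate rewards
    for _ in range(n_iter):
        # Bellman backup: bootstrap regression targets from the current Q.
        next_q = np.array([
            max(q.predict(np.append(s2, a).reshape(1, -1))[0] for a in actions)
            for _, _, _, s2 in data
        ])
        q.fit(X, rewards + gamma * next_q)
    return q
```

Adapting the weight given to source versus target samples, based on task similarity, is the adaptive variant the abstract refers to.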
Transfer Value Iteration Networks
Value iteration networks (VINs) have been demonstrated to have a good
generalization ability for reinforcement learning tasks across similar domains.
However, based on our experiments, a policy learned by VINs still fails to
generalize well on domains whose action and feature spaces are not
identical to those of the domain where it was trained. In this paper, we propose
a transfer learning approach on top of VINs, termed Transfer VINs (TVINs), such
that a learned policy from a source domain can be generalized to a target
domain with only limited training data, even if the source domain and the
target domain have domain-specific actions and features. We empirically verify
that our proposed TVINs outperform VINs when the source and the target domains
have similar but not identical action and feature spaces. Furthermore, we show
that the performance improvement is consistent across different environments,
maze sizes, and dataset sizes, as well as different values of hyperparameters such
as the number of iterations and the kernel size.
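A minimal sketch of the underlying idea, assuming a PyTorch setting: a small value-iteration-style network whose planning convolutions are copied from a trained source model, while the action-specific layers are left to be fine-tuned on limited target data. The architecture and helper below are illustrative assumptions, not the TVIN model itself.

```python
import torch
import torch.nn as nn

class SimpleVIN(nn.Module):
    """Minimal value-iteration-style network over a 2-D grid (illustrative)."""

    def __init__(self, n_actions, k=20):
        super().__init__()
        self.k = k                                       # number of VI recurrences
        self.r = nn.Conv2d(2, 1, 3, padding=1)           # reward map from observation
        self.q = nn.Conv2d(2, n_actions, 3, padding=1)   # transition/planning kernel
        self.pi = nn.Linear(n_actions, n_actions)        # action-specific head

    def forward(self, x):
        r = self.r(x)                                    # x: (B, 2, H, W) obstacle/goal maps
        v = torch.zeros_like(r)
        for _ in range(self.k):
            q = self.q(torch.cat([r, v], dim=1))
            v, _ = torch.max(q, dim=1, keepdim=True)     # value update
        q = self.q(torch.cat([r, v], dim=1))
        return self.pi(q.permute(0, 2, 3, 1))            # per-cell action logits

def transfer_weights(source, target):
    """Copy the shared planning layers from a trained source VIN; the
    action-specific head stays to be fine-tuned on the target domain."""
    target.r.load_state_dict(source.r.state_dict())
    if source.q.out_channels == target.q.out_channels:
        target.q.load_state_dict(source.q.state_dict())
```

In this sketch, a target model with a different action space keeps only the copied reward convolution and trains its own planning kernel and head on the limited target data, which mirrors the transfer setting the abstract describes.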