Bayesian multitask inverse reinforcement learning
We generalise the problem of inverse reinforcement learning to multiple
tasks, learned from multiple demonstrations. Each demonstration may come
from one expert trying to solve a different task, or from different experts trying to solve the same
task. Our main contribution is to formalise the problem as statistical
preference elicitation, via a number of structured priors, whose form captures
our biases about the relatedness of different tasks or expert policies. In
doing so, we introduce a prior on policy optimality, which is more natural to
specify. We show that our framework allows us not only to learn efficiently
from multiple experts, but also to effectively differentiate between the goals
of each. Possible applications include analysing the intrinsic motivations of
subjects in behavioural experiments and learning from multiple teachers.
Comment: Corrected version. 13 pages, 8 figures
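To make the structured-prior idea concrete, here is a minimal sketch; it is an illustration under our own assumptions, not the paper's algorithm. Each expert's linear reward weights are tied to a shared hyper-mean (a simple relatedness prior), demonstrations are scored with a softmax policy-optimality likelihood, and the posterior is explored by Metropolis-Hastings. All names and constants (FEATURE_DIM, BETA, TAU, phi) are assumptions of this sketch.

```python
# Minimal sketch (illustrative assumptions, not the paper's algorithm):
# each expert's reward weights w_i cluster around a shared hyper-mean mu.
import numpy as np

FEATURE_DIM = 4   # reward assumed linear in state-action features
N_EXPERTS = 3
BETA = 5.0        # optimality temperature: larger => experts closer to optimal
TAU = 0.5         # how tightly expert rewards cluster around the shared mean
rng = np.random.default_rng(0)

def log_likelihood(w, demos, phi):
    """Softmax-optimality log-likelihood of (state, action) demonstrations.

    phi[s, a] is a feature vector, so the Q-values in state s are phi[s] @ w.
    """
    ll = 0.0
    for s, a in demos:
        q = phi[s] @ w
        ll += BETA * q[a] - np.log(np.sum(np.exp(BETA * q)))
    return ll

def log_prior(w, mu):
    # Gaussian relatedness prior: w ~ N(mu, TAU^2 I).
    return -np.sum((w - mu) ** 2) / (2 * TAU ** 2)

def mh_step(w, mu, demos, phi, step=0.1):
    """One Metropolis-Hastings step on one expert's reward weights."""
    w_new = w + step * rng.normal(size=w.shape)
    log_acc = (log_likelihood(w_new, demos, phi) + log_prior(w_new, mu)
               - log_likelihood(w, demos, phi) - log_prior(w, mu))
    return w_new if np.log(rng.random()) < log_acc else w

# Toy data: 5 states, 2 actions, random features and demonstrations.
phi = rng.normal(size=(5, 2, FEATURE_DIM))
demos = [[(int(rng.integers(5)), int(rng.integers(2))) for _ in range(20)]
         for _ in range(N_EXPERTS)]
mu = np.zeros(FEATURE_DIM)
ws = [np.zeros(FEATURE_DIM) for _ in range(N_EXPERTS)]
for _ in range(500):                        # Gibbs-style sweeps
    ws = [mh_step(w, mu, d, phi) for w, d in zip(ws, demos)]
    mu = np.mean(ws, axis=0)                # crude shared-mean update
```

In this toy setup, differentiating the goals of the experts amounts to comparing the posterior samples ws across experts, while mu captures what the tasks have in common.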
Stochastic Inverse Reinforcement Learning
The goal of the inverse reinforcement learning (IRL) problem is to recover
the reward functions from expert demonstrations. However, like any ill-posed
inverse problem, the IRL problem suffers from the inherent defect that a policy
may be optimal for many reward functions, and expert demonstrations may be
optimal under many policies. In this work, we generalize the IRL problem to a
well-posed expectation optimization problem, stochastic inverse reinforcement
learning (SIRL), in order to recover the probability distribution over reward
functions. We adopt
the Monte Carlo expectation-maximization (MCEM) method to estimate the
parameter of the probability distribution as the first solution to the SIRL
problem. The solution is succinct, robust, and transferable for a learning task
and can generate alternative solutions to the IRL problem. Through our
formulation, it is possible to observe intrinsic properties of the IRL
problem from a global viewpoint, and our approach achieves considerable
performance on the objectworld benchmark.
Comment: 8+2 pages, 5 figures, Under Review
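As a concrete illustration of the MCEM idea, here is a toy sketch under strong simplifying assumptions that are ours rather than the paper's: the distribution over linear reward weights is modeled as an isotropic Gaussian N(mu, sigma^2 I); the E-step draws weights from the current Gaussian and reweights them by a softmax demonstration likelihood (self-normalized importance sampling); the M-step moment-matches the weighted sample. All names (DIM, BETA, phi) are placeholders.

```python
# Toy MCEM sketch (our simplifying assumptions, not the paper's method):
# fit an isotropic Gaussian N(mu, sigma^2 I) over linear reward weights.
import numpy as np

rng = np.random.default_rng(1)
DIM, N_SAMPLES, BETA = 3, 500, 4.0

def demo_loglik(w, demos, phi):
    """Softmax log-likelihood of expert (state, action) pairs under weights w."""
    ll = 0.0
    for s, a in demos:
        q = phi[s] @ w                        # Q-values for all actions in s
        ll += BETA * q[a] - np.log(np.sum(np.exp(BETA * q)))
    return ll

def mcem(demos, phi, iters=30):
    mu, sigma = np.zeros(DIM), 1.0
    for _ in range(iters):
        # E-step: Monte Carlo samples of reward weights, importance-weighted
        # by the demonstration likelihood (self-normalized).
        ws = mu + sigma * rng.normal(size=(N_SAMPLES, DIM))
        logp = np.array([demo_loglik(w, demos, phi) for w in ws])
        p = np.exp(logp - logp.max())
        p /= p.sum()
        # M-step: weighted maximum likelihood for the Gaussian parameters.
        mu = p @ ws
        sigma = np.sqrt(max(float(p @ ((ws - mu) ** 2).sum(axis=1)) / DIM, 1e-6))
    return mu, sigma

# Toy problem: 4 states, 2 actions, random features and demonstrations.
phi = rng.normal(size=(4, 2, DIM))
demos = [(int(rng.integers(4)), int(rng.integers(2))) for _ in range(30)]
print(mcem(demos, phi))
```

The returned (mu, sigma) parameterize a distribution over reward functions rather than a single point estimate, which is what makes the recovered solution transferable and able to generate alternative IRL solutions by sampling.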
Bayesian Reinforcement Learning via Deep, Sparse Sampling
We address the problem of Bayesian reinforcement learning using efficient
model-based online planning. We propose an optimism-free Bayes-adaptive
algorithm to induce deeper and sparser exploration with a theoretical bound on
its performance relative to the Bayes-optimal policy, at lower
computational complexity. The main novelty is the use of a candidate policy
generator, to generate long-term options in the planning tree (over beliefs),
which allows us to create much sparser and deeper trees. Experimental results
on different environments show that, in comparison to the state of the art,
our algorithm is computationally more efficient and obtains significantly
higher reward in discrete environments.
Comment: Published in AISTATS 2020
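A minimal sketch of the planning idea follows, with interfaces (DirichletBelief, TabularMDP, candidate_policies) that are our own placeholders rather than the paper's implementation: at each node the planner branches only over a few candidate policies, follows each as a long-term option for L steps on a dynamics sample drawn from the posterior, and recurses with a small sampling width, yielding a deep but sparse tree. The belief is held fixed inside the tree, a root-sampling-style approximation assumed here for simplicity.

```python
# Sketch of sparse, deep planning over options (placeholder interfaces,
# not the paper's implementation).
import numpy as np

rng = np.random.default_rng(2)
GAMMA, L, WIDTH, DEPTH = 0.95, 5, 2, 3   # discount, option length, width, depth

class TabularMDP:
    def __init__(self, P, R):
        self.P, self.R = P, R                # P[s, a]: transition row; R[s, a]
    def step(self, s, a):
        s2 = int(rng.choice(len(self.P[s, a]), p=self.P[s, a]))
        return s2, self.R[s, a]

class DirichletBelief:
    """Toy posterior over a tabular MDP: Dirichlet transitions, known rewards."""
    def __init__(self, n_states, n_actions, rewards):
        self.counts = np.ones((n_states, n_actions, n_states))
        self.rewards = rewards
    def sample_mdp(self):
        P = np.array([[rng.dirichlet(self.counts[s, a])
                       for a in range(self.counts.shape[1])]
                      for s in range(self.counts.shape[0])])
        return TabularMDP(P, self.rewards)

def candidate_policies(belief, state):
    """Placeholder generator: a myopic-greedy policy and a random policy."""
    n_actions = belief.rewards.shape[1]
    greedy = lambda s: int(np.argmax(belief.rewards[s]))
    random_pi = lambda s: int(rng.integers(n_actions))
    return [greedy, random_pi]

def plan(belief, state, depth=DEPTH):
    """Sparse sampling over option branches; returns (value, best policy)."""
    if depth == 0:
        return 0.0, None
    best_v, best_pi = -np.inf, None
    for pi in candidate_policies(belief, state):
        v = 0.0
        for _ in range(WIDTH):               # sparse: only a few sampled futures
            mdp = belief.sample_mdp()        # posterior sample of the dynamics
            s, ret, disc = state, 0.0, 1.0
            for _ in range(L):               # commit to the option for L steps
                a = pi(s)
                s, r = mdp.step(s, a)
                ret += disc * r
                disc *= GAMMA
            tail, _ = plan(belief, s, depth - 1)   # belief held fixed in-tree
            v += (ret + disc * tail) / WIDTH
        if v > best_v:
            best_v, best_pi = v, pi
    return best_v, best_pi

# Toy usage: 4 states, 2 actions, random known rewards.
belief = DirichletBelief(4, 2, rewards=rng.random((4, 2)))
value, policy = plan(belief, state=0)
```

Because each branch commits to an option for L steps, a tree of depth DEPTH covers an effective horizon of L * DEPTH steps while expanding far fewer nodes than per-action branching would, which is the sparser-and-deeper trade-off the abstract describes.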