3 research outputs found
Bayesian learning of noisy Markov decision processes
We consider the inverse reinforcement learning problem, that is, the problem
of learning from, and then predicting or mimicking, a controller based on
state/action data. We propose a statistical model for such data, derived from
the structure of a Markov decision process. Adopting a Bayesian approach to
inference, we show how latent variables of the model can be estimated, and how
predictions about actions can be made, in a unified framework. A new Markov
chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior
distribution. The sampler includes a parameter expansion step, which is shown
to be essential for its good convergence properties. As an illustration, the
method is applied to learning a human controller.
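The abstract leaves the model unspecified, so the following sketch is only a hypothetical instance of the pipeline it describes: a probabilistic model of state/action data (here a softmax "noisy rational" policy over an unknown value table Q with an independent N(0, 1) prior, both illustrative assumptions rather than the paper's choices) and a random-walk Metropolis sampler for the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite MDP: S states, A actions. Assume the controller picks
# actions with softmax ("noisy rational") probabilities driven by an unknown
# value table Q, on which we place an independent N(0, 1) prior.
S, A = 5, 3

def log_likelihood(Q, states, actions, beta=2.0):
    """Exact log-probability of the observed actions under a softmax policy."""
    logp = 0.0
    for s, a in zip(states, actions):
        z = beta * Q[s]
        m = z.max()
        logp += z[a] - m - np.log(np.exp(z - m).sum())
    return logp

def mh_sampler(states, actions, n_iter=2000, step=0.15):
    """Random-walk Metropolis over the value table Q."""
    Q = np.zeros((S, A))
    ll = log_likelihood(Q, states, actions)
    samples = []
    for _ in range(n_iter):
        Q_prop = Q + step * rng.standard_normal((S, A))
        ll_prop = log_likelihood(Q_prop, states, actions)
        # Likelihood ratio plus prior ratio for independent N(0, 1) entries.
        log_acc = ll_prop - ll + 0.5 * (np.sum(Q**2) - np.sum(Q_prop**2))
        if np.log(rng.uniform()) < log_acc:
            Q, ll = Q_prop, ll_prop
        samples.append(Q.copy())
    return np.array(samples)

# Toy usage: fabricate a short state/action trajectory and fit.
states = rng.integers(0, S, size=100)
actions = rng.integers(0, A, size=100)
post = mh_sampler(states, actions)
# Posterior mean of the value table, from which a mimicking policy follows.
print(post.mean(axis=0))
```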
Trace-class Gaussian priors for Bayesian learning of neural networks with MCMC
This paper introduces a new neural network based prior for real-valued
functions on ℝ^d which, by construction, is more easily and cheaply
scaled up in the domain dimension d compared to the usual Karhunen-Loève
function space prior. The new prior is a Gaussian neural network prior, where
each weight and bias has an independent Gaussian prior, but with the key
difference that the variances decrease in the width of the network in such a
way that the resulting function is almost surely well defined in the limit of
an infinite width network. We show that in a Bayesian treatment of inferring
unknown functions, the induced posterior over functions is amenable to Monte
Carlo sampling using Hilbert space Markov chain Monte Carlo (MCMC) methods.
This type of MCMC is popular, e.g. in the Bayesian Inverse Problems literature,
because it is stable under mesh refinement, i.e. the acceptance probability
does not shrink to zero as more parameters of the function's prior are
introduced, even ad infinitum. In numerical examples we demonstrate these
competitive advantages over other function space priors. We also
implement examples in Bayesian Reinforcement Learning to automate tasks from
data and demonstrate, for the first time, stability of MCMC to mesh refinement
for these types of problems.
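The abstract names two ingredients without giving their form: a Gaussian network prior whose per-unit variances decay with the unit index (so the infinite-width limit is well defined) and a Hilbert space MCMC sampler that is stable under mesh refinement. The sketch below is a minimal illustration under assumed choices: the j^{-1.5} variance decay, the tanh activation, and the toy likelihood are all placeholders, and a preconditioned Crank-Nicolson (pCN) sampler stands in for the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated single-hidden-layer prior in "whitened" coordinates: every xi is
# a priori N(0, 1), and the decaying per-unit scales make the output sum
# converge as the truncation level J grows. The decay rate (sd ~ j^{-0.75},
# i.e. variance ~ j^{-1.5}) is an illustrative choice, not the paper's.
J = 200
scales = np.arange(1, J + 1) ** -0.75

def f(xi, x):
    """Evaluate the random network at 1-D inputs x given coordinates xi."""
    w, b, v = xi[:J], xi[J:2 * J], xi[2 * J:]
    return np.tanh(np.outer(x, w) + b) @ (scales * v)

def pcn_step(xi, log_lik, beta=0.2):
    """One preconditioned Crank-Nicolson move. The proposal preserves the
    N(0, I) prior, so the acceptance ratio involves only the likelihood."""
    xi_prop = np.sqrt(1 - beta**2) * xi + beta * rng.standard_normal(xi.shape)
    if np.log(rng.uniform()) < log_lik(xi_prop) - log_lik(xi):
        return xi_prop
    return xi

# Toy regression data and Gaussian likelihood.
x_obs = np.linspace(-2, 2, 30)
y_obs = np.sin(2 * x_obs) + 0.1 * rng.standard_normal(30)
log_lik = lambda xi: -0.5 * np.sum((f(xi, x_obs) - y_obs) ** 2) / 0.1**2

xi = rng.standard_normal(3 * J)  # a draw from the prior
for _ in range(5000):
    xi = pcn_step(xi, log_lik)
```

Because the pCN proposal leaves the N(0, I) prior invariant, the acceptance probability depends on the likelihood alone; this is why refining the truncation level J does not drive it to zero.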
Bayesian learning of noisy Markov decision processes
This work addresses the problem of estimating the optimal value function in a Markov Decision Process from observed state-action pairs. We adopt a Bayesian approach to inference, which allows both the model to be estimated and predictions about actions to be made in a unified framework, providing a principled approach to mimicry of a controller on the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution over the optimal value function. The sampler includes a parameter expansion step, which is shown to be essential for its good convergence properties. As an illustration, the method is applied to learning a human controller.
Bayesian Learning of Noisy Markov Decision Processes
We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. This step includes a parameter expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller
Bayesian Learning of Noisy Markov Decision Processes
This work addresses the problem of estimating the optimal value function in a MarkovDecision Process from observed state-action pairs. We adopt a Bayesian approach toinference, which allows both the model to be estimated and predictions about actions tobe made in a unified framework, providing a principled approach to mimicry of a controlleron the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler isdevised for simulation from the posterior distribution over the optimal value function.This step includes a parameter expansion step, which is shown to be essential for goodconvergence properties of the MCMC sampler. As an illustration, the method is appliedto learning a human controller.