Mapping dynamic environments using Markov random field models
This paper focuses on dynamic environments for mobile robots and proposes a new mapping method combining hidden Markov models (HMMs) and Markov random fields (MRFs). Grid cells are used to represent the dynamic environment. The state change of every grid cell is modelled by an HMM with an unknown transition matrix. MRFs are applied to capture the dependence between the transition matrices of different cells. The unknown parameters of each cell are learnt not only from its own observations but also from those of its neighbours; given this dependence, the resulting parameter maps are smooth. Expectation maximization (EM) is applied to obtain the best parameters from the observations. Finally, a simulation is carried out to evaluate the proposed method.
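The core idea above can be illustrated with a toy sketch. This is not the paper's algorithm: it assumes the binary cell states are fully observed (so transition counts replace the EM step), and it approximates the MRF coupling by pooling each cell's transition counts with its 4-neighbourhood before normalising, which produces the smoothed per-cell transition matrices the abstract describes. All sizes and probabilities are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, T = 4, 4, 300  # hypothetical grid size and number of time steps
# Ground-truth per-cell 2x2 transition matrices (row s -> P(next state | s));
# here every cell shares the same dynamics, purely for the simulation.
true_P = np.tile(np.array([[0.9, 0.1],
                           [0.2, 0.8]]), (H, W, 1, 1))

# Simulate a binary occupancy sequence for every grid cell
states = np.zeros((H, W, T), dtype=int)
for t in range(1, T):
    for i in range(H):
        for j in range(W):
            s = states[i, j, t - 1]
            # next state is 1 with probability P(1 | s)
            states[i, j, t] = rng.random() < true_P[i, j, s, 1]

# Count observed transitions per cell
counts = np.zeros((H, W, 2, 2))
for t in range(1, T):
    for i in range(H):
        for j in range(W):
            counts[i, j, states[i, j, t - 1], states[i, j, t]] += 1

# MRF-style smoothing: pool each cell's counts with its 4-neighbourhood
# (weight 0.5 per neighbour) so neighbouring parameter estimates agree.
smoothed = counts.copy()
for i in range(H):
    for j in range(W):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                smoothed[i, j] += 0.5 * counts[ni, nj]

# Normalise rows to get the estimated transition matrices
P_hat = smoothed / smoothed.sum(axis=-1, keepdims=True)
print(P_hat[0, 0])  # estimated 2x2 transition matrix for cell (0, 0)
```

The neighbour pooling plays the role of the MRF prior: each cell's estimate borrows strength from adjacent cells, so the map of transition parameters varies smoothly across the grid even when individual cells have few observations.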
Probabilistic inverse reinforcement learning in unknown environments
We consider the problem of learning by demonstration from agents acting in
unknown stochastic Markov environments or games. Our aim is to estimate agent
preferences in order to construct improved policies for the same task that the
agents are trying to solve. To do so, we extend previous probabilistic
approaches for inverse reinforcement learning in known MDPs to the case of
unknown dynamics or opponents. We do this by deriving two simplified
probabilistic models of the demonstrator's policy and utility. For
tractability, we use maximum a posteriori estimation rather than full Bayesian
inference. Under a flat prior, this results in a convex optimisation problem.
We find that the resulting algorithms are highly competitive against a variety
of other methods for inverse reinforcement learning that do have knowledge of
the dynamics.

Comment: Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI 2013)
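The claim that MAP estimation under a flat prior yields a convex problem can be sketched in a deliberately simplified setting that is not the paper's model: assume the demonstrator picks actions softmax-greedily in a utility that is linear in hand-picked features. The demonstration negative log-likelihood is then convex in the weight vector (it is multinomial logistic regression), so plain gradient descent finds the MAP/ML estimate. All names, sizes, and the true weight vector below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each demonstration state offers 3 actions, each
# described by a 4-dim feature vector; utility is linear: u(a) = w . phi(a).
n_demos, n_actions, n_feat = 500, 3, 4
w_true = np.array([1.0, -2.0, 0.5, 0.0])  # made-up demonstrator weights

phi = rng.normal(size=(n_demos, n_actions, n_feat))  # per-action features
logits = phi @ w_true
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
acts = np.array([rng.choice(n_actions, p=p) for p in probs])  # demonstrations

def neg_log_lik_grad(w):
    """Gradient of the (convex) negative demonstration log-likelihood."""
    z = phi @ w
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    chosen = phi[np.arange(n_demos), acts]        # features of taken actions
    expected = (p[..., None] * phi).sum(axis=1)   # expected features under model
    return (expected - chosen).mean(axis=0)

# Flat prior => MAP estimate coincides with maximum likelihood, so
# full-batch gradient descent on the convex objective suffices.
w = np.zeros(n_feat)
for _ in range(500):
    w -= 0.5 * neg_log_lik_grad(w)
print(w)  # roughly recovers w_true, up to sampling noise
```

The convexity here is what makes the MAP route tractable compared with full Bayesian inference: there is a single optimum and no posterior integral to approximate.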