Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures
The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle t, action y_t results in perception x_t and reward r_t, where all quantities in general may depend on the complete history. The perception x_t and reward r_t are sampled from the (reactive) environmental probability distribution \mu. This very general setting includes, but is not limited to, (partially observable, k-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if \mu is known. Reinforcement learning is usually used if \mu is unknown. In the Bayesian approach one defines a mixture distribution \xi as a weighted sum of distributions \nu\in\M, where \M is any class of distributions including the true environment \mu. We show that the Bayes-optimal policy p^\xi based on the mixture \xi is self-optimizing in the sense that the average value converges asymptotically for all \mu\in\M to the optimal value achieved by the (infeasible) Bayes-optimal policy p^\mu which knows \mu in advance. We show that the necessary condition that \M admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on \M. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that p^\xi is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in {\em all} environments \nu\in\M and a strictly higher value in at least one.
Comment: 15 pages
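For a finite class \M, the mixture \xi and its Bayesian update are easy to make concrete. The Python sketch below (class, parameters, and seed are made up for illustration) maintains posterior weights w_\nu over a hypothetical class of two-armed Bernoulli bandits and acts greedily on the one-step mixture prediction; the actual Bayes-optimal policy p^\xi would plan over the whole future, so this only illustrates the mixture machinery, not the full policy.

    import numpy as np

    # Hypothetical finite class M of two-armed Bernoulli bandits, each
    # environment nu given by its per-arm success probabilities.  The
    # self-optimizing result requires the true environment mu to be in M.
    M = [np.array([0.2, 0.7]),    # nu_1
         np.array([0.6, 0.3]),    # nu_2
         np.array([0.5, 0.5])]    # nu_3
    w = np.ones(len(M)) / len(M)  # prior weights w_nu > 0
    rng = np.random.default_rng(0)
    true_env = M[0]               # mu, unknown to the agent

    for t in range(1000):
        # One-step mixture prediction: expected reward per arm under
        # xi = sum_nu w_nu * nu.
        q_mix = sum(wi * nu for wi, nu in zip(w, M))
        a = int(np.argmax(q_mix))          # greedy w.r.t. the mixture
        r = rng.random() < true_env[a]     # Bernoulli reward from mu
        # Bayes update: w_nu <- w_nu * nu(r | a), then renormalize.
        lik = np.array([nu[a] if r else 1.0 - nu[a] for nu in M])
        w *= lik
        w /= w.sum()

    print("posterior weights:", np.round(w, 3))  # concentrates on mu

Because the likelihood of each observed reward differs across the candidate environments, the posterior mass concentrates on the true \mu, which is the mechanism behind the convergence of the mixture-based value to the \mu-optimal value.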
Optimistic Agents are Asymptotically Optimal
We use optimism to introduce generic asymptotically optimal reinforcement
learning agents. They achieve, with an arbitrary finite or compact class of
environments, asymptotically optimal behavior. Furthermore, in the finite
deterministic case we provide finite error bounds.
Comment: 13 LaTeX pages
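As a toy reconstruction of the optimism idea (not the paper's agent verbatim), consider a hypothetical finite class of deterministic one-step environments: the agent always acts according to the most optimistic environment still consistent with its observations and discards any environment whose prediction is contradicted, so it can be wrong at most |class| - 1 times, in the spirit of the finite error bounds above.

    # Hypothetical finite class of deterministic one-step environments,
    # each a fixed action -> reward table; names and numbers are made up.
    envs = {
        "nu1": {0: 1.0, 1: 0.0},
        "nu2": {0: 0.0, 1: 0.5},
        "nu3": {0: 0.2, 1: 0.9},
    }
    true_env = envs["nu2"]        # unknown to the agent
    alive = set(envs)             # environments consistent so far

    for t in range(5):
        # Optimism: among surviving environments, pick the (env, action)
        # pair promising the highest reward and play that action.
        name, a = max(((n, act) for n in alive for act in envs[n]),
                      key=lambda p: envs[p[0]][p[1]])
        r = true_env[a]
        # Eliminate every environment whose prediction was contradicted.
        alive = {n for n in alive if envs[n][a] == r}
        print(t, "action", a, "reward", r, "alive", sorted(alive))

After each contradiction at least one environment is removed, and once only consistent environments survive, the optimistic choice coincides with optimal behavior in the true environment.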
Planning with Information-Processing Constraints and Model Uncertainty in Markov Decision Processes
Information-theoretic principles for learning and acting have been proposed
to solve particular classes of Markov Decision Problems. Mathematically, such
approaches are governed by a variational free energy principle and allow
solving MDP planning problems with information-processing constraints expressed
in terms of a Kullback-Leibler divergence with respect to a reference
distribution. Here we consider a generalization of such MDP planners by taking
model uncertainty into account. As model uncertainty can also be formalized as
an information-processing constraint, we can derive a unified solution from a
single generalized variational principle. We provide a generalized value
iteration scheme together with a convergence proof. As limit cases, this
generalized scheme includes standard value iteration with a known model,
Bayesian MDP planning, and robust planning. We demonstrate the benefits of this
approach in a grid world simulation.
Comment: 16 pages, 3 figures
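A minimal sketch of the known-model limit case, plain KL-constrained ("free energy") value iteration for a tabular MDP, might look as follows; the function name, the uniform reference policy, and the example numbers are my own, and the paper's generalized scheme additionally carries an analogous temperature for model uncertainty, omitted here. The inverse temperature beta prices deviations from the reference policy rho; as beta -> infinity the log-sum-exp backup collapses to a max over actions, recovering standard value iteration.

    import numpy as np

    def free_energy_vi(R, P, gamma=0.9, beta=5.0, iters=500):
        """KL-constrained value iteration: reward R[s, a], transitions
        P[s, a, s'], discount gamma, uniform reference policy rho."""
        S, A = R.shape
        rho = np.full(A, 1.0 / A)
        V = np.zeros(S)
        for _ in range(iters):
            Q = R + gamma * (P @ V)          # soft Q-values, shape (S, A)
            # Soft backup: V(s) = (1/beta) log sum_a rho(a) exp(beta Q(s,a)),
            # computed with a max-shift for numerical stability.
            m = Q.max(axis=1, keepdims=True)
            V = m[:, 0] + np.log((rho * np.exp(beta * (Q - m))).sum(axis=1)) / beta
        # Optimal constrained policy: pi(a|s) proportional to rho(a) exp(beta Q(s,a)).
        pi = rho * np.exp(beta * (Q - Q.max(axis=1, keepdims=True)))
        return V, pi / pi.sum(axis=1, keepdims=True)

    # Tiny 2-state, 2-action MDP with made-up numbers:
    R = np.array([[0.0, 1.0], [0.5, 0.0]])
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.7, 0.3]]])
    V, pi = free_energy_vi(R, P)
    print(np.round(V, 3), np.round(pi, 3))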