The problem of making sequential decisions in unknown probabilistic
environments is studied. In cycle $t$, action $y_t$ results in perception $x_t$
and reward $r_t$, where all quantities may in general depend on the complete
history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive)
environmental probability distribution $\mu$. This very general setting
includes, but is not limited to, (partially observable, $k$-th order) Markov
decision processes. Sequential decision theory tells us how to act in order to
maximize the total expected reward, called value, if $\mu$ is known.
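For concreteness, the value of a policy $p$ in environment $\mu$ can be sketched
as the expected reward sum over the first $m$ cycles; the symbols $V$, $m$, and
$\mathbf{E}$ below are our shorthand for this standard construction, not notation
fixed by the text above:
\[
  V^{p}_{\mu} \;:=\; \mathbf{E}\Big[\textstyle\sum_{t=1}^{m} r_t\Big]
  \quad\text{with actions $y_t$ chosen by $p$ and $x_t r_t$ sampled from $\mu$,}
  \qquad
  p^{\mu} \;:=\; \arg\max_{p} V^{p}_{\mu}.
\]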
Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian
approach one defines a mixture distribution $\xi$ as a weighted sum of
distributions $\nu\in\M$, where $\M$ is any class of distributions that includes
the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based
on the mixture $\xi$ is self-optimizing in the sense that its average value
converges asymptotically for all $\mu\in\M$ to the optimal value achieved by
the (infeasible) Bayes-optimal policy $p^\mu$, which knows $\mu$ in advance. We
show that the condition that $\M$ admits self-optimizing policies at all, which
is clearly necessary, is also sufficient. No other structural assumptions are
made on $\M$.
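As a sketch of this construction (the weights $w_\nu$, the horizon $m$, and the
normalization $\sum_\nu w_\nu\le 1$ are assumptions of ours, standard but not
spelled out above), the mixture and the self-optimizing property read:
\[
  \xi(x_{1:m}\,|\,y_{1:m}) \;:=\; \sum_{\nu\in\M} w_\nu\,\nu(x_{1:m}\,|\,y_{1:m}),
  \qquad w_\nu>0,\quad \sum_{\nu\in\M} w_\nu\le 1,
\]
\[
  \tfrac{1}{m}\,V^{p^\xi}_{\mu} \;\longrightarrow\; \tfrac{1}{m}\,V^{p^\mu}_{\mu}
  \quad\text{as } m\to\infty \quad\text{for all } \mu\in\M.
\]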
As an example application, we discuss ergodic Markov decision processes, which
allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is
Pareto-optimal in the sense that there is no other policy yielding higher or
equal value in {\em all} environments $\nu\in\M$ and a strictly higher value in
at least one.
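In the notation sketched above, Pareto-optimality of $p^\xi$ states that no
policy $p$ satisfies
\[
  V^{p}_{\nu} \;\ge\; V^{p^\xi}_{\nu} \ \text{ for all } \nu\in\M
  \qquad\text{and}\qquad
  V^{p}_{\nu_0} \;>\; V^{p^\xi}_{\nu_0} \ \text{ for some } \nu_0\in\M.
\]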