
    Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures

    The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle $t$, action $y_t$ results in perception $x_t$ and reward $r_t$, where all quantities in general may depend on the complete history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive) environmental probability distribution $\mu$. This very general setting includes, but is not limited to, (partially observable, $k$-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if $\mu$ is known. Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian approach one defines a mixture distribution $\xi$ as a weighted sum of distributions $\nu \in \mathcal{M}$, where $\mathcal{M}$ is any class of distributions including the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based on the mixture $\xi$ is self-optimizing in the sense that its average value converges asymptotically for all $\mu \in \mathcal{M}$ to the optimal value achieved by the (infeasible) Bayes-optimal policy $p^\mu$, which knows $\mu$ in advance. We show that the condition that $\mathcal{M}$ admits self-optimizing policies at all, which is clearly necessary, is also sufficient; no other structural assumptions are made on $\mathcal{M}$. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in {\em all} environments $\nu \in \mathcal{M}$ and a strictly higher value in at least one.
    Comment: 15 pages
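    The mechanism behind the self-optimizing property is that the posterior weight of the true environment $\mu$ inside the mixture $\xi$ grows as observations accumulate. A minimal sketch, assuming a hypothetical class $\mathcal{M}$ of three Bernoulli reward distributions (the environment names and parameters are illustrative, not from the paper):

    ```python
    import random

    random.seed(0)

    # Hypothetical class M: each environment nu is a Bernoulli reward
    # distribution; "mu" is the (unknown to the agent) true environment.
    class_M = {"nu1": 0.2, "nu2": 0.5, "mu": 0.8}          # P(reward = 1)
    weights = {name: 1.0 / len(class_M) for name in class_M}  # prior w_nu

    true_p = class_M["mu"]
    for _ in range(200):
        r = 1 if random.random() < true_p else 0  # reward sampled from mu
        # Bayes update: w_nu <- w_nu * nu(r) / xi(r), where xi is the mixture.
        likelihood = {n: (p if r == 1 else 1 - p) for n, p in class_M.items()}
        xi = sum(weights[n] * likelihood[n] for n in class_M)
        weights = {n: weights[n] * likelihood[n] / xi for n in class_M}

    print(weights)  # posterior mass concentrates on "mu"
    ```

    As the weight on $\mu$ approaches 1, acting Bayes-optimally with respect to $\xi$ increasingly coincides with acting optimally with respect to $\mu$, which is the intuition the paper makes precise.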

    Optimistic Agents are Asymptotically Optimal

    We use optimism to introduce generic asymptotically optimal reinforcement learning agents. For an arbitrary finite or compact class of environments, they achieve asymptotically optimal behavior. Furthermore, in the finite deterministic case we provide finite error bounds.
    Comment: 13 LaTeX pages
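    In the finite deterministic case, the idea can be sketched as follows: the agent acts according to the most optimistic environment still consistent with its observations, and discards any candidate the observed reward contradicts. A toy instance (the candidate environments and rewards below are hypothetical, not from the paper):

    ```python
    # Hypothetical finite deterministic class: each candidate environment
    # maps an action to a fixed reward. The agent follows the action whose
    # best consistent candidate promises the most (optimism), then prunes
    # candidates falsified by the observed reward.
    candidates = [
        {"a": 0.0, "b": 1.0},   # wrongly optimistic about action "b"
        {"a": 0.7, "b": 0.2},   # the true environment (unknown to the agent)
        {"a": 0.3, "b": 0.2},
    ]
    true_env = {"a": 0.7, "b": 0.2}
    consistent = list(candidates)

    total = 0.0
    for t in range(10):
        # Optimism in the face of uncertainty over the consistent set.
        action = max("ab", key=lambda a: max(env[a] for env in consistent))
        reward = true_env[action]
        total += reward
        consistent = [env for env in consistent if env[action] == reward]

    print(action, total)  # settles on the truly best action "a"
    ```

    Each over-optimistic candidate can mislead the agent only finitely often before being eliminated, which is the source of the finite error bounds.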

    Planning with Information-Processing Constraints and Model Uncertainty in Markov Decision Processes

    Information-theoretic principles for learning and acting have been proposed to solve particular classes of Markov Decision Problems. Mathematically, such approaches are governed by a variational free energy principle and allow solving MDP planning problems with information-processing constraints expressed in terms of a Kullback-Leibler divergence with respect to a reference distribution. Here we consider a generalization of such MDP planners by taking model uncertainty into account. As model uncertainty can also be formalized as an information-processing constraint, we can derive a unified solution from a single generalized variational principle. We provide a generalized value iteration scheme together with a convergence proof. As limit cases, this generalized scheme includes standard value iteration with a known model, Bayesian MDP planning, and robust planning. We demonstrate the benefits of this approach in a grid world simulation.
    Comment: 16 pages, 3 figures
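    The known-model limit case can be illustrated with KL-constrained ("free energy") value iteration, where the Bellman backup becomes a log-sum-exp softened by an inverse temperature $\beta$ against a reference policy $\rho$. A minimal sketch on a hypothetical two-state, two-action MDP (the transition and reward numbers are illustrative; this shows only the standard-value-iteration limit, not the paper's full scheme with model uncertainty):

    ```python
    import math

    # Hypothetical 2-state, 2-action MDP.
    S, A = 2, 2
    P = [[[0.9, 0.1], [0.1, 0.9]],   # P[s][a][s'] transition probabilities
         [[0.8, 0.2], [0.2, 0.8]]]
    R = [[1.0, 0.0], [0.0, 0.5]]     # R[s][a] immediate rewards
    gamma = 0.9                      # discount factor
    beta = 5.0                       # inverse temperature: beta -> inf
                                     # recovers the hard max of value iteration
    rho = 0.5                        # uniform reference policy rho(a|s)

    # KL-constrained backup: V(s) = (1/beta) log sum_a rho(a) exp(beta Q(s,a))
    V = [0.0, 0.0]
    for _ in range(500):
        V = [
            (1.0 / beta) * math.log(sum(
                rho * math.exp(beta * (R[s][a] + gamma * sum(
                    P[s][a][s2] * V[s2] for s2 in range(S))))
                for a in range(A)))
            for s in range(S)
        ]

    print([round(v, 3) for v in V])
    ```

    For small $\beta$ the agent stays close to the reference distribution $\rho$ (cheap information processing, lower value); for large $\beta$ the log-sum-exp approaches the max and the iteration reduces to standard value iteration, matching the limit case mentioned in the abstract.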