
    A new reinforcement learning algorithm with fixed exploration for semi-Markov decision processes

    Artificial intelligence and machine learning techniques are now widely applied to problems in data analytics. This work presents and demonstrates a new machine learning algorithm for solving semi-Markov decision processes (SMDPs). SMDPs arise in reinforcement learning when solving control problems in discrete-event systems. The new algorithm developed here is called iSMART, an acronym for imaging Semi-Markov Average Reward Technique. The algorithm uses a constant exploration rate, unlike its precursor R-SMART, which required exploration decay. The major difference between R-SMART and iSMART is that the latter maintains, in addition to the regular iterates of R-SMART, a set of so-called imaging iterates, which form an image of the regular iterates and allow iSMART to avoid exploration decay. The new algorithm is tested extensively on small-scale SMDPs and on large-scale problems from the domain of Total Productive Maintenance (TPM), and it shows encouraging performance in all the cases studied. --Abstract, page iii
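    The abstract does not give the update equations. As a rough illustration of the constant-exploration structure it describes, the following Python sketch runs an average-reward SMDP Q-learning loop in the R-SMART style; the toy environment, the step sizes ALPHA and BETA, and the secondary Q_img table standing in for the "imaging iterates" are hypothetical choices for illustration, not the authors' actual iSMART rules.

```python
import random

# Illustrative sketch (not the authors' exact iSMART update rules): an
# epsilon-greedy, average-reward Q-learning loop for a toy SMDP in which
# each transition returns a reward and a sojourn time. The exploration
# rate EPSILON is held constant, as the abstract describes for iSMART;
# the "imaging" table Q_img merely mirrors the regular iterates to show
# the two-set data flow, since the real update is not given in the abstract.

STATES = [0, 1]
ACTIONS = [0, 1]
EPSILON = 0.1            # constant exploration rate (no decay)
ALPHA, BETA = 0.1, 0.01  # hypothetical step sizes

def step(state, action):
    """Toy SMDP transition: returns (next_state, reward, sojourn_time)."""
    next_state = random.choice(STATES)
    reward = 1.0 if action == state else 0.0
    sojourn = random.uniform(0.5, 2.0)
    return next_state, reward, sojourn

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}       # regular iterates
Q_img = {(s, a): 0.0 for s in STATES for a in ACTIONS}   # "imaging" copy (illustrative)
rho, total_r, total_t = 0.0, 0.0, 0.0                    # average reward per unit time

state = 0
for _ in range(10_000):
    if random.random() < EPSILON:                         # constant exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])

    nxt, r, tau = step(state, action)
    # Relative (average-reward) SMDP target: reward minus rho * sojourn time.
    target = r - rho * tau + max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    Q_img[(state, action)] += BETA * (Q[(state, action)] - Q_img[(state, action)])

    total_r, total_t = total_r + r, total_t + tau
    rho = total_r / total_t                               # running average-reward estimate
    state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```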

    Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures

    The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle $t$, action $y_t$ results in perception $x_t$ and reward $r_t$, where all quantities in general may depend on the complete history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive) environmental probability distribution $\mu$. This very general setting includes, but is not limited to, (partially observable, $k$-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if $\mu$ is known. Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian approach one defines a mixture distribution $\xi$ as a weighted sum of distributions $\nu\in\mathcal{M}$, where $\mathcal{M}$ is any class of distributions including the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based on the mixture $\xi$ is self-optimizing in the sense that the average value converges asymptotically, for all $\mu\in\mathcal{M}$, to the optimal value achieved by the (infeasible) Bayes-optimal policy $p^\mu$, which knows $\mu$ in advance. We show that the necessary condition that $\mathcal{M}$ admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on $\mathcal{M}$. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in all environments $\nu\in\mathcal{M}$ and a strictly higher value in at least one.
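    As a rough illustration of the Bayes-mixture idea described above (not the paper's formal construction), the sketch below maintains a mixture $\xi$ over a small finite class $\mathcal{M}$ of candidate environments, updates the weights by Bayes' rule as percepts arrive, and acts greedily for one step under $\xi$. The environment class, priors, and the one-step greedy policy are simplifications; the paper's policy $p^\xi$ maximizes total expected value over the horizon.

```python
import random
from typing import Callable, Dict, List

# A candidate environment nu: probability of the next percept given the
# history and the current action. These toy environments ignore the history,
# although the general setting allows full history dependence.
Env = Callable[[List[int], int, int], float]

def make_biased_env(p: float) -> Env:
    # Percept 1 with probability p when action == 1, else uniform over {0, 1}.
    def nu(history: List[int], action: int, percept: int) -> float:
        if action == 1:
            return p if percept == 1 else 1.0 - p
        return 0.5
    return nu

class BayesMixture:
    def __init__(self, envs: Dict[str, Env], prior: Dict[str, float]):
        self.envs, self.w = envs, dict(prior)
        self.history: List[int] = []

    def xi(self, action: int, percept: int) -> float:
        """Mixture probability of the next percept: weighted sum over the class."""
        return sum(w * self.envs[name](self.history, action, percept)
                   for name, w in self.w.items())

    def update(self, action: int, percept: int) -> None:
        """Bayes' rule: reweight each candidate by how well it predicted the percept."""
        z = self.xi(action, percept)
        for name in self.w:
            self.w[name] *= self.envs[name](self.history, action, percept) / z
        self.history.append(percept)

    def act(self) -> int:
        # One-step greedy under xi, with reward equal to the percept in this toy.
        return max((0, 1), key=lambda a: sum(x * self.xi(a, x) for x in (0, 1)))

agent = BayesMixture({"p=0.8": make_biased_env(0.8), "p=0.2": make_biased_env(0.2)},
                     {"p=0.8": 0.5, "p=0.2": 0.5})

true_p = 0.8  # the (unknown to the agent) true environment
for _ in range(50):
    a = agent.act()
    x = (1 if random.random() < true_p else 0) if a == 1 else random.randint(0, 1)
    agent.update(a, x)

print(agent.w)  # posterior mass should concentrate on the true environment
```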