
    Is the Bellman residual a bad proxy?

    This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, which are usually designed to maximize the mean value, and derive a method that minimizes the residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ over policies. A theoretical analysis shows how good this proxy is for policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed to study the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy for policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe this question is worth considering. Comment: Final NIPS 2017 version (title, among other things, changed).
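    As a minimal tabular illustration of the two criteria being compared, the sketch below computes both the mean value $\nu^\top v_\pi$ and the weighted L1 Bellman residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ for a random policy on a small randomly generated MDP. The MDP, the weighting distribution $\nu$, and all names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def policy_value(P, R, pi, gamma):
    """Exact v_pi = (I - gamma * P_pi)^{-1} r_pi for a tabular MDP."""
    S = R.shape[0]
    P_pi = np.einsum("sa,saz->sz", pi, P)   # state-to-state kernel under pi
    r_pi = np.einsum("sa,sa->s", pi, R)     # expected one-step reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def bellman_residual_l1(P, R, v, gamma, nu):
    """Weighted L1 residual ||T* v - v||_{1,nu} of the optimal Bellman operator."""
    q = R + gamma * np.einsum("saz,z->sa", P, v)
    return nu @ np.abs(q.max(axis=1) - v)

# Small random MDP; sizes and distributions are illustrative only.
rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']
R = rng.uniform(size=(S, A))
nu = np.full(S, 1.0 / S)                     # weighting distribution nu

pi = rng.dirichlet(np.ones(A), size=S)       # a random stochastic policy
v = policy_value(P, R, pi, gamma)
print("mean value nu^T v_pi      :", nu @ v)
print("Bellman residual (L1, nu) :", bellman_residual_l1(P, R, v, gamma, nu))
```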

    Optimal Raw Material Inventory Analysis Using Markov Decision Process with Policy Iteration Method

    Inventory of raw materials is a major concern in every production process, both in company production and in home-business production. In order to meet consumer demand, a business must be able to determine the amount of inventory that should be kept on hand. The purpose of this research is to choose among alternative raw-material ordering policies the one that provides the maximum amount of raw material at minimum cost. The raw material referred to in this study is the pandan leaf used to make pandan mats. The raw material inventory was analysed using a Markov decision process with the policy iteration method, taking the discount factor into account. The analysis yields the alternative policies that producers should adopt to meet their raw-material needs at minimum cost. The results of this study can serve as a consideration for business actors at the study location when deciding on the optimal ordering policy to obtain the minimum operational cost.
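    The abstract does not give the transition probabilities or cost data for the pandan-leaf problem, so the sketch below only illustrates the method it names: policy iteration with a discount factor on a small cost-minimizing MDP. The state/action layout and all numbers are placeholder assumptions for illustration.

```python
import numpy as np

def policy_iteration(P, C, gamma):
    """Policy iteration for a discounted cost-minimizing MDP.

    P[s, a, s'] : transition probabilities, C[s, a] : expected one-step cost.
    Returns the optimal deterministic policy and its expected discounted cost.
    """
    S, A = C.shape
    pi = np.zeros(S, dtype=int)                 # start from an arbitrary policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly.
        P_pi = P[np.arange(S), pi]
        c_pi = C[np.arange(S), pi]
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
        # Policy improvement: greedy (cost-minimizing) step.
        q = C + gamma * P @ v                   # q[s, a]
        new_pi = q.argmin(axis=1)
        if np.array_equal(new_pi, pi):
            return pi, v
        pi = new_pi

# Placeholder 3-state (stock level), 2-action (order / don't order) problem;
# the numbers are illustrative only, not data from the study.
P = np.array([[[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]],
              [[0.3, 0.5, 0.2], [0.2, 0.3, 0.5]],
              [[0.5, 0.3, 0.2], [0.1, 0.2, 0.7]]])
C = np.array([[4.0, 6.0], [3.0, 5.0], [2.0, 1.0]])
pi_star, v_star = policy_iteration(P, C, gamma=0.9)
print("optimal ordering decision per stock level:", pi_star)
```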

    Momentum in Reinforcement Learning

    We adapt the concept of momentum from optimization to reinforcement learning. Seeing the state-action value functions as an analog of the gradients in optimization, we interpret momentum as an average of consecutive q-functions. We derive Momentum Value Iteration (MoVI), a variation of Value Iteration that incorporates this momentum idea. Our analysis shows that this allows MoVI to average errors over successive iterations. We show that the proposed approach can be readily extended to deep learning. Specifically, we propose a simple improvement on DQN based on MoVI, and evaluate it on Atari games. Comment: AISTATS 2020.
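    Below is a minimal tabular sketch of one plausible reading of the momentum idea: keep a running average h of the successive q-functions, act greedily with respect to h, and apply the Bellman backup to h. This is an interpretation for illustration, not necessarily the paper's exact MoVI update, and the random MDP is an assumption.

```python
import numpy as np

def movi_sketch(P, R, gamma, iters=200):
    """Tabular value-iteration loop with a momentum-style average of q-functions."""
    S, A = R.shape
    q = np.zeros((S, A))
    h = np.zeros((S, A))                        # running average of q-functions
    for k in range(1, iters + 1):
        pi = h.argmax(axis=1)                   # greedy policy w.r.t. the average
        # Bellman backup of the averaged q-function under the greedy policy.
        q = R + gamma * np.einsum("saz,z->sa", P, h[np.arange(S), pi])
        h += (q - h) / k                        # incremental average of q_1..q_k
    return h.argmax(axis=1), h

# Illustrative random MDP.
rng = np.random.default_rng(1)
S, A, gamma = 6, 3, 0.95
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(size=(S, A))
pi, h = movi_sketch(P, R, gamma)
print("greedy policy from the averaged q-function:", pi)
```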

    Multi-agent Reinforcement Learning in Sequential Social Dilemmas

    Matrix games like Prisoner's Dilemma have guided research on social dilemmas for decades. However, they necessarily treat the choice to cooperate or defect as an atomic action. In real-world social dilemmas these choices are temporally extended. Cooperativeness is a property that applies to policies, not elementary actions. We introduce sequential social dilemmas that share the mixed incentive structure of matrix-game social dilemmas but also require agents to learn policies that implement their strategic intentions. We analyze the dynamics of policies learned by multiple self-interested independent learning agents, each using its own deep Q-network, on two Markov games we introduce here: 1. a fruit Gathering game and 2. a Wolfpack hunting game. We characterize how learned behavior in each domain changes as a function of environmental factors, including resource abundance. Our experiments show how conflict can emerge from competition over shared resources and shed light on how the sequential nature of real-world social dilemmas affects cooperation.
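    As a toy illustration of independent learning agents, each with its own Q-learner, the sketch below gives a tabular Q-learning agent that treats the other player simply as part of its environment. It stands in for the deep Q-networks and the Gathering/Wolfpack environments of the paper, which are not reproduced here; the two-player environment interface in the trailing comments is hypothetical.

```python
import numpy as np

class IndependentQLearner:
    """One self-interested learner; the other agent is just part of its environment."""
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99, eps=0.1, seed=0):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.rng = np.random.default_rng(seed)

    def act(self, s):
        # Epsilon-greedy action selection on the agent's own q-table.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.q.shape[1]))
        return int(self.q[s].argmax())

    def update(self, s, a, r, s_next):
        # Standard Q-learning update from the agent's own reward stream.
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

# Each agent learns independently in the shared Markov game (hypothetical interface):
# a1, a2 = agent1.act(s), agent2.act(s)
# s_next, r1, r2 = env.step(a1, a2)
# agent1.update(s, a1, r1, s_next); agent2.update(s, a2, r2, s_next)
```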

    On Reinforcement Learning for Turn-based Zero-sum Markov Games

    We consider the problem of finding Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a Reinforcement Learning based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method, which combines "exploration", "policy improvement" and "supervised learning" to find the value function and policy associated with the Nash equilibrium. We identify sufficient conditions for convergence and correctness of such an approach. For a concrete instance of EIS where a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of the Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy.
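    The sketch below illustrates only the "supervised learning" component of EIS, using scikit-learn's k-nearest-neighbours regressor to generalise value targets over a continuous state space. The sampled states and the targets (a test function standing in for MCTS-improved values) are assumptions for illustration, not the paper's construction, and scikit-learn availability is assumed.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# "Explore": sample states from a (here 2-D, hypothetical) continuous state space.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(500, 2))

# "Improve": stand-in for the MCTS-improved value targets at the sampled states;
# in EIS these would come from tree search, here we just use a known test function.
targets = np.sin(states[:, 0]) * np.cos(states[:, 1])

# "Supervise": nearest-neighbour regression generalises the targets to unseen states.
value_fn = KNeighborsRegressor(n_neighbors=5).fit(states, targets)
print(value_fn.predict([[0.2, -0.3]]))
```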

    Heuristic Search Value Iteration for zero-sum Stochastic Games

    In sequential decision-making, heuristic search algorithms exploit both the initial situation and an admissible heuristic to efficiently search for an optimal solution, often for planning purposes. Such algorithms exist for problems with uncertain dynamics, partial observability, multiple criteria, or multiple collaborating agents. Here we look at two-player zero-sum stochastic games with a discounted criterion, with a view to proposing a solution tailored to the fully observable case, while existing solutions address particular, though still more general, partially observable cases. This setting induces reasoning on both a lower and an upper bound of the value function, which leads us to propose zsSG-HSVI, an algorithm based on Heuristic Search Value Iteration (HSVI), and which thus relies on generating trajectories. We demonstrate that, with each player acting optimistically and with simple heuristic initializations, HSVI's finite-time convergence to an ϵ-optimal solution is preserved. An empirical study of the resulting approach is conducted on benchmark problems of various sizes.
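    As a rough illustration of the bound-based search that HSVI-style methods perform, the sketch below runs the single-controller (MDP) analogue: it keeps a lower and an upper bound on the value function, follows trajectories toward the largest remaining gap, and backs both bounds up along the visited path. The full zsSG-HSVI algorithm would instead solve a matrix game at each state for the two players; the bound initializations, the trajectory heuristic, and the random MDP here are illustrative assumptions.

```python
import numpy as np

def hsvi_mdp_sketch(P, R, gamma, eps=1e-3, max_iters=500):
    """HSVI-style bound-based search on a tabular MDP, started from state 0."""
    S, A = R.shape
    lo = np.full(S, R.min() / (1.0 - gamma))   # crude admissible lower bound
    up = np.full(S, R.max() / (1.0 - gamma))   # crude admissible upper bound

    for _ in range(max_iters):
        if up[0] - lo[0] <= eps:
            break
        s, depth, path = 0, 0, []
        # Generate a trajectory, stopping when the local gap is small enough.
        while up[s] - lo[s] > eps / gamma**depth and depth < 50:
            path.append(s)
            a = int((R[s] + gamma * P[s] @ up).argmax())     # optimistic action
            s = int((P[s, a] * (up - lo)).argmax())          # successor with largest weighted gap
            depth += 1
        for s in reversed(path):                             # back both bounds up along the path
            lo[s] = (R[s] + gamma * P[s] @ lo).max()
            up[s] = (R[s] + gamma * P[s] @ up).max()
    return lo, up

# Illustrative random MDP.
rng = np.random.default_rng(2)
S, A = 8, 3
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(size=(S, A))
lo, up = hsvi_mdp_sketch(P, R, gamma=0.9)
print("value bounds at the initial state:", lo[0], up[0])
```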