
    Zap Q-Learning for Optimal Stopping Time Problems

    The objective of this paper is to obtain fast-converging reinforcement learning algorithms that approximate solutions to the problem of discounted-cost optimal stopping in an irreducible, uniformly ergodic Markov chain evolving on a compact subset of $\mathbb{R}^n$. We build on the dynamic programming approach taken by Tsitsiklis and Van Roy, wherein they propose a Q-learning algorithm to estimate the optimal state-action value function, which then defines an optimal stopping rule. We provide insights as to why the convergence rate of this algorithm can be slow, and propose a fast-converging alternative, the "Zap Q-learning" algorithm, designed to achieve the optimal rate of convergence. For the first time, we prove the convergence of the Zap Q-learning algorithm in the linear function approximation setting. We use ODE analysis for the proof, and the optimal asymptotic variance property of the algorithm is reflected in the fast convergence observed in a finance example.
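
    To make the setting concrete, the following minimal sketch (not the paper's code) shows Q-learning for optimal stopping with linear function approximation in the spirit of Tsitsiklis and Van Roy, alongside a matrix-gain update of the kind used by Zap Q-learning. The random-walk chain, the features phi, the terminal reward g, and the step-size exponents are illustrative assumptions, not taken from the paper.

        # Sketch: Q-learning for optimal stopping with linear features,
        # plus a matrix-gain ("Zap"-style) variant.  All modeling choices
        # below (chain, features, reward g) are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        gamma = 0.95                      # discount factor
        d = 3                             # number of features

        def phi(x):                       # linear features of the scalar state
            return np.array([1.0, x, x * x])

        def g(x):                         # terminal (stopping) reward, assumed
            return max(1.0 - x, 0.0)

        def step(x):                      # ergodic chain on a compact interval, assumed
            return np.clip(x + 0.1 * rng.standard_normal(), 0.0, 1.0)

        def q_learning(n_iter=50_000, zap=False):
            theta = np.zeros(d)
            A_hat = -np.eye(d)            # running Jacobian estimate (Zap gain)
            x = 0.5
            for n in range(1, n_iter + 1):
                x_next = step(x)
                f_x, f_next = phi(x), phi(x_next)
                # Continuation value vs. stopping reward at the next state:
                q_next = max(g(x_next), f_next @ theta)
                td = gamma * q_next - f_x @ theta       # TD error (zero running cost)
                if zap:
                    # Zap: two-time-scale matrix gain approximating a Newton-Raphson step.
                    grad_next = f_next if f_next @ theta >= g(x_next) else np.zeros(d)
                    A_n = np.outer(f_x, gamma * grad_next - f_x)
                    A_hat += (1.0 / n ** 0.85) * (A_n - A_hat)   # faster time scale
                    # Small shift keeps the estimate safely invertible in this sketch.
                    theta -= (1.0 / n) * np.linalg.solve(A_hat - 1e-6 * np.eye(d), f_x * td)
                else:
                    theta += (1.0 / n) * f_x * td                # ordinary scalar gain
                x = x_next
            return theta

        theta_std, theta_zap = q_learning(), q_learning(zap=True)
        print("standard:", theta_std)
        print("zap:     ", theta_zap)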

    Recent Advances in Reinforcement Learning

    In this paper, we give a brief review of Markov Decision Processes (MDPs) and explain how Reinforcement Learning (RL) can be viewed as an MDP in which the parameters are unknown. Specific topics discussed include the Bellman equation and the Bellman operator, value and policy iteration for MDPs, and recent "empirical" approaches to solving the Bellman equation and applying the Bellman iteration. In addition to the well-established method of Q-learning, we also discuss the more recent approach known as Zap Q-learning.
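
    As a concrete illustration of the Bellman operator and value iteration mentioned above, the short sketch below computes the optimal value function of a small tabular MDP by iterating the Bellman operator to its fixed point. The two-state transition and cost data are made up for illustration; they are not from the paper.

        # Sketch: Bellman operator and value iteration on a tiny tabular MDP.
        import numpy as np

        gamma = 0.9
        # P[a, x, y] = P(next state y | state x, action a);  c[x, a] = one-step cost.
        P = np.array([[[0.8, 0.2], [0.3, 0.7]],
                      [[0.5, 0.5], [0.1, 0.9]]])
        c = np.array([[1.0, 2.0],
                      [0.5, 3.0]])

        def bellman(V):
            # (T V)(x) = min_a { c(x, a) + gamma * sum_y P(y | x, a) V(y) }
            return np.min(c + gamma * np.einsum("axy,y->xa", P, V), axis=1)

        V = np.zeros(2)
        for _ in range(200):              # value iteration: iterate T to its fixed point
            V_new = bellman(V)
            if np.max(np.abs(V_new - V)) < 1e-10:
                break
            V = V_new
        print("V* ~=", V)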

    Stability of Q-Learning Through Design and Optimism

    Q-learning has become an important part of the reinforcement learning toolkit since its introduction in the dissertation of Chris Watkins in the 1980s. The purpose of this paper is in part a tutorial on stochastic approximation and Q-learning, providing details regarding the INFORMS APS inaugural Applied Probability Trust Plenary Lecture, presented in Nancy, France, in June 2023. The paper also presents new approaches to ensure stability and potentially accelerated convergence for these algorithms, and for stochastic approximation in other settings. Two contributions are entirely new: 1. Stability of Q-learning with linear function approximation has been an open topic of research for over three decades. It is shown that with appropriate optimistic training in the form of a modified Gibbs policy, there exists a solution to the projected Bellman equation, and the algorithm is stable (in the sense of bounded parameter estimates). Convergence remains one of many open topics for research. 2. The new Zap Zero algorithm is designed to approximate the Newton-Raphson flow without matrix inversion. It is stable and convergent under mild assumptions on the mean flow vector field for the algorithm and compatible statistical assumptions on an underlying Markov chain. The algorithm is a general approach to stochastic approximation that in particular applies to Q-learning with "oblivious" training, even with non-linear function approximation. Comment: Companion paper to the INFORMS APS inaugural Applied Probability Trust Plenary Lecture, presented in Nancy, France, June 2023. Slides available online, DOI 10.13140/RG.2.2.24897.3312.
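
    The toy sketch below illustrates one generic way to approximate a Newton-Raphson flow d(theta)/dt = -A(theta)^{-1} f(theta) without matrix inversion: a fast auxiliary recursion iteratively solves the linear system for the Newton direction, while a slow recursion moves the parameter along that direction. The deterministic root-finding problem and the step-size choices are illustrative assumptions; this is not the Zap Zero recursion from the paper.

        # Sketch: two-time-scale approximation of a Newton-Raphson flow,
        # avoiding explicit matrix inversion, on a toy root-finding problem.
        import numpy as np

        def f_bar(theta):                  # mean flow vector field (toy example)
            x, y = theta
            return np.array([x + 0.5 * np.sin(y) - 1.0,
                             2.0 * y + 0.2 * x * x])

        def A_of(theta):                   # Jacobian of f_bar, computed by hand
            x, y = theta
            return np.array([[1.0, 0.5 * np.cos(y)],
                             [0.4 * x, 2.0]])

        theta = np.zeros(2)
        w = np.zeros(2)                    # tracks A(theta)^{-1} f_bar(theta), no inversion
        for n in range(1, 50_001):
            A, f = A_of(theta), f_bar(theta)
            beta, alpha = 1.0 / (n + 10) ** 0.7, 1.0 / (n + 10)
            # Fast recursion: least-squares step driving A @ w toward f_bar(theta).
            w += beta * A.T @ (f - A @ w)
            # Slow recursion: move theta along the approximate Newton direction -w.
            theta -= alpha * w

        print("theta ~=", theta, "  f_bar(theta) ~=", f_bar(theta))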