
    Pricing Bermudan options using regression: Optimal rates of convergence for lower estimates

    The problem of pricing Bermudan options using simulations and nonparametric regression is considered. We derive optimal non-asymptotic bounds for the low-biased estimate based on a suboptimal stopping rule constructed from estimates of the optimal continuation values. These estimates may be of different natures, local or global, with the only requirement being that their deviations from the true continuation values can be uniformly bounded in probability. As an illustration, we discuss a class of local polynomial estimates which, under some regularity conditions, yield continuation-value estimates possessing the required property.
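
    The construction the abstract describes lends itself to a short sketch. The following is a minimal illustration, not the paper's local polynomial estimator: continuation values are fit by global polynomial regression on one set of simulated paths, and the induced suboptimal stopping rule is then evaluated on fresh paths, giving a low-biased price estimate. The Bermudan put payoff, GBM dynamics, basis, and all parameters are illustrative assumptions.

```python
# Minimal sketch of a low-biased Bermudan price estimate from a
# regression-based suboptimal stopping rule (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_steps = 10_000, 10_000, 50
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps
disc = np.exp(-r * dt)

def simulate_paths(n):
    """Geometric Brownian motion paths, shape (n, n_steps + 1)."""
    z = rng.standard_normal((n, n_steps))
    log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.concatenate([np.zeros((n, 1)),
                                       np.cumsum(log_inc, axis=1)], axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)  # Bermudan put

# 1) Fit continuation values backwards on training paths (a global
#    polynomial regression stands in for any estimator whose deviation
#    from the true continuation values is uniformly controlled).
paths = simulate_paths(n_train)
value = payoff(paths[:, -1])
coefs = [None] * n_steps
for t in range(n_steps - 1, 0, -1):
    basis = np.vander(paths[:, t] / K, 4)  # degree-3 polynomial basis
    coefs[t] = np.linalg.lstsq(basis, disc * value, rcond=None)[0]
    cont = basis @ coefs[t]
    exercise = payoff(paths[:, t]) >= cont
    value = np.where(exercise, payoff(paths[:, t]), disc * value)

# 2) Evaluate the induced stopping rule on fresh paths: the resulting
#    estimate is biased low because the rule is suboptimal.
paths = simulate_paths(n_test)
cash = payoff(paths[:, -1]) * disc**n_steps  # default: stop at maturity
stopped = np.zeros(n_test, dtype=bool)
for t in range(1, n_steps):
    cont = np.vander(paths[:, t] / K, 4) @ coefs[t]
    pay = payoff(paths[:, t])
    ex = (~stopped) & (pay >= cont) & (pay > 0)
    cash[ex] = pay[ex] * disc**t
    stopped |= ex
print("low-biased Bermudan put estimate:", cash.mean())
```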

    Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming

    We consider the classical finite-state discounted Markovian decision problem, and we introduce a new policy iteration-like algorithm for finding the optimal Q-factors. Instead of policy evaluation by solving a linear system of equations, our algorithm requires (possibly inexact) solution of a nonlinear system of equations, involving estimates of state costs as well as Q-factors. This is Bellman's equation for an optimal stopping problem that can be solved with simple Q-learning iterations, in the case where a lookup table representation is used; it can also be solved with the Q-learning algorithm of Tsitsiklis and Van Roy [TsV99], in the case where feature-based Q-factor approximations are used. In exact/lookup table representation form, our algorithm admits asynchronous and stochastic iterative implementations, in the spirit of asynchronous/modified policy iteration, with lower overhead and more reliable convergence than existing Q-learning schemes. Furthermore, for large-scale problems, where linear basis function approximations and simulation-based temporal difference implementations are used, our algorithm effectively resolves the inherent difficulties of existing schemes due to inadequate exploration.
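
    As a rough illustration of the scheme the abstract outlines, here is a minimal synchronous lookup-table sketch on an illustrative random MDP. Policy evaluation is replaced by a few fixed-point iterations on the optimal-stopping Bellman equation; in the actual algorithm these inner iterations may be stochastic Q-learning updates, and the sweeps may be asynchronous.

```python
# Minimal lookup-table sketch (illustrative assumptions: small random
# MDP, synchronous sweeps, fixed inner-iteration count). Evaluation
# (inexactly) solves the nonlinear "optimal stopping" equation
#   Q(i,u) = sum_j p(j|i,u) * ( g(i,u,j) + a * min(J(j), Q(j, mu(j))) ),
# where J and mu come from the preceding improvement step.
import numpy as np

rng = np.random.default_rng(1)
nS, nA, a = 5, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[i, u, j] = p(j | i, u)
g = rng.uniform(0.0, 1.0, size=(nS, nA, nS))   # stage costs g(i, u, j)

Q = np.zeros((nS, nA))
for _ in range(200):
    # Policy improvement: greedy policy and cost estimates from Q.
    mu = Q.argmin(axis=1)
    J = Q.min(axis=1)
    # Inexact evaluation: a few sweeps of the optimal-stopping Bellman
    # operator with J and mu held fixed (exact expectations here;
    # Q-learning iterations would sample these sums).
    for _ in range(5):
        stop_or_continue = np.minimum(J, Q[np.arange(nS), mu])
        Q = np.einsum('iuj,iuj->iu', P, g) + a * (P @ stop_or_continue)

print("approximate optimal costs J*(i):", Q.min(axis=1))
```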

    Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming (Revised)

    Revised version of Technical Report C-2010-10. We consider the classical finite-state discounted Markovian decision problem, and we introduce a new policy iteration-like algorithm for finding the optimal Q-factors. Instead of policy evaluation by solving a linear system of equations, our algorithm requires (possibly inexact) solution of a nonlinear system of equations, involving estimates of state costs as well as Q-factors. This is Bellman's equation for an optimal stopping problem that can be solved with simple Q-learning iterations, in the case where a lookup table representation is used; it can also be solved with the Q-learning algorithm of Tsitsiklis and Van Roy [TsV99], in the case where feature-based Q-factor approximations are used. In exact/lookup table representation form, our algorithm admits asynchronous and stochastic iterative implementations, in the spirit of asynchronous/modified policy iteration, with lower overhead and/or more reliable convergence than existing Q-learning schemes. Furthermore, for large-scale problems, where linear basis function approximations and simulation-based temporal difference implementations are used, our algorithm effectively resolves the inherent difficulties of existing schemes due to inadequate exploration.

    New Error Bounds for Approximations from Projected Linear Equations

    Joint technical report: Technical Report C-2008-43, Dept. of Computer Science, University of Helsinki, and LIDS Report 2797, Dept. of EECS, M.I.T., July 2008; revised July 2009. We consider linear fixed point equations and their approximations by projection on a low-dimensional subspace. We derive new bounds on the approximation error of the solution, which are expressed in terms of low-dimensional matrices and can be computed by simulation. When the fixed point mapping is a contraction, as is typically the case in Markov decision processes (MDP), one of our bounds is always sharper than the standard contraction-based bounds, and another is often sharper. The former bound is also tight in a worst-case sense. Our bounds also apply to the non-contraction case, including policy evaluation in MDP with nonstandard projections that enhance exploration; to our knowledge, no error bounds were previously available for this case.
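
    A small sketch of the setting, under illustrative assumptions (a contractive mapping x = Ax + b built from a scaled stochastic matrix, a random basis, projection weighted by the stationary distribution): the projected solution is computed from low-dimensional matrices of the kind that can be estimated by simulation, and compared against the standard contraction-based bound that the paper's new bounds sharpen. The paper's own bounds are not reproduced here.

```python
# Projected linear equation Phi r = Pi(A Phi r + b), solved through
# low-dimensional matrices, with the standard contraction-based error
# bound for comparison (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(2)
n, k, alpha = 50, 5, 0.9
P = rng.uniform(size=(n, n)); P /= P.sum(axis=1, keepdims=True)
A = alpha * P                       # contraction of modulus alpha
b = rng.standard_normal(n)
Phi = rng.standard_normal((n, k))   # basis of the low-dim subspace

# Stationary distribution of P, so A is a contraction in ||.||_xi.
w, v = np.linalg.eig(P.T)
xi = np.abs(np.real(v[:, np.argmin(np.abs(w - 1))])); xi /= xi.sum()
Xi = np.diag(xi)

# Low-dimensional system C r = d; in the large-scale case C and d are
# the quantities estimated by simulation.
C = Phi.T @ Xi @ (Phi - A @ Phi)
d = Phi.T @ Xi @ b
r = np.linalg.solve(C, d)

x_star = np.linalg.solve(np.eye(n) - A, b)                 # true fixed point
Pi = Phi @ np.linalg.solve(Phi.T @ Xi @ Phi, Phi.T @ Xi)   # xi-weighted projection
norm = lambda z: np.sqrt(z @ (xi * z))                     # xi-weighted norm

err = norm(x_star - Phi @ r)
bound = norm(x_star - Pi @ x_star) / np.sqrt(1 - alpha**2) # standard bound
print(f"actual error {err:.4f} <= contraction-based bound {bound:.4f}")
```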
