
    Dynamic programming for optimal stopping via pseudo-regression

    We introduce new variants of classical regression-based algorithms for optimal stopping problems, based on computing the regression coefficients by Monte Carlo approximation of the corresponding L2 inner products instead of the least-squares error functional. Coupled with new proposals for simulating the underlying samples, we call the approach "pseudo-regression". We show that the approach leads to asymptotically smaller errors as well as lower computational cost. The analysis is supported by numerical examples.
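    To illustrate the difference between the two regression variants, the sketch below contrasts ordinary least squares with coefficients obtained from Monte Carlo estimates of the L2 Gram matrix and inner products. The monomial basis, the response and the use of an independent sample for the Gram matrix are illustrative assumptions, not the paper's exact construction.

```python
# A minimal sketch of the "pseudo-regression" idea in one dimension.
# The monomial basis, the response Y and the independent sample used for the
# Gram matrix are illustrative assumptions, not the authors' setup.
import numpy as np

rng = np.random.default_rng(0)

def basis(x, K=4):
    """Monomial basis psi_k(x) = x^k, k = 0, ..., K-1."""
    return np.vander(x, K, increasing=True)            # shape (n, K)

# Samples of the state X and a response Y (think: discounted future payoff
# in one step of an optimal stopping recursion).
n = 10_000
X = rng.standard_normal(n)
Y = np.maximum(X, 0.0) + 0.1 * rng.standard_normal(n)
Psi = basis(X)

# Classical least-squares regression: minimise the empirical squared error.
beta_ls, *_ = np.linalg.lstsq(Psi, Y, rcond=None)

# Pseudo-regression: estimate the L2 Gram matrix A_jk = E[psi_j(X) psi_k(X)]
# and the vector b_k = E[psi_k(X) Y] by Monte Carlo; here A is computed from
# an independent sample, one of several conceivable simulation proposals.
X_indep = rng.standard_normal(n)
A = basis(X_indep).T @ basis(X_indep) / n
b = Psi.T @ Y / n
beta_pseudo = np.linalg.solve(A, b)

print("least-squares coefficients:    ", beta_ls)
print("pseudo-regression coefficients:", beta_pseudo)
```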

    Forward-reverse EM algorithm for Markov chains

    We develop an EM algorithm for estimating parameters that determine the dynamics of a discrete-time Markov chain evolving through a certain measurable state space. As a key tool for the construction of the EM method, we also develop forward-reverse representations for Markov chains conditioned on a certain terminal state. These representations may be considered as an extension of the earlier work of Bayer and Schoenmakers (2013) on conditional diffusions. We present several experiments and consider the convergence of the new EM algorithm.
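    To make the structure of such an EM iteration concrete, here is a small, purely illustrative sketch for a finite-state chain observed only at its initial and terminal states. The bridged paths needed in the E-step are obtained by naive rejection sampling on the terminal state, standing in for the forward-reverse representation developed in the paper; the chain, sample sizes and number of iterations are arbitrary.

```python
# A toy EM iteration for a finite-state, discrete-time Markov chain whose
# parameters are the transition probabilities, observed only at the endpoints.
# The E-step conditional expectations are approximated by crude rejection
# sampling of bridged paths; this is purely illustrative and stands in for
# the forward-reverse representation of the paper.
import numpy as np

rng = np.random.default_rng(1)
S, T = 3, 5                                    # number of states, path length
P_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.3, 0.5]])

def simulate(P, x0, n_steps):
    x = [x0]
    for _ in range(n_steps):
        x.append(rng.choice(len(P), p=P[x[-1]]))
    return x

# Observed data: endpoint pairs (x0, xT) generated from the true chain.
data = [(0, simulate(P_true, 0, T)[-1]) for _ in range(100)]

P = np.full((S, S), 1.0 / S)                   # initial guess
for _ in range(10):                            # EM iterations
    counts = np.full((S, S), 1e-3)             # small prior avoids empty rows
    for x0, xT in data:
        accepted = 0
        while accepted < 5:
            # E-step: expected transition counts given both endpoints,
            # approximated by rejection sampling on the terminal state.
            path = simulate(P, x0, T)
            if path[-1] == xT:
                for a, b in zip(path[:-1], path[1:]):
                    counts[a, b] += 1
                accepted += 1
    # M-step: re-normalise the expected counts row-wise.
    P = counts / counts.sum(axis=1, keepdims=True)

print(np.round(P, 2))
```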

    Optimal stopping with signatures

    We propose a new method for solving optimal stopping problems (such as American option pricing in finance) under minimal assumptions on the underlying stochastic process. We consider classical and randomized stopping times represented by linear functionals of the associated rough path signature, and prove that maximizing over the class of signature stopping times in fact solves the original optimal stopping problem. Using the algebraic properties of the signature, we can then recast the problem as a (deterministic) optimization problem depending only on the (truncated) expected signature. The only assumption on the process is that it is a continuous (geometric) random rough path. Hence, the theory encompasses processes such as fractional Brownian motion, which fail to be either semi-martingales or Markov processes.
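    The toy sketch below shows two of the ingredients in miniature: the truncated (level-2) signature of a piecewise-linear path, computed incrementally via Chen's relation, and a stopping rule that triggers when a linear functional of the running signature crosses a threshold. The functional and threshold are arbitrary; the paper's parametrisation of (randomised) signature stopping times and the optimisation over the expected signature are more refined.

```python
# Level-2 truncated signature of a sampled path plus a toy "signature
# stopping rule". The coefficients l1, l2 and the threshold are ad hoc.
import numpy as np

rng = np.random.default_rng(2)

def running_signature_level2(path):
    """Yield (t, level-1, level-2) signature terms of a piecewise-linear path."""
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for k in range(1, len(path)):
        dx = path[k] - path[k - 1]
        s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)   # Chen's relation
        s1 += dx
        yield k, s1.copy(), s2.copy()

# Sample a 2-d Brownian path on [0, 1].
n_steps, d = 200, 2
path = np.cumsum(rng.standard_normal((n_steps, d)) * np.sqrt(1.0 / n_steps), axis=0)
path = np.vstack([np.zeros(d), path])

# A linear functional on the truncated signature (coefficients chosen ad hoc).
l1 = np.array([0.5, -0.2])
l2 = np.array([[0.0, 1.0], [0.3, 0.0]])

stop = n_steps
for k, s1, s2 in running_signature_level2(path):
    if l1 @ s1 + np.sum(l2 * s2) >= 1.0:       # toy signature stopping rule
        stop = k
        break

print("stopping index:", stop)
```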

    From rough path estimates to multilevel Monte Carlo

    Discrete approximations to solutions of stochastic differential equations are well known to converge with strong rate 1/2. Such rates have played a key role in Giles' multilevel Monte Carlo method [Giles, Oper. Res. 2008], which gives a substantial reduction of the computational effort necessary for the evaluation of diffusion functionals. In the present article, similar results are established for large classes of rough differential equations driven by Gaussian processes (including fractional Brownian motion with H > 1/4 as a special case).
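    For orientation, a minimal multilevel Monte Carlo sketch in the spirit of Giles (2008) is given below for a standard SDE (geometric Brownian motion) discretised by Euler schemes coupled through shared Brownian increments. The model, payoff and sample sizes are illustrative, and the rough-path setting of the paper is not reproduced.

```python
# Minimal MLMC sketch: estimate E[f(X_T)] via the telescoping sum over levels,
# with fine and coarse Euler schemes driven by the same Brownian increments.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
f = lambda x: np.maximum(x - 1.0, 0.0)         # example payoff

def euler_level(l, n_samples, m=2):
    """Samples of f(X_T^fine) - f(X_T^coarse) on level l (l = 0: fine only)."""
    n_fine = m ** l
    dt = T / n_fine
    dW = rng.standard_normal((n_samples, n_fine)) * np.sqrt(dt)
    xf = np.full(n_samples, x0)
    xc = np.full(n_samples, x0)
    for k in range(n_fine):                    # fine Euler scheme
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if l == 0:
        return f(xf)
    for k in range(0, n_fine, m):              # coarse scheme, summed increments
        dWc = dW[:, k:k + m].sum(axis=1)
        xc = xc + mu * xc * (m * dt) + sigma * xc * dWc
    return f(xf) - f(xc)

L = 5
N = [20_000 // (2 ** l) + 1_000 for l in range(L + 1)]   # ad hoc sample sizes
estimate = sum(euler_level(l, N[l]).mean() for l in range(L + 1))
print("MLMC estimate of E[f(X_T)]:", estimate)
```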

    Adaptive SDE based interpolation for random PDEs

    A numerical method for the fully adaptive sampling and interpolation of PDEs with random data is presented. It is based on the idea that the solution of a PDE with stochastic data can be represented as the conditional expectation of a functional of a corresponding stochastic differential equation (SDE). The physical domain is decomposed into a non-uniform grid, and a classical Euler scheme is employed to approximately solve the SDE at the grid vertices. Interpolation with a conforming finite element basis is employed to reconstruct a global solution of the problem. An a posteriori error estimator is introduced which provides a measure of the different error contributions. This facilitates the formulation of an adaptive algorithm that controls the overall error by either reducing the stochastic error through locally evaluating more samples, or the approximation error through locally refining the underlying mesh. Numerical examples illustrate the performance of the novel method.
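    A drastically simplified one-dimensional sketch of the sampling-and-interpolation step is given below: nodal values of the harmonic extension of boundary data are estimated by simulating Brownian motion to its exit time, and a piecewise-linear (P1) interpolation reconstructs a global approximation. The grid, boundary data and sample sizes are illustrative, and the adaptive error control of the paper is not included.

```python
# Estimate u(x) = E[g(B_tau)] at non-uniform grid nodes (tau = exit time of
# (0, 1)) and interpolate linearly; the exact solution here is u(x) = x.
import numpy as np

rng = np.random.default_rng(4)
g = lambda x: np.where(x <= 0.0, 0.0, 1.0)     # boundary data at 0 and 1

def u_at(x, n_samples=2000, dt=1e-3):
    """Monte Carlo estimate of u(x) = E[g(B_tau)] via an Euler random walk."""
    paths = np.full(n_samples, x)
    alive = np.ones(n_samples, dtype=bool)
    exit_values = np.zeros(n_samples)
    while alive.any():
        paths[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        exited = alive & ((paths <= 0.0) | (paths >= 1.0))
        exit_values[exited] = g(paths[exited])
        alive &= ~exited
    return exit_values.mean()

nodes = np.array([0.0, 0.1, 0.3, 0.45, 0.5, 0.55, 0.7, 0.9, 1.0])  # non-uniform grid
inner = np.array([u_at(x) for x in nodes[1:-1]])
values = np.concatenate([[g(0.0)], inner, [g(1.0)]])   # boundary nodes use boundary data

# Conforming P1 ("hat function") interpolation of the nodal values.
u_interp = lambda x: np.interp(x, nodes, values)
print(u_interp(np.array([0.25, 0.5, 0.75])))    # compare with u(x) = x
```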

    Reinforced optimal control

    Least squares Monte Carlo methods are a popular class of numerical approximation methods for solving stochastic control problems. Based on dynamic programming, their key feature is the approximation of the conditional expectation of future rewards by linear least squares regression. Hence, the choice of basis functions is crucial for the accuracy of the method. Earlier work by some of us [Belomestny, Schoenmakers, Spokoiny, Zharkynbay, Commun. Math. Sci., 18(1):109–121, 2020] proposes to reinforce the basis functions in the case of optimal stopping problems by already computed value functions for later times, thereby considerably improving the accuracy with limited additional computational cost. We extend the reinforced regression method to a general class of stochastic control problems, while considerably improving the method's efficiency, as demonstrated by substantial numerical examples as well as theoretical analysis.
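    To illustrate the reinforcement idea in the simplest setting, the sketch below runs a Longstaff-Schwartz style backward induction for a Bermudan put in which the regression basis at each date is augmented with the value function fitted at the next date, evaluated at the current state. The model, payoff, basis and this particular form of reinforcement are illustrative assumptions rather than the authors' exact specification.

```python
# Reinforced regression sketch for a Bermudan put under geometric Brownian
# motion: the basis at each date includes the value function fitted one step
# later, evaluated at the current state.
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, T = 50_000, 10, 1.0
r, sigma, x0, K = 0.05, 0.2, 1.0, 1.0
dt = T / n_steps
disc = np.exp(-r * dt)
payoff = lambda x: np.maximum(K - x, 0.0)

# Simulate geometric Brownian motion paths.
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
X = x0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW, axis=1))
X = np.hstack([np.full((n_paths, 1), x0), X])

def reinforced_basis(x, v_next):
    """Polynomial basis augmented ("reinforced") by the next-date value function."""
    return np.column_stack([np.ones_like(x), x, x**2, v_next(x)])

def make_value_fn(beta, v_next):
    """Freeze the fitted continuation function into a value function of the state."""
    return lambda y: np.maximum(payoff(y), reinforced_basis(y, v_next) @ beta)

v_next = payoff                                # value function at maturity
V = payoff(X[:, -1])                           # realised cash flows along paths
for t in range(n_steps - 1, 0, -1):
    x = X[:, t]
    B = reinforced_basis(x, v_next)
    beta, *_ = np.linalg.lstsq(B, disc * V, rcond=None)   # continuation fit
    continuation = B @ beta
    exercise = payoff(x)
    V = np.where(exercise > continuation, exercise, disc * V)
    v_next = make_value_fn(beta, v_next)       # reinforce the next regression

print("reinforced-LSMC price estimate:", disc * V.mean())
```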

    SDE based regression for random PDEs

    A simulation-based method for the numerical solution of PDEs with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as the conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method, along with numerical examples illustrating its behaviour.
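    As a toy version of the regression step, the sketch below draws one noisy Feynman-Kac sample per starting point for the one-dimensional heat equation and recovers a global approximation of the solution by polynomial regression. The equation, basis and sample sizes are illustrative, and the random-coefficient setting of the paper is not reproduced.

```python
# By Feynman-Kac, u(T, x) = E[g(x + W_T)] solves the heat equation
# u_t = u_xx / 2 with u(0, .) = g; here T is fixed and we regress in x.
import numpy as np

rng = np.random.default_rng(6)
T = 0.5
g = lambda x: np.sin(np.pi * x)

# Starting points scattered in the domain, one SDE sample per point
# (the SDE here is Brownian motion, so the Euler step is exact).
x = rng.uniform(-1.0, 1.0, size=5_000)
samples = g(x + np.sqrt(T) * rng.standard_normal(x.size))

# Global regression of the noisy Feynman-Kac samples on a polynomial basis.
deg = 7
u_hat = np.poly1d(np.polyfit(x, samples, deg))

# Reference: for this g the exact solution is exp(-pi^2 T / 2) * sin(pi x).
x_test = np.linspace(-1, 1, 5)
print("regression estimate:", np.round(u_hat(x_test), 3))
print("exact solution:     ", np.round(np.exp(-np.pi**2 * T / 2) * np.sin(np.pi * x_test), 3))
```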