
    Feynman-Kac representation of fully nonlinear PDEs and applications

    The classical Feynman-Kac formula states the connection between linear parabolic partial differential equations (PDEs), such as the heat equation, and expectations of functionals of stochastic processes driven by Brownian motion. It thus yields a method for solving linear PDEs by Monte Carlo simulation of random processes. The extension to (fully) nonlinear PDEs has led in recent years to important developments in stochastic analysis and to the emergence of the theory of backward stochastic differential equations (BSDEs), which can be viewed as nonlinear Feynman-Kac formulas. In this paper we review the main ideas and results in this area, and present the implications of these probabilistic representations for the numerical resolution of nonlinear PDEs, together with some applications to stochastic control problems and to model uncertainty in finance.
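
    The linear case the abstract starts from fits in a few lines: for the heat equation u_t + (1/2) u_xx = 0 with terminal condition u(T, .) = g, the Feynman-Kac formula gives u(t, x) = E[g(x + W_{T-t})], so a Monte Carlo average over Gaussian samples estimates the solution. A minimal sketch (the quadratic terminal condition and all parameters are illustrative choices, not taken from the paper):

```python
import numpy as np

def heat_mc(x, t, T, g, n_paths=100_000, seed=0):
    """Monte Carlo estimate of u(t, x) = E[g(x + W_{T-t})], the
    Feynman-Kac representation of the heat equation
    u_t + (1/2) u_xx = 0 with terminal condition u(T, .) = g."""
    rng = np.random.default_rng(seed)
    # W_{T-t} ~ N(0, T - t): one Gaussian draw per simulated path
    w = rng.normal(0.0, np.sqrt(T - t), size=n_paths)
    return g(x + w).mean()

# For g(y) = y^2 the exact solution is u(t, x) = x^2 + (T - t), i.e. 2 here.
est = heat_mc(x=1.0, t=0.0, T=1.0, g=lambda y: y**2)
```

    BSDE-based schemes extend this idea to the nonlinear case by additionally estimating the solution's gradient along the simulated paths.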

    Adaptive Continuous time Markov Chain Approximation Model to General Jump-Diffusions

    We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous-time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expansion formula of Ait-Sahalia (2008). We provide numerical examples for European stock option pricing in the models of Black and Scholes (1973), Merton (1976) and Kou (2002).
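
    The central object in any such scheme is the generator (Q) matrix whose off-diagonal entries are jump rates between grid states. A minimal sketch of a generic upwind finite-difference construction on a non-equidistant grid for the pure-diffusion part (a standard textbook discretization, not the specific adaptive rate-matrix formula proposed in the paper, and it omits the jump component):

```python
import numpy as np

def ctmc_generator(grid, mu, sig2):
    """Generator matrix of a CTMC approximating the diffusion
    dX = mu(X) dt + sigma(X) dW on a (possibly non-equidistant) grid.
    Upwinding the drift keeps all off-diagonal rates nonnegative."""
    n = len(grid)
    Q = np.zeros((n, n))
    for i in range(1, n - 1):
        hm = grid[i] - grid[i - 1]          # left spacing
        hp = grid[i + 1] - grid[i]          # right spacing
        m, s2 = mu(grid[i]), sig2(grid[i])
        Q[i, i - 1] = s2 / (hm * (hm + hp)) + max(-m, 0.0) / hm
        Q[i, i + 1] = s2 / (hp * (hm + hp)) + max(m, 0.0) / hp
        Q[i, i] = -(Q[i, i - 1] + Q[i, i + 1])   # rows sum to zero
    return Q  # boundary states left absorbing for simplicity

# Non-equidistant grid, finer below the spot; illustrative GBM-style
# coefficients mu(x) = 0.05 x, sigma(x) = 0.2 x
grid = np.concatenate([np.linspace(0.5, 1.0, 11), np.linspace(1.05, 2.0, 10)])
Q = ctmc_generator(grid, mu=lambda x: 0.05 * x, sig2=lambda x: (0.2 * x) ** 2)
```

    Transition probabilities, and hence option prices, then follow from the matrix exponential expm(Q t) applied to the payoff vector.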

    The Hitchhiker's Guide to Nonlinear Filtering

    Nonlinear filtering is the problem of online estimation of a dynamic hidden variable from incoming data, with vast applications across engineering, machine learning, economics, and the natural sciences. We start our review of nonlinear filtering theory from the simplest `filtering' task we can think of, namely static Bayesian inference. From there we continue our journey through discrete-time models, which are usually encountered in machine learning, and then generalize to, and place particular emphasis on, continuous-time filtering theory. The idea of changing the probability measure connects and elucidates several aspects of the theory, such as the parallels between the discrete- and continuous-time problems and between different observation models. Furthermore, it gives insight into the construction of particle filtering algorithms. This tutorial is targeted at scientists and engineers and should serve as an introduction to the main ideas of nonlinear filtering, and as a segue to more advanced and specialized literature.
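
    The measure-change construction mentioned above is exactly what the bootstrap particle filter implements: propagate particles under the state dynamics, reweight by the observation likelihood (the Radon-Nikodym factor), and resample. A minimal sketch for a toy scalar model (the model, parameters, and data below are illustrative, not from the tutorial):

```python
import numpy as np

def bootstrap_pf(ys, n=2000, a=0.9, q=0.5, r=0.5, seed=1):
    """Bootstrap particle filter for the toy model
    x_k = a * x_{k-1} + N(0, q^2),   y_k = x_k + N(0, r^2).
    Returns the filtered means E[x_k | y_1..y_k]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n)              # particles from the prior
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, q, size=n)    # propagate under dynamics
        logw = -0.5 * ((y - x) / r) ** 2          # reweight by obs. likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(w @ x))
        x = rng.choice(x, size=n, p=w)            # multinomial resampling
    return means

ys = np.sin(0.3 * np.arange(30))                  # illustrative observations
means = bootstrap_pf(ys)
```

    Resampling after every step is the simplest choice; practical filters typically resample only when the effective sample size drops below a threshold.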

    General existence and uniqueness of viscosity solutions for impulse control of jump-diffusions

    General theorems for the existence and uniqueness of viscosity solutions of Hamilton-Jacobi-Bellman quasi-variational inequalities (HJBQVIs) with an integral term are established. Such nonlinear partial integro-differential equations (PIDEs) arise in the study of combined impulse and stochastic control of jump-diffusion processes. The HJBQVI consists of an HJB part (for stochastic control) combined with a nonlocal impulse intervention term. Existence results are proved via stochastic means, whereas our uniqueness (comparison) results adapt techniques from viscosity solution theory. To our knowledge, this paper is the first to treat impulse control of jump-diffusion processes rigorously in a general viscosity solution framework; the jump part may have infinite activity. In the proofs, no prior continuity of the value function is assumed, quadratic costs are allowed, and both elliptic and parabolic results are presented for solutions possibly unbounded at infinity.
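
    The HJB-plus-intervention structure can be made concrete with a toy value iteration for an impulse-control QVI without jumps: minimize E of the discounted integral of X_t^2 for a Brownian motion X that may be shifted to any state at a fixed cost K, so the dynamic programming equation is v = min(continuation, K + min_y v(y)). This is a generic discrete scheme under simplifying assumptions, not the viscosity-solution machinery of the paper:

```python
import numpy as np

def impulse_qvi(K=1.0, rho=0.1, sigma=1.0, dt=0.01, xmax=3.0, nx=121,
                n_iter=5000):
    """Value iteration for the QVI v = min(continuation, K + min_y v(y)):
    running cost x^2, discount rho, diffusion approximated by a +/- h
    random walk with h = sigma * sqrt(dt); boundaries handled by clamping."""
    x = np.linspace(-xmax, xmax, nx)
    h = sigma * np.sqrt(dt)
    disc = np.exp(-rho * dt)
    v = np.zeros(nx)
    for _ in range(n_iter):
        # HJB (continuation) part: one diffusion step plus running cost
        cont = x**2 * dt + disc * 0.5 * (np.interp(x + h, x, v)
                                         + np.interp(x - h, x, v))
        # nonlocal impulse intervention term: jump anywhere at cost K
        v_new = np.minimum(cont, K + v.min())
        done = np.max(np.abs(v_new - v)) < 1e-10
        v = v_new
        if done:
            break
    return x, v

x_grid, v = impulse_qvi()
```

    At the fixed point, the region where the intervention branch is active is the impulse region, and its complement the continuation region.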

    Almost Sure Stabilization for Adaptive Controls of Regime-switching LQ Systems with A Hidden Markov Chain

    This work is devoted to the almost sure stabilization of adaptive control systems that involve an unknown Markov chain. The control system displays continuous dynamics represented by differential equations and discrete events given by a hidden Markov chain. In contrast to previous work on the stabilization of adaptive controlled systems with a hidden Markov chain, where average criteria were considered, this work focuses on the almost sure (sample path) stabilization of the underlying processes. Under simple conditions, it is shown that as long as the feedback controls have linear growth in the continuous component, the resulting process is regular. Moreover, by an appropriate choice of Lyapunov functions, it is shown that the adaptive system is stabilizable almost surely. As a by-product, it is also established that the controlled process is positive recurrent.
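
    The sample-path flavour of the result can be illustrated by simulation: in a scalar regime-switching system with open-loop unstable drifts, a linear feedback with a large enough gain makes the closed-loop drift negative in every regime, and individual trajectories stay bounded. A toy Euler-Maruyama sketch (the two-regime model and all parameters are illustrative; the paper's setting has a hidden chain and adaptive, filter-based controls):

```python
import numpy as np

def simulate_switching_lq(T=50.0, dt=0.001, k=3.0, seed=0):
    """Euler-Maruyama path of dx = (a(theta) x + u) dt + dW with a
    two-state Markov chain theta (switching rate 1) and linear feedback
    u = -k x.  Both open-loop drifts a(theta) are positive (unstable),
    but a(theta) - k < 0 in each regime, so the sample path is stable."""
    rng = np.random.default_rng(seed)
    a = {0: 0.5, 1: 1.0}                 # open-loop (unstable) drifts
    n = int(T / dt)
    x, theta = 1.0, 0
    path = np.empty(n)
    for i in range(n):
        if rng.random() < 1.0 * dt:      # regime switch at rate 1
            theta = 1 - theta
        drift = (a[theta] - k) * x       # closed loop with u = -k x
        x += drift * dt + np.sqrt(dt) * rng.normal()
        path[i] = x
    return path

path = simulate_switching_lq()
```

    The linear growth of the feedback in x is exactly the condition the abstract places on the controls; here it also keeps the Euler step well behaved.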