
    Backward stochastic differential equations associated to jump Markov processes and applications

    In this paper we study backward stochastic differential equations (BSDEs) driven by the compensated random measure associated with a given pure jump Markov process X on a general state space K. We apply these results to prove well-posedness of a class of nonlinear parabolic differential equations on K that generalize the Kolmogorov equation of X. Finally, we formulate and solve optimal control problems for Markov jump processes, relating the value function and the optimal control law to an appropriate BSDE; this also allows one to construct the unique solution of the Hamilton-Jacobi-Bellman equation probabilistically and to identify it with the value function.
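    A schematic form of such a BSDE, in our own notation (not taken verbatim from the paper): writing p for the jump measure of X, p-tilde for its compensator and q = p − p-tilde for the compensated measure, the unknown pair (Y, Z) solves

```latex
Y_t = g(X_T) + \int_t^T f\bigl(s, X_s, Y_s, Z_s(\cdot)\bigr)\,ds
      - \int_t^T \int_K Z_s(y)\, q(ds\,dy), \qquad t \in [0,T].
```

    In the Markovian case one expects Y_t = v(t, X_t) for a function v solving the nonlinear Kolmogorov equation mentioned above.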

    Backward stochastic differential equations and optimal control of marked point processes

    We study a class of backward stochastic differential equations (BSDEs) driven by a random measure or, equivalently, by a marked point process. Under appropriate assumptions we prove well-posedness and continuous dependence of the solution on the data. We next address optimal control problems for point processes of general non-Markovian type and show that BSDEs can be used to prove existence of an optimal control and to represent the value function. Finally, we introduce a Hamilton-Jacobi-Bellman equation, also stochastic and of backward type, for this class of control problems: when the state space is finite or countable we show that it admits a unique solution, which identifies the (random) value function and can be represented by means of the BSDEs introduced above.
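    The equivalence between the two formulations of the driving noise is the standard identification: a marked point process with jump times T_n and marks xi_n in the mark space K induces the random measure

```latex
p(dt\,dx) = \sum_{n \ge 1} \delta_{(T_n,\, \xi_n)}(dt\,dx),
```

    so that stochastic integrals against p are simply sums over the jumps, and the BSDE is driven by the compensated measure q = p − p-tilde.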

    Backward stochastic differential equation driven by a marked point process: An elementary approach with an application to optimal control

    We address a class of backward stochastic differential equations on a bounded interval, where the driving noise is a marked, or multivariate, point process. Assuming that the jump times are totally inaccessible and that a technical condition holds (see Assumption (A) below), we prove existence and uniqueness results under Lipschitz conditions on the coefficients. Some counter-examples show that our assumptions are indeed needed. We use a novel approach that allows reduction to a (finite or infinite) system of deterministic differential equations, thus avoiding the use of martingale representation theorems and allowing potential use of standard numerical methods. Finally, we apply the main results to solve an optimal control problem for a marked point process, formulated in a classical way. Comment: Published at http://dx.doi.org/10.1214/15-AAP1132 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
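    To illustrate the flavor of the reduction (a minimal sketch under simplifying assumptions, not the paper's construction): in the Markovian special case of a pure jump process on a finite state space with rate matrix rates[x][y], the BSDE solution is Y_t = v(t, X_t), where v solves a finite system of backward ODEs that standard numerical schemes can integrate. All names below are ours:

```python
import math

def solve_bsde_ode_system(rates, f, g, T, n_steps):
    """Integrate, backward in time, the deterministic ODE system associated
    with the (Markovian, finite-state) BSDE: for each state x,
        dv/dt(t,x) + sum_y rates[x][y] * (v(t,y) - v(t,x)) + f(t,x,v) = 0,
    with terminal condition v(T,x) = g(x).  Explicit Euler, step T/n_steps.
    """
    n = len(rates)
    dt = T / n_steps
    v = [g(x) for x in range(n)]          # terminal condition at t = T
    for k in range(n_steps):
        t = T - k * dt
        # one backward Euler step: v(t - dt) = v(t) + dt * (jump term + driver)
        v = [v[x] + dt * (sum(rates[x][y] * (v[y] - v[x]) for y in range(n))
                          + f(t, x, v))
             for x in range(n)]
    return v

# With driver f == 0 the scheme recovers v(0, x) = E[g(X_T) | X_0 = x]
# (the linear Kolmogorov case).  Example: symmetric two-state chain, unit rates.
rates = [[0.0, 1.0], [1.0, 0.0]]
v0 = solve_bsde_ode_system(rates, lambda t, x, v: 0.0,
                           lambda x: 1.0 if x == 0 else 0.0,
                           T=1.0, n_steps=20000)
```

    For this two-state example the exact value is v(0, 0) = (1 + e^{-2T})/2, which the Euler scheme approaches as the step size shrinks.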

    BSDE Representation and Randomized Dynamic Programming Principle for Stochastic Control Problems of Infinite-Dimensional Jump-Diffusions

    We consider a general class of stochastic optimal control problems, where the state process lives in a real separable Hilbert space and is driven by a cylindrical Brownian motion and a Poisson random measure; no special structure is imposed on the coefficients, which are also allowed to be path-dependent; in addition, the diffusion coefficient can be degenerate. For such a class of stochastic control problems, we prove, by means of purely probabilistic techniques based on the so-called randomization method, that the value of the control problem admits a probabilistic representation formula (known as a non-linear Feynman-Kac formula) in terms of a suitable backward stochastic differential equation. This probabilistic representation considerably extends current results in the literature on the infinite-dimensional case, and it is also relevant in finite dimension. Such a representation allows one to show, in the non-path-dependent (or Markovian) case, that the value function satisfies the so-called randomized dynamic programming principle. As a consequence, we are able to prove that the value function is a viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation, which turns out to be a second-order fully non-linear integro-differential equation in Hilbert space.
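    For orientation only (a schematic finite-dimensional rendering in our notation, not the paper's statement): in the randomization method the control is replaced by an exogenous pure-jump process I with compensated measure mu-tilde, and the value is obtained as the minimal solution of a BSDE with a sign constraint on the jump component,

```latex
Y_t = g(X_T) + \int_t^T f(s, X_s, I_s)\,ds + K_T - K_t
      - \int_t^T Z_s\,dW_s - \int_t^T\!\!\int_A U_s(a)\,\tilde\mu(ds\,da),
\qquad U_t(a) \le 0,
```

    with K a nondecreasing process enforcing the constraint; in the Markovian case the value function is then represented as v(t, x) = Y_t^{t,x}.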

    Optimal control of semi-Markov processes with a backward stochastic differential equations approach

    In the present work, we employ backward stochastic differential equations (BSDEs) to study the optimal control problem of semi-Markov processes on a finite horizon, with general state and action spaces. More precisely, we prove that the value function and the optimal control law can be represented by means of the solution of a class of BSDEs driven by a semi-Markov process or, equivalently, by the associated random measure. We also introduce a suitable Hamilton-Jacobi-Bellman (HJB) equation. With respect to the pure jump Markov framework, the HJB equation in the semi-Markov case is characterized by an additional differential term ∂a. Taking into account the particular structure of semi-Markov processes, we rewrite the HJB equation in a suitable integral form which involves a directional derivative operator D related to ∂a. Then, using a formula of Itô type tailor-made for semi-Markov processes and the operator D, we are able to prove that a BSDE of the above-mentioned type provides the unique classical solution to the HJB equation, which identifies the value function of our control problem.
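    As a rough guide (our notation, not the paper's): if a denotes the time elapsed since the last jump, the pair (X_t, a_t) is Markov, and the additional differential term in the abstract is a derivative in the a-direction. The HJB equation then has the schematic form

```latex
\partial_t v(t,x,a) + \partial_a v(t,x,a)
  + \mathcal{H}\bigl(t, x, a, v(t,\cdot,0) - v(t,x,a)\bigr) = 0,
\qquad v(T,x,a) = g(x),
```

    where the Hamiltonian collects the jump integral and the infimum over controls, and the directional derivative operator acts along t and a simultaneously, Dv(t,x,a) = lim_{h -> 0+} [v(t+h, x, a+h) - v(t,x,a)] / h.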

    Feedback optimal control for stochastic Volterra equations with completely monotone kernels.

    In this paper we are concerned with a class of stochastic Volterra integro-differential problems with completely monotone kernels, where we assume that the noise enters the system when we introduce a control. We start by reformulating the state equation as a semilinear evolution equation, which can be treated by semigroup methods. The application to optimal control provides further interesting results and requires a precise description of the properties of the generated semigroup. The first main result of the paper is the proof of existence and uniqueness of a mild solution for the corresponding Hamilton-Jacobi-Bellman (HJB) equation. The main technical point is the differentiability of the BSDE associated with the reformulated equation with respect to its initial datum x.

    Optimal control for stochastic heat equation with memory.

    In this paper, we investigate the existence and uniqueness of solutions for a class of evolutionary integral equations perturbed by a noise arising in the theory of heat conduction. As a motivation for our results, we study an optimal control problem in which the control enters the system together with the noise.