Backward stochastic differential equations associated to jump Markov processes and applications
In this paper we study backward stochastic differential equations (BSDEs)
driven by the compensated random measure associated to a given pure jump Markov
process X on a general state space K. We apply these results to prove
well-posedness of a class of nonlinear parabolic differential equations on K
that generalize the Kolmogorov equation of X. Finally, we formulate and solve
optimal control problems for Markov jump processes, relating the value function
and the optimal control law to an appropriate BSDE that also allows one to
construct probabilistically the unique solution to the Hamilton-Jacobi-Bellman
equation and to identify it with the value function.
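For orientation, a BSDE of the type considered here can be sketched as follows; the notation is illustrative and not taken from the paper itself:

```latex
% Hedged sketch: a BSDE driven by the compensated random measure
% q(ds\,dy) = p(ds\,dy) - \tilde p(ds\,dy) of the pure jump Markov
% process X on K. The generator f, terminal condition \xi, and the
% unknown pair (Y, Z) are generic symbols, not the paper's notation.
Y_t = \xi + \int_t^T f\bigl(s,\, X_s,\, Y_s,\, Z_s(\cdot)\bigr)\,ds
      \;-\; \int_t^T \!\!\int_K Z_s(y)\, q(ds\,dy),
      \qquad t \in [0, T].
```

Here the unknown is the pair (Y, Z), with Z a predictable random field on K; well-posedness is typically obtained under Lipschitz conditions on the generator f.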
Backward stochastic differential equations and optimal control of marked point processes
We study a class of backward stochastic differential equations (BSDEs) driven
by a random measure or, equivalently, by a marked point process. Under
appropriate assumptions we prove well-posedness and continuous dependence of
the solution on the data. We next address optimal control problems for point
processes of general non-Markovian type and show that BSDEs can be used to
prove existence of an optimal control and to represent the value function.
Finally, we introduce a Hamilton-Jacobi-Bellman equation, also stochastic and of
backward type, for this class of control problems: when the state space is
finite or countable we show that it admits a unique solution, which identifies
the (random) value function and can be represented by means of the BSDEs
introduced above.
Dual and backward SDE representation for optimal control of non-Markovian SDEs
We study an optimal stochastic control problem for non-Markovian stochastic
differential equations (SDEs) where the drift, diffusion coefficients, and gain
functionals are path-dependent, and importantly we do not make any ellipticity
assumption on the SDE. We develop a control randomization approach, and prove
that the value function can be reformulated under a family of dominated
measures on an enlarged filtered probability space. This value function is then
characterized by a backward SDE with nonpositive jumps under a single
probability measure, which can be viewed as a path-dependent version of the
Hamilton-Jacobi-Bellman equation, and an extension to expectation.
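The notion of a BSDE with nonpositive jumps mentioned above can be sketched roughly as follows; the notation is illustrative, following the general control-randomization literature rather than this specific paper:

```latex
% Hedged sketch: on the enlarged space carrying a Brownian motion W and
% a compensated random measure \tilde\mu on the control set A, one looks
% for the minimal solution (Y, Z, U) of
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds
      - \int_t^T Z_s\,dW_s
      - \int_t^T \!\!\int_A U_s(a)\,\tilde\mu(ds\,da),
% subject to the sign constraint on the jump component:
U_s(a) \le 0 .
```

The minimal solution under this sign constraint is what identifies the (randomized) value function in this approach.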
Stochastic maximum principle for optimal control of a class of nonlinear SPDEs with dissipative drift
We prove a version of the stochastic maximum principle, in the sense of
Pontryagin, for the finite horizon optimal control of a stochastic partial
differential equation driven by an infinite dimensional additive noise. In
particular we treat the case in which the non-linear term is of Nemytskii type,
dissipative and with polynomial growth. The performance functional to be
optimized is fairly general and may depend on point evaluation of the
controlled equation. The results can be applied to a large class of non-linear
parabolic equations such as reaction-diffusion equations.
Backward stochastic differential equation driven by a marked point process: An elementary approach with an application to optimal control
We address a class of backward stochastic differential equations on a bounded
interval, where the driving noise is a marked, or multivariate, point process.
Assuming that the jump times are totally inaccessible and a technical condition
holds (see Assumption (A) below), we prove existence and uniqueness results
under Lipschitz conditions on the coefficients. Some counter-examples show that
our assumptions are indeed needed. We use a novel approach that allows
reduction to a (finite or infinite) system of deterministic differential
equations, thus avoiding the use of martingale representation theorems and
allowing potential use of standard numerical methods. Finally, we apply the
main results to solve an optimal control problem for a marked point process,
formulated in a classical way.

Comment: Published at http://dx.doi.org/10.1214/15-AAP1132 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
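The reduction to deterministic equations mentioned in this abstract can be illustrated roughly as follows; the notation is generic and not taken from the paper. Between consecutive jump times of the point process no new randomness arrives, so, conditionally on the past, the martingale part of the BSDE contributes only through the compensator:

```latex
% Hedged illustration: for a BSDE driven by the compensated measure
% q = p - \nu of a marked point process with compensator \nu(dt\,de),
% between jumps p has no atoms, so the conditional trajectory y(t)
% of Y satisfies the deterministic differential equation
y'(t) = -\,f\bigl(t,\, y(t),\, z_t(\cdot)\bigr)
        + \int_E z_t(e)\,\nu(t, de),
% pieced together across jump times by the prescribed jump of Y.
```

Solving such equations between jump times, recursively over the (finite or infinite) sequence of jumps, yields the system of deterministic equations alluded to above and opens the door to standard numerical methods.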
Stochastic maximum principle for optimal control of SPDEs
In this note, we give the stochastic maximum principle for optimal control of
stochastic PDEs in the general case (when the control domain need not be convex
and the diffusion coefficient can contain a control variable).
Stochastic Maximum Principle for Optimal Control of Partial Differential Equations Driven by White Noise
We prove a stochastic maximum principle of Pontryagin's type for the optimal
control of a stochastic partial differential equation driven by white noise in
the case when the set of control actions is convex. Particular attention is
paid to well-posedness of the adjoint backward stochastic differential equation
and the regularity properties of its solution with values in
infinite-dimensional spaces.
- …