On Zero-Sum Stochastic Differential Games
We generalize the results of Fleming and Souganidis (1989) on zero-sum
stochastic differential games to the case when the controls are unbounded. We
do this by proving a dynamic programming principle using a covering argument
instead of relying on a discrete approximation (which is used along with a
comparison principle by Fleming and Souganidis). Also, in contrast with Fleming
and Souganidis, we define our pay-off through a doubly reflected backward
stochastic differential equation. The value function (in the degenerate case of
a single controller) is closely related to the second order doubly reflected
BSDEs.

Comment: Key words: zero-sum stochastic differential games, Elliott-Kalton
strategies, dynamic programming principle, stability under pasting, doubly
reflected backward stochastic differential equations, viscosity solutions,
obstacle problem for fully non-linear PDEs, shifted processes, shifted SDEs,
second-order doubly reflected backward stochastic differential equation
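As a rough sketch in generic notation (assumed for illustration, not taken from the paper), a doubly reflected BSDE with lower and upper obstacles $L \le U$ couples the pay-off $Y$ to two increasing "pushing" processes $K^{\pm}$ that keep it between the obstacles:

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s
      + (K_T^{+} - K_t^{+}) - (K_T^{-} - K_t^{-}),
\qquad L_t \le Y_t \le U_t,
```

together with the Skorokhod minimality conditions $\int_0^T (Y_t - L_t)\,dK_t^{+} = 0$ and $\int_0^T (U_t - Y_t)\,dK_t^{-} = 0$, so each process acts only when its obstacle binds.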
Lyapunov stabilizability of controlled diffusions via a superoptimality principle for viscosity solutions
We prove optimality principles for semicontinuous bounded viscosity solutions
of Hamilton-Jacobi-Bellman equations. In particular we provide a representation
formula for viscosity supersolutions as value functions of suitable obstacle
control problems. This result is applied to extend the Lyapunov direct method
for stability to controlled Itô stochastic differential equations. We define
the appropriate concept of Lyapunov function to study the stochastic open loop
stabilizability in probability and the local and global asymptotic
stabilizability (or asymptotic controllability). Finally, we illustrate the
theory with some examples.

Comment: 22 pages
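In generic notation (an illustrative sketch, not the paper's exact statement), for a controlled diffusion $dX_t = f(X_t, a_t)\,dt + \sigma(X_t, a_t)\,dB_t$ a stochastic Lyapunov condition of this kind asks that some admissible control make the generator of $V$ nonpositive:

```latex
\inf_{a \in A} \mathcal{L}^a V(x) \le 0, \qquad
\mathcal{L}^a V(x) := f(x,a) \cdot DV(x)
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma(x,a)\,\sigma(x,a)^{\top} D^2 V(x)\big),
```

so that $V(X_t)$ is a supermartingale along well-chosen open-loop controls, which is what drives the stabilizability conclusions.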
Linearly Solvable Stochastic Control Lyapunov Functions
This paper presents a new method for synthesizing stochastic control Lyapunov
functions for a class of nonlinear stochastic control systems. The technique
relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman
partial differential equation to a linear partial differential equation for a
class of problems with a particular constraint on the stochastic forcing. This
linear partial differential equation can then be relaxed to a linear
differential inclusion, allowing for relaxed solutions to be generated using
sum of squares programming. The resulting relaxed solutions are in fact
viscosity super/subsolutions, and by the maximum principle are pointwise upper
and lower bounds to the underlying value function, even for coarse polynomial
approximations. Furthermore, the pointwise upper bound is shown to be a
stochastic control Lyapunov function, yielding a method for generating
nonlinear controllers with pointwise bounded distance from the optimal cost
when using the optimal controller. These approximate solutions may be computed
with non-increasing error via a hierarchy of semidefinite optimization
problems. Finally, this paper develops a priori bounds on trajectory
suboptimality when using these approximate value functions, and demonstrates
that these methods, and bounds, can be applied to a more general
class of nonlinear systems not obeying the constraint on stochastic forcing.
Simulated examples illustrate the methodology.

Comment: Published in the SIAM Journal on Control and Optimization
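The linearization step can be sketched in the standard form of the exponential transformation (notation assumed for illustration, not taken from the paper): for state cost $\ell$, control cost $\tfrac{1}{2}u^{\top}Ru$, dynamics $dx = (f(x) + G(x)u)\,dt + B(x)\,d\omega$, and the compatibility condition $\lambda\, G R^{-1} G^{\top} = B B^{\top} =: \Sigma$ on the stochastic forcing, the desirability $\Psi(x) = e^{-V(x)/\lambda}$ turns the nonlinear HJB equation for the value function $V$ into a linear PDE:

```latex
0 = -\frac{\ell(x)}{\lambda}\,\Psi(x) + f(x)\cdot \Psi_x(x)
    + \tfrac{1}{2}\operatorname{tr}\!\big(\Sigma(x)\,\Psi_{xx}(x)\big).
```

The quadratic term in $V_x$ cancels exactly under the stated noise-control compatibility condition, which is why the relaxation to a linear differential inclusion, and hence sum-of-squares programming, becomes available.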
Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming
An approach for incorporating embedded simulation and analysis capabilities
in complex simulation codes through template-based generic programming is
presented. This approach relies on templating and operator overloading within
the C++ language to transform a given calculation into one that can compute a
variety of additional quantities that are necessary for many state-of-the-art
simulation and analysis algorithms. An approach for incorporating these ideas
into complex simulation codes through general graph-based assembly is also
presented. These ideas have been implemented within a set of packages in the
Trilinos framework and are demonstrated on a simple problem from chemical
engineering.
Stochastic Target Games and Dynamic Programming via Regularized Viscosity Solutions
We study a class of stochastic target games where one player tries to find a
strategy such that the state process almost-surely reaches a given target, no
matter which action is chosen by the opponent. Our main result is a geometric
dynamic programming principle which allows us to characterize the value
function as the viscosity solution of a non-linear partial differential
equation. Because abstract measurable selection arguments cannot be used in
this context, the main obstacle is the construction of measurable
almost-optimal strategies. We propose a novel approach where smooth
supersolutions are used to define almost-optimal strategies of Markovian type,
similarly as in verification arguments for classical solutions of
Hamilton--Jacobi--Bellman equations. The smooth supersolutions are constructed
by an extension of Krylov's method of shaken coefficients. We apply our
results to a problem of option pricing under model uncertainty with different
interest rates for borrowing and lending.

Comment: To appear in MO
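In generic notation (a sketch of the usual formulation of such problems, not necessarily the paper's exact setup), the object of study is geometric rather than scalar: the reachability set at time $t$ for a state process $Z$ driven by the player's strategy $\mathfrak{u}$ against the adversary's control $\vartheta$, with target $G$, is

```latex
V(t) = \big\{\, z : \exists\, \mathfrak{u} \ \text{such that, for all } \vartheta,\ 
Z_T^{t,z,\mathfrak{u}[\vartheta],\vartheta} \in G \ \text{a.s.} \,\big\},
```

and the geometric dynamic programming principle propagates this set backward: roughly, $z \in V(t)$ iff some strategy steers $Z_\theta$ into $V(\theta)$ at intermediate stopping times $\theta$, for every adversarial control.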