Anticipated backward stochastic differential equations
In this paper we discuss a new type of differential equation which we call anticipated backward stochastic differential equations (anticipated BSDEs). In these equations the generator includes not only the present values of the solution but also its future values. We show that these anticipated BSDEs have unique solutions, establish a comparison theorem for their solutions, and prove a duality between them and stochastic differential delay equations.

Comment: Published in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/08-AOP423
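For orientation, an anticipated BSDE can be sketched as follows (generic notation: the delays δ, ζ, anticipation horizon K, and terminal data ξ, η are placeholders, not taken from the abstract). The key feature is that the generator f sees the solution at the current time t and at future times:

```latex
% Anticipated BSDE on [0, T] with anticipation horizon K:
% the generator depends on (Y, Z) both at time t and at
% future times t + \delta(t), t + \zeta(t).
\begin{aligned}
 -\mathrm{d}Y_t &= f\bigl(t,\, Y_t,\, Z_t,\, Y_{t+\delta(t)},\, Z_{t+\zeta(t)}\bigr)\,\mathrm{d}t
                  - Z_t\,\mathrm{d}W_t, & t &\in [0, T],\\
 Y_t &= \xi_t, & t &\in [T,\, T+K],\\
 Z_t &= \eta_t, & t &\in [T,\, T+K].
\end{aligned}
```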
Optimal adaptive control with separable drift uncertainty
We consider a problem of stochastic optimal control with separable drift uncertainty in strong formulation on a finite horizon. The drift coefficient of the state is multiplicatively influenced by an unknown random variable, while admissible controls are required to be adapted to the observation filtration. Choosing a control actively influences the state and the information acquisition simultaneously, and thus comes with a learning effect. The problem, initially non-Markovian, is embedded into a higher-dimensional Markovian, full-information control problem with control-dependent filtration and noise. To that problem we apply the stochastic Perron method to characterize the value function as the unique viscosity solution of the HJB equation, explicitly construct ε-optimal controls, and show that the values of the strong and weak formulations agree. Numerical illustrations show a significant difference between the adaptive control and the certainty equivalence control.
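The learning effect described above can be illustrated with a toy simulation (my own example, not the paper's exact model): the state has drift theta * u with theta in {-1, +1} unknown, and the Bayesian posterior on theta only moves when the control u is nonzero, so choosing a control actively drives information acquisition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy model (illustrative only, not the paper's setup):
#   dX_t = theta * u_t dt + dW_t,  theta in {-1, +1} unknown,
#   prior P(theta = +1) = 0.5; the controller observes X only.
dt, T = 0.01, 5.0
n = int(T / dt)
theta = -1.0                      # true, unobserved multiplier

def run(u):
    """Simulate with constant control u; return final posterior P(theta=+1)."""
    p = 0.5
    for _ in range(n):
        dx = theta * u * dt + rng.normal(0.0, np.sqrt(dt))
        # Bayes update: Gaussian likelihood of the observed increment
        # under drift +u*dt versus -u*dt (variance dt in both cases).
        l_plus = np.exp(-(dx - u * dt) ** 2 / (2 * dt))
        l_minus = np.exp(-(dx + u * dt) ** 2 / (2 * dt))
        p = p * l_plus / (p * l_plus + (1 - p) * l_minus)
    return p

p_probe = run(u=1.0)   # active control: observations are informative
p_idle = run(u=0.0)    # zero control: both likelihoods coincide, no learning
print(f"P(theta=+1 | data) with u=1: {p_probe:.3f}")
print(f"P(theta=+1 | data) with u=0: {p_idle:.3f}")
```

With u = 0 the two likelihoods are identical and the posterior never moves, which is exactly why a certainty-equivalence controller that happens to act passively learns nothing about the drift.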
New Directions in Simulation, Control and Analysis for Interfaces and Free Boundaries
The field of mathematical and numerical analysis of systems of nonlinear partial differential equations involving interfaces and free boundaries is a flourishing area of research. Many such systems arise from mathematical models in materials science, fluid dynamics and biology, for example, phase separation in alloys, epitaxial growth, dynamics of multiphase fluids, evolution of cell membranes, and industrial processes such as crystal growth. The governing equations for the dynamics of the interfaces in many of these applications involve surface tension, expressed in terms of the mean curvature, and a driving force. Here the forcing terms depend on variables that are solutions of additional partial differential equations which hold either on the interface itself or in the surrounding bulk regions. Often in applications of these mathematical models, suitable performance indices and appropriate control actions have to be specified. Mathematically this leads to optimization problems with partial differential equation constraints including free boundaries. Because of the maturity of the field of computational free boundary problems, it is now timely to consider such control problems.
Controlled Stochastic Differential Equations under Poisson Uncertainty and with Unbounded Utility
The present paper is concerned with the optimal control of stochastic differential equations, where uncertainty stems from one or more independent Poisson processes. Optimal behavior in such a setup (e.g., optimal consumption) is usually determined by employing the Hamilton-Jacobi-Bellman equation. This, however, requires strong assumptions on the model, such as a bounded utility function and bounded coefficients in the controlled differential equation. The present paper relaxes these assumptions. We show that one can still use the Hamilton-Jacobi-Bellman equation as a necessary criterion for optimality if the utility function and the coefficients are linearly bounded. We also derive sufficiency in a verification theorem without imposing any boundedness condition at all. It is finally shown that, under very mild assumptions, an optimal Markov control is optimal even within the class of general controls.

Keywords: stochastic differential equation, Poisson process, Bellman equation
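For a concrete picture of the Hamilton-Jacobi-Bellman equation in this setting, consider a generic infinite-horizon discounted problem under Poisson uncertainty (generic notation, not the paper's exact assumptions): the Poisson process replaces the usual second-order diffusion term with a nonlocal jump term.

```latex
% HJB equation for a controlled jump process
%   dX_t = \mu(X_t, u_t)\,dt + \gamma(X_{t^-}, u_{t^-})\,dq_t,
% where q is a Poisson process with arrival rate \lambda,
% U is the utility (running reward) and \rho the discount rate:
\rho V(x) \;=\; \max_{u}\Bigl\{\, U(u)
    \;+\; \mu(x,u)\,V'(x)
    \;+\; \lambda\,\bigl[\, V\bigl(x + \gamma(x,u)\bigr) - V(x) \,\bigr] \Bigr\}
```

The boundedness assumptions discussed in the abstract concern U, μ and γ in equations of this type.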
Splitting methods for SPDEs: From robustness to financial engineering, optimal control and nonlinear filtering
In this survey chapter we give an overview of recent applications of the splitting method to stochastic (partial) differential equations, that is, differential equations that evolve under the influence of noise. We discuss weak and strong approximation schemes. The applications range from the management of risk, financial engineering, optimal control and nonlinear filtering to the viscosity theory of nonlinear SPDEs.
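A minimal sketch of the splitting idea (my own toy example, not taken from the survey): Lie splitting for a 1-D stochastic heat equation, where each time step alternates a deterministic diffusion step with a stochastic forcing step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: Lie splitting for the 1-D stochastic heat equation
#   du = u_xx dt + sigma dW(t, x),  u(0, x) = sin(pi x),
# with homogeneous Dirichlet boundary conditions on [0, 1].
# Each step splits into
#   (a) a deterministic diffusion step (explicit Euler on the Laplacian),
#   (b) a stochastic step adding discretised space-time white noise.
nx, nt = 50, 2000
dx, dt = 1.0 / nx, 1e-4          # dt < dx^2 / 2 for explicit-Euler stability
sigma = 0.1
x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)

for _ in range(nt):
    # (a) deterministic step: u <- u + dt * Laplacian(u) at interior nodes
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    u[1:-1] += dt * lap
    # (b) stochastic step: white-noise increment with variance
    # sigma^2 * dt / dx per interior node
    u[1:-1] += sigma * np.sqrt(dt / dx) * rng.normal(size=nx - 1)

print(f"mean |u(T, x)| over interior nodes: {np.abs(u[1:-1]).mean():.4f}")
```

Strang splitting would interleave half-steps of (a) around (b) for higher weak order; the survey's robustness results concern how such schemes behave under rough driving signals.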
Strategically-Timed Actions in Stochastic Differential Games
Financial systems are rich in interactions amenable to description by stochastic control theory. Optimal stochastic control theory is an elegant mathematical framework in which a controller profitably alters the dynamics of a stochastic system by exercising costly control inputs. If the system includes more than one agent, the appropriate modelling framework is stochastic differential game theory, a multiplayer generalisation of stochastic control theory. There are numerous environments in which financial agents incur fixed minimal costs when adjusting their investment positions; trading environments with transaction costs and real options pricing are important examples. The presence of fixed minimal adjustment costs produces adjustment stickiness, as agents now enact their investment adjustments at a sequence of discrete points in time. Despite the fundamental relevance of adjustment stickiness within economic theory, in stochastic differential game theory the set of players' modifications to the system dynamics is mainly restricted to a continuous class of controls. Under this assumption, players modify their positions through infinitesimally fine adjustments over the problem horizon. This renders such models unsuitable for modelling systems with fixed minimal adjustment costs. To this end, we present a detailed study of strategic interactions with fixed minimal adjustment costs. We perform a comprehensive study of a new stochastic differential game of impulse control and stopping on a jump-diffusion process, and conduct a detailed investigation of two-player impulse control stochastic differential games. We establish the existence of a value of the games and show that the value is a unique (viscosity) solution to a double obstacle problem which is characterised in terms of a solution to a non-linear partial differential equation (PDE).
The study is contextualised within two new models of investment that tackle a dynamic duopoly investment problem and an optimal liquidity control and lifetime ruin problem. It is then shown that each optimal investment strategy can be recovered from the equilibrium strategies of the corresponding stochastic differential game. Lastly, we introduce a dynamic principal-agent model with a self-interested agent that faces minimally bounded adjustment costs. For this setting, we show for the first time that the principal can sufficiently distort the agent's preferences so that the agent finds it optimal to execute policies that maximise the principal's payoff in the presence of fixed minimal costs.
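Schematically, the double obstacle problem mentioned above takes a form along the following lines (generic notation, not necessarily the thesis's exact formulation): the value is pinched between the two players' intervention operators.

```latex
% Schematic double-obstacle quasi-variational inequality for a
% zero-sum impulse-control game: \mathcal{L} is the generator of the
% jump-diffusion, f a running payoff, and \mathcal{M}_1, \mathcal{M}_2
% the players' impulse (intervention) operators, e.g.
%   \mathcal{M}_i V(x) = \sup_z \{ V(x + z) - c_i(z) \}
% with c_i including the fixed minimal adjustment cost.
\min\Bigl\{\, \max\bigl\{\, -\mathcal{L}V - f,\; V - \mathcal{M}_1 V \,\bigr\},\;
              V - \mathcal{M}_2 V \,\Bigr\} \;=\; 0
```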