8 research outputs found

    Aversion to ambiguity and model misspecification in dynamic stochastic environments

    Preferences that accommodate aversion to subjective uncertainty and its potential misspecification in dynamic settings are a valuable tool of analysis in many disciplines. By generalizing previous analyses, we propose a tractable approach to incorporating broadly conceived responses to uncertainty. We illustrate our approach on some stylized stochastic environments. By design, these discrete time environments have revealing continuous time limits. Drawing on these illustrations, we construct recursive representations of intertemporal preferences that allow for penalized and smooth ambiguity aversion to subjective uncertainty. These recursive representations imply continuous time limiting Hamilton–Jacobi–Bellman equations for solving control problems in the presence of uncertainty.
    Published version
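A stylized one-dimensional example may help fix ideas about the kind of limiting penalized HJB equation the abstract describes. The notation below (state drift μ, diffusion σ, discount δ, penalty parameter ξ) is illustrative, not the authors' own:

```latex
% Illustrative robust HJB equation with a quadratic (relative-entropy-style)
% penalty on the drift distortion h, penalized at rate \xi:
0 = \max_{a} \min_{h} \Big\{ c(x,a) - \delta V(x)
    + \big[\mu(x,a) + \sigma(x)\,h\big] V'(x)
    + \tfrac{1}{2}\sigma(x)^{2} V''(x)
    + \tfrac{\xi}{2} h^{2} \Big\},
\qquad h^{*} = -\frac{\sigma(x)\,V'(x)}{\xi}.
```

As ξ → ∞ the distortion vanishes and the standard HJB equation is recovered; smaller ξ expresses stronger aversion to model misspecification.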

    Variational approach to rare event simulation using least-squares regression

    We propose an adaptive importance sampling scheme for the simulation of rare events when the underlying dynamics is given by a diffusion. The scheme is based on a Gibbs variational principle that is used to determine the optimal (i.e. zero-variance) change of measure and exploits the fact that the latter can be rephrased as a stochastic optimal control problem. The control problem can be solved by a stochastic approximation algorithm, using the Feynman-Kac representation of the associated dynamic programming equations, and we discuss numerical aspects for high-dimensional problems along with simple toy examples.
    Comment: 28 pages, 7 figures
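The zero-variance change of measure the abstract refers to (obtained via Girsanov's theorem for diffusions) has a simple static analogue: exponential tilting of a Gaussian. The following is a minimal sketch of that analogue; the threshold, sample size, and tilt parameter are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 4.0          # rare-event threshold; P(Z > 4) is about 3.17e-5
n = 100_000      # number of Monte Carlo samples
theta = a        # tilt the sampling mean to the threshold (near-optimal)

# Sample under the tilted measure N(theta, 1) and reweight each sample
# by the likelihood ratio dP/dQ(y) = exp(-theta*y + theta**2/2).
y = rng.normal(theta, 1.0, size=n)
weights = np.exp(-theta * y + 0.5 * theta**2) * (y > a)
estimate = weights.mean()

# Crude Monte Carlo with the same budget would see roughly 3 hits on
# average; the tilted estimator has far lower relative variance.
print(f"importance-sampling estimate: {estimate:.3e}")
```

For a diffusion, the analogous step replaces the mean shift by a drift change, with the Girsanov exponential playing the role of the likelihood ratio.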

    Meshless discretization of LQ-type stochastic control problems

    We propose a novel Galerkin discretization scheme for stochastic optimal control problems on an indefinite time horizon. The control problems are linear-quadratic in the controls, but possibly nonlinear in the state variables, and the discretization is based on the fact that problems of this kind can be transformed into linear boundary value problems by a logarithmic transformation. We show that the discretized linear problem is dual to a Markov decision problem, the precise form of which depends on the chosen Galerkin basis. We prove a strong error bound in L² for the general scheme and discuss two special cases: a variant of the known Markov chain approximation obtained from a basis of characteristic functions of a box discretization, and a sparse approximation that uses the basis of committor functions of metastable sets of the dynamics; the latter is particularly suited for high-dimensional systems, e.g., control problems in molecular dynamics. We illustrate the method with several numerical examples, one being the optimal control of Alanine dipeptide to its helical conformation.
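The logarithmic transformation the abstract relies on can be sketched in one dimension. The setup below (dynamics, cost, grid) is an illustrative toy, not the authors' scheme: for dX = u dt + dW on (-1, 1) with running cost u²/2 + f(x) until exit, the nonlinear HJB equation becomes, under φ = exp(-V), the linear boundary value problem φ''/2 = f·φ with φ(±1) = 1, which a centered finite-difference (box) discretization reduces to one linear solve:

```python
import numpy as np

n = 201                          # interior grid points
x = np.linspace(-1.0, 1.0, n + 2)
h = x[1] - x[0]
f = np.full(n, 0.5)              # constant running cost f(x) = 1/2

# Assemble the tridiagonal operator for phi''/2 - f*phi at interior nodes.
A = np.zeros((n, n))
idx = np.arange(n)
A[idx, idx] = -1.0 / h**2 - f
A[idx[:-1], idx[:-1] + 1] = 0.5 / h**2
A[idx[1:], idx[1:] - 1] = 0.5 / h**2

# The boundary condition phi(+-1) = 1 enters through the right-hand side.
b = np.zeros(n)
b[0] -= 0.5 / h**2
b[-1] -= 0.5 / h**2

phi = np.linalg.solve(A, b)
V = -np.log(phi)                 # undo the logarithmic transformation

# Analytic check: for constant f = c, phi = cosh(sqrt(2c) x) / cosh(sqrt(2c)),
# so V(0) = log cosh(sqrt(2c)); here 2c = 1.
V0_exact = np.log(np.cosh(1.0))
print(f"V(0) numeric {V[n // 2]:.6f} vs exact {V0_exact:.6f}")
```

The dense matrix is used only for brevity; a sparse tridiagonal solver is the idiomatic choice at scale, and richer Galerkin bases (e.g. committor functions) replace the box basis in the paper's sparse variant.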

    Optimal and Robust Control for a Class of Nonlinear Stochastic Systems

    This thesis focuses on theoretical research in optimal and robust control for a class of nonlinear stochastic systems whose diffusion terms contain square-root nonlinearities. For such systems the following problems are investigated: optimal stochastic control on both finite and infinite horizons; robust stabilization and robust H∞ control; H₂/H∞ control on both finite and infinite horizons; and risk-sensitive control. The importance of this work is that explicit optimal linear controls are obtained, which is very rare for nonlinear systems. This is an advantage because explicit solutions make the results easier to apply to real problems. Beyond the mathematical results, we also present some applications to finance.
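The square-root diffusion nonlinearity the thesis studies is of the CIR type, dX = κ(θ − X) dt + σ√X dW, familiar from interest-rate models in finance. The simulation below is an illustrative sketch (parameters and the full-truncation Euler scheme are my assumptions, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, theta, sigma = 1.0, 1.0, 0.5   # mean reversion, long-run level, vol
x0, T, steps, paths = 1.0, 5.0, 500, 2000
dt = T / steps

x = np.full(paths, x0)
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=paths)
    # Full truncation: evaluate drift and diffusion at max(x, 0) so the
    # square root stays real even if a discrete step overshoots below zero.
    xp = np.maximum(x, 0.0)
    x = x + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw
x = np.maximum(x, 0.0)

# The Feller condition 2*kappa*theta >= sigma**2 holds here (2 >= 0.25),
# so the exact process stays strictly positive; starting at x0 = theta,
# the sample mean at T should sit near the long-run level theta.
print(f"sample mean at T: {x.mean():.3f} (long-run level {theta})")
```

Because the diffusion coefficient degenerates at zero, naive Euler steps can produce negative states; the truncation above is one standard remedy.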