135,112 research outputs found

    An LQ problem for the heat equation on the halfline with Dirichlet boundary control and noise

    We study a linear quadratic problem for a system governed by the heat equation on a half-line with Dirichlet boundary control and Dirichlet boundary noise. We show that this problem can be reformulated as a stochastic evolution equation in a certain weighted L^2 space. An appropriate choice of weight allows us to prove stronger regularity for the boundary terms appearing in the infinite-dimensional state equation. The direct solution of the Riccati equation related to the associated non-stochastic problem is used to find the solution of the problem in feedback form and to write the value function of the problem. Comment: 16 pages. Many misprints have been corrected.
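    The feedback solution above rests on a Riccati equation for the associated deterministic problem. As a rough illustration only, the sketch below discretizes the heat equation on a truncated half-line by finite differences and solves a finite-dimensional algebraic Riccati equation with SciPy; the truncation length, grid, weights and cost matrices are my assumptions, and the paper's weighted-L^2, boundary-noise formulation is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical finite-difference sketch: heat equation on a truncated half-line
# [0, L] with Dirichlet boundary control u(t) applied at x = 0.
N, L = 50, 10.0
h = L / (N + 1)

# A: second-difference Laplacian on the interior nodes (Dirichlet conditions)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
# B: the boundary value u(t) enters the first interior node through the stencil
B = np.zeros((N, 1)); B[0, 0] = 1.0 / h**2

# Quadratic cost: integral of |y|^2 + r |u|^2 dt, with the state weight chosen
# to approximate the L^2 norm of the temperature profile
Q = h * np.eye(N)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # feedback gain: u = -K y

print("feedback gain on the first three interior nodes:", K[0, :3])
```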

    Synchronization of a large number of continuous one-dimensional stochastic elements with time delayed mean field coupling

    We study synchronization as a means of controlling the collective behavior of an ensemble of coupled stochastic units in which oscillations are induced merely by external noise. We determine the boundary of the synchronization domain of a large number of one-dimensional continuous stochastic elements with time-delayed, non-homogeneous mean-field coupling. The exact location of the synchronization threshold is shown to be a solution of a boundary value problem (BVP) derived from the linearized Fokker-Planck equation. Here the synchronization threshold is found by solving this BVP numerically. Approximate analytical results are obtained by expanding the solution of the linearized Fokker-Planck equation into a series of eigenfunctions of the stationary Fokker-Planck operator. Bistable systems with a polynomial and a piecewise-linear potential are considered as examples. Multistability and hysteresis are observed in the Langevin equations for finite noise intensity. In the limit of small noise intensity, the critical coupling strength is shown to remain finite.
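    For concreteness, here is a minimal Euler-Maruyama sketch (my own construction, not the paper's code) of an ensemble of bistable one-dimensional Langevin elements with time-delayed mean-field coupling; the quartic potential, the parameter values and the zero pre-history used for the delay are all assumptions.

```python
import numpy as np

# N bistable Langevin elements dx_i = (x_i - x_i^3 + eps*(X(t-tau) - x_i)) dt + sqrt(2D) dW_i,
# coupled through the delayed mean field X(t) = <x_j(t)> (homogeneous coupling assumed).
rng = np.random.default_rng(0)
N, D, eps, tau = 1000, 0.1, 0.3, 2.0   # ensemble size, noise intensity, coupling, delay
dt, T = 0.01, 200.0
steps, lag = int(T / dt), int(tau / dt)

x = rng.normal(0.0, 0.1, size=N)
mean_hist = np.zeros(steps + lag)       # stores X(t); zeros serve as the pre-history

for k in range(steps):
    X_delayed = mean_hist[k]            # X(t - tau); zero during the initial transient
    drift = x - x**3 + eps * (X_delayed - x)
    x = x + drift * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=N)
    mean_hist[k + lag] = x.mean()

# A persistent late-time oscillation or offset of the mean field can signal
# collective (synchronized) behaviour of the ensemble.
print("late-time mean-field std:", mean_hist[-steps // 4:].std())
```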

    The Optimal Consumption Function in a Brownian Model of Accumulation Part A: The Consumption Function as Solution of a Boundary Value Problem

    We consider a neo-classical model of optimal economic growth with c.r.r.a. utility in which the traditional deterministic trends representing population growth, technological progress, depreciation and impatience are replaced by Brownian motions with drift. When transformed to 'intensive' units, this is equivalent to a stochastic model of optimal saving with diminishing returns to capital. For the intensive model, we give sufficient conditions for optimality of a consumption plan (open-loop control) comprising a finite welfare condition, a martingale condition for shadow prices and a transversality condition as t → ∞. We then replace these by conditions for optimality of a plan generated by a consumption function (closed-loop control), i.e. a function H(z) expressing log-consumption as a time-invariant, deterministic function of log-capital z. Making use of the exponential martingale formula, we replace the martingale condition by a non-linear, non-autonomous second-order o.d.e. which an optimal consumption function must satisfy; this has the form H''(z) = F[H'(z), θ(z), z], where θ(z) = exp{H(z) - z}. Economic considerations suggest certain limiting values which H'(z) and θ(z) should satisfy as z → ±∞, thus defining a two-point boundary value problem (b.v.p.), or rather a family of problems depending on the values of parameters. We prove two theorems showing that a consumption function which solves the appropriate b.v.p. generates an optimal plan. Proofs that a unique solution of each b.v.p. exists will be given in a separate paper (Part B). Keywords: consumption, capital accumulation, Brownian motion, optimisation, ordinary differential equation, boundary value problems.
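    The closed-loop characterization above reduces to a two-point boundary value problem for H(z). The sketch below only illustrates that machinery with scipy.integrate.solve_bvp on a truncated interval: F_toy, the boundary values and the initial guess are placeholders of my own, not the paper's F or its limiting conditions; only the relation θ(z) = exp{H(z) - z} is taken from the abstract.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative two-point BVP sketch only: F_toy is a hypothetical right-hand side,
# not the F derived in the paper, and the boundary conditions are assumptions.
def F_toy(Hp, theta, z):
    return -0.5 * Hp + 0.05 * (theta - 1.0)

def rhs(z, Y):
    H, Hp = Y                          # Y[0] = H(z), Y[1] = H'(z)
    theta = np.exp(H - z)              # theta(z) = exp{H(z) - z}, as in the abstract
    return np.vstack([Hp, F_toy(Hp, theta, z)])

def bc(Ya, Yb):
    # Placeholder limiting conditions imposed on H'(z) at the truncated endpoints
    return np.array([Ya[1] - 0.6, Yb[1] - 0.4])

z = np.linspace(-5.0, 5.0, 101)
Y0 = np.vstack([0.5 * z, np.full_like(z, 0.5)])    # initial guess for (H, H')
sol = solve_bvp(rhs, bc, z, Y0)
print("converged:", sol.status == 0, " H'(0) ≈", float(sol.sol(0.0)[1]))
```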

    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation to a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum-of-squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost is within a pointwise-bounded distance of the cost obtained with the optimal controller. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, this paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods, and bounds, can be applied to a more general class of nonlinear systems that do not obey the constraint on the stochastic forcing. Simulated examples illustrate the methodology. Comment: Published in SIAM Journal on Control and Optimization.
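    A quick way to see the claimed linearization at work: for control-affine systems satisfying the noise/control-cost constraint, the exponential change of variables Ψ = exp(-V/λ) turns the HJB equation into a linear PDE for the "desirability" Ψ. The one-dimensional finite-difference solve below stands in for the paper's sum-of-squares relaxation; the drift, state cost, exit costs and λ are illustrative assumptions.

```python
import numpy as np

# 1-D first-exit problem: solve the linear PDE for Psi = exp(-V/lambda),
#   0.5*sigma^2 * Psi'' + f * Psi' - (q/lambda) * Psi = 0,
# with Psi = exp(-phi/lambda) on the exit boundary, then recover V and u*.
N, sigma, lam = 199, 0.5, 0.25           # grid size, noise level, lambda = sigma^2 * R
xg = np.linspace(-1.0, 1.0, N + 2)
h = xg[1] - xg[0]
f = -xg                                  # passive drift (assumed)
q = 2.0 * xg**2                          # state cost rate (assumed)
phi_left, phi_right = 1.0, 0.0           # exit costs at the two boundaries (assumed)

A = np.zeros((N, N)); b = np.zeros(N)
for i in range(N):                       # finite-difference rows for interior nodes
    d2, d1 = 0.5 * sigma**2 / h**2, f[i + 1] / (2 * h)
    A[i, i] = -2 * d2 - q[i + 1] / lam
    if i > 0:     A[i, i - 1] = d2 - d1
    else:         b[i] -= (d2 - d1) * np.exp(-phi_left / lam)
    if i < N - 1: A[i, i + 1] = d2 + d1
    else:         b[i] -= (d2 + d1) * np.exp(-phi_right / lam)

psi = np.linalg.solve(A, b)              # linear solve replaces the nonlinear HJB
V = -lam * np.log(psi)                   # value function recovered from Psi
u = sigma**2 * np.gradient(np.log(psi), h)   # optimal control u*(x) = sigma^2 d/dx log Psi
print("V at x=0:", V[N // 2], " u* at x=0:", u[N // 2])
```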

    Path integrals and symmetry breaking for optimal control theory

    This paper considers linear-quadratic control of a non-linear dynamical system subject to arbitrary cost. We show that for this class of stochastic control problems the non-linear Hamilton-Jacobi-Bellman equation can be transformed into a linear equation. The transformation is similar to the transformation used to relate the classical Hamilton-Jacobi equation to the Schrödinger equation. As a result of the linearity, the usual backward computation can be replaced by a forward diffusion process that can be computed by stochastic integration or by the evaluation of a path integral. It is shown how, in the deterministic limit, the PMP formalism is recovered. The significance of the path integral approach is that it forms the basis for a number of efficient computational methods, such as Monte Carlo (MC) sampling, the Laplace approximation and the variational approximation. We show the effectiveness of the first two methods in a number of examples. Examples are given that show the qualitative difference between stochastic and deterministic control and the occurrence of symmetry breaking as a function of the noise. Comment: 21 pages, 6 figures, submitted to JSTAT.
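    Since the abstract emphasizes that the linearized problem can be solved by a forward diffusion, here is a minimal Monte Carlo (path-integral) sketch for a one-dimensional finite-horizon problem; the passive dynamics, the costs, the horizon and the identification λ = νR are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

# Path-integral estimate: J(x0,0) = -lambda * log E[exp(-S/lambda)], where the
# expectation runs over uncontrolled (noise-only) rollouts, and u*(x0,0) is the
# weighted average of the first noise increment divided by dt.
rng = np.random.default_rng(1)
lam, nu = 1.0, 1.0                     # lambda = nu * R with R = 1 (assumed)
dt, T, K = 0.01, 1.0, 5000             # time step, horizon, number of rollouts
steps = int(T / dt)

def q(x):                              # running state cost (assumed)
    return 0.5 * x**2

def phi(x):                            # terminal cost (assumed)
    return 5.0 * (x - 1.0)**2

x0 = 0.0
eps = rng.normal(size=(steps, K)) * np.sqrt(nu * dt)   # Brownian increments
x = np.full(K, x0)
S = np.zeros(K)                        # accumulated path cost of each rollout
for k in range(steps):
    S += q(x) * dt
    x = x + eps[k]                     # uncontrolled dynamics dx = dW (assumed f = 0)
S += phi(x)

shift = S.min()                        # shift for numerical stability of the exponentials
w = np.exp(-(S - shift) / lam); w /= w.sum()
J = shift - lam * np.log(np.mean(np.exp(-(S - shift) / lam)))
u_star = (w @ eps[0]) / dt             # weighted average of the first noise increment
print(f"J(x0,0) ≈ {J:.3f},  u*(x0,0) ≈ {u_star:.3f}")
```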