
    Lyapunov stabilizability of controlled diffusions via a superoptimality principle for viscosity solutions

    We prove optimality principles for semicontinuous bounded viscosity solutions of Hamilton-Jacobi-Bellman equations. In particular, we provide a representation formula for viscosity supersolutions as value functions of suitable obstacle control problems. This result is applied to extend the Lyapunov direct method for stability to controlled Itô stochastic differential equations. We define the appropriate concept of Lyapunov function for studying stochastic open-loop stabilizability in probability and local and global asymptotic stabilizability (or asymptotic controllability). Finally, we illustrate the theory with some examples.
    Comment: 22 pages
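    A minimal sketch (not from the paper) of the stochastic Lyapunov direct method the abstract refers to: for a scalar Itô SDE dx = b(x) dt + s(x) dW, one checks that the infinitesimal generator applied to a candidate V is nonpositive, LV(x) = b(x) V'(x) + (1/2) s(x)^2 V''(x) <= 0. The example system dx = -x dt + sigma*x dW and the candidate V(x) = x^2 are illustrative choices, not taken from the paper.

    ```python
    # Hedged sketch: check the generator condition L V <= 0 on a grid for
    # dx = -x dt + sigma * x dW with candidate Lyapunov function V(x) = x^2.
    # Here L V(x) = (-2 + sigma^2) x^2, which is <= 0 exactly when sigma^2 <= 2.

    def generator_V(x, b, s, dV, d2V):
        """Infinitesimal generator of the SDE applied to V at the point x."""
        return b(x) * dV(x) + 0.5 * s(x) ** 2 * d2V(x)

    def check_lyapunov(sigma, xs):
        """True if L V(x) <= 0 at every sample point in xs."""
        b = lambda x: -x                # drift
        s = lambda x: sigma * x         # diffusion
        dV = lambda x: 2.0 * x          # V'(x) for V(x) = x^2
        d2V = lambda x: 2.0             # V''(x)
        return all(generator_V(x, b, s, dV, d2V) <= 1e-12 for x in xs)

    if __name__ == "__main__":
        grid = [i / 10.0 for i in range(-50, 51)]
        print(check_lyapunov(1.0, grid))   # sigma^2 = 1 <= 2 -> True
        print(check_lyapunov(2.0, grid))   # sigma^2 = 4 > 2  -> False
    ```

    The grid check is only a numerical sanity test; the paper's contribution is precisely to make this kind of condition meaningful for non-smooth V via viscosity solutions.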

    Non-Smooth Stochastic Lyapunov Functions With Weak Extension of Viscosity Solutions

    This paper proposes a notion of viscosity weak supersolutions to build a bridge between stochastic Lyapunov stability theory and viscosity solution theory. Unlike ordinary differential equations, stochastic differential equations can have stable origins even when no smooth stochastic Lyapunov function (SLF) exists. This feature naturally requires that the associated Lyapunov equations be treated via viscosity solution theory, which handles non-smooth solutions to partial differential equations. This paper argues that stochastic Lyapunov stability theory needs a weak extension of viscosity supersolutions, and the proposed viscosity weak supersolutions describe non-smooth SLFs ensuring that, for a large class of systems, the origin is noisily (asymptotically) stable and (asymptotically) stable in probability. The contribution of the non-smooth SLFs is confirmed by a few examples; in particular, they ensure that all linear-quadratic-Gaussian (LQG) controlled systems have noisily asymptotically stable origins for any additive noise.

    Noise-Induced Stabilization of Planar Flows I

    We show that the complex-valued ODE \begin{equation*} \dot z_t = a_{n+1} z^{n+1} + a_n z^n+\cdots+a_0, \end{equation*} which necessarily has trajectories along which the dynamics blows up in finite time, can be stabilized by the addition of an arbitrarily small elliptic, additive Brownian stochastic term. We also show that the stochastic perturbation has a unique invariant measure which is heavy-tailed yet uniformly and exponentially attracting. The methods turn on the construction of Lyapunov functions. The techniques used in the construction are general and can likely be used in other settings where a Lyapunov function is needed. This is a two-part paper. This paper, Part I, focuses on general Lyapunov methods as applied to a special, simplified version of the problem. Part II extends the main results to the general setting.
    Comment: Part one of a two-part paper
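    An illustrative simulation sketch (not the paper's construction) of the simplest unstable case: dz = z^2 dt + sigma dB_t with complex additive Brownian noise. The noiseless ODE z' = z^2 blows up in finite time from any z(0) > 0 (z(0) = 1 blows up at t = 1). A "tamed" Euler-Maruyama step is used here purely so the discretization itself cannot overflow; taming, the step size, and the parameter values are all assumptions for the sketch, not claims about the paper's analysis.

    ```python
    # Hedged sketch: tamed Euler-Maruyama for dz = z^2 dt + sigma dB_t,
    # with B_t a complex Brownian motion (independent real/imaginary parts).
    # Taming bounds each drift increment, so trajectories of the scheme stay
    # finite by construction; this is a sampling illustration, not a proof of
    # the paper's noise-induced stabilization result.
    import cmath
    import random

    def tamed_euler_maruyama(z0, sigma, dt, n_steps, seed=0):
        """One sample path of dz = z^2 dt + sigma dB with a tamed drift step."""
        rng = random.Random(seed)
        z = complex(z0)
        sqrt_dt = dt ** 0.5
        for _ in range(n_steps):
            drift = z * z
            # Taming: increment dt*drift / (1 + dt*|drift|) has modulus < 1.
            z = z + dt * drift / (1.0 + dt * abs(drift))
            z = z + sigma * complex(rng.gauss(0.0, sqrt_dt),
                                    rng.gauss(0.0, sqrt_dt))
        return z

    if __name__ == "__main__":
        z_end = tamed_euler_maruyama(1.0, sigma=0.5, dt=1e-3, n_steps=5000)
        print(cmath.isfinite(z_end))
    ```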

    Optimal fluctuations and the control of chaos.

    The energy-optimal migration of a chaotic oscillator from one attractor to another coexisting attractor is investigated via an analogy between the Hamiltonian theory of fluctuations and the Hamiltonian formulation of the control problem. We demonstrate both on physical grounds and rigorously that the Wentzel-Freidlin Hamiltonian arising in the analysis of fluctuations is equivalent to Pontryagin's Hamiltonian in the control problem with an additive linear unrestricted control. The deterministic optimal control function is identified with the optimal fluctuational force. Numerical and analogue experiments undertaken to verify these ideas demonstrate that, in the limit of small noise intensity, fluctuational escape from the chaotic attractor occurs via a unique (optimal) path corresponding to a unique (optimal) fluctuational force. Initial conditions on the chaotic attractor are identified. The solution of the boundary value control problem for the Pontryagin Hamiltonian is found numerically. It is shown that this solution is approximated very accurately by the optimal fluctuational force found using statistical analysis of the escape trajectories. A second series of numerical experiments on the deterministic system (i.e. in the absence of noise) shows that a control function of precisely the same shape and magnitude is indeed able to instigate escape. It is demonstrated that this control function minimizes the cost functional, and the corresponding energy is found to be smaller than that obtained with some earlier adaptive control algorithms.
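    A much-simplified sketch of the fluctuation/control correspondence above, reduced to a 1-D gradient system rather than the chaotic oscillator studied in the paper: for x' = -U'(x) + f(t), the Wentzell-Freidlin / Pontryagin analysis gives the energy-optimal escape path as the time-reversed relaxation, i.e. x' = +U'(x), with optimal fluctuational force f*(t) = 2 U'(x(t)) along it. The double-well potential U(x) = x^4/4 - x^2/2 (attractor at x = 1, saddle at x = 0) and all numerical parameters are assumptions for the illustration.

    ```python
    # Hedged sketch: in a 1-D gradient system x' = -U'(x) + f(t), the optimal
    # escape path from the attractor x = 1 to the saddle x = 0 follows the
    # anti-gradient flow x' = +U'(x), and the optimal force along it is
    # f*(t) = 2 U'(x(t)).  Double well: U(x) = x^4/4 - x^2/2.

    def dU(x):
        return x ** 3 - x            # U'(x) for U(x) = x^4/4 - x^2/2

    def optimal_escape_path(x0=0.999, dt=1e-3, n_steps=20000):
        """Integrate the optimal path x' = +U'(x) starting near the attractor."""
        xs = [x0]
        for _ in range(n_steps):
            xs.append(xs[-1] + dt * dU(xs[-1]))
        return xs

    if __name__ == "__main__":
        path = optimal_escape_path()
        forces = [2.0 * dU(x) for x in path]   # optimal fluctuational force
        print(abs(path[-1]) < 0.05)            # the path reaches the saddle x = 0
    ```

    In the noisy system, escape trajectories concentrate along exactly this path in the small-noise limit, which is the statistical signature the paper's experiments measure.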

    Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids

    Numerical continuation methods for deterministic dynamical systems have been one of the most successful tools in applied dynamical systems theory. Continuation techniques have been employed in all branches of the natural sciences as well as in engineering to analyze ordinary, partial and delay differential equations. Here we show that the deterministic continuation algorithm for equilibrium points can be extended to track information about metastable equilibrium points of stochastic differential equations (SDEs). We stress that we do not develop a new technical tool but that we combine results and methods from probability theory, dynamical systems, numerical analysis, optimization and control theory into an algorithm that augments classical equilibrium continuation methods. In particular, we use ellipsoids defining regions of high concentration of sample paths. It is shown that these ellipsoids and the distances between them can be efficiently calculated using iterative methods that take advantage of the numerical continuation framework. We apply our method to a bistable neural competition model and a classical predator-prey system. Furthermore, we show how global assumptions on the flow can be incorporated - if they are available - by relating numerical continuation, Kramers' formula and Rayleigh iteration.
    Comment: 29 pages, 7 figures [Fig. 7 reduced in quality due to arXiv size restrictions]; v2 - added Section 9 on Kramers' formula, additional computations, corrected typos, improved explanation
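    A simplified sketch of the concentration ellipsoids mentioned above (the paper couples them with numerical continuation; this example does not): near a hyperbolic metastable equilibrium of dX = f(X) dt + sigma F dW, the linearization dY = A Y dt + sigma F dW has stationary covariance C solving the Lyapunov equation A C + C A^T + F F^T = 0 (scaled by sigma^2), and sample paths concentrate in ellipsoids { y : y^T C^{-1} y <= r^2 }. For a diagonal stable A = diag(a_1, ..., a_n), C has the closed form C_ij = -(F F^T)_ij / (a_i + a_j); the diagonal restriction and the particular matrices below are assumptions for the sketch.

    ```python
    # Hedged sketch: closed-form solution of the Lyapunov equation
    # A C + C A^T + F F^T = 0 when A = diag(a_1, ..., a_n) with all a_i < 0,
    # plus a residual check that the equation is actually satisfied.

    def _FFt(F):
        """Compute F F^T for a matrix given as a list of rows."""
        n, m = len(F), len(F[0])
        return [[sum(F[i][k] * F[j][k] for k in range(m)) for j in range(n)]
                for i in range(n)]

    def lyapunov_cov_diagonal(a, F):
        """Stationary covariance for dY = diag(a) Y dt + F dW, all a_i < 0."""
        n = len(a)
        FFt = _FFt(F)
        return [[-FFt[i][j] / (a[i] + a[j]) for j in range(n)] for i in range(n)]

    def lyapunov_residual(a, F, C):
        """Largest entry of |A C + C A^T + F F^T| (should be ~0)."""
        n = len(a)
        FFt = _FFt(F)
        return max(abs(a[i] * C[i][j] + C[i][j] * a[j] + FFt[i][j])
                   for i in range(n) for j in range(n))

    if __name__ == "__main__":
        a = [-1.0, -3.0]               # stable diagonal linearization
        F = [[1.0, 0.0], [0.5, 1.0]]   # noise matrix (illustrative)
        C = lyapunov_cov_diagonal(a, F)
        print(lyapunov_residual(a, F, C) < 1e-12)
    ```

    In the paper's setting A comes from the Jacobian along a continuation branch and is not diagonal, so C is obtained by iterative Lyapunov solvers instead of this closed form.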

    Minimum Restraint Functions for unbounded dynamics: general and control-polynomial systems

    We consider an exit-time minimum problem with a running cost l ≥ 0 and unbounded controls. The occurrence of points where l = 0 can be regarded as a loss of transversality. Furthermore, since the controls range over unbounded sets, the family of admissible trajectories may lack important compactness properties. In the first part of the paper we show that the existence of a p_0-minimum restraint function provides not only global asymptotic controllability (despite non-transversality) but also a state-dependent upper bound for the value function (provided p_0 > 0). This extends to unbounded dynamics a former result which relied heavily on the compactness of the control set. In the second part of the paper we apply the general result to the case where the system is polynomial in the control variable. Some elementary algebraic properties of the convex hull of the ranges of vector-valued polynomials allow simplifications of the main result, in terms of either near-affine-control systems or reduction to weak subsystems for the original dynamics.
    Comment: arXiv admin note: text overlap with arXiv:1503.0344