
    The turnpike property in finite-dimensional nonlinear optimal control

    Turnpike properties were established long ago for finite-dimensional optimal control problems arising in econometrics. They refer to the fact that, under quite general assumptions, the optimal solutions of a given optimal control problem settled in large time consist approximately of three pieces: the first and the last are transient short-time arcs, and the middle piece is a long-time arc staying exponentially close to the optimal steady-state solution of an associated static optimal control problem. We provide in this paper a general version of a turnpike theorem, valid for nonlinear dynamics without any specific structural assumption and for very general terminal conditions. Not only is the optimal trajectory shown to remain exponentially close to a steady state, but so is the corresponding adjoint vector of the Pontryagin maximum principle. The exponential closeness is quantified using appropriate normal forms of Riccati equations. We then show how the property on the adjoint vector can be used to successfully initialize a numerical direct method or a shooting method. In particular, we provide a variant of the usual shooting method in which the adjoint vector is initialized not at the initial time but at the middle of the trajectory.
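The three-arc structure can be seen in a minimal sketch (an illustrative scalar example of my own, not taken from the paper): minimizing the integral of x^2 + u^2 over [0, T] subject to x' = u with x(0) = x(T) = 1, the Euler-Lagrange equation x'' = x gives the closed-form optimal trajectory x(t) = (sinh(t) + sinh(T - t)) / sinh(T), which leaves x = 1, hugs the turnpike x = 0, and returns:

```python
import math

def turnpike_trajectory(t, T):
    """Optimal x(t) for min ∫(x^2 + u^2) dt, x' = u, x(0) = x(T) = 1.
    The Euler-Lagrange equation x'' = x plus the boundary conditions
    fix this closed-form solution."""
    return (math.sinh(t) + math.sinh(T - t)) / math.sinh(T)

T = 20.0
x0   = turnpike_trajectory(0.0, T)    # = 1: start of the initial transient arc
xmid = turnpike_trajectory(T / 2, T)  # = 1/cosh(T/2): exponentially small
xT   = turnpike_trajectory(T, T)      # = 1: end of the final transient arc

# The middle arc stays within O(exp(-T/2)) of the steady state x̄ = 0.
assert abs(x0 - 1.0) < 1e-12 and abs(xT - 1.0) < 1e-12
assert xmid < 2.1 * math.exp(-T / 2)
```

Increasing T lengthens only the middle arc; the two transient arcs keep the same shape, which is exactly the turnpike phenomenon described above.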

    Finite Mechanical Proxies for a Class of Reducible Continuum Systems

    We present the exact finite reduction of a class of nonlinearly perturbed wave equations, based on the Amann-Conley-Zehnder paradigm. By solving an inverse eigenvalue problem, we establish an equivalence between the spectral finite description derived from A-C-Z and a discrete mechanical model, a well-defined finite spring-mass system. In doing so, we decrypt the abstract information encoded in the finite reduction and obtain a physically sound proxy for the continuous problem. Comment: 15 pages, 3 figures
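As a much simpler illustration of matching spectral data to a spring-mass chain (a generic sketch, not the A-C-Z reduction itself): the stiffness matrix of a chain of n unit masses coupled by unit springs with fixed ends is tridiagonal, and its eigenvalues are known in closed form, so the mechanical model can be checked against a prescribed spectrum:

```python
import math
import numpy as np

def chain_stiffness(n):
    """Stiffness matrix of n unit masses coupled by unit springs, fixed ends:
    tridiagonal with 2 on the diagonal and -1 off-diagonal."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 8
eigs = np.sort(np.linalg.eigvalsh(chain_stiffness(n)))

# Closed-form spectrum of the fixed-fixed uniform chain:
# lambda_k = 4 sin^2(k*pi / (2*(n+1))), k = 1..n.
target = np.array([4 * math.sin(k * math.pi / (2 * (n + 1))) ** 2
                   for k in range(1, n + 1)])

# The finite mechanical model reproduces the prescribed spectral description.
assert np.allclose(eigs, target)
```

An inverse eigenvalue problem, as used in the paper, runs this correspondence in the other direction: given the spectral data, recover masses and stiffnesses.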

    H∞ Control of Nonlinear Systems: A Class of Controllers

    The standard state-space solutions to the H∞ control problem for linear time-invariant systems are generalized to nonlinear time-invariant systems. A class of nonlinear H∞ controllers is parameterized as nonlinear fractional transformations on contractive, stable free nonlinear parameters. As in the linear case, the H∞ control problem is solved by reduction to four simpler special state-space problems, together with a separation argument. As a byproduct of this approach, sufficient conditions for the H∞ control problem to be solvable are also derived with this machinery. Solvability of the nonlinear H∞ control problem requires positive-definite solutions to two parallel decoupled Hamilton-Jacobi inequalities, and these two solutions must satisfy an additional coupling condition. An illustrative example, which deals with a passive plant, is given at the end.
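To give a hedged sense of what a Hamilton-Jacobi inequality looks like in the simplest setting (a scalar linear example of my own, not from the paper): for x' = a*x + b*u + g*w with cost integrand x^2 + u^2 - gamma^2*w^2 and candidate storage function V(x) = p*x^2, the inequality reduces to the scalar condition 2*a*p + p^2*(g^2/gamma^2 - b^2) + 1 <= 0, which a candidate p > 0 can be checked against directly:

```python
def hji_residual(p, a, b, g, gamma):
    """Coefficient of x^2 in sup_w inf_u of the pre-Hamiltonian for
    V(x) = p*x^2 (with u* = -p*b*x and w* = p*g*x/gamma^2); the scalar
    Hamilton-Jacobi inequality holds for all x iff this is <= 0."""
    return 2 * a * p + p ** 2 * (g ** 2 / gamma ** 2 - b ** 2) + 1.0

# Illustrative data: a stable scalar plant with attenuation level gamma = 2.
a, b, g, gamma = -1.0, 1.0, 1.0, 2.0
p = 0.5  # candidate positive-definite solution
assert p > 0 and hji_residual(p, a, b, g, gamma) <= 0
```

In the nonlinear problem described above, two such inequalities (in partial-differential form) must be solved, with an extra coupling condition linking the two solutions.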

    Contraction analysis of switched Filippov systems via regularization

    We study incremental stability and convergence of switched (bimodal) Filippov systems via contraction analysis. In particular, by using results on the regularization of switched dynamical systems, we derive sufficient conditions for the convergence of any two trajectories of the Filippov system toward each other within some region of interest. We then apply these conditions to the study of different classes of Filippov systems, including piecewise-smooth (PWS) systems, piecewise-affine (PWA) systems, and relay feedback systems. We show that, contrary to previous approaches, our conditions allow the system to be studied in metrics other than the Euclidean norm. The theoretical results are illustrated by numerical simulations on a set of representative examples that confirm their effectiveness and ease of application. Comment: Preprint submitted to Automatica
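A minimal numerical sketch of one sufficient condition of this flavor (restricted to the Euclidean metric and hypothetical mode matrices, not the paper's regularization argument): if both modes of a bimodal switched linear system have negative matrix measure mu2(A) = lambda_max((A + A^T)/2), then the distance between any two trajectories decays regardless of the switching signal:

```python
import numpy as np

def mu2(A):
    """Logarithmic norm (matrix measure) induced by the Euclidean norm:
    the largest eigenvalue of the symmetric part of A."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Two hypothetical Hurwitz modes of a bimodal switched system.
A1 = np.array([[-1.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-2.0, 0.0], [1.0, -1.0]])

# Common contraction rate: distances between any two trajectories decay
# at least as exp(c*t) with c = max(mu2(A1), mu2(A2)) < 0.
c = max(mu2(A1), mu2(A2))
assert c < 0
```

Replacing the Euclidean norm with a weighted norm amounts to applying `mu2` to a similarity-transformed matrix, which is where non-Euclidean metrics, as in the paper, gain extra freedom.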

    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum-of-squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers with pointwise bounded distance from the cost of the optimal controller. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, this paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods, and bounds, can be applied to a more general class of nonlinear systems not obeying the constraint on the stochastic forcing. Simulated examples illustrate the methodology. Comment: Published in SIAM Journal on Control and Optimization
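To make the value-function/Lyapunov connection concrete in the simplest case (a scalar stochastic LQ example added for illustration, not the paper's sum-of-squares construction): for dx = (a*x + b*u)dt + sigma*dW with running cost q*x^2 + r*u^2, the average-cost HJB equation is solved exactly by V(x) = p*x^2 with p from the scalar Riccati equation, and one can check both that the HJB residual vanishes and that V decreases along the optimally controlled drift:

```python
import math

# Scalar stochastic LQ data (illustrative values, not from the paper).
a, b, q, r, sigma = -1.0, 1.0, 1.0, 1.0, 0.3

# Scalar algebraic Riccati equation: b^2 p^2 / r - 2 a p - q = 0, p > 0.
p = r * (a + math.sqrt(a ** 2 + b ** 2 * q / r)) / b ** 2
avg_cost = p * sigma ** 2  # optimal long-run average cost

def hjb_residual(x):
    """min_u [q x^2 + r u^2 + V'(x)(a x + b u) + (sigma^2/2) V''] - avg_cost
    with V(x) = p x^2; the minimizing control is u* = -(b p / r) x."""
    u = -(b * p / r) * x
    return (q * x ** 2 + r * u ** 2
            + 2 * p * x * (a * x + b * u)
            + sigma ** 2 * p - avg_cost)

# V = p x^2 solves the HJB exactly, and its drift under u* is negative
# away from the origin, so it acts as a (stochastic) control Lyapunov function.
for x in (-2.0, -0.5, 0.1, 1.0, 3.0):
    assert abs(hjb_residual(x)) < 1e-10
    drift_V = 2 * p * x * (a * x - (b ** 2 * p / r) * x)
    assert drift_V < 0
```

The paper's relaxation replaces this exact solve, which is unavailable for general nonlinear dynamics, with polynomial super/subsolutions whose HJB residuals have a fixed sign, turning the equality above into pointwise upper and lower bounds.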