
    Lipschitzian Regularity of the Minimizing Trajectories for Nonlinear Optimal Control Problems

    We consider the Lagrange problem of optimal control with unrestricted controls and address the question: under what conditions can we assure that optimal controls are bounded? This question is related to that of the Lipschitzian regularity of optimal trajectories, and the answer to it is crucial for closing the gap between the conditions arising in the existence theory and the necessary optimality conditions. Rewriting the Lagrange problem in parametric form, we obtain a relation between the conditions of applicability of the Pontryagin maximum principle to the latter problem and the Lipschitzian regularity conditions for the original problem. Under the standard coercivity hypotheses of the existence theory, these conditions imply that the optimal controls are essentially bounded, assuring the applicability of the classical necessary optimality conditions, such as the Pontryagin maximum principle. The result extends previous Lipschitzian regularity results to cover optimal control problems with general nonlinear dynamics.

    Comment: This research was partially presented, as an oral communication, at the international conference EQUADIFF 10, Prague, August 27-31, 2001. Accepted for publication in the journal Mathematics of Control, Signals, and Systems (MCSS). See http://www.mat.ua.pt/delfim for other work.
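
    For orientation, the Lagrange problem of optimal control referred to in the abstract above can be stated generically as follows (an illustrative formulation; the symbols $L$ and $\varphi$ and the growth condition below are standard textbook choices, not necessarily the paper's exact hypotheses):

        \[
          \min\; J[x,u] = \int_a^b L\bigl(t, x(t), u(t)\bigr)\,dt
          \quad\text{subject to}\quad \dot{x}(t) = \varphi\bigl(t, x(t), u(t)\bigr),
        \]

    where coercivity in the sense of the existence theory typically asks that $L(t,x,u) \ge \theta(\|u\|)$ for some function $\theta$ with $\theta(r)/r \to +\infty$ as $r \to +\infty$, i.e. the Lagrangian grows superlinearly in the control.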

    Solving Optimal Control Problems for Delayed Control-Affine Systems with Quadratic Cost by Numerical Continuation

    In this paper we introduce a new method for solving fixed-delay optimal control problems which exploits numerical homotopy procedures. It is known that solving this kind of problem via indirect methods is complex and computationally demanding, because their implementation faces two difficulties: the extremal equations are of mixed type and, in addition, the shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, the delay is introduced by numerical homotopy methods. Convergence results, which ensure the effectiveness of the whole procedure, are provided. The numerical efficiency is illustrated on an example.
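
    The continuation strategy described above can be sketched schematically as follows (a minimal illustration under invented assumptions, not the authors' implementation: the toy residual, the delay range, and the step count are placeholders standing in for the mixed-type shooting equations of a concrete delayed problem):

        # Sketch: numerical continuation (homotopy) in the delay parameter tau.
        # The shooting unknowns found at the previous value of tau warm-start the
        # solve at the next value, starting from the non-delayed (tau = 0) problem.
        import numpy as np
        from scipy.optimize import fsolve

        def shooting_residual(z, tau):
            # Toy residual standing in for the shooting/extremal equations of a
            # delayed control-affine problem; its root drifts smoothly with tau.
            x, p = z
            return [x - np.cos(p) - tau * p,
                    p - 0.5 * x + tau * x ** 2]

        z = np.zeros(2)  # initial guess; the first pass (tau = 0) solves the non-delayed problem
        for tau in np.linspace(0.0, 0.5, 11):
            z, info, ok, msg = fsolve(shooting_residual, z, args=(tau,), full_output=True)
            if ok != 1:
                raise RuntimeError(f"continuation failed at tau = {tau:.2f}: {msg}")
        print("shooting unknowns at the final delay:", z)

    In an actual implementation the residual would be evaluated by integrating the delayed state/costate system, and the homotopy step in tau would be adapted; only the warm-starting structure is retained here.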

    Caratheodory-Equivalence, Noether Theorems, and Tonelli Full-Regularity in the Calculus of Variations and Optimal Control

    We study, in a unified way, the following questions related to the properties of Pontryagin extremals for optimal control problems with unrestricted controls: i) How do the transformations which define the equivalence of two problems transform the extremals? ii) How can one obtain quantities which are conserved along any extremal? iii) How can one assure that the set of extremals includes the minimizers predicted by the existence theory? These questions are connected, respectively, to: i) the Caratheodory method, which establishes a correspondence between the minimizing curves of equivalent problems; ii) the interplay between the concept of invariance and the theory of optimality conditions in optimal control, which is the concern of the theorems of Noether; iii) regularity conditions for the minimizers and the work pioneered by Tonelli.

    Comment: 24 pages. Submitted for publication in a special issue of the J. of Mathematical Sciences.
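
    As a concrete, classical instance of question ii) (a textbook special case, not the paper's general result): for an autonomous Lagrangian in the calculus of variations, invariance under time translations gives conservation of the Hamiltonian along every extremal,

        \[
          L = L(x, \dot{x})
          \quad\Longrightarrow\quad
          H \;=\; \frac{\partial L}{\partial \dot{x}} \cdot \dot{x} - L
          \ \text{ is constant along the Euler--Lagrange extremals.}
        \]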

    Time Minimal Trajectories for a Spin 1/2 Particle in a Magnetic Field

    In this paper we consider the minimum time population transfer problem for the $z$-component of the spin of a (spin 1/2) particle driven by a magnetic field, controlled along the $x$ axis, with bounded amplitude. On the Bloch sphere (i.e. after a suitable Hopf projection), this problem can be attacked with techniques of optimal syntheses on 2-D manifolds. Let $(-E,E)$ be the two energy levels, and $|\Omega(t)|\leq M$ the bound on the field amplitude. For each couple of values $E$ and $M$, we determine the time optimal synthesis starting from the level $-E$, and we provide the explicit expression of the time optimal trajectories steering state one to state two, in terms of a parameter that can be computed by solving numerically a suitable equation. For $M/E\ll 1$, every time optimal trajectory is bang-bang and, in particular, the corresponding control is periodic with frequency of the order of the resonance frequency $\omega_R=2E$. On the other hand, for $M/E>1$, the time optimal trajectory steering state one to state two is bang-bang with exactly one switching. For fixed $E$ we also prove that, as $M\to\infty$, the time needed to reach state two tends to zero. In the case $M/E>1$ there are time optimal trajectories containing a singular arc. Finally, we compare these results with some known results of Khaneja, Brockett and Glaser and with those obtained by controlling the magnetic field along both the $x$ and $y$ directions (or with one external field, but in the rotating wave approximation). As a byproduct we prove that the qualitative shape of the time optimal synthesis presents different patterns that cyclically alternate as $M/E\to 0$, giving a partial proof of a conjecture formulated in a previous paper.

    Comment: 31 pages, 10 figures, typos corrected.
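
    For context, a standard two-level model consistent with the notation above (sign and basis conventions may differ from those of the paper) is

        \[
          i\,\frac{d\psi}{dt}(t)
          = \begin{pmatrix} -E & \Omega(t) \\ \Omega(t) & E \end{pmatrix}\psi(t),
          \qquad |\Omega(t)| \le M,
        \]

    where the diagonal entries are the two energy levels, the real control $\Omega(t)$ models the field along the $x$ axis, and for $\Omega \equiv 0$ the relative phase of the two components oscillates at the resonance frequency $\omega_R = 2E$, the scale against which the bound $M$ is compared.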

    Guidance, flight mechanics and trajectory optimization. Volume 4 - The calculus of variations and modern applications


    Integral and measure-turnpike properties for infinite-dimensional optimal control systems

    We first derive a general integral-turnpike property around a set for infinite-dimensional non-autonomous optimal control problems with any possible terminal state constraints, under appropriate assumptions. Roughly speaking, the integral-turnpike property means that the time average of the distance from any optimal trajectory to the turnpike set converges to zero as the time horizon tends to infinity. Then, we establish the measure-turnpike property for strictly dissipative optimal control systems with state and control constraints. The measure-turnpike property, which is slightly stronger than the integral-turnpike property, means that any optimal (state and control) solution remains essentially, along the time frame, close to an optimal solution of an associated static optimal control problem, except along a subset of times of small relative Lebesgue measure when the time horizon is large. Next, we prove that strict strong duality, a classical notion in optimization, implies strict dissipativity and the measure-turnpike property. Finally, we conclude the paper with several comments and open problems.
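
    In symbols (an illustrative formalization of the two properties described above; the notation $x_T$, $u_T$, $\bar{x}$, $\bar{u}$ and the set $\mathcal{T}$ is ours, not necessarily the paper's): for optimal solutions $(x_T, u_T)$ on the horizon $[0,T]$, the integral-turnpike property around a set $\mathcal{T}$ reads

        \[
          \frac{1}{T}\int_0^T \operatorname{dist}\bigl(x_T(t), \mathcal{T}\bigr)\,dt \;\longrightarrow\; 0
          \qquad\text{as } T \to \infty,
        \]

    while the measure-turnpike property with respect to a static optimal pair $(\bar{x}, \bar{u})$ requires, for every $\varepsilon > 0$,

        \[
          \frac{1}{T}\,\Bigl|\bigl\{\, t \in [0,T] : \|x_T(t) - \bar{x}\| + \|u_T(t) - \bar{u}\| > \varepsilon \,\bigr\}\Bigr| \;\longrightarrow\; 0
          \qquad\text{as } T \to \infty.
        \]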