
    Neumann Boundary Control of Hyperbolic Equations with Pointwise State Constraints

    We consider optimal control problems for hyperbolic systems with controls acting in the Neumann boundary conditions and pointwise (hard) constraints on the control and state functions. Focusing on hyperbolic dynamics governed by the multidimensional wave equation with a nonlinear term, we derive new necessary optimality conditions in the pointwise form of the Pontryagin Maximum Principle for the state-constrained problem under consideration. Our approach is based on modern methods of variational analysis that allow us to obtain refined necessary optimality conditions with no convexity assumptions on the integrands in the cost functional being minimized.
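
    For orientation, a generic member of this problem class can be sketched as follows; the cost integrand f, the nonlinearity g, and the constraint bounds below are illustrative placeholders rather than the paper's exact data:

    \[
    \begin{aligned}
    \min_{u}\ \ & J(y,u) = \int_0^T\!\!\int_\Omega f\bigl(t,x,y(t,x)\bigr)\,dx\,dt \\
    \text{subject to}\ \ & y_{tt} - \Delta y + g(y) = 0 \quad \text{in } (0,T)\times\Omega, \\
    & \partial_\nu y = u \quad \text{on } (0,T)\times\Gamma, \\
    & |u(t,x)| \le \mu \ \text{ and } \ a \le y(t,x) \le b \quad \text{pointwise,}
    \end{aligned}
    \]

    with the Pontryagin Maximum Principle supplying the pointwise conditions that a locally optimal pair (y, u) must satisfy.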

    Primal-dual extragradient methods for nonlinear nonsmooth PDE-constrained optimization

    We study the extension of the Chambolle--Pock primal-dual algorithm to nonsmooth optimization problems involving nonlinear operators between function spaces. Local convergence is shown under technical conditions including metric regularity of the corresponding primal-dual optimality conditions. We also show convergence for a Nesterov-type accelerated variant provided one part of the functional is strongly convex. We show the applicability of the accelerated algorithm to examples of inverse problems with $L^1$- and $L^\infty$-fitting terms as well as of state-constrained optimal control problems, where convergence can be guaranteed after introducing an (arbitrarily small, still nonsmooth) Moreau--Yosida regularization. This is verified in numerical examples.
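
    As a rough illustration of the algorithmic template only (a sketch under assumed choices of the data and step sizes on a finite-dimensional toy problem, not the authors' implementation), a linearized primal-dual iteration for min_x F(K(x)) + G(x) with F(z) = ||z - b||_1 and G(x) = (alpha/2)||x||^2 might look like:

    import numpy as np

    # Minimal sketch of a linearized primal-dual iteration for a smooth
    # nonlinear operator K; toy data and parameter choices are assumptions.
    def nonlinear_primal_dual(K, dK, b, x0, alpha=1e-2, sigma=0.2, tau=0.2,
                              theta=1.0, iters=500):
        """K: R^n -> R^m, dK(x): Jacobian of K at x as an (m, n) array."""
        x, x_bar = x0.copy(), x0.copy()
        y = np.zeros_like(b)
        for _ in range(iters):
            J = dK(x)                               # linearize K at the current iterate
            # dual step: prox of F*, where F(z) = ||z - b||_1
            y = np.clip(y + sigma * (K(x_bar) - b), -1.0, 1.0)
            # primal step: prox of G, where G(x) = (alpha/2)||x||^2
            x_new = (x - tau * J.T @ y) / (1.0 + tau * alpha)
            # over-relaxation (extrapolation) of the primal variable
            x_bar = x_new + theta * (x_new - x)
            x = x_new
        return x

    The accelerated variant mentioned in the abstract would, roughly, update sigma, tau, and theta from iteration to iteration using the strong-convexity constant of the strongly convex part of the functional.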

    Sufficient Conditions for Optimal Control Problems with Terminal Constraints and Free Terminal Times with Applications to Aerospace

    Motivated by the flight control problem of designing control laws for a Ground Collision Avoidance System (GCAS), this thesis formulates sufficient conditions for a strong local minimum for a terminally constrained optimal control problem with a free terminal time. The conditions are developed within the framework of a field of extremals constructed by means of the method of characteristics, a procedure for the solution of first-order linear partial differential equations, modified here to apply to the Hamilton-Jacobi-Bellman equation of optimal control. The thesis constructs these sufficient conditions for optimality with a mathematically rigorous development. The proof uses an approach which generalizes, and differs significantly from, procedures outlined in the classical control engineering literature, where similar formulas are derived, but only in a cursory, formal and sometimes incomplete way. Additionally, the thesis gives new arrangements of the relevant expressions arising in the formulation of the sufficient conditions that lead to more concise formulas for the resulting perturbation feedback control schemes. These results are applied to an emergency perturbation-feedback guidance scheme which recovers an aircraft from a dangerous flight-path angle to a safe one. A discussion of the required background material contrasts nonlinear and linear optimal control theory in the context of aerospace applications. A simplified version of the classical model for an F-16 fighter aircraft is used in numerical computations to verify, by example, that the sufficient conditions for optimality developed in this thesis can be used off-line to detect possible failures in perturbation feedback control schemes, which arise if such methods are applied along extremal controlled trajectories that only satisfy the necessary conditions for optimality without being locally optimal. The sufficient conditions for optimality developed in this thesis, on the other hand, guarantee the local validity of such perturbation feedback control schemes. The thesis presents various graphs that compare the neighboring extremals derived from the perturbation feedback control scheme with optimal ones that start from the same initial condition. Future directions for this work include extending the perturbation feedback control schemes to optimization problems with further constraints, such as control constraints, state-space constraints or mixed state-control constraints.
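
    For context (a standard formulation rather than a quotation from the thesis), the Hamilton-Jacobi-Bellman equation and the generic form of a perturbation (neighboring-extremal) feedback law read:

    \[
    \begin{aligned}
    & -\,\partial_t V(t,x) \;=\; \min_{u}\, H\bigl(t,x,u,\nabla_x V(t,x)\bigr),
    \qquad H(t,x,u,\lambda) \;=\; L(t,x,u) + \lambda^{\mathsf T} f(t,x,u),\\
    & u(t,x) \;\approx\; u^*(t) + K(t)\bigl(x - x^*(t)\bigr),
    \end{aligned}
    \]

    where (x^*, u^*) is a reference extremal and the gain K(t) is obtained from the accessory linear-quadratic problem along it; the sufficient conditions developed in the thesis are what guarantee that such a feedback law is locally optimal rather than merely extremal.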

    Successive Convexification of Non-Convex Optimal Control Problems and Its Convergence Properties

    This paper presents an algorithm to solve non-convex optimal control problems, where the non-convexity can arise from nonlinear dynamics and non-convex state and control constraints. Assuming that the state and control constraints are already convex or convexified, the proposed algorithm convexifies the nonlinear dynamics via linearization in a successive manner. Thus, at each succession, a convex optimal control subproblem is solved. Since the dynamics are linearized and the other constraints are convex, after a discretization the subproblem can be expressed as a finite-dimensional convex programming subproblem. Since convex optimization problems can be solved very efficiently, especially with custom solvers, this subproblem can be solved in time-critical applications, such as real-time path planning for autonomous vehicles. Several safeguarding techniques are incorporated into the algorithm, namely virtual control and trust regions, which add another layer of algorithmic robustness. A convergence analysis is presented in a continuous-time setting. By doing so, our convergence results are independent of any numerical scheme used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
    Comment: Updates: corrected wording for LICQ. This is the full version. A brief version of this paper is published in the 2016 IEEE 55th Conference on Decision and Control (CDC). http://ieeexplore.ieee.org/document/7798816
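
    A single pass of the successive convexification loop described above can be sketched very roughly as follows; the toy dynamics, the use of cvxpy as the convex subproblem solver, and the fixed trust-region radius and virtual-control penalty weight are all assumptions for illustration, not the paper's algorithm or parameters:

    import numpy as np
    import cvxpy as cp

    def f(x, u):
        # Toy nonlinear discrete-time dynamics (an assumption for illustration).
        return x + 0.1 * np.array([x[1], u[0] - np.sin(x[0])])

    def jacobians(x, u, eps=1e-6):
        # Finite-difference Jacobians A = df/dx, B = df/du of the toy dynamics.
        n, m = x.size, u.size
        A = np.column_stack([(f(x + eps * np.eye(n)[:, i], u) - f(x, u)) / eps for i in range(n)])
        B = np.column_stack([(f(x, u + eps * np.eye(m)[:, j]) - f(x, u)) / eps for j in range(m)])
        return A, B

    def scvx_step(x_ref, u_ref, x0, xf, radius=0.5, w_virtual=1e4):
        # One convex subproblem: dynamics linearized about the reference
        # trajectory, with virtual control slack and a trust region.
        N = u_ref.shape[0]
        n, m = x_ref.shape[1], u_ref.shape[1]
        dx = cp.Variable((N + 1, n))
        du = cp.Variable((N, m))
        nu = cp.Variable((N, n))                      # virtual control slack
        cost = cp.sum_squares(u_ref + du) + w_virtual * cp.sum(cp.abs(nu))
        cons = [dx[0] == x0 - x_ref[0], dx[N] == xf - x_ref[N]]
        for k in range(N):
            A, B = jacobians(x_ref[k], u_ref[k])
            cons += [x_ref[k + 1] + dx[k + 1] ==
                     f(x_ref[k], u_ref[k]) + A @ dx[k] + B @ du[k] + nu[k]]
            cons += [cp.norm(dx[k], 'inf') <= radius, cp.norm(du[k], 'inf') <= radius]
        cp.Problem(cp.Minimize(cost), cons).solve()
        return x_ref + dx.value, u_ref + du.value

    A full implementation would wrap scvx_step in an outer loop that updates the reference trajectory, grows or shrinks the trust region based on the achieved-versus-predicted cost reduction, and stops once the virtual control vanishes.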

    Optimal Control of Convective FitzHugh-Nagumo Equation

    We investigate smooth and sparse optimal control problems for the convective FitzHugh-Nagumo equation with travelling wave solutions in moving excitable media. The cost function includes distributed space-time and terminal observations or targets. The state and adjoint equations are discretized in space by the symmetric interior penalty Galerkin (SIPG) method and in time by the backward Euler method. Several numerical results are presented for the control of the travelling waves. We also show numerically the validity of the second-order optimality conditions for the local solutions of the sparse optimal control problem for a vanishing Tikhonov regularization parameter. Further, we estimate the distance between the discrete control and the associated local optima numerically with the help of the perturbation method and the smallest eigenvalue of the reduced Hessian.
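
    Schematically (with generic coefficients and a cubic-type nonlinearity g standing in for the paper's exact data), the controlled convective system and the smooth-plus-sparse cost take the form:

    \[
    \begin{aligned}
    & y_t - d\,\Delta y + V\!\cdot\!\nabla y + g(y) + z = u, \qquad
      z_t + \beta z - \gamma y = 0 \quad \text{in } (0,T)\times\Omega,\\
    & \min_{u}\ \tfrac12\,\|y - y_d\|_{L^2((0,T)\times\Omega)}^2
      + \tfrac12\,\|y(T) - y_T\|_{L^2(\Omega)}^2
      + \tfrac{\alpha}{2}\,\|u\|_{L^2}^2
      + \mu\,\|u\|_{L^1},
    \end{aligned}
    \]

    where the L^2 terms correspond to the distributed and terminal observations, the Tikhonov term (alpha/2)||u||^2 is the smooth regularization whose parameter is driven toward zero in the experiments, and the L^1 term is what makes the optimal control sparse.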