
    Nondegenerate forms of the maximum principle for optimal control problems with state constraints

    Doctoral thesis in Sciences, field of knowledge: Mathematics. The Maximum Principle (MP) plays an important role in the characterization of solutions to optimal control problems. It typically identifies a small set of candidates to which the minimizers belong. However, for some optimal control problems with constraints, the MP may be unable to provide any useful information; for example, when the set of candidate minimizers that satisfy a certain MP coincides with the set of all admissible solutions. When this happens, we say that the degeneracy phenomenon occurs. One of our main goals is to prevent the degeneracy phenomenon from occurring by imposing additional conditions on the MP. In this context, we develop new strengthened forms of the MP for optimal control problems, and in particular for optimal control problems with higher-index state constraints. Another case where the MP is unable to provide useful information arises when the scalar multiplier associated with the objective function is equal to zero: the MP then merely states a relation between the constraints and does not use the objective function to select candidate minimizers. We have also developed strengthened forms of the MP that can be written with a nonzero multiplier associated with the objective function, the so-called normal forms of the MP. Both types of strengthened forms apply only when the problem satisfies additional hypotheses, known as constraint qualifications, and therefore constraint qualifications are also an object of our study.
The nondegenerate forms of the MP developed in this thesis are valid for new classes of optimal control problems with state constraints, both by addressing problems with fewer restrictions on their data and by developing new constraint qualifications that hold for more problems or are easier to verify. The financial support of Project HPMT-CT-2001-00278 of CTS – Control Training Site, of the project "Optimização e Controlo" of the FCT Program, and of Project FCT POSI/EEA-SRI/61831/2004 "Controlo Óptimo com Restrições e suas Aplicações" is gratefully acknowledged.
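The degeneracy phenomenon can be made concrete in a standard state-constrained setting. The following sketch uses the usual formulation from the literature (e.g. the Ferreira–Vinter line of work), not necessarily the exact hypotheses of the thesis:

```latex
% Sketch: state-constrained problem
%   minimize g(x(T))  subject to  \dot x(t) = f(t, x(t), u(t)),
%   h(t, x(t)) \le 0,  x(0) = x_0 fixed.
% The MP supplies multipliers (p, \mu, \lambda) satisfying the usual
% nontriviality condition
\[
  \mu\{[0,T]\} \;+\; \|p\|_{L^\infty} \;+\; \lambda \;>\; 0 .
\]
% If the state constraint is active at the initial point, h(0, x_0) = 0,
% this condition is met by the trivial choice
\[
  \lambda = 0, \qquad \mu = \delta_{\{0\}}, \qquad
  p(t) \equiv -\nabla_x h(0, x_0),
\]
% for which every admissible process is an extremal: the MP degenerates.
% Strengthened (nondegenerate) forms replace nontriviality by a condition
% of the type
\[
  \mu\{(0,T]\} \;+\; \|q\|_{L^\infty} \;+\; \lambda \;>\; 0 ,
\]
% where q denotes the adjoint variable absorbing the measure \mu; this
% rules out the trivial choice and holds under suitable constraint
% qualifications.
```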

    No Infimum Gap and Normality in Optimal Impulsive Control Under State Constraints

    In this paper we consider an impulsive extension of an optimal control problem with unbounded controls, subject to endpoint and state constraints. We show that the existence of an extended-sense minimizer that is a normal extremal for a constrained Maximum Principle ensures that there is no gap between the infima of the original problem and of its extension. Furthermore, we translate this relation into verifiable sufficient conditions for normality in the form of constraint and endpoint qualifications. Links between the existence of an infimum gap and normality in impulsive control have previously been explored for problems without state constraints. This paper establishes such links in the presence of state constraints and of an additional ordinary control, for locally Lipschitz continuous data.

    On first order state constrained optimal control problems

    We show that exact penalization techniques can be applied to optimal control problems with state constraints under a hard-to-verify hypothesis. Investigating conditions implying this hypothesis, we discuss some recent theoretical results on regularity of multipliers for optimal control problems involving first-order state constraints. We show by an example that known conditions asserting regularity of the multipliers do not prevent the appearance of atoms in the multiplier measure. Our accompanying example is treated both numerically and analytically. An extension covering problems with additional mixed state constraints is also discussed.
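The exact penalization idea can be illustrated on a toy scalar problem (our own sketch, not the paper's example): for a penalty weight above the optimal multiplier, the penalized unconstrained problem has exactly the constrained minimizer, with no need for the penalty to tend to infinity.

```python
import numpy as np

# Exact penalty sketch: replace
#     min f(x)  s.t.  g(x) <= 0
# by the unconstrained  min f(x) + c * max(0, g(x)).
# Here f(x) = x^2 with constraint x >= 1; the constrained minimizer is
# x* = 1 with KKT multiplier 2, so any c > 2 makes the penalty exact.

f = lambda x: x ** 2
g = lambda x: 1.0 - x                     # g(x) <= 0  <=>  x >= 1

xs = np.linspace(-2.0, 3.0, 50001)        # fine grid containing x* = 1
minimizers = {}
for c in (1.0, 3.0):                      # below / above the multiplier
    penalized = f(xs) + c * np.maximum(0.0, g(xs))
    minimizers[c] = xs[np.argmin(penalized)]

print(minimizers)   # c=1: minimizer ~0.5 (penalty too weak); c=3: ~1.0
```

Note the nonsmooth `max(0, g)` term: it is precisely such nonsmooth exact penalties, rather than smooth quadratic ones, that recover the constrained solution at finite penalty weight.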

    Necessary and Sufficient Conditions of Optimality for a Damped Hyperbolic Equation in One-Space Dimension

    The present paper deals with necessary optimality conditions for a class of distributed parameter systems in which the system is modeled in one space dimension by a hyperbolic partial differential equation subject to damping and mixed constraints on the state and controls. The Pontryagin maximum principle is derived as a necessary condition for the controls of such systems to be optimal. With the aid of some convexity assumptions on the constraint functions, the maximum principle is shown to be a sufficient condition for optimality as well.

    Numerical Solution of Optimal Control Problems with Explicit and Implicit Switches

    This dissertation deals with the efficient numerical solution of switched optimal control problems whose dynamics may be affected by both explicit and implicit switches. A framework is developed for this purpose, in which both problem classes are uniformly converted into a mixed-integer optimal control problem with combinatorial constraints. Recent research results relate this problem class to continuous optimal control problems with vanishing constraints, which in turn represent a considerable subclass of optimal control problems with equilibrium constraints. In this thesis, this connection forms the foundation for a numerical treatment. We employ numerical algorithms that are based on a direct collocation approach and require, in particular, a highly accurate determination of the switching structure of the original problem. Because the switching structure is in general a priori unknown, our approach aims to identify it successively. During this process, a sequence of nonlinear programs, derived by applying discretization schemes to optimal control problems, is solved approximately. After each iteration, the discretization grid is updated according to the currently estimated switching structure. Besides a precise determination of the switching structure, it is of central importance to estimate the global error that occurs when optimal control problems are solved numerically. Again, we focus on certain direct collocation discretization schemes and analyze the error contributions of individual discretization intervals. For this purpose, we exploit a relationship between discrete adjoints and the Lagrange multipliers associated with the nonlinear programs that arise from the collocation transcription process. This relationship can be derived within a functional-analytic framework by interrelating collocation methods and Petrov-Galerkin finite element methods.
In analogy to the dual-weighted residual methodology for Galerkin methods, which is well known in the partial differential equation community, we then derive goal-oriented global error estimators. Based on these error estimators, we present mesh refinement strategies that allow for an equilibration and an efficient reduction of the global error. In doing so, we note that the grid adaptation processes for switching structure detection and for global error reduction are compatible with each other. This allows us to distill an iterative solution framework. Usually, individual state and control components have the same polynomial degree if they originate from a collocation discretization scheme. Because of the special role some control components play in the proposed solution framework, it is desirable to allow varying polynomial degrees. This leads to implementation problems, which can be solved by means of structure exploitation techniques and a suitable permutation of variables and equations. The resulting algorithm was developed in parallel to this work and implemented in a software package. The presented methods are implemented and evaluated on several benchmark problems, and their applicability and efficiency are demonstrated. With regard to a future embedding of the described methods in an online optimal control context and the associated real-time requirements, an extension of the well-known multi-level iteration schemes is proposed. This approach is based on the trapezoidal rule and, compared to a full evaluation of the involved Jacobians, significantly reduces the computational cost in the case of sparse data matrices.
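The direct collocation transcription described above can be sketched on a toy problem (a minimal stand-in assuming trapezoidal collocation; the dissertation's actual schemes and problems are far richer). Since the toy problem is quadratic with linear constraints, the resulting nonlinear program reduces to a single KKT linear system, and the KKT multipliers are exactly the discrete adjoints mentioned in the abstract:

```python
import numpy as np

# Trapezoidal direct collocation of
#   minimize  int_0^1 u(t)^2 dt   s.t.  x' = u,  x(0) = 0,  x(1) = 1.
# Decision vector z = (x_0..x_N, u_0..u_N); dynamics become defect
# constraints, and the quadratic objective gives a linear KKT system.

N = 20                        # collocation intervals
h = 1.0 / N
n = N + 1                     # grid points

w = np.full(n, 1.0); w[0] = w[-1] = 0.5          # trapezoid weights
H = np.zeros((2 * n, 2 * n))
H[n:, n:] = np.diag(2.0 * h * w)                 # Hessian of h*sum(w_k u_k^2)

# Linear constraints A z = b: N dynamics defects + 2 boundary conditions.
A = np.zeros((N + 2, 2 * n))
b = np.zeros(N + 2)
for k in range(N):
    A[k, k + 1] = 1.0; A[k, k] = -1.0            # x_{k+1} - x_k
    A[k, n + k] = A[k, n + k + 1] = -h / 2.0     # -(h/2)(u_k + u_{k+1})
A[N, 0] = 1.0                                    # x(0) = 0
A[N + 1, N] = 1.0; b[N + 1] = 1.0                # x(1) = 1

# Solve the KKT system; the trailing block of the solution holds the
# Lagrange multipliers (discrete adjoints).
KKT = np.block([[H, A.T], [A, np.zeros((N + 2, N + 2))]])
rhs = np.concatenate([np.zeros(2 * n), b])
sol = np.linalg.solve(KKT, rhs)
x, u = sol[:n], sol[n:2 * n]

print(u[:3])                      # optimal control is u = 1 on the grid
print(h * np.sum(w * u ** 2))     # discrete objective, = 1 here
```

For this linear-quadratic toy the discrete optimum matches the continuous one exactly (u = 1, x(t) = t); in the switched setting the grid would additionally be refined around the estimated switching times.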

    Fast numerical methods for robust nonlinear optimal control under uncertainty

    This thesis treats different aspects of nonlinear optimal control problems under uncertainty in which the uncertain parameters are modeled probabilistically. We apply the polynomial chaos expansion, a well known method for uncertainty quantification, to obtain deterministic surrogate optimal control problems. Their size and complexity pose a computational challenge for traditional optimal control methods. For nonlinear optimal control, this difficulty is increased because a high polynomial expansion order is necessary to derive meaningful statements about the nonlinear and asymmetric uncertainty propagation. To this end, we develop an adaptive optimization strategy which refines the approximation quality separately for each state variable using suitable error estimates. The benefits are twofold: we obtain additional means for solution verification and reduce the computational effort for finding an approximate solution with increased precision. The algorithmic contribution is complemented by a convergence proof showing that the solutions of the optimal control problem after application of the polynomial chaos method approach the correct solution for increasing expansion orders. To obtain a further speed-up in solution time, we develop a structure-exploiting algorithm for the fast derivative generation. The algorithm makes use of the special structure induced by the spectral projection to reuse model derivatives and exploit sparsity information leading to a fast automatic sensitivity generation. This greatly reduces the computational effort of Newton-type methods for the solution of the resulting high-dimensional surrogate problem. Another challenging topic of this thesis are optimal control problems with chance constraints, which form a probabilistic robustification of the solution that is neither too conservative nor underestimates the risk. 
We develop an efficient method based on the polynomial chaos expansion to compute nonlinear propagations of the reachable sets of all uncertain states and show how it can be used to approximate individual and joint chance constraints. The strength of the obtained estimator in guaranteeing a satisfaction level is supported by providing an a-priori error estimate with exponential convergence in case of sufficiently smooth solutions. All methods developed in this thesis are readily implemented in state-of-the-art direct methods to optimal control. Their performance and suitability for optimal control problems is evaluated in a numerical case study on two nonlinear real-world problems using Monte Carlo simulations to illustrate the effects of the propagated uncertainty on the optimal control solution. As an industrial application, we solve a challenging optimal control problem modeling an adsorption refrigeration system under uncertainty
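The core polynomial chaos mechanism can be sketched in a few lines (a minimal illustration with a scalar standard-normal parameter, not the thesis implementation): expand a nonlinear map in probabilists' Hermite polynomials and read off the statistics of the output from the coefficients.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Polynomial chaos sketch: for xi ~ N(0,1) and a nonlinear map f,
#   f(xi) ~ sum_i c_i He_i(xi),   c_i = E[f(xi) He_i(xi)] / i!,
# using that the probabilists' Hermite polynomials satisfy E[He_i^2] = i!.
f = np.exp                                   # nonlinear, asymmetric map
order = 8

nodes, weights = hermegauss(32)              # probabilists' Gauss-Hermite
weights = weights / np.sqrt(2.0 * np.pi)     # -> expectations under N(0,1)

coeffs = np.array([
    np.sum(weights * f(nodes) * hermeval(nodes, np.eye(order + 1)[i]))
    / math.factorial(i)
    for i in range(order + 1)
])

mean = coeffs[0]                             # E[f(xi)] = c_0
var = sum(c * c * math.factorial(i)          # Var[f(xi)] = sum_{i>0} c_i^2 i!
          for i, c in enumerate(coeffs) if i > 0)

print(mean, var)   # for f = exp: mean -> e^(1/2), var -> e^2 - e
```

The asymmetry of `exp` is why a high expansion order matters: the odd-order coefficients carry the skew that a low-order (e.g. linearized) propagation would miss.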

    An inexact interior-point algorithm for conic convex optimization problems

    In this dissertation we study an algorithm for convex optimization problems in conic form. (Without loss of generality, any convex problem can be written in conic form.) Our algorithm belongs to the class of interior-point methods (IPMs), which have been associated with many recent theoretical and algorithmic advances in mathematical optimization. In an IPM one solves a family of slowly varying optimization problems that converge in some sense to the original optimization problem. Each problem in the family depends on a so-called barrier function that is associated with the problem data. Typically, IPMs require evaluation of the gradient and Hessian of a suitable ("self-concordant") barrier function. In some cases such evaluation is expensive; in other cases closed-form formulas for a suitable barrier function and its derivatives are unknown. We show that even if the gradient and Hessian of a suitable barrier function are computed inexactly, the resulting IPM can retain the desirable properties of polynomial iteration complexity and global convergence to the optimal solution set. In practice the best IPMs are primal-dual methods, in which a convex problem is solved together with its dual, which is another convex problem. One downside of existing primal-dual methods is their need to evaluate a suitable barrier function, or its derivatives, for the dual problem. Such evaluation can be even more difficult than that required for the barrier function associated with the original problem. Our primal-dual IPM does not suffer from this drawback: it does not require exact evaluation, or even estimates, of a suitable barrier function for the dual problem. Given any convex optimization problem, Nesterov and Nemirovski showed that there exists a suitable barrier function, which they called the universal barrier function.
Since this function and its derivatives may not be available in closed form, we explain how a Monte Carlo method can be used to estimate the derivatives. We make probabilistic statements regarding the errors in these estimates, and give an upper bound on the minimum Monte Carlo sample size required to ensure that, with high probability, our primal-dual IPM possesses polynomial iteration complexity and global convergence.
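The robustness of barrier path-following to inexact derivatives can be illustrated with a deliberately simple toy (our sketch, not the dissertation's algorithm): a log-barrier method for a one-dimensional problem in which the barrier gradient and Hessian are perturbed by fixed relative errors, standing in for Monte Carlo estimates.

```python
# Toy log-barrier IPM for  min x  subject to  0 <= x <= 1, using
# *inexact* barrier derivatives (fixed relative errors err_g, err_h
# mimic estimated gradients/Hessians).  Despite the inexactness, the
# iterates track the central path x*(t) ~ 1/t toward the minimizer 0.

def barrier_derivs(x, err_g=0.05, err_h=-0.03):
    """Inexactly evaluated F'(x), F''(x) for F(x) = -log(x) - log(1-x)."""
    g = -1.0 / x + 1.0 / (1.0 - x)
    h = 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2
    return (1.0 + err_g) * g, (1.0 + err_h) * h

x, t = 0.5, 1.0
for _ in range(25):                 # outer loop: increase barrier weight t
    for _ in range(20):             # inner loop: damped Newton on t*x + F(x)
        g, h = barrier_derivs(x)
        step = -(t + g) / h         # Newton step for phi_t(x) = t*x + F(x)
        while not 0.0 < x + step < 1.0:
            step *= 0.5             # damp to stay strictly feasible
        x += step
    t *= 2.0

print(x)   # tiny: the inexact iteration still followed the path to ~0
```

The perturbed derivatives shift each central-path point slightly (here to roughly 1.05/t instead of 1/t), but do not destroy convergence; the dissertation's contribution is to quantify exactly how much inexactness an IPM with provable complexity can tolerate.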