A Preconditioned Inexact Active-Set Method for Large-Scale Nonlinear Optimal Control Problems
We provide a global convergence proof of the recently proposed sequential
homotopy method with an inexact Krylov--semismooth-Newton method employed as a
local solver. The resulting method constitutes an active-set method in function
space. After discretization, it allows for efficient application of
Krylov-subspace methods. For a certain class of optimal control problems with
PDE constraints, in which the control enters the Lagrangian only linearly, we
propose and analyze an efficient, parallelizable, symmetric positive definite
preconditioner based on a double Schur complement approach. We conclude with
numerical results for an ill-conditioned and highly nonlinear benchmark
optimization problem with elliptic partial differential equations and control
bounds. The resulting method is faster than using direct linear algebra for the
2D benchmark and allows for the parallel solution of large 3D problems.
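The pipeline described above (Krylov-subspace methods applied after discretization, accelerated by a symmetric positive definite preconditioner) can be sketched in a deliberately simplified form. The sketch below uses a 1D Laplacian as a stand-in for the discretized elliptic operator and a plain Jacobi preconditioner; the paper's double-Schur-complement preconditioner is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# 1D Laplacian: a toy stand-in for a discretized elliptic PDE operator (SPD).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1) ** 2
b = np.ones(n)

# Matrix-free Jacobi (diagonal) preconditioner wrapped as a LinearOperator.
# It is merely illustrative; an effective preconditioner for this problem
# class is exactly what the paper constructs.
d_inv = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: d_inv * v)

# MINRES is a Krylov method for symmetric systems; M approximates A^{-1}.
x, info = spla.minres(A, b, M=M, maxiter=2000)
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Only operator applications of `A` and `M` are needed, which is what makes the approach attractive once direct linear algebra becomes too expensive in 3D.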
Global convergence of a stabilized sequential quadratic semidefinite programming method for nonlinear semidefinite programs without constraint qualifications
In this paper, we propose a new sequential quadratic semidefinite programming
(SQSDP) method for solving nonlinear semidefinite programs (NSDPs), in which we
produce iteration points by solving a sequence of stabilized quadratic
semidefinite programming (QSDP) subproblems, which we derive from the minimax
problem associated with the NSDP. Unlike existing SQSDP methods, the proposed one allows those QSDP subproblems to be solved only approximately while still ensuring global convergence. Another remarkable feature of the proposed method is that no constraint qualifications (CQs) are required in the global convergence analysis. Specifically, under some assumptions that involve no CQs,
we prove the global convergence to a point satisfying any of the following: the
stationary conditions for the feasibility problem; the
approximate-Karush-Kuhn-Tucker (AKKT) conditions; the trace-AKKT conditions.
The latter two are optimality conditions for the NSDP presented by Andreani et al. (2018) in place of the Karush-Kuhn-Tucker conditions. Finally, we conduct numerical experiments to examine the efficiency of the proposed method.
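As a minimal illustration (a scalar toy problem, not the paper's semidefinite setting) of why AKKT-type conditions matter when CQs fail: for minimize f(x) = x subject to g(x) = x² ≤ 0, the only feasible point is x* = 0, where no KKT multiplier exists, yet an AKKT sequence drives both residuals to zero.

```python
# Residuals of the approximate KKT conditions for f(x) = x, g(x) = x**2 <= 0.
def residuals(x, lam):
    grad_lag = 1.0 + lam * 2.0 * x   # |f'(x) + lam * g'(x)|
    compl = lam * x**2               # |lam * g(x)|, approximate complementarity
    return abs(grad_lag), abs(compl)

# At x* = 0 the KKT condition 1 + lam * 0 = 0 has no solution, but along
# x_k = -1/k with multipliers lam_k = k/2 >= 0 both residuals vanish,
# so x* = 0 satisfies the AKKT conditions.
res = [residuals(-1.0 / k, k / 2.0) for k in (10, 100, 1000)]
```

The unbounded multiplier sequence is precisely the degeneracy that CQ-free analyses such as the one in this paper are designed to accommodate.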
Optimization of Rail Profiles to Improve Vehicle Running Stability in the Switch Panel of High-Speed Railway Turnouts
A method for optimizing rail profiles to improve vehicle running stability in the switch panel of high-speed railway turnouts is proposed in this paper. The stock rail profiles are optimized to decrease the rolling radii difference (RRD). The RRD characteristics are determined by the given rail profiles, and the target RRD is defined as a function of the lateral displacement of the wheelset. An improved sequential quadratic programming (SQP) method is used to generate a sequence of improving profiles leading to the optimal one. The wheel-rail contact geometry and train-turnout dynamic interaction are calculated for the optimized profiles and for the nominal profiles for comparison. Without lateral displacement of the wheelset, the maximum RRD for a nominal profile remains within 0.5 mm–1 mm, while that for an optimized profile remains within 0.3 mm–0.5 mm. For the facing and trailing moves of a vehicle passing through the switch panel in the through route, the lateral wheel-rail contact force is decreased by 34.0% and 29.9%, respectively, and the lateral acceleration of the car body is decreased by 41.9% and 40.7%, respectively, while the optimized profile does not greatly influence the vertical wheel-rail contact force. The proposed method works efficiently and the results prove to be quite reasonable.
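The optimization loop described above can be sketched schematically: an SQP-type solver (here SciPy's SLSQP, a stand-in for the paper's improved SQP method) tunes profile parameters so that a modeled RRD tracks the target curve over the lateral wheelset displacements. The surrogate model, parameterization, and numbers below are all made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

y = np.linspace(-8.0, 8.0, 33)   # lateral wheelset displacement grid [mm]
target = 0.05 * y                # assumed target RRD curve [mm]

def rrd(y, p):
    # Toy odd-polynomial surrogate for the contact geometry; the paper
    # evaluates the true rolling radii difference from the rail profile.
    return p[0] * y + p[1] * y**3

def objective(p):
    # Squared deviation of the modeled RRD from the target RRD.
    return np.sum((rrd(y, p) - target) ** 2)

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
               bounds=[(-1.0, 1.0), (-1.0, 1.0)])
```

Each SLSQP iteration solves a QP model of this least-squares objective, which is the same "sequence of improving profiles" structure as in the paper, only with a trivial surrogate in place of the wheel-rail contact computation.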
Factorization-free methods for nonlinear optimization (Méthodes sans factorisation pour l'optimisation non linéaire)
ABSTRACT: This thesis focuses on the mathematical formulation, analysis, and implementation of two factorization-free methods for nonlinear constrained optimization. In large-scale optimization, the Jacobian of the constraints may not be available in matrix form; only its action and that of its transpose on a vector are. Factorization-free optimization employs abstract linear operators representing the Jacobian or Hessian matrices. Therefore, only operator-vector products are allowed and direct linear algebra is replaced by iterative methods. Besides these implementation restrictions, a difficulty inherent to factorization-free optimization algorithms is the control of the inaccuracy of linear system solves. Indeed, we have to guarantee that the computed direction is sufficiently accurate to ensure convergence. We first describe a factorization-free implementation of a classical augmented Lagrangian method that may use quasi-Newton approximations of second derivatives.
This method is applied to problems with thousands of variables and constraints arising from aircraft structural design optimization, for which factorization-based methods fail. Results show that it is a viable approach for these problems. In order to obtain a method with a faster convergence rate, we present an algorithm that uses a proximal augmented Lagrangian as merit function and that asymptotically turns into a stabilized sequential quadratic programming method. The use of limited-memory BFGS approximations of the Hessian of the Lagrangian, combined with regularization of the constraints, leads to symmetric quasi-definite linear systems. Because such systems may be interpreted as the KKT conditions of linear least-squares problems, they can be solved efficiently using an appropriate Krylov method. The inaccuracy of their solutions is controlled by a stopping criterion that is easy to implement. Numerical tests demonstrate the effectiveness and robustness of our method, which compares very favorably with IPOPT, especially on degenerate problems for which the LICQ is not satisfied at the optimal solution or during the minimization process. Finally, an ecosystem for optimization algorithm development in Python, code-named NLP.py, is presented. This environment is aimed at researchers in optimization and at students eager to discover or deepen their knowledge of optimization. NLP.py provides access to a set of building blocks constituting the most important elements of continuous optimization methods. With these blocks, users can implement their own algorithms while focusing on the logic of the algorithm rather than on the technicalities of its implementation.
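The core factorization-free idea, a Jacobian available only through products J v and Jᵀ w, handed to a Krylov method instead of a direct factorization, can be sketched with SciPy's abstract `LinearOperator` (toy random data; this is not NLP.py code).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))   # stands in for a Jacobian we never factorize
b = rng.standard_normal(50)

# Only the action of J and J^T on vectors is exposed, never the matrix itself.
J = LinearOperator((50, 20),
                   matvec=lambda v: A @ v,
                   rmatvec=lambda w: A.T @ w)

# LSQR solves min ||J x - b|| using only these operator-vector products,
# mirroring how the thesis replaces direct linear algebra with Krylov solves.
x = lsqr(J, b, atol=1e-12, btol=1e-12)[0]
```

The `atol`/`btol` arguments play the role of the inexactness control discussed in the abstract: loosening them gives cheaper, less accurate directions.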
Convergence properties of a stabilized SQP method for mathematical programs with equilibrium constraints (Propriedades de convergência de um método PQS estabilizado para problemas matemáticos com condições de equilíbrio)
Advisor: Prof. Dr. Ademir Alves Ribeiro. Co-advisor: Prof. Dr. José Alberto Ramos Flor. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Matemática. Defended: Curitiba, 29/01/2019. Includes references. Abstract: Mathematical Programs with Equilibrium Constraints (MPECs) have the particularity of not satisfying the usual constraint qualifications for standard nonlinear optimization. This presents a difficulty when attempting to solve MPECs with the usual nonlinear optimization methods. Recently, considering the MPEC-LICQ constraint qualification, an adaptation to MPECs of the usual linear independence constraint qualification (LICQ), Izmailov, Solodov, and Uskov proved that first-order augmented Lagrangian methods converge to C-stationary points, which are weaker than KKT points. Later, Andreani, Secchin, and Silva improved this result, showing that second-order augmented Lagrangian methods can be guaranteed to converge to at least M-stationary points, which are stronger than C-stationary but still weaker than KKT. In addition, they showed that under the MPEC-RCPLD constraint qualification, which is weaker than MPEC-LICQ, and assuming that a certain multiplier sequence is bounded, convergence to S-stationary points, which are equivalent to KKT points, can be proved. In this work we show that these results are not exclusive to the augmented Lagrangian method. We provide a method based on second-order stabilized sequential quadratic programming that can be applied to MPECs and achieves equivalent results. Thus we show that, under MPEC-LICQ, the stabilized SQP method also guarantees convergence to M-stationary points. Moreover, we show that under MPEC-RCPLD and a boundedness property related to the complementarity constraint multiplier, the method also converges to S-stationary points.
Numerical tests were performed to validate the theoretical results. Keywords: Mathematical Programs with Equilibrium Constraints. Nonlinear programming. Constrained optimization. Stabilized Sequential Quadratic Programming. M-stationarity.
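A hedged toy example (not from the thesis) shows the MPEC structure the abstract refers to: minimize (x − 1)² + (y − 1)² subject to the complementarity constraint x ≥ 0, y ≥ 0, x·y = 0. At feasible points the constraint gradients are degenerate, which is why the standard CQs fail; one simple workaround, used here only for illustration, is to solve each complementarity branch (x = 0 or y = 0) as an ordinary NLP and keep the best.

```python
from scipy.optimize import minimize

def f(x, y):
    # Objective of the toy MPEC.
    return (x - 1.0) ** 2 + (y - 1.0) ** 2

# Branch "y = 0, x >= 0 free" and branch "x = 0, y >= 0 free".
rx = minimize(lambda t: f(t[0], 0.0), x0=[0.5], method="SLSQP",
              bounds=[(0.0, None)])
ry = minimize(lambda t: f(0.0, t[0]), x0=[0.5], method="SLSQP",
              bounds=[(0.0, None)])

candidates = [(rx.fun, (rx.x[0], 0.0)), (ry.fun, (0.0, ry.x[0]))]
best_fun, best_point = min(candidates, key=lambda c: c[0])
# Both branches reach f = 1, at (1, 0) and (0, 1): two stationary candidates.
```

The stabilized SQP method studied in the thesis avoids this combinatorial branch enumeration and instead establishes convergence to M- or S-stationary points directly, under MPEC-tailored CQs.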
Second-Derivative SQP Methods for Large-Scale Nonconvex Optimization
Stabilized sequential quadratic programming (SQP) methods for nonlinearly constrained optimization solve a sequence of related quadratic programming (QP) subproblems formed from a two-norm penalized quadratic model of the Lagrangian function subject to shifted, linearized constraints. While these methods have been shown to exhibit superlinear local convergence even when the constraint Jacobian is rank deficient at the solution, they generally have no global convergence theory. To address this issue, primal-dual SQP methods (pdSQP) employ a certain primal-dual augmented Lagrangian merit function and solve a subproblem that involves the minimization of a quadratic model of the merit function subject to simple bound constraints. The model of the merit function is constructed so that the resulting primal-dual subproblem is equivalent to the stabilized SQP subproblem. When used in conjunction with a flexible line search, the merit function guarantees convergence from any starting point, while the connection with the stabilized subproblem allows pdSQP to retain the superlinear local convergence that is characteristic of stabilized SQP methods. A new dynamic convexification framework is developed that is applicable to nonconvex general standard-form, stabilized, and primal-dual bound-constrained QP subproblems. Dynamic convexification involves three distinct stages: pre-convexification, concurrent convexification, and post-convexification. New techniques are derived and analyzed for the implicit modification of symmetric indefinite factorizations and for the imposition of temporary artificial constraints, both of which are suitable for pre-convexification. Concurrent convexification works synchronously with the active-set method used to solve the subproblem, and computes the minimal modifications needed to ensure that the QP iterates are uniformly bounded.
Finally, post-convexification defines an implicit modification that ensures that the solution of the subproblem yields a descent direction for the merit function. A new exact second-derivative primal-dual SQP method (dcpdSQP) is formulated for large-scale nonconvex optimization. A convergence analysis is presented that demonstrates guaranteed global convergence. Extensive numerical testing indicates that the performance of the proposed method is comparable to or better than that of conventional full convexification while significantly reducing the number of factorizations required.
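The basic goal of convexification, modifying an indefinite Hessian just enough that the QP model becomes convex, can be sketched with a direct eigenvalue shift. This is an assumption-laden simplification: the paper's framework works implicitly through symmetric indefinite factorizations rather than by computing eigenvalues.

```python
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, -3.0]])                 # indefinite Hessian of the QP model

# Minimal uniform shift H + sigma*I making the model convex: sigma is taken
# from the most negative eigenvalue (plus a tiny margin).
lam_min = np.linalg.eigvalsh(H)[0]          # eigvalsh returns ascending order
sigma = max(0.0, -lam_min + 1e-8)
H_convex = H + sigma * np.eye(2)
```

"Full convexification" in the numerical comparison corresponds to always applying such a global modification; the dynamic framework instead applies the smallest modification the current stage actually requires.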
A Globally Convergent Stabilized SQP Method
Sequential quadratic programming (SQP) methods are a popular class of methods for nonlinearly constrained optimization. They are particularly effective for solving a sequence of related problems, such as those arising in mixed-integer nonlinear programming and the optimization of functions subject to differential equation constraints. Recently, there has been considerable interest in the formulation of stabilized SQP methods, which are specifically designed to handle degenerate optimization problems. Existing stabilized SQP methods are essentially local in the sense that both the formulation and analysis focus on the properties of the methods in a neighborhood of a solution. A new SQP method is proposed that has favorable global convergence properties yet, under suitable assumptions, is equivalent to a variant of the conventional stabilized SQP method in the neighborhood of a solution. The method combines a primal-dual generalized augmented Lagrangian function with a flexible line search to obtain a sequence of improving estimates of the solution. The method incorporates a convexification algorithm that allows the use of exact second derivatives to define a convex quadratic programming (QP) subproblem without requiring that the Hessian of the Lagrangian be positive definite in the neighborhood of a solution. This gives the potential for fast convergence in the neighborhood of a solution. Additional benefits of the method are that each QP subproblem is regularized and the QP subproblem always has a known feasible point. Numerical experiments are presented for a subset of the problems from the CUTEr test collection. © 2013 Society for Industrial and Applied Mathematics
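The regularization at the heart of the stabilized subproblem can be illustrated with made-up data (equality constraints only; the actual method adds the merit function, line search, and convexification described above). The −μI block keeps the KKT system nonsingular even when the constraint Jacobian is rank deficient, which is exactly the degenerate situation the method targets.

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # Hessian of the Lagrangian (SPD here)
J = np.array([[1.0, 1.0],
              [2.0, 2.0]])      # rank-deficient constraint Jacobian (rank 1)
g = np.array([1.0, -2.0])       # objective gradient at the current iterate
c = np.array([0.5, 1.0])        # constraint values (consistent with rank(J)=1)
mu = 1e-4                       # stabilization parameter

# Regularized (symmetric quasi-definite) KKT system of the stabilized step:
#   [ H   J^T ] [ d  ]   [ -g ]
#   [ J  -mu*I] [ dy ] = [ -c ]
K = np.block([[H, J.T], [J, -mu * np.eye(2)]])
rhs = -np.concatenate([g, c])
step = np.linalg.solve(K, rhs)  # solvable despite rank(J) < 2
d, dy = step[:2], step[2:]
```

With μ = 0 this matrix would be singular for the rank-deficient J above; the stabilization trades exact constraint linearization for a well-posed, regularized subproblem with a known feasible point.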