5 research outputs found

    Méthodes sans factorisation pour l’optimisation non linéaire

    ABSTRACT: This thesis focuses on the mathematical formulation, analysis, and implementation of two factorization-free methods for nonlinear constrained optimization. In large-scale optimization, the Jacobian of the constraints is often not available in matrix form; only its action and that of its transpose on a vector are. Factorization-free optimization therefore employs abstract linear operators representing the Jacobian or Hessian. Only operator-vector products are allowed, and direct linear algebra must be replaced by iterative methods. Beyond these implementation restrictions, a central difficulty in introducing factorization-free methods into optimization algorithms is controlling the inexactness of the linear-system solves: the computed direction must be accurate enough to guarantee convergence of the surrounding algorithm.
    We first describe a factorization-free implementation of a classical augmented Lagrangian method that may use quasi-Newton approximations of the second derivatives. Applied to aircraft structural design optimization problems with thousands of variables and constraints, for which factorization-based methods fail, it proves a viable approach. To obtain a faster convergence rate, we then present an algorithm that uses a proximal augmented Lagrangian as merit function and asymptotically turns into a stabilized sequential quadratic programming method. Limited-memory BFGS approximations of the Hessian of the Lagrangian, combined with regularization of the constraints, lead to symmetric quasi-definite linear systems. Because such systems may be interpreted as the optimality conditions of linear least-squares problems, they can be solved inexactly and efficiently with an appropriate Krylov method; the inexactness of these solves is controlled by a stopping criterion that is easy to implement. Numerical tests demonstrate the effectiveness and robustness of our method, which compares very favorably with IPOPT, especially on degenerate problems for which the LICQ is not satisfied at the solution or during the minimization process.
    Finally, we present NLP.py, an ecosystem for optimization algorithm development in Python aimed both at researchers in optimization and at students eager to discover or deepen their knowledge of optimization. NLP.py provides a set of building blocks constituting the most important elements of continuous optimization methods. With these blocks, users can implement their own algorithm while concentrating on its logic rather than on the technicalities of its implementation.
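The factorization-free setting described above — a Jacobian known only through its action and that of its transpose, with linear systems solved inexactly by a Krylov method under a controllable tolerance — can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the constraint function, the tolerances, and the use of SciPy's `LinearOperator` and LSQR are assumptions standing in for NLP.py's own abstractions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Hypothetical constraints c(x) = (x0^2 + x1 - 1, x0 - x2), whose Jacobian
# is never formed as a matrix: only its action (matvec) and the action of
# its transpose (rmatvec) on a vector are available.
def J_matvec(x, v):
    return np.array([2.0 * x[0] * v[0] + v[1], v[0] - v[2]])

def J_rmatvec(x, w):
    return np.array([2.0 * x[0] * w[0] + w[1], w[0], -w[1]])

x = np.array([1.0, 2.0, 3.0])
c = np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[2]])

# Abstract linear operator: no factorization of J is ever possible here.
J = LinearOperator((2, 3),
                   matvec=lambda v: J_matvec(x, v),
                   rmatvec=lambda w: J_rmatvec(x, w))

# Inexact least-squares solve min ||J d + c|| by a Krylov method (LSQR);
# atol/btol play the role of the adjustable stopping tolerance that
# controls the inexactness of the step.
d = lsqr(J, -c, atol=1e-10, btol=1e-10)[0]
```

Tightening or loosening `atol`/`btol` is the knob that an outer optimization loop would adjust to keep the step accurate enough for convergence.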

    A quasi-Newton strategy for the sSQP method for variational inequality and optimization problems

    The quasi-Newton strategy presented in this paper preserves one of the most important features of the stabilized sequential quadratic programming (sSQP) method: local convergence without constraint-qualification assumptions. It is known that the primal-dual sequence converges quadratically under the second-order sufficient condition alone. In this work, we show that if the matrices are updated by minimizing a Bregman distance (which includes the classical updates), the quasi-Newton version of the method converges superlinearly without introducing further assumptions. We also show that, even when the Lagrange multiplier set is unbounded, the generated matrices satisfy a bounded-deterioration property and the Dennis-Moré condition.
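The "classical updates" covered by the Bregman-distance framework include the standard BFGS formula, which can be stated as a short sketch. This is only an illustration of the classical update itself, not of the paper's sSQP strategy; the matrices and vectors below are made up for the example.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of a Hessian approximation B.
    It can be characterized as the minimizer of a (log-det) Bregman
    distance to B subject to the secant equation B_new @ s == y,
    provided the curvature condition s @ y > 0 holds."""
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)   # remove old curvature along s
            + np.outer(y, y) / (s @ y))     # enforce the secant equation

# Example data (hypothetical step s and gradient difference y).
B0 = np.eye(3)
s = np.array([1.0, 0.0, 2.0])
y = np.array([2.0, 1.0, 1.0])   # s @ y = 4 > 0, so the update is well defined
B1 = bfgs_update(B0, s, y)
```

By construction `B1 @ s` equals `y`, and symmetry of the approximation is preserved.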

    On The Behaviour Of Constrained Optimization Methods When Lagrange Multipliers Do Not Exist

    Sequential optimality conditions are related to stopping criteria for nonlinear programming algorithms. Local minimizers of continuous optimization problems satisfy these conditions without constraint qualifications. It is therefore interesting to discover whether well-known optimization algorithms generate primal-dual sequences that allow one to detect that a sequential optimality condition holds. When this is the case, the algorithm stops with a correct diagnostic of success (convergence). Otherwise, closeness to a minimizer is not detected and the algorithm fails to recognize that a satisfactory solution has been found. In this paper it is shown that a straightforward version of the Newton-Lagrange (sequential quadratic programming) method fails to generate iterates for which a sequential optimality condition is satisfied. On the other hand, a Newtonian penalty-barrier Lagrangian method guarantees that the appropriate stopping criterion eventually holds. © 2013 Taylor & Francis.
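A sequential-optimality stopping test of the kind discussed above measures, along the primal-dual iterates, how close the approximate-KKT residuals are to zero. The helper below is hypothetical — a minimal sketch for equality constraints with an assumed sign convention for the multipliers, not the paper's criterion.

```python
import numpy as np

def akkt_residual(grad_f, Jc, c_val, lam):
    """Approximate-KKT residual for min f(x) s.t. c(x) = 0:
    stationarity ||grad_f + Jc^T lam|| and feasibility ||c(x)||.
    A solver would stop once both fall below a tolerance."""
    stationarity = np.linalg.norm(grad_f + Jc.T @ lam)
    feasibility = np.linalg.norm(c_val)
    return max(stationarity, feasibility)

# Example: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0,
# whose minimizer is (0.5, 0.5) with multiplier lam = -1.
x = np.array([0.5, 0.5])
grad_f = 2.0 * x
Jc = np.array([[1.0, 1.0]])
c_val = np.array([x[0] + x[1] - 1.0])
lam = np.array([-1.0])
res = akkt_residual(grad_f, Jc, c_val, lam)
```

At the minimizer both residuals vanish, so the test reports success; the paper's point is that some methods never drive these residuals to zero along their iterates, while others provably do.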