81 research outputs found

    Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints

    This paper investigates the relation between sequential convex programming (SCP) as, e.g., defined in [24] and DC (difference of two convex functions) programming. We first present an SCP algorithm for solving nonlinear optimization problems with DC constraints and prove its convergence. Then we combine the proposed algorithm with a relaxation technique to handle inconsistent linearizations. Numerical tests are performed to investigate the behaviour of this class of algorithms.
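    A minimal sketch of the SCP idea for a DC constraint g(x) - h(x) <= 1 with g and h convex: at the current iterate, the concave part -h(x) is replaced by its linearization, which turns the subproblem into a convex program. The toy data, the choice h(x) = ||x||_1, and the use of CVXPY are illustrative assumptions, not the paper's algorithm, which additionally handles inconsistent linearizations via a relaxation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
c = rng.standard_normal(n)

def h(x):                   # convex term entering the constraint with a minus sign
    return np.abs(x).sum()  # h(x) = ||x||_1

def h_subgrad(x):           # a subgradient of h
    return np.sign(x)

x_k = np.ones(n)            # starting point, feasible for the first subproblem
for _ in range(20):
    x = cp.Variable(n)
    # DC constraint ||x||_2^2 - ||x||_1 <= 1: the concave part is linearized
    # at x_k, so the constraint below is convex and the subproblem is tractable.
    h_lin = h(x_k) + h_subgrad(x_k) @ (x - x_k)
    prob = cp.Problem(cp.Minimize(c @ x), [cp.sum_squares(x) - h_lin <= 1])
    prob.solve()
    if np.linalg.norm(x.value - x_k) <= 1e-6 * (1 + np.linalg.norm(x_k)):
        break
    x_k = x.value
print("SCP estimate:", np.round(x_k, 4))
```

    Because the linearization minorizes the convex function h, each iterate remains feasible for the next subproblem and the objective value is non-increasing, which is the kind of argument underlying convergence proofs for such methods.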

    Méthodes sans factorisation pour l’optimisation non linéaire

    This thesis focuses on the mathematical formulation, analysis and implementation of two factorization-free methods for nonlinear constrained optimization. In large-scale optimization, the Jacobian of the constraints may not be available in matrix form; only its action and that of its transpose on a vector are. Factorization-free optimization employs abstract linear operators representing the Jacobian or Hessian matrices. Therefore, only operator-vector products are allowed and direct linear algebra is replaced by iterative methods. Besides these implementation restrictions, a difficulty inherent to factorization-free optimization algorithms is the control of the inaccuracy of linear system solves: we have to guarantee that the computed direction is sufficiently accurate to ensure convergence. We first describe a factorization-free implementation of a classical augmented Lagrangian method that may use quasi-Newton approximations of second derivatives. This method is applied to problems with thousands of variables and constraints arising from aircraft structural design optimization, for which factorization-based methods fail. Results show that it is a viable approach for these problems. In order to obtain a method with a faster convergence rate, we then present an algorithm that uses a proximal augmented Lagrangian as merit function and asymptotically turns into a stabilized sequential quadratic programming method. The use of limited-memory BFGS approximations of the Hessian of the Lagrangian, combined with regularization of the constraints, leads to symmetric quasi-definite linear systems. Because such systems may be interpreted as the KKT conditions of linear least-squares problems, they can be solved efficiently using an appropriate Krylov method, and the inaccuracy of their solutions is controlled by a stopping criterion that is easy to implement. Numerical tests demonstrate the effectiveness and robustness of our method, which compares very favorably with IPOPT, especially on degenerate problems for which the LICQ is not satisfied at the optimal solution or during the minimization process. Finally, we present NLP.py, an ecosystem for optimization algorithm development in Python aimed at researchers in optimization and at students eager to discover or deepen their knowledge of optimization. NLP.py provides access to a set of building blocks constituting the most important elements of continuous optimization methods; with these blocks, users can implement their own algorithms while focusing on the logic of the algorithm rather than on the technicalities of its implementation.
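    A minimal sketch of the factorization-free setting, assuming SciPy's LinearOperator and the Krylov least-squares solver LSMR: the Jacobian is available only through products with vectors, and a regularized least-squares step is computed inexactly. The dense stand-in Jacobian, the damping parameter, and the tolerances are illustrative, not the thesis's actual operators or stopping criterion.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

m, n = 100, 40
rng = np.random.default_rng(1)
J_mat = rng.standard_normal((m, n))   # stand-in for an implicit Jacobian
r = rng.standard_normal(m)            # current residual vector

# Only the actions of J and J^T are exposed; the matrix is never formed
# or factorized.
J = LinearOperator((m, n),
                   matvec=lambda v: J_mat @ v,
                   rmatvec=lambda w: J_mat.T @ w)

# Inexact solve of min_d ||J d + r||^2 + damp^2 ||d||^2 by a Krylov method;
# atol/btol play the role of an easy-to-implement inexactness control.
d, istop, itn = lsmr(J, -r, damp=1e-4, atol=1e-8, btol=1e-8)[:3]
print(f"LSMR stopped with flag {istop} after {itn} iterations")
```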

    Distributed Optimization with Application to Power Systems and Control

    In many engineering domains, systems are composed of partially independent subsystems: power systems are composed of distribution and transmission systems, teams of robots are composed of individual robots, and chemical process systems are composed of vessels, heat exchangers and reactors. Often, these subsystems should reach a common goal, such as satisfying a power demand at minimum cost, flying in formation, or reaching an optimal set-point. At the same time, limited information exchange is desirable, both for confidentiality reasons and because of communication constraints. Moreover, a fast and reliable decision process is key, as applications might be safety-critical. Mathematical optimization techniques are among the most successful tools for controlling systems optimally with feasibility guarantees. Yet, they are often centralized: all data has to be collected in one central and computationally powerful entity. Methods from distributed optimization control the subsystems in a distributed or decentralized fashion, reducing or avoiding central coordination. These methods have a long and successful history. Classical distributed optimization algorithms, however, are typically designed for convex problems, so they are only partially applicable in the above domains, since many of them lead to optimization problems with non-convex constraints. This thesis develops one of the first frameworks for distributed and decentralized optimization with non-convex constraints. Based on the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) algorithm, a bi-level distributed ALADIN framework is presented, solving the coordination step of ALADIN in a decentralized fashion. This framework can accommodate various decentralized inner algorithms, two of which we develop here: a decentralized variant of the Alternating Direction Method of Multipliers (ADMM) and a novel decentralized conjugate gradient algorithm. Decentralized conjugate gradient is, to the best of our knowledge, the first decentralized algorithm with a guarantee of convergence to the exact solution in a finite number of iterations. Sufficient conditions for fast local convergence of bi-level ALADIN are derived. Bi-level ALADIN strongly reduces the communication and coordination effort of ALADIN and preserves its fast convergence guarantees. We illustrate these properties on challenging problems from power systems and control, and compare performance to the widely used ADMM. The developed methods are implemented in the open-source MATLAB toolbox ALADIN-α, one of the first toolboxes for decentralized non-convex optimization. ALADIN-α comes with a rich set of application examples from different domains, showing its broad applicability. As an additional contribution, this thesis provides new insights into why state-of-the-art distributed algorithms might encounter issues on constrained problems.
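    For context, a minimal sketch of classical consensus ADMM, the convex baseline the thesis compares against, on a hypothetical two-agent quadratic problem; the agent data and penalty parameter are illustrative assumptions, and ALADIN itself involves a more elaborate coordination step.

```python
import numpy as np

# Two agents with private objectives f_i(x) = 0.5 * ||x - a_i||^2;
# the consensus minimizer of f_1 + f_2 is the average of a_1 and a_2.
a = [np.array([1.0, 0.0]), np.array([0.0, 3.0])]
rho = 1.0                             # ADMM penalty parameter
z = np.zeros(2)                       # shared consensus variable
x = [np.zeros(2) for _ in range(2)]   # local copies, one per agent
u = [np.zeros(2) for _ in range(2)]   # scaled dual variables

for _ in range(50):
    # local step: each agent minimizes f_i + (rho/2)||x_i - z + u_i||^2
    for i in range(2):
        x[i] = (a[i] + rho * (z - u[i])) / (1.0 + rho)
    # coordination step: plain averaging (the step bi-level methods decentralize)
    z = np.mean([x[i] + u[i] for i in range(2)], axis=0)
    # dual update, kept local to each agent
    for i in range(2):
        u[i] += x[i] - z

print("consensus point:", z)          # approaches (0.5, 1.5)
```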

    Constrained optimization in seismic reflection tomography: a Gauss-Newton augmented Lagrangian approach

    Seismic reflection tomography is a method for determining a subsurface velocity model from the traveltimes of seismic waves reflected at geological interfaces. From an optimization viewpoint, the problem consists in minimizing a non-linear least-squares function measuring the mismatch between observed traveltimes and those calculated by ray tracing in this model. The introduction of a priori information on the model is crucial to reduce the under-determination. The contribution of this paper is to introduce a technique able to take geological a priori information into account in the reflection tomography problem, expressed as inequality constraints in the optimization problem. This technique is based on a Gauss-Newton (GN) sequential quadratic programming approach. At each GN step, a solution to a convex quadratic optimization problem subject to linear constraints is computed by an augmented Lagrangian algorithm. Our choice of this optimization method is motivated and its original aspects are described. First applications to real data sets are presented to illustrate the potential of the approach in the practical use of reflection tomography.
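    A schematic of the GN loop described above: at each step the residuals are linearized and a convex QP with linear inequality constraints is solved. The hypothetical forward model, the box constraints, and the use of CVXPY in place of the paper's augmented Lagrangian QP solver are illustrative assumptions; globalization (line search or trust region) is omitted.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, p = 8, 20                            # model size, number of traveltimes
W = rng.standard_normal((p, n))         # hypothetical model operator
t_obs = rng.standard_normal(p)          # synthetic "observed" traveltimes

def residual(m):                        # r(m) = T(m) - t_obs, toy forward model
    return np.tanh(W @ m) - t_obs

def jacobian(m):                        # J(m) = diag(1 - tanh(Wm)^2) W
    s = 1.0 - np.tanh(W @ m) ** 2
    return s[:, None] * W

A = np.vstack([np.eye(n), -np.eye(n)])  # a priori box constraints as A m <= b
b = np.ones(2 * n)

m_k = np.zeros(n)
for _ in range(10):
    r, J = residual(m_k), jacobian(m_k)
    d = cp.Variable(n)
    # convex QP subproblem of the GN step; the constraints stay linear in d
    qp = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(J @ d + r)),
                    [A @ (m_k + d) <= b])
    qp.solve()
    m_k = m_k + d.value                 # full step; globalization omitted
print("model estimate:", np.round(m_k, 3))
```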

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program, both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Optimisation for image processing

    The main purpose of optimisation in image processing is to compensate for missing or corrupted image data, or to find good correspondences between input images. Image data is essentially infinite-dimensional and must be discretised at some level of resolution. Most image processing methods find a suboptimal solution, given the characteristics of the problem. While the general optimisation literature is vast, there does not seem to be an accepted universal method for all image problems. In this thesis, we consider three interrelated optimisation approaches that exploit the problem structure of various relaxations of three common image processing problems:

    1. The first approach, to the image registration problem, is based on a nonlinear programming model. Image registration is an ill-posed problem and suffers from many undesired local optima. In order to exclude these unwanted solutions, certain regularisers or constraints are needed. In this thesis, prior knowledge of rigid structures in the images is included in the problem using linear and bilinear constraints. The aim is to match two images while maintaining the rigid structure of certain parts of the images. A sequential quadratic programming algorithm employing dimensional reduction is used to solve the resulting discretised constrained optimisation problem. We show that pre-processing of the constraints can reduce the problem dimensionality. Experimental results demonstrate better performance of our proposed algorithm compared to current methods.

    2. The second approach is based on discrete Markov Random Fields (MRFs). MRFs have been used successfully in machine learning, artificial intelligence and image processing, including for the image registration problem. In the discrete MRF model, the domain of the image problem is fixed (relaxed) to a certain range, so the optimal solution to the relaxed problem can be sought within this predefined domain. The original discrete MRF problem is NP-hard, and relaxations are needed to obtain a suboptimal solution in polynomial time. One popular approach is the linear programming (LP) relaxation. However, the LP relaxation of MRF (LP-MRF) is excessively high-dimensional and contains sophisticated constraints, so even one iteration of a standard LP solver (e.g. an interior-point algorithm) may take too long to terminate. The dual decomposition technique has been used to formulate a convex, non-differentiable dual of LP-MRF that has geometrical advantages. This has led to the development of first-order methods that take the MRF structure into account. The methods considered in this thesis for solving the dual LP-MRF are the projected subgradient method and mirror descent with nonlinear weighted distance functions (see the sketch after this abstract). An analysis of the convergence properties of these methods is provided, along with improved convergence rate estimates. Experiments on synthetic data and an image segmentation problem show promising results.

    3. The third approach employs a hierarchy of models of the problem for computing the search directions. The first two approaches are specialised methods for image problems at a fixed level of discretisation. As input images are infinite-dimensional, all computational methods require their discretisation at some level. High-resolution images carry more information, but they lead to very large-scale, ill-posed optimisation problems; by contrast, a coarse discretisation loses information but has low computational cost. In addition, a coarser representation of a fine image problem can be treated as a relaxation of the problem, i.e. the coarse problem is less ill-conditioned. Therefore, propagating a solution of a good coarse approximation to the fine problem can potentially improve the fine-level iterates. With the aim of utilising coarse-level information within the fine-level process, we propose a multilevel optimisation method to solve the convex composite optimisation problem, which consists of minimising the sum of a smooth convex function and a simple non-smooth convex function. The method alternates between fine and coarse levels of discretisation in the sense that the search direction is computed using information from either the gradient or a solution of the coarse model. We show that the proposed algorithm is a contraction on the optimal solution and demonstrate excellent performance in experiments with image restoration problems.
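    The dual LP-MRF itself is large and highly structured; as a generic stand-in for the first-order methods mentioned in the second item above, here is mirror descent with the entropic distance on the probability simplex applied to a small nonsmooth convex objective. The data, objective and step-size rule are illustrative assumptions, not the thesis's MRF experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 15, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def f(x):                      # nonsmooth convex objective f(x) = ||Ax - b||_1
    return np.abs(A @ x - b).sum()

def f_subgrad(x):              # a subgradient of f
    return A.T @ np.sign(A @ x - b)

x = np.full(n, 1.0 / n)        # start at the center of the simplex
best = f(x)
for k in range(1, 201):
    g = f_subgrad(x)
    t = 0.1 / np.sqrt(k)       # classical diminishing step size
    w = x * np.exp(-t * g)     # entropic mirror step: multiplicative update...
    x = w / w.sum()            # ...followed by renormalization onto the simplex
    best = min(best, f(x))
print("best objective value on the simplex:", round(best, 4))
```

    With the entropic distance, the mirror step stays on the simplex by construction, which is what makes nonlinear distance functions attractive for the normalization constraints appearing in LP-MRF.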