Quantitative performance modeling of scientific computations and creating locality in numerical algorithms
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. By Sivan Avraham Toledo. Includes bibliographical references (p. 141-150) and index.
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains the scientific program, both in survey form and in full detail, along with information on the social program, the venue, special meetings, and more.
Q(sqrt(-3))-Integral Points on a Mordell Curve
We use an extension of quadratic Chabauty to number fields, recently developed by the author with Balakrishnan, Besser and Müller, combined with a sieving technique, to determine the integral points over Q(√−3) on the Mordell curve y² = x³ − 4.
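The curve in question is easy to explore over ℤ with a naive search; the sketch below (my own illustration, not the paper's method) recovers the familiar integral points, while the quadratic Chabauty machinery of the paper is what is needed to prove the list is complete, and to handle points over Q(√−3) rather than ℤ.

```python
# Brute-force search for Z-integral points on the Mordell curve y^2 = x^3 - 4.
# Only an illustration of the equation studied; proving completeness (and
# working over Q(sqrt(-3))) requires the quadratic Chabauty method.
import math

def integral_points(bound):
    """Return all (x, y) in Z^2 with |x| <= bound satisfying y^2 = x^3 - 4."""
    points = []
    for x in range(-bound, bound + 1):
        rhs = x**3 - 4
        if rhs < 0:
            continue
        y = math.isqrt(rhs)
        if y * y == rhs:
            points.extend([(x, y), (x, -y)] if y else [(x, 0)])
    return points

print(integral_points(100))  # -> [(2, 2), (2, -2), (5, 11), (5, -11)]
```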
Cohomological Aspects of Hamiltonian Group Actions and Toric Varieties
[no abstract available]
Factorization-free methods for nonlinear optimization (Méthodes sans factorisation pour l'optimisation non linéaire)
ABSTRACT: This thesis focuses on the mathematical formulation, analysis and implementation of two factorization-free methods for nonlinear constrained optimization. In large-scale optimization, the Jacobian of the constraints may not be available in matrix form; only its action and that of its transpose on a vector are. Factorization-free optimization employs abstract linear operators representing the Jacobian or Hessian matrices. Therefore, only operator-vector products are allowed, and direct linear algebra is replaced by iterative methods. Besides these implementation restrictions, a major difficulty in factorization-free optimization algorithms is controlling the inaccuracy of the linear system solves: we must guarantee that the computed direction is sufficiently accurate to ensure convergence. We first describe a factorization-free implementation of a classical augmented Lagrangian method that may use quasi-Newton approximations of second derivatives.
This method is applied to problems with thousands of variables and constraints coming from aircraft structural design optimization, for which methods based on factorizations fail. Results show that it is a viable approach for these problems. In order to obtain a method with a faster convergence rate, we then present an algorithm that uses a proximal augmented Lagrangian as merit function and that asymptotically turns into a stabilized sequential quadratic programming method. The use of limited-memory BFGS approximations of the Hessian of the Lagrangian, combined with regularization of the constraints, leads to symmetric quasi-definite linear systems. Because such systems may be interpreted as the KKT conditions of linear least-squares problems, they can be solved efficiently using an appropriate Krylov method. The inaccuracy of their solutions is controlled by a stopping criterion that is easy to implement. Numerical tests demonstrate the effectiveness and robustness of our method, which compares very favorably with IPOPT, especially on degenerate problems for which the LICQ is not satisfied at the optimal solution or during the minimization process. Finally, an ecosystem for optimization algorithm development in Python, code-named NLP.py, is presented. This environment is aimed at researchers in optimization and at students eager to discover or strengthen their knowledge of optimization. NLP.py provides access to a set of building blocks constituting the most important elements of continuous optimization methods. With these blocks, users can implement their own algorithms, focusing on their logic rather than on the technicalities of the implementation.
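The central restriction in this abstract, that the Jacobian or Hessian is available only through operator-vector products, can be sketched with a hand-rolled Krylov solver that touches the matrix solely via a matvec callback. This is a generic illustration under my own naming (`cg`, `laplacian_1d`), not NLP.py's actual API:

```python
# Matrix-free conjugate gradient: the operator A is available only through
# its action v -> A @ v, as in factorization-free optimization. The matrix
# is never formed, so no factorization is possible; only matvecs are used.

def cg(matvec, b, tol=1e-10, maxiter=200):
    """Solve A x = b for symmetric positive definite A given only matvec(v)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                 # residual b - A x, with x = 0 initially
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:  # stopping criterion on the residual norm
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# A tridiagonal SPD operator, applied without ever forming the matrix.
def laplacian_1d(v):
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i > 0 else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

x = cg(laplacian_1d, [1.0] * 10)
```

The stopping tolerance plays the same role as the inexactness criterion discussed in the thesis: the linear solve is stopped once the residual is small enough, rather than solved exactly.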
Computational Methods for Nonlinear Optimization Problems: Theory and Applications
This dissertation is motivated by the lack of efficient global optimization techniques for polynomial optimization problems. The objective is twofold. First, a new mathematical foundation for obtaining a global or near-global solution will be developed. Second, several case studies will be conducted on a variety of real-world problems. Global optimization, convex relaxation and distributed computation are at the heart of this PhD dissertation. Some of the specific problems addressed in this thesis, spanning both the theory and the applications of nonlinear optimization, are explained below:
Graph-theoretic algorithms for low-rank optimization problems: There is rapidly growing interest in the recovery of an unknown low-rank matrix from limited information and measurements. This problem occurs in many areas of engineering and applied science, such as machine learning, control, and computer vision. In Part I, we develop a graph-theoretic technique that generates a low-rank solution for a sparse Linear Matrix Inequality (LMI), which is directly applicable to a large set of problems such as low-rank matrix completion with many unknown entries. Our approach finds a solution with a guarantee on its rank, using recent advances in graph theory.
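The low-rank recovery setup described above, filling in the missing entries of a matrix known to have small rank, can be illustrated with a toy rank-1 completion by alternating least squares. This is a generic stand-in of my own (function name `complete_rank1` is hypothetical), not the dissertation's graph-theoretic LMI technique:

```python
# Toy rank-1 matrix completion by alternating least squares: recover M = u v^T
# from a subset of its entries. Each sweep fixes one factor and solves a
# one-dimensional least-squares problem for each entry of the other factor.

def complete_rank1(entries, m, n, iters=50):
    """entries: dict {(i, j): value} of observed entries of an m x n rank-1 matrix."""
    u = [1.0] * m
    v = [1.0] * n
    for _ in range(iters):
        for i in range(m):  # fix v, update u_i over observed columns j
            obs = [(j, val) for (r, j), val in entries.items() if r == i]
            denom = sum(v[j] ** 2 for j, _ in obs)
            if denom:
                u[i] = sum(val * v[j] for j, val in obs) / denom
        for j in range(n):  # fix u, update v_j over observed rows i
            obs = [(i, val) for (i, c), val in entries.items() if c == j]
            denom = sum(u[i] ** 2 for i, _ in obs)
            if denom:
                v[j] = sum(val * u[i] for i, val in obs) / denom
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# Ground truth M[i][j] = (i + 1) * (j + 1); hide entry (1, 1) and recover it.
obs = {(i, j): (i + 1) * (j + 1) for i in range(3) for j in range(3) if (i, j) != (1, 1)}
M = complete_rank1(obs, 3, 3)
print(round(M[1][1], 6))  # -> 4.0
```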
Resource allocation for energy systems: The flows in an electrical grid are described by nonlinear AC power flow equations. Due to the nonlinear interrelation among the physical parameters of the network, the feasibility region represented by the power flow equations may be nonconvex and disconnected. The nonlinearity of the network constraints has been studied since 1962, and various heuristic and local-search algorithms have been proposed for optimization over an electrical grid [Baldick, 2006; Pandya and Joshi, 2008]. Part II is concerned with finding convex formulations of the power flow equations using semidefinite programming (SDP). The potential of the SDP relaxation for problems in power systems was demonstrated in [Lavaei and Low, 2012], with further studies conducted in [Lavaei, 2011; Sojoudi and Lavaei, 2012]. A variety of graph-theoretic and algebraic methods are developed in Part II to facilitate fundamental yet challenging tasks such as the optimal power flow (OPF) problem, security-constrained OPF, and the classical power flow problem.
Synthesis of distributed control systems: Real-world systems mostly consist of many interconnected subsystems, and designing an optimal controller for them poses several challenges to the field of control theory. The area of distributed control was created to address the challenges arising in the control of such systems. The objective is to design a constrained controller whose structure is specified by a set of permissible interactions between the local controllers, with the aim of reducing the computation or communication complexity of the overall controller. It has long been known that the design of an optimal distributed (decentralized) controller is a daunting task, because it amounts to an NP-hard optimization problem in general [Witsenhausen, 1968; Tsitsiklis and Athans, 1984]. Part III is devoted to studying the potential of the SDP relaxation for the optimal distributed control (ODC) problem. Our approach rests on formulating each of several variations of the ODC problem as a rank-constrained optimization problem from which an SDP relaxation can be derived. As a first contribution, we show that the ODC problem admits a sparse SDP relaxation with solutions of rank at most 3. Since a rank-1 SDP solution can be mapped back to a globally optimal controller, the low-rank SDP solution may be used to retrieve a near-global controller.
Parallel computation for sparse semidefinite programs: While small- to medium-sized semidefinite programs can be solved efficiently by second-order interior-point methods in polynomial time up to arbitrary precision [Vandenberghe and Boyd, 1996a], these methods are impractical for large-scale SDPs due to computation time and memory requirements. In Part IV of this dissertation, a parallel algorithm for solving an arbitrary SDP is introduced, based on the alternating direction method of multipliers. The proposed algorithm has guaranteed convergence under very mild assumptions. Each iteration has a simple closed-form solution, consisting of scalar multiplications and eigenvalue decompositions of matrices whose sizes are bounded by the treewidth of the sparsity graph of the SDP problem. The cheap iterations of the proposed algorithm enable solving real-world large-scale conic optimization problems.
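The per-iteration eigenvalue decomposition mentioned above is typically used to project a small symmetric block onto the positive semidefinite cone: decompose, clip the negative eigenvalues, and reassemble. The sketch below shows this projection for a 2x2 symmetric matrix, where the eigenpairs have closed forms; the function name `psd_project_2x2` is my own illustration, not code from the dissertation.

```python
# Projection onto the PSD cone, the workhorse step of ADMM-based SDP solvers:
# eigendecompose a symmetric block and clip negative eigenvalues. In Part IV
# the blocks stay small because their size is bounded by the treewidth of the
# SDP's sparsity graph; here we use the closed-form 2x2 eigendecomposition.
import math

def psd_project_2x2(A):
    """Project symmetric A = [[a, b], [b, d]] onto the PSD cone."""
    a, b, d = A[0][0], A[0][1], A[1][1]
    mean, radius = (a + d) / 2.0, math.hypot((a - d) / 2.0, b)
    if radius <= 1e-12:  # A is a multiple of the identity
        lam = max(mean, 0.0)
        return [[lam, 0.0], [0.0, lam]]
    proj = [[0.0, 0.0], [0.0, 0.0]]
    for lam in (mean + radius, mean - radius):
        if lam <= 0:
            continue  # clip negative eigenvalues
        # eigenvector of [[a, b], [b, d]] for eigenvalue lam
        if abs(b) > 1e-12:
            v = (b, lam - a)
        else:
            v = (1.0, 0.0) if abs(a - lam) <= abs(d - lam) else (0.0, 1.0)
        norm2 = v[0] ** 2 + v[1] ** 2
        for i in range(2):
            for j in range(2):
                proj[i][j] += lam * v[i] * v[j] / norm2
    return proj

# Eigenvalues of this block are +sqrt(5) and -sqrt(5); only the positive
# eigenpair survives the projection.
P = psd_project_2x2([[1.0, 2.0], [2.0, -1.0]])
```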