185 research outputs found

    COSMO: A conic operator splitting method for convex conic problems

    This paper describes the Conic Operator Splitting Method (COSMO) solver, an operator splitting algorithm for convex optimisation problems with quadratic objective function and conic constraints. At each step the algorithm alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets. The low per-iteration computational cost makes the method particularly efficient for large problems, e.g. semidefinite programs that arise in portfolio optimisation, graph theory, and robust control. Moreover, the solver uses chordal decomposition techniques and a new clique merging algorithm to effectively exploit sparsity in large, structured semidefinite programs. A number of benchmarks against other state-of-the-art solvers for a variety of problems show the effectiveness of our approach. Our Julia implementation is open-source, designed to be extended and customised by the user, and is integrated into the Julia optimisation ecosystem. Comment: 45 pages, 11 figures.
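    To illustrate the alternation described above, a linear solve with a constant (pre-factorized) coefficient matrix followed by a projection onto a convex set, the sketch below runs a textbook ADMM splitting on a small quadratic program over the nonnegative orthant. The function, problem data, and choice of set are illustrative assumptions and are not taken from COSMO itself.

```python
# Hedged sketch: a generic ADMM operator-splitting loop of the kind the abstract
# describes, alternating a linear solve with a fixed coefficient matrix and a
# projection onto a convex set. The problem data and the set (the nonnegative
# orthant) are illustrative choices, not COSMO's actual interface.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_qp_nonneg(P, q, rho=1.0, iters=200):
    """Minimize 0.5*x'Px + q'x subject to x >= 0 via operator splitting."""
    n = q.size
    # The coefficient matrix never changes, so factor it once up front.
    factor = cho_factor(P + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    y = np.zeros(n)
    for _ in range(iters):
        # Step 1: linear solve with the cached factorization.
        x = cho_solve(factor, rho * z - y - q)
        # Step 2: projection onto the convex set (here simply x >= 0).
        z = np.maximum(x + y / rho, 0.0)
        # Dual variable update.
        y = y + rho * (x - z)
    return z

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
print(admm_qp_nonneg(P, q))  # for this data the optimum is the origin
```

    Because the coefficient matrix is factored once, each iteration costs only a pair of triangular solves plus a cheap projection, which is the property the abstract highlights for large problems.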

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Convex Identification of Stable Dynamical Systems

    This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g. unreliability of long-term, open-loop predictions) and nonconvexity of quality-of-fit criteria, such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP); however, the resulting SDPs have large dimension, limiting their practical utility. The first contribution of this thesis is a custom interior point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods including Nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation maximization (EM) algorithm to derive tight bounds on the likelihood function that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa. Finally, identification of positive systems is considered. Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
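    For readers skimming the abstract, the key property that Lagrangian relaxation exploits is weak duality, which can be stated in one line. The notation below is generic and is not the thesis's own.

```latex
% Weak-duality bound underlying Lagrangian relaxation
% (f, g, x, and \lambda are generic symbols, not the thesis's notation).
% For the nonconvex problem  minimize f(x)  subject to  g(x) = 0:
\[
  J(\lambda) \;=\; \inf_{x}\,\bigl\{ f(x) + \lambda^{\top} g(x) \bigr\}
  \;\le\; \min_{x \,:\, g(x) = 0} f(x)
  \qquad \text{for every multiplier } \lambda .
\]
% Moreover, J is concave in \lambda even when f and g are nonconvex, so
% maximizing J over \lambda is a tractable convex problem that bounds the
% nonconvex one from below.
```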

    Semidefinite Programming: Methods and Algorithms for Energy Management

    This thesis explores the potential of a promising conic optimization technique, semidefinite programming (SDP), for problems of energy management, i.e. problems related to satisfying the electricity and gas supply-demand balance. The work follows two main axes. The first concerns the use of SDP to produce relaxations of combinatorial and quadratic problems. While a so-called standard SDP relaxation can be derived in a generic way, it is generally desirable to reinforce it with cuts, which can be determined by studying the structure of the problem or by more systematic methods. Both approaches are implemented on different models of the nuclear outage scheduling problem, known for its combinatorial difficulty. This topic concludes with an experiment on the Lasserre hierarchy, which yields a sequence of SDPs whose optimal values tend to the optimal value of the initial problem. The second axis concerns the application of SDP to the treatment of uncertainty. We investigate an original approach called distributionally robust optimization, which can be seen as a compromise between stochastic and robust optimization and leads to approximations in the form of SDPs. We assess the benefits of this approach, compared with classical methods, on a supply-demand equilibrium problem under uncertainty. Finally, we present an SDP relaxation for MISOCP problems; computational results indicate that it improves significantly on the continuous relaxation while requiring only a reasonable computational effort. SDP thus proves to be a promising optimization method that offers many opportunities for innovation in energy management.
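    As context for the "standard" relaxation mentioned above, the classical Shor-type SDP relaxation of a binary quadratic problem has the following generic form; the thesis's precise formulations and cuts may differ.

```latex
% Classical "standard" SDP relaxation of a binary quadratic problem
% (generic notation; the thesis's exact formulation may differ).
% Original problem:  minimize  x^T Q x   subject to  x \in \{0,1\}^n.
% Writing X = x x^T gives X_{ii} = x_i (since x_i^2 = x_i); dropping the
% rank-one requirement and keeping X \succeq x x^T yields the relaxation:
\[
  \begin{aligned}
    \min_{x \in \mathbb{R}^n,\; X \in \mathbb{S}^n} \quad & \langle Q, X \rangle \\
    \text{s.t.} \quad & X_{ii} = x_i, \quad i = 1, \dots, n, \\
    & \begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0 .
  \end{aligned}
\]
```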

    Book reports

    Generative-Discriminative Low Rank Decomposition for Medical Imaging Applications

    In this thesis, we propose a method that can be used to extract biomarkers from medical images toward early diagnosis of abnormalities. The surge of demand for biomarkers and the availability of medical images in recent years call for accurate, repeatable, and interpretable approaches for extracting meaningful imaging features. However, extracting such information from medical images is a challenging task because the number of pixels (voxels) in a typical image is on the order of millions, while even a large sample size in a medical imaging dataset does not usually exceed a few hundred. Nevertheless, depending on the nature of an abnormality, only a parsimonious subset of voxels is typically relevant to the disease; therefore, various notions of sparsity are exploited in this thesis to improve the generalization performance of the prediction task. We propose a novel discriminative dimensionality reduction method that yields good classification performance on various datasets without compromising the clinical interpretability of the results. This is achieved by combining the modelling strength of the generative learning framework with the classification performance of the discriminative learning paradigm. Clinical interpretability can be viewed as an additional measure of evaluation and is also helpful in designing methods that account for clinical priors, such as the association of certain brain areas with a particular cognitive task or the connectivity of some brain regions via neural fibres. We formulate our method as a large-scale optimization problem that solves a constrained matrix factorization. Finding an optimal solution of the large-scale matrix factorization renders off-the-shelf solvers computationally prohibitive; therefore, we designed an efficient algorithm based on the proximal method to address the computational bottleneck of the optimization problem. Our formulation is readily extended to different scenarios, such as cases where a large cohort of subjects has uncertain or no class labels (semi-supervised learning) or where each subject has a battery of imaging channels (multi-channel), etc. We show that by using various notions of sparsity as feasible sets of the optimization problem, we can encode different forms of prior knowledge, ranging from brain parcellation to brain connectivity.
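    To make the proximal-method step concrete, the sketch below applies a generic proximal-gradient (ISTA-style) update to an l1-regularized factorization. The variable names, the fixed dictionary D, and the penalty are illustrative assumptions rather than the thesis's actual formulation.

```python
# Hedged sketch: a generic proximal-gradient (ISTA-style) update for one factor
# of a sparsity-constrained factorization Y ~= D @ W, of the general kind the
# abstract alludes to. The l1 penalty and the fixed dictionary D are
# illustrative assumptions, not the thesis's actual model.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_factor(Y, D, lam=0.1, iters=100):
    """Minimize 0.5*||Y - D W||_F^2 + lam*||W||_1 over W by proximal gradient."""
    W = np.zeros((D.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/L, with L the gradient's Lipschitz constant
    for _ in range(iters):
        grad = D.T @ (D @ W - Y)                # gradient of the smooth data-fit term
        W = soft_threshold(W - step * grad, step * lam)  # gradient step followed by the prox
    return W

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))
W_true = np.where(rng.random((20, 5)) < 0.2, 1.0, 0.0)   # sparse ground truth
Y = D @ W_true
print(np.count_nonzero(prox_grad_factor(Y, D)))          # the estimate stays sparse
```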

    Distributed Optimization with Application to Power Systems and Control

    In many engineering domains, systems are composed of partially independent subsystems: power systems are composed of distribution and transmission systems, teams of robots are composed of individual robots, and chemical process systems are composed of vessels, heat exchangers, and reactors. Often, these subsystems should reach a common goal such as satisfying a power demand at minimum cost, flying in formation, or reaching an optimal set-point. At the same time, limited information exchange is desirable, both for confidentiality reasons and due to communication constraints. Moreover, a fast and reliable decision process is key, as applications might be safety-critical. Mathematical optimization techniques are among the most successful tools for controlling systems optimally with feasibility guarantees. Yet, they are often centralized: all data has to be collected in one central and computationally powerful entity. Methods from distributed optimization control the subsystems in a distributed or decentralized fashion, reducing or avoiding central coordination. These methods have a long and successful history. Classical distributed optimization algorithms, however, are typically designed for convex problems. Hence, they are only partially applicable in the above domains, since many of them lead to optimization problems with non-convex constraints. This thesis develops one of the first frameworks for distributed and decentralized optimization with non-convex constraints. Based on the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) algorithm, a bi-level distributed ALADIN framework is presented, solving the coordination step of ALADIN in a decentralized fashion. This framework can handle various decentralized inner algorithms, two of which we develop here: a decentralized variant of the Alternating Direction Method of Multipliers (ADMM) and a novel decentralized conjugate gradient algorithm. The decentralized conjugate gradient method is, to the best of our knowledge, the first decentralized algorithm with a guarantee of convergence to the exact solution in a finite number of iterations. Sufficient conditions for fast local convergence of bi-level ALADIN are derived. Bi-level ALADIN strongly reduces the communication and coordination effort of ALADIN and preserves its fast convergence guarantees. We illustrate these properties on challenging problems from power systems and control, and compare performance to the widely used ADMM. The developed methods are implemented in the open-source MATLAB toolbox ALADIN-α, one of the first toolboxes for decentralized non-convex optimization. ALADIN-α comes with a rich set of application examples from different domains, showing its broad applicability. As an additional contribution, this thesis provides new insights into why state-of-the-art distributed algorithms might encounter issues for constrained problems.
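    As background for the finite-termination claim, the sketch below shows the classical (centralized) conjugate gradient iteration, which in exact arithmetic solves an n-dimensional symmetric positive-definite system in at most n iterations. It is the textbook building block only, not the decentralized variant developed in the thesis.

```python
# Hedged sketch: the classical conjugate gradient iteration for A x = b with A
# symmetric positive definite; in exact arithmetic it terminates in at most n
# steps. This is the textbook method, not the thesis's decentralized variant.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rs = r @ r
    for _ in range(len(b)):             # at most n iterations in exact arithmetic
        Ap = A @ p
        alpha = rs / (p @ Ap)           # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p       # keep the new direction A-conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))         # matches np.linalg.solve(A, b)
```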