Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
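The hierarchies of convex relaxations surveyed in the paper start from the simplest lifting: a nonconvex quadratic problem becomes a linear objective over the positive semidefinite cone. A minimal sketch of that base case (our illustration, not code from the paper) is minimizing a quadratic form over the unit sphere, whose SDP relaxation happens to be tight:

```python
import numpy as np

# Nonconvex problem:  min x'Qx  s.t.  ||x||_2 = 1   (unit sphere).
# Conic (SDP) relaxation:  min tr(QX)  s.t.  tr(X) = 1,  X psd.
# For this problem the relaxation is tight: the optimum is lambda_min(Q),
# attained by the rank-one matrix X = vv' built from the bottom eigenvector.
Q = np.array([[2.0, -1.0],
              [-1.0, 0.0]])
w, V = np.linalg.eigh(Q)            # eigenvalues in ascending order
relaxed_value = w[0]                # optimal value of the SDP relaxation
X_opt = np.outer(V[:, 0], V[:, 0])  # feasible: psd, trace one, rank one
```

Recovering a rank-one optimal `X` means the relaxation solves the original nonconvex problem exactly; the hierarchies discussed in the paper generalize this pattern to polynomial problems where tightness is not automatic.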
Conic Optimization: Optimal Partition, Parametric, and Stability Analysis
A linear conic optimization problem consists of the minimization of a linear objective function over the intersection of an affine space and a closed convex cone. In recent years, linear conic optimization has received significant attention, partly because it can be used to reformulate and approximate intractable optimization problems. Steady advances in computational optimization have enabled us to approximately solve a wide variety of linear conic optimization problems in polynomial time. Nevertheless, preprocessing methods, rounding procedures, and sensitivity analysis tools are still missing from conic optimization solvers. Given the output of a conic optimization solver, we need methodologies to generate approximate complementary solutions or to speed up the convergence to an exact optimal solution. A preprocessing method reduces the size of a problem by finding the minimal face of the cone that contains the set of feasible solutions; however, such a preprocessing method assumes knowledge of an exact solution. More importantly, we need robust sensitivity and post-optimal analysis tools for an optimal solution of a linear conic optimization problem. Motivated by the vital importance of linear conic optimization, we take active steps to fill this gap.

This thesis is concerned with several aspects of a linear conic optimization problem, from algorithms through solution identification to parametric analysis, which have not been fully addressed in the literature. We specifically focus on three special classes of linear conic optimization problems, namely semidefinite and second-order conic optimization, and their common generalization, symmetric conic optimization. We propose a polynomial-time algorithm for symmetric conic optimization problems.
We show how to approximate/identify the optimal partition of semidefinite optimization and second-order conic optimization, a concept which has its origin in linear optimization. Further, we use the optimal partition information either to generate an approximate optimal solution or to speed up the convergence of a solution identification process to the unique optimal solution of the problem. Finally, we study the parametric analysis of semidefinite and second-order conic optimization problems. We investigate the behavior of the optimal partition and the optimal set mapping under perturbation of the objective function vector.
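Rounding and solution-identification procedures for second-order conic optimization rely on elementary cone operations. As a minimal sketch (our illustration, not the thesis's code), the Euclidean projection onto the second-order cone has a well-known closed form with three cases:

```python
import numpy as np

def project_soc(v, s):
    """Euclidean projection of (v, s) onto the second-order cone
    {(x, t) : ||x||_2 <= t}  (closed-form, three cases)."""
    nv = np.linalg.norm(v)
    if nv <= s:                          # already inside the cone
        return v.copy(), float(s)
    if nv <= -s:                         # inside the polar cone: project to 0
        return np.zeros_like(v), 0.0
    alpha = 0.5 * (nv + s)               # otherwise project onto the boundary
    return (alpha / nv) * v, alpha

x, t = project_soc(np.array([3.0, 4.0]), 0.0)   # lands on the cone boundary
```

In the boundary case the projected point satisfies ||x||_2 = t exactly, which is the kind of structural information an optimal-partition analysis exploits.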
COSMO: A conic operator splitting method for convex conic problems
This paper describes the Conic Operator Splitting Method (COSMO) solver, an
operator splitting algorithm for convex optimisation problems with quadratic
objective function and conic constraints. At each step the algorithm alternates
between solving a quasi-definite linear system with a constant coefficient
matrix and a projection onto convex sets. The low per-iteration computational
cost makes the method particularly efficient for large problems, e.g.
semidefinite programs that arise in portfolio optimisation, graph theory, and
robust control. Moreover, the solver uses chordal decomposition techniques and
a new clique merging algorithm to effectively exploit sparsity in large,
structured semidefinite programs. A number of benchmarks against other
state-of-the-art solvers for a variety of problems show the effectiveness of
our approach. Our Julia implementation is open-source, designed to be extended
and customised by the user, and is integrated into the Julia optimisation
ecosystem.
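The alternation described above, a linear solve with a constant coefficient matrix followed by a projection onto a convex set, is the classic operator-splitting (ADMM) pattern. A minimal sketch on a toy instance, with the nonnegative orthant as the cone (our simplification; COSMO's actual iteration, cones, and parameters are richer than this):

```python
import numpy as np

def admm_qp_nonneg(P, q, rho=1.0, iters=300):
    """ADMM sketch for  min (1/2) x'Px + q'x  s.t.  x >= 0:
    alternate a linear solve with a constant matrix and a cone projection."""
    n = len(q)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = P + rho * np.eye(n)                        # constant coefficient matrix
    for _ in range(iters):
        x = np.linalg.solve(M, rho * (z - u) - q)  # linear-system step
        z = np.maximum(x + u, 0.0)                 # projection onto the cone
        u = u + x - z                              # scaled dual update
    return z

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z = admm_qp_nonneg(P, q)   # constraint inactive here; minimizer is [1/7, 3/7]
```

Because the coefficient matrix `M` never changes, a real solver factors it once and reuses the factorization every iteration, which is exactly what makes the per-iteration cost low.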
Interior-point algorithms for convex optimization based on primal-dual metrics
We propose and analyse primal-dual interior-point algorithms for convex
optimization problems in conic form. The families of algorithms we analyse are
so-called short-step algorithms, and they match the current best iteration
complexity bounds for the primal-dual symmetric interior-point algorithms of
Nesterov and Todd for symmetric cone programming problems with given
self-scaled barriers. Our results apply to any self-concordant barrier for any
convex cone. We also prove that certain specializations of our algorithms to
hyperbolic cone programming problems (which lie strictly between symmetric cone
programming and general convex optimization problems in terms of generality)
can take advantage of the favourable special structure of hyperbolic barriers.
We make new connections to Riemannian geometry, integrals over operator
spaces, and Gaussian quadrature, and we strengthen the connection of our
algorithms to quasi-Newton updates and hence to first-order methods in general.
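For intuition only, here is a basic primal barrier path-following scheme, far simpler than the primal-dual short-step methods analysed in the paper: damped Newton steps on the self-concordant logarithmic barrier for the nonnegative orthant, restricted to the simplex (all names and parameters are our own illustration):

```python
import numpy as np

def newton_simplex_barrier(c, t, x, tol=1e-10, max_iter=50):
    """Newton's method for  min t*c'x - sum(log x)  s.t.  sum(x) = 1,
    using the self-concordant log barrier for the nonnegative orthant."""
    n = len(c)
    a = np.ones(n)
    for _ in range(max_iter):
        g = t * c - 1.0 / x                             # gradient
        H = np.diag(1.0 / x**2)                         # Hessian (diagonal)
        K = np.block([[H, a[:, None]],                  # KKT system for the
                      [a[None, :], np.zeros((1, 1))]])  # equality-constrained step
        dx = np.linalg.solve(K, np.concatenate([-g, [0.0]]))[:n]
        alpha = 1.0
        while np.any(x + alpha * dx <= 0):              # damp to stay feasible
            alpha *= 0.5
        x = x + alpha * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

c = np.array([1.0, 2.0, 3.0])
x = np.full(3, 1.0 / 3.0)                 # strictly feasible start
for t in [1.0, 10.0, 100.0, 1000.0]:      # follow the central path
    x = newton_simplex_barrier(c, t, x)
# as t grows, x concentrates on the smallest-cost coordinate
```

Each increase of `t` is warm-started from the previous central-path point; short-step methods formalize how small those increases must be so that one (damped) Newton step per update suffices.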