Using a Factored Dual in Augmented Lagrangian Methods for Semidefinite Programming
In the context of augmented Lagrangian approaches for solving semidefinite
programming problems, we investigate the possibility of eliminating the
positive semidefinite constraint on the dual matrix by employing a
factorization. Hints on how to deal with the resulting unconstrained
maximization of the augmented Lagrangian are given. We further use the
approximate maximum of the augmented Lagrangian with the aim of improving the
convergence rate of alternating direction augmented Lagrangian frameworks.
Numerical results are reported, showing the benefits of the approach.
Comment: 7 pages
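A toy illustration of why the factorization removes the constraint (a minimal sketch with made-up sizes and step size, not the paper's method): writing the dual matrix as $Y = V V^T$ makes positive semidefiniteness automatic, so a constrained problem over $Y$ becomes an unconstrained one over $V$. The example below computes the nearest PSD matrix to a symmetric $A$ by plain gradient descent on the factor and checks the result against the eigenvalue-clipping formula.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): Y = V V^T is PSD for any
# V, so minimizing f(V) = ||V V^T - A||_F^2 over an unconstrained V
# computes the projection of a symmetric A onto the PSD cone.
rng = np.random.default_rng(0)
w_true = np.array([2.0, 1.0, 0.5, -1.0, -2.0])   # chosen spectrum
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = (Q * w_true) @ Q.T                           # symmetric test matrix

V = 0.1 * rng.standard_normal((5, 5))            # unconstrained factor
step = 0.01
for _ in range(5000):
    R = V @ V.T - A
    V -= step * 4.0 * (R @ V)     # gradient of ||V V^T - A||_F^2 is 4 R V

# Closed-form check: clip the negative eigenvalues of A at zero.
Y_exact = (Q * np.maximum(w_true, 0.0)) @ Q.T

assert np.allclose(V @ V.T, Y_exact, atol=1e-4)
```

The price of eliminating the constraint is that the problem in $V$ is nonconvex; the abstract's "hints" concern exactly how to handle that unconstrained maximization.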
GMRES-Accelerated ADMM for Quadratic Objectives
We consider the sequence acceleration problem for the alternating direction
method-of-multipliers (ADMM) applied to a class of equality-constrained
problems with strongly convex quadratic objectives, which frequently arise as
the Newton subproblem of interior-point methods. Within this context, the ADMM
update equations are linear, the iterates are confined within a Krylov
subspace, and the General Minimum RESidual (GMRES) algorithm is optimal in its
ability to accelerate convergence. The basic ADMM method solves a
$\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give
theoretical justification and numerical evidence that the GMRES-accelerated
variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations
for an order-of-magnitude reduction in iterations, despite a worst-case bound
of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against
standard preconditioned Krylov subspace methods for saddle-point problems. The
method is embedded within SeDuMi, a popular open-source solver for conic
optimization written in MATLAB, and used to solve many large-scale semidefinite
programs with error that decreases like $O(1/k^2)$, instead of $O(1/k)$,
where $k$ is the iteration index.
Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT)
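The core observation can be sketched on a toy problem (assumption: a generic contractive affine iteration stands in for the linear ADMM update; this is not the paper's solver). When every sweep has the form $x_{k+1} = M x_k + b$, the fixed point solves $(I - M)x = b$, and the iterates with $x_0 = 0$ lie in the Krylov subspace of $I - M$ generated by $b$; GMRES minimizes the residual over that same subspace, so it can only do better.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Compare a plain affine fixed-point iteration against GMRES applied to
# the equivalent linear system, using the same budget of matrix applies.
rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
M *= 0.9 / np.max(np.abs(np.linalg.eigvals(M)))  # spectral radius 0.9
b = rng.standard_normal(n)

# 30 sweeps of the plain fixed-point iteration x <- M x + b.
x = np.zeros(n)
for _ in range(30):
    x = M @ x + b
res_fixed = np.linalg.norm(b - (x - M @ x))      # residual of (I - M) x = b

# GMRES on (I - M) x = b, capped at 30 inner iterations.
op = LinearOperator((n, n), matvec=lambda v: v - M @ v, dtype=float)
x_g, _ = gmres(op, b, restart=30, maxiter=1)
res_gmres = np.linalg.norm(b - (x_g - M @ x_g))

assert res_gmres <= res_fixed + 1e-12  # GMRES is optimal over the same subspace
```

The paper's contribution concerns the much stronger empirical $O(\kappa^{1/4})$ behavior on the structured Newton subproblems; this sketch shows only why GMRES can never lose to the basic iteration per matrix application.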
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
The affine rank minimization problem consists of finding a matrix of minimum
rank that satisfies a given system of linear equality constraints. Such
problems have appeared in the literature of a diverse set of fields including
system identification and control, Euclidean embedding, and collaborative
filtering. Although specific instances can often be solved with specialized
algorithms, the general affine rank minimization problem is NP-hard. In this
paper, we show that if a certain restricted isometry property holds for the
linear transformation defining the constraints, the minimum rank solution can
be recovered by solving a convex optimization problem, namely the minimization
of the nuclear norm over the given affine space. We present several random
ensembles of equations where the restricted isometry property holds with
overwhelming probability. The techniques used in our analysis have strong
parallels in the compressed sensing framework. We discuss how affine rank
minimization generalizes this pre-existing concept and outline a dictionary
relating concepts from cardinality minimization to those of rank minimization.
Scalable Semidefinite Programming
Semidefinite programming (SDP) is a powerful framework from convex optimization that has striking potential for data science applications. This paper develops a provably correct algorithm for solving large SDP problems by economizing on both the storage and the arithmetic costs. Numerical evidence shows that the method is effective for a range of applications, including relaxations of MaxCut, abstract phase retrieval, and quadratic assignment. Running on a laptop, the algorithm can handle SDP instances where the matrix variable has over 10¹³ entries.
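The storage economy rests on the matrix variable being close to low rank, which permits a randomized sketch to stand in for the full matrix (a toy sketch under that assumption; sizes here are made up): instead of storing the $n \times n$ variable $X$, keep only $Y = X\Omega$ for a thin random test matrix $\Omega$, and reconstruct a low-rank approximation of $X$ on demand.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, k = 200, 5, 10
G = rng.standard_normal((n, r))
X = G @ G.T                        # PSD, rank r (never stored in practice)
Om = rng.standard_normal((n, k))   # random test matrix
Y = X @ Om                         # the sketch: n*k numbers instead of n*n

# Nystrom reconstruction, exact whenever rank(X) <= k for a PSD X.
X_hat = Y @ np.linalg.pinv(Om.T @ Y) @ Y.T

rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
assert rel_err < 1e-6
```

In an iterative solver the sketch $Y$ can be updated with rank-one terms without ever forming $X$, which is what makes instances with 10¹³ entries tractable on a laptop.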
Some Preconditioning Techniques for Saddle Point Problems
Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large-scale computation. In particular, we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners.

The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
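One of the block preconditioners the survey covers can be sketched on a toy problem (assumptions: a small dense KKT system and the "ideal" preconditioner built with exact inverses, which a practical method would only approximate). For the saddle point system with blocks $A$ (symmetric positive definite) and $B$, the block-diagonal preconditioner $\mathrm{diag}(A, S)$ with Schur complement $S = B A^{-1} B^T$ gives a preconditioned matrix with only three distinct eigenvalues, so MINRES converges in at most three iterations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(3)
n, m = 40, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                    # SPD (1,1) block
B = rng.standard_normal((m, n))                # full row rank constraints
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

A_inv = np.linalg.inv(A)
S_inv = np.linalg.inv(B @ A_inv @ B.T)         # Schur complement inverse
prec = LinearOperator(                         # v -> diag(A, S)^{-1} v
    (n + m, n + m),
    matvec=lambda v: np.concatenate([A_inv @ v[:n], S_inv @ v[n:]]),
    dtype=float,
)

sol, info = minres(K, rhs, M=prec, maxiter=20)
res = np.linalg.norm(K @ sol - rhs) / np.linalg.norm(rhs)
assert res < 1e-3
```

Practical variants replace $A^{-1}$ and $S^{-1}$ with cheap approximations (incomplete factorizations, multigrid), trading the three-iteration guarantee for per-iteration cost.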
COSMO: A conic operator splitting method for convex conic problems
This paper describes the Conic Operator Splitting Method (COSMO) solver, an
operator splitting algorithm for convex optimisation problems with quadratic
objective function and conic constraints. At each step the algorithm alternates
between solving a quasi-definite linear system with a constant coefficient
matrix and a projection onto convex sets. The low per-iteration computational
cost makes the method particularly efficient for large problems, e.g.
semidefinite programs that arise in portfolio optimisation, graph theory, and
robust control. Moreover, the solver uses chordal decomposition techniques and
a new clique merging algorithm to effectively exploit sparsity in large,
structured semidefinite programs. A number of benchmarks against other
state-of-the-art solvers for a variety of problems show the effectiveness of
our approach. Our Julia implementation is open-source, designed to be extended
and customised by the user, and is integrated into the Julia optimisation
ecosystem.
Comment: 45 pages, 11 figures
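The conic-projection half of each splitting step can be sketched for the PSD cone, one of several cone types such a solver supports (a minimal sketch, not COSMO's implementation): the Euclidean projection of a symmetric matrix onto the PSD cone keeps its eigenvectors and clips negative eigenvalues at zero; this is the "projection onto convex sets" that alternates with the quasi-definite linear solve.

```python
import numpy as np

def proj_psd(S):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, Q = np.linalg.eigh((S + S.T) / 2)       # symmetrize, eigendecompose
    return (Q * np.maximum(w, 0.0)) @ Q.T

rng = np.random.default_rng(4)
S = rng.standard_normal((6, 6))
S = (S + S.T) / 2                              # an indefinite symmetric matrix
X = proj_psd(S)

assert np.min(np.linalg.eigvalsh(X)) >= -1e-10     # result is PSD
assert np.allclose(proj_psd(X), X)                 # projection is idempotent
```

The eigendecomposition makes this projection the dominant per-iteration cost for large semidefinite blocks, which is why the chordal decomposition and clique merging described above, by replacing one large block with several small ones, pay off.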