303 research outputs found
Revisit of Spectral Bundle Methods: Primal-dual (Sub)linear Convergence Rates
The spectral bundle method proposed by Helmberg and Rendl is well established
for solving large-scale semidefinite programs (SDPs), thanks to its low
per-iteration computational complexity and strong practical performance. In this
paper, we revisit this classic method, showing that it achieves sublinear
convergence rates for both the primal and dual SDPs under strong duality
alone, complementing previous guarantees on primal-dual convergence.
Moreover, we show the method speeds up to linear convergence if (1)
structurally, the SDP admits strict complementarity, and (2) algorithmically,
the bundle method captures the rank of the optimal solutions. Such
complementarity and low-rank structure is prevalent in many modern and classical
applications. The linear convergence result is established via an eigenvalue
approximation lemma which may be of independent interest. Numerically, we
confirm our theoretical findings that the spectral bundle method, for modern
and classical applications, indeed speeds up under the aforementioned conditions.
Comment: 30 pages and 2 figures
A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints
We propose a new algorithm to solve optimization problems of the form
min f(X) for a smooth function f, under the constraints that X is positive
semidefinite and the diagonal blocks of X are small identity matrices. Such
problems often arise as the result of relaxing a rank constraint (lifting). In
particular, many estimation tasks involving phases, rotations, orthonormal
bases or permutations fit in this framework, and so do certain relaxations of
combinatorial problems such as Max-Cut. The proposed algorithm exploits the
facts that (1) such formulations admit low-rank solutions, and (2) their
rank-restricted versions are smooth optimization problems on a Riemannian
manifold. Combining insights from both the Riemannian and the convex geometries
of the problem, we characterize when second-order critical points of the smooth
problem reveal KKT points of the semidefinite problem. We compare against state
of the art, mature software and find that, on certain interesting problem
instances, what we call the staircase method is orders of magnitude faster, is
more accurate, and scales better. Code is available.
Comment: 37 pages, 3 figures
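As an illustration of the lifting described above, the following sketch runs Riemannian gradient ascent on the rank-restricted Max-Cut relaxation, where the diagonal blocks are 1x1 so each row of the factor Y lives on a unit sphere; the graph is synthetic and the fixed step size is a simplification of what a production solver would use:

```python
import numpy as np

# Rank-restricted Max-Cut relaxation as a toy instance of the lifting:
# maximize tr(L Y Y^T)/4 with each row of Y on the unit sphere.
# The graph W is synthetic; the step size is a simplification.
rng = np.random.default_rng(1)
n, p = 10, 3                                   # n vertices, rank-p factor
W = rng.random((n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian

def cost(Y):
    return 0.25 * np.trace(L @ Y @ Y.T)

Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # rows onto unit spheres
c0 = cost(Y)

for _ in range(500):
    G = 0.5 * L @ Y                                 # Euclidean gradient
    G -= np.sum(G * Y, axis=1, keepdims=True) * Y   # project to tangent space
    Y += 0.01 * G                                   # ascent step
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # retraction (renormalize)
```

The staircase aspect, not shown here, would increase the rank p whenever a second-order critical point fails the optimality check for the full semidefinite problem.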
A New Preconditioning Approach for an Interior Point–Proximal Method of Multipliers for Linear and Convex Quadratic Programming
In this paper, we address the efficient numerical solution of linear and
quadratic programming problems, often of large scale. With this aim, we devise
an infeasible interior point method, blended with the proximal method of
multipliers, which in turn results in a primal-dual regularized interior point
method. Application of this method gives rise to a sequence of increasingly
ill-conditioned linear systems which cannot always be solved by factorization
methods, due to memory and CPU time restrictions. We propose a novel
preconditioning strategy which is based on a suitable sparsification of the
normal equations matrix in the linear case, and also constitutes the foundation
of a block-diagonal preconditioner to accelerate MINRES for linear systems
arising from the solution of general quadratic programming problems. Numerical
results for a range of test problems demonstrate the robustness of the proposed
preconditioning strategy, together with its ability to solve linear systems of
very large dimension.
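The sparsification idea can be sketched as follows (a rough illustration, not the paper's exact construction): solve a regularized normal-equations system with MINRES, preconditioned by a factorization of a copy of the matrix in which small off-diagonal entries have been dropped. All data here are random placeholders.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import LinearOperator, minres, spilu

# Regularized normal equations S x = b with S = A D A^T + rho I,
# as they might arise at one interior point-proximal iteration.
rng = np.random.default_rng(2)
m, n = 50, 120
A = sparse.random(m, n, density=0.05, random_state=2, format="csr")
D = sparse.diags(rng.uniform(0.1, 10.0, n))    # scaling from an IPM iterate
rho = 1e-2                                     # proximal regularization
S = (A @ D @ A.T + rho * sparse.identity(m)).tocsr()

# Sparsify: keep the diagonal and off-diagonals above a drop tolerance.
Sd = S.toarray()
keep = np.abs(Sd) >= 1e-1
np.fill_diagonal(keep, True)
S_sparsified = sparse.csc_matrix(np.where(keep, Sd, 0.0))

# Factor the sparsified matrix once; reuse it as the preconditioner.
lu = spilu(S_sparsified)
P = LinearOperator(S.shape, matvec=lu.solve, dtype=S.dtype)

b = rng.standard_normal(m)
x, info = minres(S, b, M=P)                    # info == 0 on convergence
```

The payoff is that the sparsified matrix is much cheaper to factor than S itself, while the Krylov solver absorbs the dropped entries over a few extra iterations.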