A Strict Complementarity Approach to Error Bound and Sensitivity of Solution of Conic Programs
In this paper, we provide an elementary, geometric, and unified framework to
analyze conic programs that we call the strict complementarity approach. This
framework allows us to establish error bounds and quantify the sensitivity of
the solution. The framework uses three classical ideas from convex geometry and
linear algebra: linear regularity of convex sets, facial reduction, and
orthogonal decomposition. We show how to use this framework to derive error
bounds for linear programming (LP), second order cone programming (SOCP), and
semidefinite programming (SDP). Comment: 19 pages, 2 figures
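In the LP case, the strict complementarity condition invoked by the abstract is easy to check numerically: a primal-dual optimal pair (x, s) with x_i s_i = 0 for all i is strictly complementary if additionally x_i + s_i > 0 for every i. A minimal sketch on made-up data (not the paper's framework), using scipy's HiGHS-based `linprog`, which exposes equality-constraint duals via `res.eqlin.marginals`:

```python
import numpy as np
from scipy.optimize import linprog

# min c^T x  s.t.  A x = b,  x >= 0   (tiny made-up instance)
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
x = res.x
y = res.eqlin.marginals          # duals of the equality constraints
s = c - A.T @ y                  # dual slack: s = c - A^T y >= 0

# complementarity: x_i * s_i = 0 for all i;
# strict complementarity: additionally x_i + s_i > 0 for all i
complementary = np.allclose(x * s, 0.0, atol=1e-8)
strict = bool(complementary and np.all(x + s > 1e-8))
print(x, s, strict)
```

Here the unique optimum is x = (0, 0, 1) with dual slack s = (1, 2, 0), so the pair is strictly complementary.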
Revisit of Spectral Bundle Methods: Primal-dual (Sub)linear Convergence Rates
The spectral bundle method proposed by Helmberg and Rendl is well established
for solving large-scale semidefinite programs (SDP) thanks to its low per
iteration computational complexity and strong practical performance. In this
paper, we revisit this classic method, showing that it achieves sublinear
convergence rates for both the primal and dual SDPs under strong duality
alone, complementing previous guarantees on primal-dual convergence.
Moreover, we show the method speeds up to linear convergence if (1)
structurally, the SDP admits strict complementarity, and (2) algorithmically,
the bundle method captures the rank of the optimal solutions. Such
complementarity and low-rank structure is prevalent in many modern and
classical applications. The linear convergence result is established via an
eigenvalue approximation lemma that may be of independent interest. Numerically, we
confirm our theoretical findings: for modern and classical applications, the
spectral bundle method indeed speeds up under the aforementioned conditions. Comment: 30 pages, 2 figures
Sharpness and well-conditioning of nonsmooth convex formulations in statistical signal recovery
We study a sample complexity vs. conditioning tradeoff in modern signal
recovery problems where convex optimization problems are built from sampled
observations. We begin by introducing a set of condition numbers related to
sharpness in ℓ_p or Schatten-p norms, based on nonsmooth
reformulations of a class of convex optimization problems, including sparse
recovery, low-rank matrix sensing, covariance estimation, and (abstract) phase
retrieval. In each of the recovery tasks, we show that the condition numbers
become dimension-independent constants once the sample size exceeds some
constant multiple of the recovery threshold. Structurally, this result ensures
that the inaccuracy in the recovered signal due to both observation noise and
optimization error is well-controlled. Algorithmically, such a result ensures
that a new first-order method for solving the class of sharp convex functions
in a given ℓ_p or Schatten-p norm, when applied to the nonsmooth
formulations, achieves nearly dimension-independent linear convergence.
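As a generic illustration of why sharpness buys linear convergence (a textbook sketch, not the paper's new first-order method): on a sharp convex function such as f(x) = ||x||_1, the classical subgradient method with the Polyak step size contracts the distance to the solution set geometrically.

```python
import numpy as np

def polyak_subgradient(x0, f, subgrad, f_star, iters):
    """Subgradient method with the Polyak step size (f(x) - f*) / ||g||^2.

    On a sharp function (f(x) - f* >= mu * dist(x, X*)) with bounded
    subgradients, the distance to the solution set shrinks geometrically.
    """
    x = x0.astype(float).copy()
    dists = []
    for _ in range(iters):
        gap = f(x) - f_star
        if gap == 0.0:                 # already optimal
            break
        g = subgrad(x)
        x = x - (gap / (g @ g)) * g    # Polyak step
        dists.append(np.linalg.norm(x))
    return x, dists

# f(x) = ||x||_1 is sharp around its minimizer x* = 0 with f* = 0
f = lambda x: np.abs(x).sum()
subgrad = np.sign                      # a valid subgradient of the l1 norm

x_final, dists = polyak_subgradient(
    np.array([1.0, -2.0, 3.0, 0.5, -0.25]), f, subgrad, 0.0, 300)
```

For this f, each step satisfies ||x_{k+1}||² = ||x_k||² − (f(x_k))²/||g_k||² ≤ (1 − 1/n)·||x_k||², so the iterates converge linearly; the Polyak step does require knowing f*, which sharpness-based methods typically replace with a restart or estimation scheme.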