SOS-convex Semi-algebraic Programs and its Applications to Robust Optimization: A Tractable Class of Nonsmooth Convex Optimization
In this paper, we introduce a new class of nonsmooth convex functions called
SOS-convex semi-algebraic functions, extending the recently proposed notion of
SOS-convex polynomials. This class of nonsmooth convex functions covers many
common nonsmooth functions arising in applications, such as the Euclidean
norm, the maximum eigenvalue function, and least squares functions with
\ell_1-regularization or elastic net regularization used in statistics and
compressed sensing. We show that, under commonly used strict feasibility
conditions, the optimal value and an optimal solution of SOS-convex
semi-algebraic programs can be found by solving a single semi-definite
programming problem (SDP). We achieve these results using tools from
semi-algebraic geometry, the convex-concave minimax theorem, and a recently
established Jensen-type inequality for SOS-convex polynomials. As an
application, we outline how the derived results can be applied to show that
robust SOS-convex optimization problems under restricted spectrahedron data
uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP
relaxation result for restricted ellipsoidal data uncertainty and answers the
open questions left in [Optimization Letters 9, 1-18 (2015)] on how to recover
a robust solution from the semi-definite programming relaxation in this broader
setting.
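As a concrete illustration (our own sketch, not the paper's construction), the elastic-net least-squares problem named in the abstract is an SOS-convex semi-algebraic program that off-the-shelf conic solvers handle directly; the data A, b and the weights lam1, lam2 below are assumed for illustration:

```python
# Illustrative sketch (not the paper's construction): the elastic-net
# least-squares problem cited in the abstract is an SOS-convex
# semi-algebraic program, and a conic solver handles it directly.
# A, b, lam1, lam2 are assumed example data.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam1, lam2 = 0.1, 0.05

x = cp.Variable(10)
objective = cp.Minimize(
    cp.sum_squares(A @ x - b)      # smooth least-squares term
    + lam1 * cp.norm1(x)           # nonsmooth l1 term
    + lam2 * cp.sum_squares(x)     # ridge term (elastic net)
)
prob = cp.Problem(objective)
prob.solve()                       # reformulated and solved as a conic program
print(prob.value)
```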
An Exponential Lower Bound on the Complexity of Regularization Paths
For a variety of regularized optimization problems in machine learning,
algorithms computing the entire solution path have been developed recently.
Most of these problems are quadratic programs parameterized by a single
parameter, as is the case, for example, for the Support Vector Machine (SVM).
Solution path algorithms compute not only the solution for one particular value
of the regularization parameter but the entire path of solutions, making the
selection of an optimal parameter much easier.
It has been assumed that these piecewise linear solution paths have only
linear complexity, i.e. linearly many bends. We prove that for the support
vector machine this complexity can be exponential in the number of training
points in the worst case. More strongly, we construct a single instance of n
input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) =
\Theta(2^d) many distinct subsets of support vectors occur as the
regularization parameter changes.Comment: Journal version, 28 Pages, 5 Figure
Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints
This paper investigates the relation between sequential convex programming
(SCP), as defined, e.g., in [24], and DC (difference of two convex functions)
programming. We first present an SCP algorithm for solving nonlinear
optimization problems with DC constraints and prove its convergence. Then we
combine the proposed algorithm with a relaxation technique to handle
inconsistent linearizations. Numerical tests are performed to investigate the
behaviour of this class of algorithms.
Comment: 18 pages, 1 figure
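A minimal sketch of the basic SCP step for a DC constraint (under our own illustrative assumptions, not the exact algorithm of the paper or [24]): for a constraint g(x) - h(x) <= 0 with g, h convex, h is replaced by its linearization at the current iterate, which yields a convex subproblem that inner-approximates the feasible set:

```python
# Minimal SCP sketch for a DC constraint (illustrative, not the exact
# algorithm of the paper): keep the convex part g and linearize the
# concave part -h at the current iterate xk, giving a convex subproblem.
# Here 1 - ||x||^2 <= 0 (g = 1, h = ||x||^2) models the nonconvex
# "outside the unit ball" constraint, while ||x||^2 <= 4 stays convex.
import cvxpy as cp
import numpy as np

xk = np.array([0.5, 0.5])                  # assumed starting point
for _ in range(20):
    x = cp.Variable(2)
    h_lin = xk @ xk + 2 * xk @ (x - xk)    # linearization of h at xk
    sub = cp.Problem(
        cp.Minimize(cp.sum_squares(x - np.array([3.0, 0.0]))),
        [cp.sum_squares(x) <= 4, 1 - h_lin <= 0],
    )
    sub.solve()
    xk = x.value
print(xk)                                  # converges to [2, 0] here
```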
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors include,
as popular examples, sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of \ell_2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
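As a pointer to item (iii), here is a minimal sketch of the forward-backward proximal splitting scheme specialized to \ell_1-regularized least squares (ISTA); A, b, lam, and the iteration budget are illustrative assumptions, not code from the chapter:

```python
# Minimal forward-backward splitting (ISTA) sketch for
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
# A, b, lam and the iteration count are assumed example data.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1

step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))     # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
print(np.count_nonzero(x), "nonzero coefficients")
```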