Local Convergence of Proximal Splitting Methods for Rank Constrained Problems
We analyze the local convergence of proximal splitting algorithms for
optimization problems that are convex apart from a rank constraint. To this end, we
show conditions under which the proximal operator of a function involving the
rank constraint is locally identical to the proximal operator of its convex
envelope, hence implying local convergence. The conditions imply that the
non-convex algorithms locally converge to a solution whenever a convex
relaxation involving the convex envelope can be expected to solve the
non-convex problem.
Comment: To be presented at the 56th IEEE Conference on Decision and Control, Melbourne, Dec 2017.
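For intuition, here is a minimal sketch (an illustrative setup, not the paper's exact construction) of how a rank constraint enters a proximal splitting scheme: the proximal operator of the indicator of the rank-r set is projection via truncated SVD, which can then be plugged into a forward-backward iteration. The objective, `grad_f`, and step size are assumptions for the sketch.

```python
# Minimal sketch, assuming a smooth data term f:
#   minimize f(X)  subject to rank(X) <= r,
# solved by projected gradient (forward-backward with the rank projection).
import numpy as np

def prox_rank(X, r):
    """Projection onto {X : rank(X) <= r} via truncated SVD, i.e. the
    proximal operator of the rank-constraint indicator function."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0                      # keep only the r largest singular values
    return (U * s) @ Vt

def projected_gradient(grad_f, X0, r, step, iters=500):
    X = X0
    for _ in range(iters):
        X = prox_rank(X - step * grad_f(X), r)   # forward step, then projection
    return X
```

The paper's point is that, near a solution, this non-convex proximal step can coincide with the prox of the convex envelope, which is what yields the local convergence guarantee.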
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward--backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
± rank-r symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-r modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piecewise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against the
state of the art. The algorithm has extensive applications, including signal
processing, sparse recovery, machine learning, and classification.
Comment: arXiv admin note: text overlap with arXiv:1206.115
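As a concrete illustration of why such metrics keep the prox cheap, consider the purely diagonal case (the paper's diagonal ± rank-r metrics add a low-rank dual correction on top of this, omitted here). For a separable function such as the ℓ1 norm, the prox in a diagonal metric remains a componentwise soft-threshold. The problem and names below are illustrative, not the paper's algorithm:

```python
# Minimal sketch: variable-metric forward-backward with a diagonal metric
# D = diag(d) on an l1-regularized problem (illustrative assumptions).
import numpy as np

def prox_l1_diag(y, lam, d):
    """argmin_z lam*||z||_1 + 0.5*(z-y)^T diag(d) (z-y): componentwise
    soft-thresholding with per-coordinate threshold lam/d_i."""
    return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

def vm_forward_backward(grad_f, x0, lam, d, iters=200):
    x = x0
    for _ in range(iters):
        y = x - grad_f(x) / d        # forward (gradient) step in the metric D
        x = prox_l1_diag(y, lam, d)  # backward (prox) step in the metric D
    return x
```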
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, both algorithms are computationally attractive
since each Newton iteration requires only the approximate solution of a linear
system of usually small dimension.
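For reference, the forward-backward envelope of F = f + g admits the standard closed form (assuming f smooth with L-Lipschitz gradient, g proper closed convex, and 0 < γ < 1/L):

$$
\varphi_\gamma(x) = f(x) - \frac{\gamma}{2}\,\|\nabla f(x)\|^2
  + g^{\gamma}\big(x - \gamma \nabla f(x)\big),
\qquad
g^{\gamma}(y) = \min_z \Big\{ g(z) + \frac{1}{2\gamma}\|z - y\|^2 \Big\},
$$

where g^γ is the Moreau envelope of g. The FBE is continuously differentiable (when f is twice continuously differentiable) and, for suitable γ, has the same minimizers as f + g, which is what allows Newton-type methods to be applied to a nonsmooth problem.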
Distributed Interior-point Method for Loosely Coupled Problems
In this paper, we put forth distributed algorithms for solving loosely
coupled unconstrained and constrained optimization problems. Such problems are
usually solved using algorithms that are based on a combination of
decomposition and first order methods. These algorithms are commonly very slow
and require many iterations to converge. In order to alleviate this issue, we
propose algorithms that combine the Newton and interior-point methods with
proximal splitting methods for solving such problems. In particular, the
algorithm for solving unconstrained loosely coupled problems is based on
Newton's method and utilizes proximal splitting to distribute the computations
for calculating the Newton step at each iteration. A combination of this
algorithm and the interior-point method is then used to introduce a distributed
algorithm for solving constrained loosely coupled problems. We also provide
guidelines on how to implement the proposed methods efficiently and briefly
discuss the properties of the resulting solutions.
Comment: Submitted to the 19th IFAC World Congress, 2014.
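A minimal sketch of the kind of structure being exploited (illustrative, not the paper's algorithm): for loosely coupled problems the Newton system H d = -g is block-sparse, so it can be solved with a Krylov method in which each matrix-vector product only requires communication between coupled subproblems. Here that product is simulated serially through a user-supplied `matvec`:

```python
# Minimal sketch: solving the Newton system H d = -g by conjugate
# gradient. In a loosely coupled problem H is block-sparse, so the
# matvec decomposes into local products plus neighbor exchange.
import numpy as np

def cg(matvec, b, tol=1e-8, maxit=200):
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Newton step: d = cg(lambda v: H @ v, -g), with H the block-sparse Hessian.
```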
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum, including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
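To make the last point concrete, here is a minimal sketch of the forward-backward (proximal gradient, a.k.a. ISTA) scheme for the sparsity prior mentioned above; the least-squares data term and fixed step size are illustrative assumptions:

```python
# Minimal sketch: forward-backward splitting for
#   minimize 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(y, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def forward_backward(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                   # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)  # backward (prox) step
    return x
```

Under the nondegeneracy conditions reviewed in the chapter, such iterates identify the correct low-dimensional model (here, the support of the sparse solution) after finitely many iterations.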