Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization
In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized
minimization problem are equivalent to those of an SPQN regularized
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular, we show that any accumulation point of the sequence
generated by the IRSVM methods is a first-order stationary point of the
problems. Our computational results demonstrate that the IRSVM methods
generally outperform some recently developed state-of-the-art methods in terms
of solution quality and/or speed.
Comment: This paper has been withdrawn by the author due to major revision and correction.
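The abstract does not spell out the IRSVM update, so the following is a minimal Python sketch under common assumptions: a proximal-gradient-style iteration in which the Schatten-$p$ term is handled by reweighted singular value soft-thresholding, with weights $w_i = p(\sigma_i + \epsilon)^{p-1}$ computed from the previous iterate. Because these weights are nondecreasing along the (decreasing) singular values, the subproblem does admit a closed-form solution, as the abstract states. The function name, the weight formula, and the toy objective are illustrative choices, not details taken from the paper.

import numpy as np

def irsvm(grad_f, X0, lam, p, step, eps=1e-3, iters=200):
    # Sketch of an iterative reweighted singular value minimization scheme for
    # min_X f(X) + lam * ||X||_{S_p}^p with 0 < p < 1: take a gradient step on
    # f, then solve a weighted singular-value soft-thresholding subproblem.
    X = X0.copy()
    for _ in range(iters):
        Y = X - step * grad_f(X)                      # forward step on the smooth part f
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_prev = np.linalg.svd(X, compute_uv=False)   # weights built from the previous iterate
        w = p * (s_prev + eps) ** (p - 1.0)
        s_new = np.maximum(s - step * lam * w, 0.0)   # closed-form weighted soft-thresholding
        X = (U * s_new) @ Vt
    return X

# Toy usage: denoise a rank-5 matrix with f(X) = 0.5 * ||X - M||_F^2.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))
X_hat = irsvm(lambda X: X - M, np.zeros_like(M), lam=0.5, p=0.5, step=1.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))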
Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry
We provide a comprehensive study of the convergence of the forward-backward
algorithm under suitable geometric conditions leading to fast rates. We present
several new results and collect in a unified view a variety of results
scattered in the literature, often providing simplified proofs. Novel
contributions include the analysis of infinite dimensional convex minimization
problems, allowing the case where minimizers might not exist. Further, we
analyze the relation between different geometric conditions, and discuss novel
connections with a priori conditions in linear inverse problems, including
source conditions, restricted isometry properties, and partial smoothness.
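For reference, the forward-backward iteration studied here is the classical splitting x_{k+1} = prox_{step*g}(x_k - step*grad f(x_k)) for minimizing f + g with f smooth. Below is a minimal Python sketch of that generic scheme; the constant step size, the fixed iteration count, and the projection example are assumptions for illustration, not details from the paper.

import numpy as np

def forward_backward(grad_f, prox_g, x0, step, iters=500):
    # Forward-backward splitting: a gradient (forward) step on the smooth term f
    # followed by a proximal (backward) step on the nonsmooth term g.
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: least squares over the nonnegative orthant, where the prox of g
# (the indicator of the orthant) reduces to a projection.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
grad = lambda x: A.T @ (A @ x - b)
proj = lambda z, t: np.maximum(z, 0.0)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
x_star = forward_backward(grad, proj, np.zeros(20), step=1.0 / L)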
From error bounds to the complexity of first-order descent methods for convex functions
This paper shows that error bounds can be used as effective tools for
deriving complexity results for first-order descent methods in convex
minimization. In a first stage, this objective led us to revisit the interplay
between error bounds and the Kurdyka-\L ojasiewicz (KL) inequality. One can
show the equivalence between the two concepts for convex functions having a
moderately flat profile near the set of minimizers (such as functions with
H\"olderian growth). A counterexample shows that the equivalence is no longer
true for extremely flat functions. This fact reveals the relevance of an
approach based on the KL inequality. In a second stage, we show how KL inequalities
can in turn be employed to compute new complexity bounds for a wealth of
descent methods for convex problems. Our approach is completely original and
makes use of a one-dimensional worst-case proximal sequence in the spirit of
the famous majorant method of Kantorovich. Our result applies to a very simple
abstract scheme that covers a wide class of descent methods. As a byproduct of
our study, we also provide new results for the globalization of KL inequalities
in the convex framework.
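For concreteness, the two notions being compared can be written in their standard forms; the exponent and constant conventions below are generic and may differ from the paper's:
\[
\text{H\"olderian growth (error bound):}\qquad f(x) - \min f \;\ge\; \gamma\,\mathrm{dist}\bigl(x,\operatorname{argmin} f\bigr)^{r},\qquad r\ge 1,
\]
\[
\text{KL inequality:}\qquad \varphi'\bigl(f(x)-\min f\bigr)\,\mathrm{dist}\bigl(0,\partial f(x)\bigr)\;\ge\;1,\qquad \varphi(s)=c\,s^{1/r}.
\]
In this convex, moderately flat (finite $r$) setting, a growth exponent $r$ corresponds to a desingularizing function $\varphi(s)=c\,s^{1/r}$, and conversely.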
Our main results inaugurate a simple methodology: derive an error bound,
compute the desingularizing function whenever possible, identify essential
constants in the descent method, and finally compute the complexity using the
one-dimensional worst-case proximal sequence. Our method is illustrated through
projection methods for feasibility problems, and through the famous iterative
shrinkage thresholding algorithm (ISTA), for which we show that the complexity
bound is of the form $O(q^{k})$ where the constituents of the bound only depend
on error bound constants obtained for an arbitrary least squares objective with
$\ell^1$ regularization.
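The ISTA illustration lends itself to a short sketch. Below is a minimal Python version of ISTA for the $\ell^1$-regularized least squares problem it is applied to; the step size rule, iteration count, and toy data are generic choices, and the error-bound constants entering the complexity bound above are not computed here.

import numpy as np

def ista(A, b, lam, iters=1000):
    # ISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1: a gradient step on the
    # least-squares term followed by the closed-form prox of the l1 norm
    # (soft-thresholding).
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)          # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x

# Toy usage on a random sparse-recovery instance.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
x_hat = ista(A, A @ x_true, lam=0.1)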