A variable metric forward-backward method with extrapolation
Forward-backward methods are a very useful tool for the minimization of a
functional given by the sum of a differentiable term and a nondifferentiable
one, and their study has attracted considerable research effort over the last
decade. In this paper we focus on the convex case and,
inspired by recent approaches for accelerating first-order iterative schemes,
we develop a scaled inertial forward-backward algorithm which is based on a
metric changing at each iteration and on a suitable extrapolation step. Unlike
standard forward-backward methods with extrapolation, our scheme is able to
handle functions whose domain is not the entire space. Both a convergence rate
estimate on the objective function values and the convergence of the sequence
of iterates are proved. Numerical experiments on several test problems arising
from image processing, compressed sensing and statistical inference show the
effectiveness of the proposed method in comparison to well-performing
state-of-the-art algorithms.
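As a rough illustration of the kind of update involved, the following sketch applies a scaled inertial forward-backward iteration to a Lasso instance; the diagonal majorizer used as the variable metric, the Nesterov-type extrapolation rule, and all names and data are illustrative assumptions rather than the algorithm proposed in the paper.

```python
import numpy as np

def scaled_inertial_fb(A, b, lam, n_iter=500):
    """Illustrative scaled inertial forward-backward iteration for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, with a diagonal variable metric."""
    n = A.shape[1]
    G = A.T @ A
    d = np.abs(G) @ np.ones(n)          # diagonal majorizer: diag(d) >= A^T A
    x_prev = x = np.zeros(n)
    for k in range(n_iter):
        beta = k / (k + 3.0)            # Nesterov-type extrapolation parameter
        y = x + beta * (x - x_prev)     # extrapolation step
        z = y - (G @ y - A.T @ b) / d   # scaled forward (gradient) step
        # backward step: prox of lam*||.||_1 in the metric diag(d)
        x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)
    return x

# hypothetical usage on random data
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
x_hat = scaled_inertial_fb(A, b, lam=0.1)
```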
New convergence results for the scaled gradient projection method
The aim of this paper is to deepen the convergence analysis of the scaled
gradient projection (SGP) method, proposed by Bonettini et al. in a recent
paper for constrained smooth optimization. The main feature of SGP is the
presence of a variable scaling matrix multiplying the gradient, which may
change at each iteration. In the last few years, an extensive numerical
experimentation showed that SGP equipped with a suitable choice of the scaling
matrix is a very effective tool for solving large scale variational problems
arising in image and signal processing. In spite of the very reliable numerical
results observed, only a weak, though very general, convergence theorem has
been provided so far, establishing that any limit point of the sequence
generated by SGP is stationary. Here, under the sole assumption that the
objective function is
convex and that a solution exists, we prove that the sequence generated by SGP
converges to a minimum point, if the scaling matrices sequence satisfies a
simple and implementable condition. Moreover, assuming that the gradient of the
objective function is Lipschitz continuous, we are also able to prove the
O(1/k) convergence rate with respect to the objective function values. Finally,
we present the results of numerical experiments on some relevant image
restoration problems, showing that the proposed scaling matrix selection rule
also performs well from the computational point of view.
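The following is a minimal sketch of an SGP-type iteration on the nonnegative orthant; the clipped diagonal scaling with a summable deviation sequence mirrors the kind of condition discussed above, while the step size, scaling choice, and problem instance are illustrative assumptions rather than the paper's exact rules.

```python
import numpy as np

def sgp_nonneg(f, grad, x0, n_iter=200):
    """Sketch of a scaled gradient projection iteration on the nonnegative
    orthant, with a diagonal scaling whose deviation from the identity is
    forced to decay summably (the kind of condition ensuring convergence)."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    for k in range(n_iter):
        g = grad(x)
        zeta = 1.0 / (k + 1.0) ** 1.1                    # summable sequence
        d = np.clip(x, 1.0 / (1.0 + zeta), 1.0 + zeta)   # clipped diagonal scaling
        y = np.maximum(x - 1e-2 * d * g, 0.0)            # scaled step + projection
        lam, delta = 1.0, y - x                          # feasible descent direction
        # Armijo backtracking line search along delta
        while f(x + lam * delta) > f(x) + 1e-4 * lam * (g @ delta) and lam > 1e-10:
            lam *= 0.5
        x = x + lam * delta
    return x

# hypothetical usage: nonnegative least squares
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 20)), rng.standard_normal(30)
x_hat = sgp_nonneg(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                   lambda x: A.T @ (A @ x - b), np.zeros(20))
```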
An abstract convergence framework with application to inertial inexact forward-backward methods
In this paper we introduce a novel abstract descent scheme suited for the
minimization of proper and lower semicontinuous functions. The proposed
abstract scheme generalizes a set of properties that are crucial for the
convergence of several first-order methods designed for nonsmooth nonconvex
optimization problems. Such properties guarantee the convergence of the full
sequence of iterates to a stationary point, if the objective function satisfies
the Kurdyka-Lojasiewicz property. The abstract framework allows for the design
of new algorithms. We propose two inertial-type algorithms with implementable
inexactness criteria for the main iteration update step. The first algorithm,
iPiano, exploits large steps by adjusting a local Lipschitz constant. The
second algorithm, iPila, overcomes the main drawback of line-search based
methods by enforcing a descent only on a merit function instead of the
objective function. Both algorithms have the potential to escape local
minimizers (or stationary points) by leveraging the inertial feature. Moreover,
they are proved to enjoy the full convergence guarantees of the abstract
descent scheme, which is the best we can expect in such a general nonsmooth
nonconvex optimization setup using first-order methods. The efficiency of the
proposed algorithms is demonstrated on two exemplary image deblurring problems,
where we can appreciate the benefits of performing a linesearch along the
descent direction inside an inertial scheme.
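For background on the kind of update these algorithms build on, the sketch below performs one exact-prox inertial step of iPiano type, with the admissible step size bound alpha < 2*(1 - beta)/L known from the iPiano literature; the paper's implementable inexactness criteria and merit-function line search are not reproduced here, and all names are illustrative.

```python
import numpy as np

def ipiano_step(x, x_prev, grad_f, L, lam, beta=0.75):
    """One inertial proximal-gradient (iPiano-type) step for
    min_x f(x) + lam*||x||_1, with f smooth (possibly nonconvex) and
    an L-Lipschitz gradient; requires alpha < 2*(1 - beta)/L."""
    alpha = 0.9 * 2.0 * (1.0 - beta) / L              # admissible step size
    z = x - alpha * grad_f(x) + beta * (x - x_prev)   # gradient + inertial term
    return np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)  # l1 prox

# hypothetical usage on a least-squares smooth term
A = np.array([[1.0, 2.0], [3.0, 4.0]]); b = np.array([1.0, -1.0])
L = np.linalg.norm(A, 2) ** 2
x = x_prev = np.zeros(2)
for _ in range(100):
    x, x_prev = ipiano_step(x, x_prev, lambda v: A.T @ (A @ v - b), L, lam=0.1), x
```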
Activity Identification and Local Linear Convergence of Forward-Backward-type Methods
In this paper, we consider a class of Forward--Backward (FB) splitting
methods that includes several variants (e.g. inertial schemes, FISTA) for
minimizing the sum of two proper convex and lower semi-continuous functions,
one of which has a Lipschitz continuous gradient, and the other is partly
smooth relative to a smooth active manifold. We propose a unified framework
under which we show that this class of FB-type algorithms
(i) correctly identifies the active manifolds in a finite number of iterations
(finite activity identification), and (ii) then enters a local linear
convergence regime, which we characterize precisely in terms of the structure
of the underlying active manifolds. For simpler problems involving polyhedral
functions, we show finite termination. We also establish and explain why FISTA
(with convergent sequences) locally oscillates and can be slower than FB. These
results may have numerous applications including in signal/image processing,
sparse recovery and machine learning. Indeed, the obtained results explain the
typical behaviour that has been observed numerically for many problems in these
fields such as the Lasso, the group Lasso, the fused Lasso and the nuclear norm
regularization, to name only a few.
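The finite identification phenomenon is easy to observe numerically. The snippet below, an illustration rather than the paper's experiments, runs plain forward-backward (ISTA) on a Lasso instance and records the last iteration at which the support of the iterate, i.e. the active manifold of the l1 norm, changes.

```python
import numpy as np

def fb_support_identification(A, b, lam, n_iter=3000):
    """Run plain forward-backward (ISTA) on the Lasso and report the last
    iteration at which the support of the iterate changed."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L
    x, supports = np.zeros(A.shape[1]), []
    for _ in range(n_iter):
        z = x - alpha * (A.T @ (A @ x - b))   # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)  # backward step
        supports.append(tuple(np.flatnonzero(x)))
    changes = [k for k in range(1, n_iter) if supports[k] != supports[k - 1]]
    return x, (changes[-1] if changes else 0)

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 100)), rng.standard_normal(50)
x_hat, k_id = fb_support_identification(A, b, lam=1.0)
print(f"support stabilized after iteration {k_id}")
```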
On Quasi-Newton Forward-Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward-backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
± rank-r symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-r modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piece-wise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against
state-of-the-art. The algorithm has extensive applications including signal
processing, sparse recovery, machine learning and classification, to name a few.
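To convey the flavor of this proximal calculus, the sketch below computes the prox of the l1 norm in a metric diag(d) + u u^T by reducing it, via the optimality condition, to a one-dimensional root-finding problem. This is a simplified rank-one sketch using generic root finding; the paper instead derives formulas that exploit the piecewise-linear structure of the dual problem for an exact solve.

```python
import numpy as np
from scipy.optimize import brentq

def prox_l1_diag(z, lam, d):
    """Prox of lam*||.||_1 in the diagonal metric diag(d) (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)

def prox_l1_rank1(x, lam, d, u):
    """Prox of lam*||.||_1 in the metric V = diag(d) + u u^T.
    Writing alpha = u^T (x - y) at the solution y, the optimality condition
    gives y = prox_l1_diag(x + alpha * u / d), where alpha solves a strictly
    decreasing scalar equation."""
    def residual(alpha):
        y = prox_l1_diag(x + alpha * u / d, lam, d)
        return u @ (x - y) - alpha
    lo, hi = -1.0, 1.0
    while residual(lo) < 0.0: lo *= 2.0   # expand bracket toward -inf
    while residual(hi) > 0.0: hi *= 2.0   # expand bracket toward +inf
    return prox_l1_diag(x + brentq(residual, lo, hi) * u / d, lam, d)

# hypothetical usage
rng = np.random.default_rng(0)
x, d, u = rng.standard_normal(10), np.ones(10), 0.3 * rng.standard_normal(10)
y = prox_l1_rank1(x, lam=0.5, d=d, u=u)
```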
An inertial forward-backward algorithm for monotone inclusions
In this paper, we propose an inertial forward-backward splitting algorithm to
compute a zero of the sum of two monotone operators, with one of the two
operators being co-coercive. The algorithm is inspired by the accelerated
gradient method of Nesterov, but can be applied to a much larger class of
problems including convex-concave saddle point problems and general monotone
inclusions. We prove convergence of the algorithm in a Hilbert space setting
and show that several recently proposed first-order methods can be obtained as
special cases of the general algorithm. Numerical results show that the
proposed algorithm converges faster than existing methods, while keeping the
computational cost of each iteration basically unchanged.
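The structure of one iteration, a forward evaluation of the co-coercive operator followed by a resolvent (backward) step on the other operator after an inertial extrapolation, can be sketched as follows; the affine monotone operator, the projection resolvent, and the conservatively chosen parameters are illustrative assumptions, not the paper's sharp convergence conditions.

```python
import numpy as np

def inertial_fb(resolvent_A, B, x0, tau, alpha, n_iter=500):
    """Inertial forward-backward iteration for finding a zero of A + B,
    with B co-coercive; resolvent_A computes J_{tau*A} = (I + tau*A)^{-1}."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)                 # inertial extrapolation
        x_prev, x = x, resolvent_A(y - tau * B(y))   # forward-backward step
    return x

# hypothetical instance: A = normal cone of the nonnegative orthant
# (resolvent = projection), B(x) = M x - c with M symmetric positive definite
rng = np.random.default_rng(1)
M0 = rng.standard_normal((5, 5))
M = M0 @ M0.T + np.eye(5)                # SPD, hence B is co-coercive
c = rng.standard_normal(5)
x_star = inertial_fb(lambda z: np.maximum(z, 0.0),
                     lambda z: M @ z - c,
                     np.zeros(5), tau=1.0 / np.linalg.norm(M, 2), alpha=0.2)
```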