Convergence Rates with Inexact Non-expansive Operators
In this paper, we present a convergence rate analysis for the inexact
Krasnosel'skii-Mann iteration built from nonexpansive operators. Our results
include two main parts: we first establish global pointwise and ergodic
iteration-complexity bounds, and then, under a metric subregularity assumption,
we establish local linear convergence for the distance of the iterates to the
set of fixed points. The obtained iteration-complexity result can be applied to
analyze the convergence rate of various monotone operator splitting methods in
the literature, including the Forward-Backward, the Generalized
Forward-Backward, Douglas-Rachford, alternating direction method of multipliers
(ADMM) and Primal-Dual splitting methods. For these methods, we also develop
easily verifiable termination criteria for finding an approximate solution,
which can be seen as a generalization of the termination criterion for the
classical gradient descent method. We finally develop a parallel analysis for
the non-stationary Krasnosel'skii-Mann iteration. The usefulness of our results
is illustrated by applying them to a large class of structured monotone
inclusion and convex optimization problems. Experiments on some large scale
inverse problems in signal and image processing are shown.

Comment: This is an extended version of the work presented in
http://arxiv.org/abs/1310.6636, and has been accepted by Mathematical
Programming.
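To make the iteration being analyzed concrete, here is a minimal Python sketch of the (exact) Krasnosel'skii-Mann scheme x_{k+1} = x_k + lam*(T(x_k) - x_k), where the fixed-point residual ||T(x_k) - x_k|| plays the role of the easily verifiable termination criterion mentioned above. The affine operator used below is a hypothetical example chosen only because its nonexpansiveness (spectral norm at most 1) is easy to check; it is not an operator from the paper.

```python
import numpy as np

def km_iterate(T, x0, lam=0.5, tol=1e-8, max_iter=10000):
    """Krasnosel'skii-Mann iteration x_{k+1} = x_k + lam * (T(x_k) - x_k).

    For a nonexpansive operator T and lam in (0, 1), the iterates converge
    to a fixed point of T; the residual norm ||T(x_k) - x_k|| serves as a
    verifiable termination criterion.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = T(x) - x                      # fixed-point residual
        if np.linalg.norm(r) < tol:       # termination criterion
            break
        x = x + lam * r
    return x, k

# Hypothetical nonexpansive affine map T(x) = A x + b with ||A|| = 0.9 <= 1;
# its unique fixed point solves (I - A) x = b.
A = np.array([[0.0, -0.9],
              [0.9,  0.0]])
b = np.array([1.0, 0.5])
x_star, iters = km_iterate(lambda x: A @ x + b, np.zeros(2))
```

Because this particular T is in fact a 0.9-contraction, the residual decreases geometrically and the loop terminates well before `max_iter`.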
On the Global Linear Convergence of the ADMM with Multi-Block Variables
The alternating direction method of multipliers (ADMM) has been widely used
for solving structured convex optimization problems. In particular, the ADMM
can solve convex programs that minimize the sum of N convex functions with
N-block variables linked by some linear constraints. While the convergence of
the ADMM for N = 2 was well established in the literature, it remained an open
problem for a long time whether or not the ADMM for N ≥ 3 is still
convergent. Recently, it was shown in [3] that without further conditions the
ADMM for N ≥ 3 may actually fail to converge. In this paper, we show that
under some easily verifiable and reasonable conditions the global linear
convergence of the ADMM when N ≥ 3 can still be assured, which is important
since the ADMM is a popular method for solving large scale multi-block
optimization models and is known to perform very well in practice even when
N ≥ 3. Our study aims to offer an explanation for this phenomenon
- …
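To make the multi-block setting concrete, the following is a toy Python sketch (not the paper's analysis) of Gauss-Seidel 3-block ADMM on a small strongly convex quadratic program; strong convexity of each block's objective is representative of the kind of easily verifiable condition under which multi-block convergence can hold. The problem instance and penalty parameter are illustrative choices.

```python
import numpy as np

def admm_3block(c, b, rho=1.0, n_iter=500):
    """Gauss-Seidel 3-block ADMM (scaled dual form) for the toy problem

        min_x  sum_i ||x_i - c_i||^2 / 2   s.t.  x_1 + x_2 + x_3 = b,

    a strongly convex instance where each block update has a closed form.
    """
    x = [np.zeros_like(b) for _ in range(3)]
    u = np.zeros_like(b)  # scaled dual variable for the linear constraint
    for _ in range(n_iter):
        for i in range(3):
            s = sum(x[j] for j in range(3) if j != i)
            # block update: argmin_xi ||xi - ci||^2/2 + rho/2 ||xi + s - b + u||^2
            x[i] = (c[i] + rho * (b - s - u)) / (1.0 + rho)
        u = u + sum(x) - b  # dual update on the constraint residual
    return x

c = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
b = np.array([1.0, 1.0])
x = admm_3block(c, b)
```

On this instance the optimum is x_i* = c_i + (b - (c_1 + c_2 + c_3)) / 3, which the iterates approach linearly; the divergent examples of [3] show this behavior cannot be taken for granted without such conditions.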