Peaceman-Rachford splitting for a class of nonconvex optimization problems
We study the applicability of the Peaceman-Rachford (PR) splitting method for
solving nonconvex optimization problems. When applied to minimizing the sum of
a strongly convex Lipschitz differentiable function and a proper closed
function, we show that if the strongly convex function has a large enough
strong convexity modulus and the step-size parameter is chosen below a
threshold that is computable, then any cluster point of the generated
sequence, if one exists, is a stationary point of the optimization problem. We also
give sufficient conditions guaranteeing boundedness of the sequence generated.
We then discuss one way to split the objective so that the proposed method can
be suitably applied to solving optimization problems with a coercive objective
that is the sum of a (not necessarily strongly) convex Lipschitz differentiable
function and a proper closed function; this setting covers a large class of
nonconvex feasibility problems and constrained least squares problems. Finally,
we illustrate the proposed algorithm numerically.
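The core Peaceman-Rachford iteration described above can be sketched as follows. This is a minimal illustration on a toy convex instance (a strongly convex quadratic plus an l1 term standing in for the proper closed function); the step-size gamma and the test problem are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def prox_quad(v, a, gamma):
    # prox of f(x) = 0.5*||x - a||^2 (strongly convex):
    # argmin_x f(x) + (1/(2*gamma))*||x - v||^2
    return (v + gamma * a) / (1.0 + gamma)

def prox_l1(v, t):
    # prox of t*||x||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def peaceman_rachford(a, lam, gamma=0.5, iters=100):
    # PR splitting: z_{k+1} = (2*prox_{gamma*g} - I)(2*prox_{gamma*f} - I) z_k
    z = np.zeros_like(a)
    for _ in range(iters):
        x = prox_quad(z, a, gamma)
        w = 2 * x - z                  # reflection through prox of f
        y = prox_l1(w, gamma * lam)
        z = 2 * y - w                  # reflection through prox of g
    return prox_quad(z, a, gamma)      # candidate stationary point

a = np.array([3.0, -0.5, 1.0])
x_star = peaceman_rachford(a, lam=1.0)
print(x_star)  # converges to the soft-threshold of a, i.e. approx [2, 0, 0]
```

Because f here is strongly convex, the reflected prox of f is a contraction, so the composed iteration converges geometrically; this is the mechanism behind the strong-convexity-modulus condition mentioned in the abstract.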
On Convergence of Heuristics Based on Douglas-Rachford Splitting and ADMM to Minimize Convex Functions over Nonconvex Sets
Recently, heuristics based on the Douglas-Rachford splitting algorithm and
the alternating direction method of multipliers (ADMM) have found empirical
success in minimizing convex functions over nonconvex sets, but little has
been done to improve their theoretical understanding. In this paper, we
investigate convergence of these heuristics. First, we characterize optimal
solutions of minimization problems involving convex cost functions over
nonconvex constraint sets. We show that these optimal solutions are related to
the fixed point set of the underlying nonconvex Douglas-Rachford operator.
Next, we establish sufficient conditions under which the Douglas-Rachford
splitting heuristic either converges to a point or its cluster points form a
nonempty compact connected set. In the case where the heuristic converges to a
point, we establish sufficient conditions for that point to be an optimal
solution. Then, we discuss how the ADMM heuristic can be constructed from the
Douglas-Rachford splitting algorithm. We show that, unlike in the convex case,
the algorithms in our nonconvex setup are not equivalent and are related in a
rather involved way. Finally, we comment on convergence
of the ADMM heuristic and compare it with the Douglas-Rachford splitting
heuristic.
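The Douglas-Rachford heuristic for a convex cost over a nonconvex set can be sketched as below, here minimizing a quadratic over the nonconvex Boolean set {-1, +1}^n. The objective, constraint set, and step size are assumptions chosen for illustration, not the paper's experiments.

```python
import numpy as np

def prox_quad(v, c, gamma):
    # prox of the convex cost f(x) = 0.5*||x - c||^2
    return (v + gamma * c) / (1.0 + gamma)

def proj_pm1(v):
    # projection onto the nonconvex set {-1, +1}^n (ties broken toward +1)
    return np.where(v >= 0.0, 1.0, -1.0)

def douglas_rachford(c, gamma=0.5, iters=100):
    # DR heuristic: x = prox_f(z); y = proj_C(2x - z); z <- z + y - x
    z = np.zeros_like(c)
    for _ in range(iters):
        x = prox_quad(z, c, gamma)
        y = proj_pm1(2 * x - z)
        z = z + y - x
    return y  # feasible candidate solution from the projection step

c = np.array([0.3, -2.0, 1.2])
sol = douglas_rachford(c)
print(sol)  # the closest vertex of {-1,1}^3 to c, i.e. [1, -1, 1]
```

The projection onto the nonconvex set is generally a set-valued map, which is why, as the abstract notes, the cluster points need not be a single point and fixed points need careful characterization.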
Scalable Peaceman-Rachford Splitting Method with Proximal Terms
Since the development of the Peaceman-Rachford splitting method (PRSM), many
batch algorithms based on it have been studied in depth, but almost no work
has focused on the performance of stochastic versions of PRSM. In this
paper, we propose a new stochastic algorithm based on PRSM, prove its
convergence rate in ergodic sense, and test its performance on both artificial
and real data. We show that our proposed algorithm, Stochastic Scalable PRSM
(SS-PRSM), achieves the same convergence rate as the newest ADMM-based
stochastic algorithms, which is faster than that of general stochastic ADMM.
Our algorithm is also highly flexible, outperforms many state-of-the-art
stochastic algorithms derived from ADMM, and has low memory cost in large-scale
splitting optimization problems.
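As a rough illustration of the stochastic setting, here is a sketch of a PRSM-style iteration (symmetric ADMM with two dual updates per pass) in which the smooth subproblem is replaced by a linearized step using a single-sample stochastic gradient. The problem (a small lasso), step sizes, and exact update form are assumptions for illustration; this is not the SS-PRSM algorithm itself.

```python
import numpy as np

def soft(v, t):
    # soft-thresholding: prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prsm(A, b, lam, rho=1.0, eta0=0.5, iters=2000, seed=0):
    # min_x f(x) + lam*||x||_1 with f(x) = (1/(2n))*||Ax - b||^2, split as x = z.
    # PRSM structure: x-step, dual update, z-step, second dual update;
    # the x-step is linearized around a single-sample stochastic gradient.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
    x_sum = np.zeros(d)
    for k in range(iters):
        i = rng.integers(n)
        g = A[i] * (A[i] @ x - b[i])       # stochastic gradient of f at x
        eta = eta0 / np.sqrt(k + 1)        # decaying step size
        x = (rho * (z - u) + x / eta - g) / (rho + 1.0 / eta)
        u = u + 0.5 * (x - z)              # first (symmetric) dual update
        z = soft(x + u, lam / rho)         # exact prox for the l1 block
        u = u + 0.5 * (x - z)              # second dual update
        x_sum += x
    return x_sum / iters                   # ergodic (averaged) iterate

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
b = A @ np.array([1.0, -2.0, 0.0]) + 0.1 * rng.standard_normal(50)
x_bar = stochastic_prsm(A, b, lam=0.1)
obj = lambda x: 0.5 * np.mean((A @ x - b) ** 2) + 0.1 * np.abs(x).sum()
print(obj(x_bar) < obj(np.zeros(3)))  # the averaged iterate improves on x = 0
```

Averaging the iterates is what makes the guarantee "in the ergodic sense": rates of this kind are stated for the running average rather than for the last iterate.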