73 research outputs found

    A semi-proximal-based strictly contractive Peaceman-Rachford splitting method

    The Peaceman-Rachford splitting method is very efficient for minimizing the sum of two functions, each depending on its own variable, subject to a linear equality constraint. However, its convergence is not guaranteed without extra requirements. Very recently, He et al. (SIAM J. Optim. 24: 1011-1040, 2014) proved the convergence of a strictly contractive Peaceman-Rachford splitting method by employing a suitable underdetermined relaxation factor. In this paper, we further extend the so-called strictly contractive Peaceman-Rachford splitting method by using two different relaxation factors, and, to make the method more flexible, we introduce semi-proximal terms into the subproblems. We characterize the relation between these two factors and show that one factor is always underdetermined while the other is allowed to be larger than 1. Such a flexible condition makes it possible to cover Glowinski's ADMM with a larger step size. We show that the proposed modified strictly contractive Peaceman-Rachford splitting method is convergent and also prove an O(1/t) convergence rate in the ergodic and nonergodic senses, respectively. Numerical tests on an extensive collection of problems demonstrate the efficiency of the proposed method.
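
    For orientation, the iteration sketched in this abstract alternates the two subproblem solves with a relaxed dual update after each one. Below is a minimal, illustrative sketch for a toy lasso-type instance, min 0.5||Cx - d||^2 + mu||z||_1 subject to x - z = 0; the penalty beta, the semi-proximal weight tau, and the two relaxation factors alpha and gamma are made-up values for illustration, not the settings analyzed in the paper.

        import numpy as np

        def sc_prsm_lasso(C, d, mu, beta=1.0, alpha=0.9, gamma=1.2, tau=0.1, iters=200):
            """Sketch of a strictly contractive PRSM with two relaxation factors
            (alpha, gamma) and a semi-proximal term (tau) on the x-subproblem."""
            n = C.shape[1]
            x = np.zeros(n); z = np.zeros(n); lam = np.zeros(n)
            # factor of the x-subproblem normal equations: C^T C + (beta + tau) I
            M = C.T @ C + (beta + tau) * np.eye(n)
            for _ in range(iters):
                # x-step: argmin 0.5||Cx-d||^2 + lam^T x + beta/2||x-z||^2 + tau/2||x-x_prev||^2
                x = np.linalg.solve(M, C.T @ d - lam + beta * z + tau * x)
                lam = lam + alpha * beta * (x - z)      # first dual update, factor alpha
                v = x + lam / beta                      # z-step: prox of mu||.||_1
                z = np.sign(v) * np.maximum(np.abs(v) - mu / beta, 0.0)
                lam = lam + gamma * beta * (x - z)      # second dual update, factor gamma
            return x, z

        rng = np.random.default_rng(0)
        C = rng.standard_normal((30, 10)); d = rng.standard_normal(30)
        x, z = sc_prsm_lasso(C, d, mu=0.1)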

    Scalable Peaceman-Rachford Splitting Method with Proximal Terms

    Along with the development of the Peaceman-Rachford Splitting Method (PRSM), many batch algorithms based on it have been studied in depth, but almost no work has focused on the performance of stochastic versions of PRSM. In this paper, we propose a new stochastic algorithm based on PRSM, prove its convergence rate in the ergodic sense, and test its performance on both artificial and real data. We show that our proposed algorithm, Stochastic Scalable PRSM (SS-PRSM), enjoys an O(1/K) convergence rate, which matches the newest stochastic algorithms based on ADMM and is faster than the general stochastic ADMM (which is O(1/\sqrt{K})). Our algorithm is also highly flexible, outperforms many state-of-the-art stochastic algorithms derived from ADMM, and has low memory cost in large-scale splitting optimization problems.
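
    As a rough illustration of what a stochastic PRSM step can look like, the sketch below replaces the exact x-subproblem with a single linearized (proximal-gradient-style) step on a sampled minibatch; it is a generic construction under that assumption, not the SS-PRSM algorithm proposed in the paper, and all parameter names are hypothetical.

        import numpy as np

        def stochastic_prsm_step(x, z, lam, data, labels, beta, alpha, gamma,
                                 eta, mu, batch=32, rng=None):
            """One PRSM-style iteration with a stochastic, linearized x-step for
            min_x mean_i 0.5*(a_i^T x - b_i)^2 + mu||z||_1  s.t.  x - z = 0."""
            rng = rng or np.random.default_rng()
            idx = rng.choice(len(labels), size=batch, replace=False)
            A, b = data[idx], labels[idx]
            g = A.T @ (A @ x - b) / batch                # minibatch gradient of the loss
            # linearized x-step: gradient step on loss + augmented-Lagrangian coupling
            x = x - eta * (g + lam + beta * (x - z))
            lam = lam + alpha * beta * (x - z)           # first relaxed dual update
            v = x + lam / beta                           # z-step: prox of mu||.||_1
            z = np.sign(v) * np.maximum(np.abs(v) - mu / beta, 0.0)
            lam = lam + gamma * beta * (x - z)           # second relaxed dual update
            return x, z, lam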

    Tight Global Linear Convergence Rate Bounds for Douglas-Rachford Splitting

    Recently, several authors have shown local and global convergence rate results for Douglas-Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly with that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas-Rachford operator is contractive.
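
    The object under analysis is the Douglas-Rachford operator itself: one application composes the two resolvents with a relaxation step. The sketch below instantiates it for a toy convex problem, min 0.5 x^T P x - q^T x + mu||x||_1, where the quadratic part is strongly monotone and Lipschitz; the data and parameters are made up, and the paper's monotone-inclusion setting is more general than this example.

        import numpy as np

        def drs_operator(z, P, q, mu, gamma=1.0, theta=1.0):
            """One application of the relaxed Douglas-Rachford operator
            z -> z + theta * (prox_{gamma g}(2x - z) - x), with x = prox_{gamma f}(z)."""
            n = len(z)
            # resolvent of A = grad f, where f(x) = 0.5 x^T P x - q^T x
            x = np.linalg.solve(np.eye(n) + gamma * P, z + gamma * q)
            r = 2 * x - z
            # resolvent of B = subdifferential of mu||.||_1 (soft-thresholding)
            y = np.sign(r) * np.maximum(np.abs(r) - gamma * mu, 0.0)
            return z + theta * (y - x)

        rng = np.random.default_rng(1)
        W = rng.standard_normal((8, 8))
        P = W @ W.T + 0.5 * np.eye(8)        # strongly monotone, Lipschitz gradient
        q = rng.standard_normal(8)
        z = rng.standard_normal(8)
        for k in range(50):
            z_new = drs_operator(z, P, q, mu=0.2)
            if k % 10 == 0:
                # fixed-point residual decays geometrically when the operator is contractive
                print(k, np.linalg.norm(z_new - z))
            z = z_new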

    Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM

    Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select the step-size and metric for the algorithm so as to optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods.
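
    For problems that do satisfy the assumptions (one function sigma-strongly convex and L-smooth), a selection rule that recurs in this line of work sets the step-size to 1/sqrt(sigma*L), with a contraction factor governed by the condition number. The helper below is a sketch of that standard heuristic, not necessarily the exact parameters or metric derived in the paper.

        import math

        def drs_stepsize_heuristic(sigma, L):
            """Step-size and rate-factor estimate for Douglas-Rachford splitting
            when one function is sigma-strongly convex and L-smooth (heuristic)."""
            gamma = 1.0 / math.sqrt(sigma * L)                       # step-size
            kappa = L / sigma                                        # condition number
            rate = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)   # contraction factor
            return gamma, rate

        gamma, rate = drs_stepsize_heuristic(sigma=0.5, L=50.0)
        print(gamma, rate)   # 0.2, ~0.818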

    A faster prediction-correction framework for solving convex optimization problems

    He and Yuan's prediction-correction framework [SIAM J. Numer. Anal. 50: 700-709, 2012] is able to provide convergent algorithms for solving convex optimization problems at a rate of O(1/t) in both the ergodic and pointwise senses. This paper presents a faster prediction-correction framework with a rate of O(1/t) in the non-ergodic sense and O(1/t^2) in the pointwise sense, without any additional assumptions. Interestingly, it provides a faster algorithm for solving multi-block separable convex optimization problems with linear equality or inequality constraints.
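
    For readers unfamiliar with the framework being accelerated, He and Yuan's prediction-correction template reformulates the optimality conditions as a variational inequality with objective part theta and monotone operator F, then alternates a prediction step (governed by a matrix Q) with a matrix-driven correction step (governed by a matrix M). Recalled here from memory, so notation and conditions may differ slightly from the cited paper:

        % Prediction: find \tilde{w}^k \in \Omega such that, for all w \in \Omega,
        \theta(u) - \theta(\tilde{u}^k) + (w - \tilde{w}^k)^{\top} F(\tilde{w}^k)
            \;\ge\; (w - \tilde{w}^k)^{\top} Q \, (w^k - \tilde{w}^k).
        % Correction:
        w^{k+1} = w^k - M \, (w^k - \tilde{w}^k).
        % Typical sufficient conditions for convergence:
        H := Q M^{-1} \succ 0, \qquad  G := Q^{\top} + Q - M^{\top} H M \succeq 0.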