10 research outputs found

    Precompact convergence of the nonconvex Primal–Dual Hybrid Gradient algorithm

    The Primal–Dual Hybrid Gradient (PDHG) algorithm is a powerful method that has been used frequently in recent years to solve saddle-point optimization problems. The classical setting considers convex functions and is well studied in the literature. In this paper, we consider the convergence of an alternative formulation of the PDHG algorithm in the nonconvex case under a precompactness assumption. The proofs are based on Kurdyka–Łojasiewicz functions, which cover a wide range of problems. A simple numerical experiment illustrates the convergence properties.
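    For readers unfamiliar with the method, the sketch below shows the classical convex PDHG (Chambolle–Pock) iteration on a toy 1-D total-variation denoising problem. The problem data, step sizes, and the helper pdhg_tv_denoise are illustrative assumptions and are not taken from the paper, which studies a nonconvex variant.

    # A minimal sketch (not the paper's nonconvex variant): classical PDHG for
    # min_x f(x) + g(Kx) with f(x) = 0.5*||x - z||^2, g = lam*||.||_1 and K a
    # 1-D forward-difference operator, i.e. a small total-variation denoiser.
    import numpy as np

    def pdhg_tv_denoise(z, lam=0.5, n_iter=500, theta=1.0):
        n = z.size
        K = np.diff(np.eye(n), axis=0)        # forward differences, shape (n-1, n)
        L = np.linalg.norm(K, 2)              # operator norm of K
        tau = sigma = 0.9 / L                 # ensures tau * sigma * L**2 < 1
        x, x_bar, y = z.copy(), z.copy(), np.zeros(n - 1)
        for _ in range(n_iter):
            # dual ascent: prox of g* = projection onto [-lam, lam]
            y = np.clip(y + sigma * K @ x_bar, -lam, lam)
            # primal descent: prox of tau*f with f = 0.5*||x - z||^2
            x_new = (x - tau * K.T @ y + tau * z) / (1.0 + tau)
            # extrapolation; theta = 1 gives the standard Chambolle-Pock scheme
            x_bar = x_new + theta * (x_new - x)
            x = x_new
        return x

    noisy = np.repeat([0.0, 1.0], 25) + 0.1 * np.random.default_rng(0).standard_normal(50)
    denoised = pdhg_tv_denoise(noisy)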

    A universal accelerated primal-dual method for convex optimization problems

    This work presents a universal accelerated first-order primal-dual method for affinely constrained convex optimization problems. It can handle both Lipschitz and Hölder continuous gradients and does not need to know the smoothness level of the objective function. In the line search part, it uses dynamically decreasing parameters and produces an approximate Lipschitz constant of moderate magnitude. In addition, based on a suitable discrete Lyapunov function and tight decay estimates for some differential/difference inequalities, a universal optimal mixed-type convergence rate is established. Numerical tests are provided to confirm the efficiency of the proposed method.
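    As a point of reference, the sketch below shows a generic backtracking estimate of a local Lipschitz constant via the quadratic upper bound f(y) <= f(x) + <grad f(x), y - x> + (L/2)||y - x||^2. The shrink/grow parameters and the helper backtrack_lipschitz are hypothetical and do not reproduce the paper's line search.

    # A generic backtracking sketch (not the paper's line search): grow a local
    # Lipschitz estimate L until the quadratic upper bound holds, starting from
    # a slightly decreased previous estimate so the estimate can also shrink.
    import numpy as np

    def backtrack_lipschitz(f, grad_f, x, L0=1.0, shrink=0.9, grow=2.0, max_tries=50):
        g, fx = grad_f(x), f(x)
        L = shrink * L0
        for _ in range(max_tries):
            y = x - g / L                                   # tentative gradient step
            upper = fx + g @ (y - x) + 0.5 * L * (y - x) @ (y - x)
            if f(y) <= upper + 1e-12:
                return y, L                                 # accepted step and estimate
            L *= grow                                       # bound violated: increase L
        return y, L

    # usage on f(x) = 0.5*||A x - b||^2
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad_f = lambda x: A.T @ (A @ x - b)
    x, L = np.zeros(5), 1.0
    for _ in range(100):
        x, L = backtrack_lipschitz(f, grad_f, x, L0=L)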

    Block-proximal methods with spatially adapted acceleration

    We study and develop (stochastic) primal-dual block-coordinate descent methods for convex problems, based on the method due to Chambolle and Pock. Our methods have known convergence rates for the iterates and the ergodic gap: O(1/N^2) if each block is strongly convex, O(1/N) if no strong convexity is present, and, more generally, a mixed rate O(1/N^2) + O(1/N) if only some blocks are strongly convex. Additional novelties of our methods include blockwise-adapted step lengths and acceleration, as well as the ability to update both the primal and dual variables randomly in blocks under a very light compatibility condition; in other words, these variants of our methods are doubly stochastic. We test the proposed methods on various image processing problems, where we employ pixelwise-adapted acceleration.
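    The schematic sketch below illustrates the general shape of such a doubly-stochastic scheme: random subsets of primal and dual blocks are updated with their own step lengths. The block layout, the selection probability p, and the absence of extrapolation are simplifying assumptions; the cited methods additionally use blockwise acceleration and a compatibility condition on the sampling.

    # A schematic doubly-stochastic block-coordinate primal-dual sketch for
    # min_x max_y <K x, y> + sum_j f_j(x_j) - sum_i g_i^*(y_i); K_blocks[i][j]
    # is the (i, j) block of K, and prox_f[j] / prox_gstar[i] take (point, step).
    # This is a simplified illustration, not the accelerated method of the paper.
    import numpy as np

    def block_pdhg(K_blocks, prox_f, prox_gstar, x0, y0, tau, sigma,
                   p=0.5, n_iter=1000, seed=0):
        rng = np.random.default_rng(seed)
        x = [xj.copy() for xj in x0]
        y = [yi.copy() for yi in y0]
        m, n = len(y), len(x)
        for _ in range(n_iter):
            for i in range(m):                 # dual blocks, each updated with prob. p
                if rng.random() < p:
                    Kx_i = sum(K_blocks[i][j] @ x[j] for j in range(n))
                    y[i] = prox_gstar[i](y[i] + sigma[i] * Kx_i, sigma[i])
            for j in range(n):                 # primal blocks, each updated with prob. p
                if rng.random() < p:
                    KTy_j = sum(K_blocks[i][j].T @ y[i] for i in range(m))
                    x[j] = prox_f[j](x[j] - tau[j] * KTy_j, tau[j])
        return x, y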

    A Communication-Efficient and Privacy-Aware Distributed Algorithm for Sparse PCA

    Sparse principal component analysis (PCA) improves the interpretability of classic PCA by introducing sparsity into the dimension-reduction process. Optimization models for sparse PCA, however, are generally non-convex, non-smooth, and more difficult to solve, especially on large-scale datasets that require distributed computation over a wide network. In this paper, we develop a distributed and centralized algorithm called DSSAL1 for sparse PCA that aims to achieve low communication overhead by adapting a newly proposed subspace-splitting strategy to accelerate convergence. Theoretically, convergence to stationary points is established for DSSAL1. Extensive numerical results show that DSSAL1 requires far fewer rounds of communication than state-of-the-art peer methods. In addition, we make the case that, since the messages exchanged in DSSAL1 are well masked, the possibility of private-data leakage is much lower than in some other distributed algorithms.
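    For context, a common ℓ1-regularized formulation of sparse PCA (one of several in the literature, and not necessarily the exact model solved by DSSAL1) is

        \min_{X \in \mathbb{R}^{n \times k}} \; -\operatorname{tr}\!\bigl(X^{\top} A^{\top} A X\bigr) + \mu \,\|X\|_{1}
        \quad \text{subject to} \quad X^{\top} X = I_{k},

    where A is the centered data matrix, \mu > 0 controls the sparsity level, and \|X\|_1 denotes the entrywise ℓ1 norm. The orthogonality constraint makes the feasible set nonconvex and the ℓ1 term makes the objective nonsmooth, which is the difficulty the abstract refers to.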

    Duality-based Higher-order Non-smooth Optimization on Manifolds

    We propose a method for solving non-smooth optimization problems on manifolds. In order to obtain superlinear convergence, we apply a Riemannian Semi-smooth Newton method to a non-smooth non-linear primal-dual optimality system based on a recent extension of Fenchel duality theory to Riemannian manifolds. We also propose an inexact version of the Riemannian Semi-smooth Newton method and prove conditions for local linear and superlinear convergence. Numerical experiments on ℓ2-TV-like problems confirm superlinear convergence on manifolds with positive and negative curvature.
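    As a Euclidean illustration of the semi-smooth Newton idea (a generalized Jacobian of a nonsmooth optimality system, yielding local superlinear convergence), the sketch below applies it to the soft-thresholding fixed-point equation of a LASSO problem. It is a stand-in under stated assumptions and does not reproduce the Riemannian primal-dual system of the paper.

    # A Euclidean semi-smooth Newton sketch on the LASSO optimality equation
    #   F(x) = x - S_{lam*t}(x - t*A^T(A x - b)) = 0,   S = soft-thresholding,
    # whose roots are the minimizers of 0.5*||A x - b||^2 + lam*||x||_1.
    import numpy as np

    def soft_threshold(v, kappa):
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def semismooth_newton_lasso(A, b, lam, t=None, n_iter=30, tol=1e-10):
        n = A.shape[1]
        x = np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        if t is None:
            t = 1.0 / np.linalg.norm(AtA, 2)             # step used in the fixed point map
        for _ in range(n_iter):
            u = x - t * (AtA @ x - Atb)
            F = x - soft_threshold(u, lam * t)
            if np.linalg.norm(F) < tol:
                break
            # an element of the Clarke generalized Jacobian of F at x
            d = (np.abs(u) > lam * t).astype(float)      # active/inactive pattern
            J = np.eye(n) - d[:, None] * (np.eye(n) - t * AtA)
            x = x + np.linalg.solve(J, -F)               # (semi-smooth) Newton step
        return x

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((40, 10)), rng.standard_normal(40)
    x_hat = semismooth_newton_lasso(A, b, lam=0.1)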

    On the convergence of primal-dual hybrid gradient algorithm

    © 2014 Society for Industrial and Applied Mathematics. The primal-dual hybrid gradient algorithm (PDHG) has been widely used, especially for some basic image processing models. In the literature, PDHG's convergence was established only under some restrictive conditions on its step sizes. In this paper, we revisit PDHG's convergence in the context of a saddle-point problem and try to better understand how to choose its step sizes. More specifically, we show by an extremely simple example that PDHG is not necessarily convergent even when the step sizes are fixed as tiny constants. We then show that PDHG with constant step sizes is indeed convergent if one of the functions of the saddle-point problem is strongly convex, a condition that does hold for some variational models in imaging. With this additional condition, we also establish a worst-case convergence rate, measured by the iteration complexity, for PDHG with constant step sizes.
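    The flavour of the negative result can be reproduced in a few lines: PDHG without any extrapolation or correction step, applied to the toy bilinear saddle-point problem min_x max_y x*y with tiny constant step sizes, produces iterates that orbit the saddle point (0, 0) rather than converge to it. This toy instance is chosen for illustration only and is not claimed to be the exact counterexample constructed in the paper.

    # Plain PDHG (no extrapolation) on min_x max_y x*y with fixed tiny steps:
    # the distance to the saddle point (0, 0) does not decay.
    import numpy as np

    tau = sigma = 0.01          # tiny constant step sizes
    x, y = 1.0, 0.0             # start away from the saddle point (0, 0)
    dist = []
    for _ in range(50_000):
        x = x - tau * y         # primal step (the x-gradient of x*y is y)
        y = y + sigma * x       # dual step uses the freshly updated x
        dist.append(np.hypot(x, y))

    print(f"distance to saddle: start 1.000, min {min(dist):.4f}, final {dist[-1]:.4f}")
    # the distance stays bounded away from zero instead of decaying,
    # illustrating non-convergence with fixed step sizes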