9 research outputs found

    Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization

    We propose a new randomized coordinate descent method for a convex optimization template with broad applications. Our analysis relies on a novel combination of four ideas applied to the primal-dual gap function: smoothing, acceleration, homotopy, and coordinate descent with non-uniform sampling. As a result, our method is the first coordinate descent method with convergence rate guarantees that are the best known under a variety of common structural assumptions on the template. We provide numerical evidence to support the theoretical results, with a comparison to state-of-the-art algorithms.
    Comment: NIPS 201
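To make the coordinate-descent ingredient concrete, here is a minimal sketch of randomized coordinate descent with non-uniform sampling on a smooth quadratic. This illustrates only one of the four ideas above; the paper's full method additionally combines smoothing, acceleration, and homotopy on the primal-dual gap, which are omitted here.

```python
import numpy as np

def coordinate_descent(A, b, iters=5000, seed=0):
    """Minimize f(x) = 0.5 * x^T A x - b^T x by randomized coordinate
    descent, sampling coordinates with probability proportional to
    their coordinate-wise Lipschitz constants (the diagonal of A)."""
    rng = np.random.default_rng(seed)
    n = b.size
    L = np.diag(A).copy()          # coordinate-wise Lipschitz constants
    probs = L / L.sum()            # non-uniform sampling: P(i) ~ L_i
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(n, p=probs)
        grad_i = A[i] @ x - b[i]   # partial derivative along coordinate i
        x[i] -= grad_i / L[i]      # exact minimization along coordinate i
    return x

# Small example: the minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = coordinate_descent(A, b)
```

Sampling proportionally to the coordinate Lipschitz constants lets expensive (high-curvature) coordinates be updated more often, which is the non-uniform-sampling idea the abstract refers to.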

    A generic coordinate descent solver for nonsmooth convex optimization

    We present a generic coordinate descent solver for the minimization of a nonsmooth convex objective with structure. The method can, in particular, handle problems with linear constraints. The implementation makes use of efficient residual updates and automatically determines which dual variables should be duplicated. A list of basic functional atoms is pre-compiled for efficiency, and a modelling language in Python allows the user to combine them at run time. As a result, the algorithm can be used to solve a large variety of problems, including the Lasso, sparse multinomial logistic regression, and linear and quadratic programs.

    A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming

    We propose a conditional gradient framework for a composite convex minimization template with broad applications. Our approach combines the notions of smoothing and homotopy under the CGM framework, and provably achieves the optimal O(1/sqrt(k)) convergence rate. We demonstrate that the same rate holds if the linear subproblems are solved approximately with additive or multiplicative error. Specific applications of the framework include non-smooth minimization, semidefinite programming, and minimization with linear inclusion constraints over a compact domain. We provide numerical evidence to demonstrate the benefits of the new framework.
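For context, here is a minimal sketch of the classical conditional gradient (Frank-Wolfe) iteration that the framework builds on, applied to a toy smooth problem over the probability simplex; the paper's framework adds smoothing and homotopy on top of this template. The linear minimization oracle over the simplex simply selects the best vertex.

```python
import numpy as np

def frank_wolfe(c, iters=2000):
    """Minimize 0.5 * ||x - c||^2 over the probability simplex by the
    conditional gradient method with the standard 2/(k+2) step size."""
    n = c.size
    x = np.ones(n) / n                 # start at the simplex barycenter
    for k in range(iters):
        grad = x - c
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0       # LMO: vertex minimizing <grad, s>
        gamma = 2.0 / (k + 2.0)        # standard step size, O(1/k) gap decay
        x = (1 - gamma) * x + gamma * s
    return x
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained automatically; no projection is ever computed, which is the defining advantage of conditional gradient methods.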

    On the convergence of stochastic primal-dual hybrid gradient

    In this paper, we analyze the recently proposed stochastic primal-dual hybrid gradient (SPDHG) algorithm and provide new theoretical results. In particular, we prove almost sure convergence of the iterates to a solution and linear convergence with standard step sizes, independent of strong convexity constants. Our assumption for linear convergence is metric subregularity, which is satisfied for smooth and strongly convex problems in addition to many nonsmooth and/or nonstrongly convex problems, such as linear programs, the Lasso, and support vector machines. In the general convex case, we prove optimal sublinear rates for the ergodic sequence, without bounded domain assumptions. We also provide numerical evidence showing that SPDHG with standard step sizes exhibits favorable and robust practical performance against its specialized strongly convex variant SPDHG-μ and other state-of-the-art algorithms, including variance reduction methods and stochastic dual coordinate ascent.
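As background, here is a minimal sketch of the deterministic PDHG (Chambolle-Pock) iteration that SPDHG randomizes, on the toy problem min_x 0.5*||x - c||^2 subject to K x = b, i.e. the saddle problem min_x max_y 0.5*||x - c||^2 + <K x - b, y>. SPDHG, the method analyzed in the paper, replaces the full dual update with an update of a randomly sampled dual block per iteration; that sampling is omitted here to keep the example deterministic.

```python
import numpy as np

def pdhg(K, b, c, iters=5000):
    """Deterministic PDHG for min_x 0.5*||x - c||^2 s.t. K x = b."""
    m, n = K.shape
    norm_K = np.linalg.norm(K, 2)      # spectral norm of K
    tau = sigma = 0.9 / norm_K         # steps satisfying tau*sigma*||K||^2 < 1
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()
    for _ in range(iters):
        # Dual step: prox of sigma * f* with f = indicator{b}, f*(y) = <b, y>.
        y = y + sigma * (K @ x_bar - b)
        # Primal step: prox of tau * g with g(x) = 0.5 * ||x - c||^2.
        x_new = (x - tau * (K.T @ y) + tau * c) / (1 + tau)
        x_bar = 2 * x_new - x          # extrapolation
        x = x_new
    return x, y
```

This toy instance (an affine constraint plus a strongly convex quadratic) is exactly the kind of problem for which metric subregularity holds, so linear convergence with the standard step sizes is expected.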