
    CoCoA: A General Framework for Communication-Efficient Distributed Optimization

    The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present CoCoA, a general-purpose framework for distributed computing environments with an efficient communication scheme, applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems such as the lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach to handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.
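
    As a loose illustration of the communication scheme, here is a minimal single-machine sketch of a CoCoA-style round for the lasso: each of K simulated workers owns a block of coordinates, runs local coordinate descent against the shared prediction vector v = Ax, and only the block updates to v are exchanged and averaged. The partitioning, the conservative 1/K combiner, and all function and parameter names are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cocoa_lasso(A, b, lam, K=4, rounds=50, local_passes=5, seed=0):
    """Single-machine simulation of a CoCoA-style scheme for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, features split across K workers.
    Only the block updates to the shared vector v = A @ x are 'communicated'."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(d), K)   # disjoint feature blocks
    x, v = np.zeros(d), np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)                    # ||A_j||^2, assumed > 0
    for _ in range(rounds):
        dv, dx = np.zeros(n), np.zeros(d)
        for blk in blocks:                           # would run in parallel
            x_loc, v_loc = x.copy(), v.copy()
            for _ in range(local_passes):            # local coordinate descent
                for j in blk:
                    r_j = b - v_loc + A[:, j] * x_loc[j]   # partial residual
                    x_new = soft_threshold(A[:, j] @ r_j, lam) / col_sq[j]
                    v_loc += A[:, j] * (x_new - x_loc[j])
                    x_loc[j] = x_new
            dv += v_loc - v
            dx[blk] = x_loc[blk] - x[blk]
        v += dv / K                                  # conservative averaging
        x += dx / K                                  # combiner (gamma = 1/K)
    return x
```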

    Discrete-Continuous ADMM for Transductive Inference in Higher-Order MRFs

    This paper introduces a novel algorithm for transductive inference in higher-order MRFs, where the unary energies are parameterized by a variable classifier. The considered task is posed as a joint optimization problem over the continuous classifier parameters and the discrete label variables. In contrast to prior approaches such as convex relaxations, we propose an advantageous decoupling of the objective function into discrete and continuous subproblems and a novel, efficient optimization method related to ADMM. This approach preserves the integrality of the discrete label variables and guarantees global convergence to a critical point. We demonstrate the advantages of our approach in several experiments, including video object segmentation on the DAVIS data set and interactive image segmentation.
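
    The discrete/continuous decoupling can be conveyed with a schematic ADMM-style template: a continuous iterate handles the smooth term, a discrete iterate is obtained by projection onto the label set, and a dual variable enforces their agreement. This is only a toy sketch of the splitting idea under assumed ingredients (f_grad, project_discrete, a {0,1}^d label set); it is not the paper's algorithm, whose updates and convergence analysis are specific to higher-order MRFs.

```python
import numpy as np

def discrete_continuous_admm(f_grad, project_discrete, d,
                             rho=1.0, lr=0.1, inner=20, iters=100):
    """Toy ADMM-style splitting for min_x f(x) + g(z) s.t. x = z, where z is
    constrained to a discrete set. Schematic only: the z-step keeps the labels
    integral, while the x-step stays fully continuous."""
    x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)  # primal, discrete, dual
    for _ in range(iters):
        for _ in range(inner):  # x-step: gradient steps on the augmented term
            x -= lr * (f_grad(x) + rho * (x - z + u))
        z = project_discrete(x + u)   # z-step: projection onto the label set
        u += x - z                    # dual update on the coupling x = z
    return x, z

# toy usage: binary labels pulled toward a continuous score vector c
c = np.array([0.9, -0.3, 0.4, 0.1])
x, z = discrete_continuous_admm(
    f_grad=lambda x: x - c,                              # f(x) = 0.5*||x - c||^2
    project_discrete=lambda v: (v > 0.5).astype(float),  # onto {0,1}^d
    d=4)
```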

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring the primal and the dual problems into play is, however, a more recent idea that has generated many important contributions in recent years. These developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization, with an emphasis on sparsity issues. In this paper, we present the principles of primal-dual approaches while giving an overview of numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
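
    As a concrete instance of the family of methods surveyed here, the sketch below implements the well-known primal-dual hybrid gradient (Chambolle-Pock) iteration for min_x f(Kx) + g(x). The proximal operators and step sizes in the usage example (a small lasso instance) are standard textbook choices, not taken from the paper.

```python
import numpy as np

def pdhg(K, prox_f_star, prox_g, x0, tau, sigma, iters=300):
    """Primal-dual hybrid gradient (Chambolle-Pock) for min_x f(Kx) + g(x).
    Convergence requires tau * sigma * ||K||^2 < 1."""
    x, x_bar = x0.copy(), x0.copy()
    y = np.zeros(K.shape[0])
    for _ in range(iters):
        y = prox_f_star(y + sigma * (K @ x_bar), sigma)  # dual ascent step
        x_new = prox_g(x - tau * (K.T @ y), tau)         # primal descent step
        x_bar = 2 * x_new - x                            # extrapolation
        x = x_new
    return x

# usage: a small lasso instance, min_x 0.5*||Kx - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 20)); b = rng.standard_normal(40); lam = 0.1
L = np.linalg.norm(K, 2)                                 # spectral norm of K
x = pdhg(K,
         prox_f_star=lambda v, s: (v - s * b) / (1 + s),  # f(z) = 0.5||z - b||^2
         prox_g=lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0),
         x0=np.zeros(20), tau=0.9 / L, sigma=0.9 / L)
```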

    Primal-Dual Rates and Certificates

    We propose an algorithm-independent framework to equip existing optimization methods with primal-dual certificates. Such certificates, and the corresponding convergence-rate guarantees, are important for practitioners to diagnose progress, in particular in machine learning applications. We obtain new primal-dual convergence rates, e.g., for the Lasso as well as for many L1-, elastic-net-, group-Lasso-, and TV-regularized problems. The theory applies to any norm-regularized generalized linear model. Our approach provides efficiently computable duality gaps which are globally defined, without modifying the original problem in the region of interest.
    Comment: appearing at ICML 2016 - Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 4
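
    The idea of a globally defined, cheaply computable duality gap can be illustrated for the lasso: rescale the residual so it becomes a feasible dual point, then evaluate primal minus dual. This is a standard construction in the same spirit as the paper's certificates; the exact construction in the paper may differ, and the helper below is hypothetical.

```python
import numpy as np

def lasso_duality_gap(A, b, x, lam):
    """Duality gap for the lasso P(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
    The residual is rescaled into a dual-feasible point theta satisfying
    ||A.T @ theta||_inf <= lam, so the gap is defined at every x."""
    r = b - A @ x                                        # primal residual
    scale = min(1.0, lam / max(np.abs(A.T @ r).max(), 1e-12))
    theta = scale * r                                    # feasible dual point
    primal = 0.5 * (r @ r) + lam * np.abs(x).sum()
    dual = 0.5 * (b @ b) - 0.5 * ((theta - b) @ (theta - b))
    return primal - dual                                 # >= 0 by weak duality
```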

    A Unified Successive Pseudo-Convex Approximation Framework

    In this paper, we propose a successive pseudo-convex approximation algorithm to efficiently compute stationary points for a large class of possibly nonconvex optimization problems. The stationary points are obtained by solving a sequence of successively refined approximate problems, each of which is much easier to solve than the original problem. To achieve convergence, the approximate problem only needs to exhibit a weak form of convexity, namely pseudo-convexity. We show that the proposed framework not only includes as special cases a number of existing methods, for example the gradient method and the Jacobi algorithm, but also leads to new algorithms which enjoy easier implementation and faster convergence. We also propose a novel line search method for nondifferentiable optimization problems, which is carried out over a properly constructed differentiable function and thus has a simpler implementation than state-of-the-art line search techniques that operate directly on the original nondifferentiable objective function. The advantages of the proposed algorithm are shown, both theoretically and numerically, by several example applications, namely, MIMO broadcast channel capacity computation, energy efficiency maximization in massive MIMO systems, and LASSO in sparse signal recovery.
    Comment: submitted to IEEE Transactions on Signal Processing; original title: A Novel Iterative Convex Approximation Method
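
    The overall template can be sketched generically: at each iterate, minimize a surrogate built at the current point to obtain a direction, then backtrack with an Armijo line search. The surrogate builder approx_min, the quadratic toy problem, and all parameters below are illustrative assumptions; with a proximal-linear surrogate the loop reduces to the gradient method, matching the special case the abstract mentions.

```python
import numpy as np

def successive_approx(f, grad_f, approx_min, x0, iters=50, beta=0.5, sigma=1e-4):
    """Generic successive-approximation loop: minimize a surrogate at the
    current iterate to get a direction, then Armijo backtracking on f."""
    x = x0.copy()
    for _ in range(iters):
        d = approx_min(x) - x               # direction from surrogate problem
        fx, slope = f(x), grad_f(x) @ d     # slope < 0 for a descent direction
        step = 1.0
        while f(x + step * d) > fx + sigma * step * slope:
            step *= beta                    # Armijo backtracking line search
        x = x + step * d
    return x

# toy usage: a proximal-linear surrogate recovers the classical gradient method
Q = np.array([[3.0, 0.5], [0.5, 1.0]]); c = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ Q @ x - c @ x
g = lambda x: Q @ x - c
x_star = successive_approx(f, g, approx_min=lambda x: x - g(x), x0=np.zeros(2))
```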

    Sum-Rate Maximization in Two-Way AF MIMO Relaying: Polynomial Time Solutions to a Class of DC Programming Problems

    Sum-rate maximization in two-way amplify-and-forward (AF) multiple-input multiple-output (MIMO) relaying belongs to the class of difference-of-convex (DC) programming problems. DC programming problems also occur in other signal processing applications and are typically solved using modifications of the branch-and-bound method, which, however, has no polynomial-time complexity guarantees. In this paper, we show that a class of DC programming problems, to which the sum-rate maximization in two-way MIMO relaying belongs, can be solved very efficiently in polynomial time, and we develop two algorithms. The objective function of the problem is represented as a product of quadratic ratios and parameterized so that its convex part (versus the concave part) contains only one (or two) optimization variables. One of the algorithms, called POlynomial-Time DC (POTDC), is based on semi-definite programming (SDP) relaxation, linearization, and an iterative search over a single parameter. The other, called RAte-maximization via Generalized EigenvectorS (RAGES), is based on the generalized-eigenvector method and an iterative search over two (or one, in its approximate version) optimization variables. We also derive an upper bound on the optimal value of the corresponding optimization problem and show by simulations that this upper bound can be achieved by both algorithms. The proposed methods for maximizing the sum-rate in the two-way AF MIMO relaying system are shown to be superior to other state-of-the-art algorithms.
    Comment: 35 pages, 10 figures, submitted to the IEEE Trans. Signal Processing in Nov. 201
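
    For context, a generic DC program g(x) - h(x) is often attacked by linearizing the concave part and exactly minimizing the resulting convex surrogate (the convex-concave procedure). The sketch below shows that baseline on a toy box-constrained problem; it is not POTDC or RAGES, which exploit the specific product-of-quadratic-ratios structure, SDP relaxation, and low-dimensional parameter searches described above.

```python
import numpy as np

def convex_concave(grad_h, solve_convex, x0, iters=30):
    """Convex-concave procedure for a DC objective g(x) - h(x), with g and h
    convex: linearize h at the current point x_t, then minimize the convex
    surrogate g(x) - <grad_h(x_t), x> exactly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        s = grad_h(x)            # (sub)gradient of the subtracted convex part
        x = solve_convex(s)      # argmin_x g(x) - s @ x over the feasible set
    return x

# toy usage: min_{x in [-1,1]^d} ||x||^2 - ||x||_1, a simple DC program
d = 3
x = convex_concave(
    grad_h=lambda x: np.sign(x),                     # subgradient of ||x||_1
    solve_convex=lambda s: np.clip(s / 2.0, -1, 1),  # closed form on the box
    x0=np.full(d, 0.9))                              # converges to x_i = 0.5
```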