
    High-performance Kernel Machines with Implicit Distributed Optimization and Randomization

    In order to fully utilize "big data", it is often necessary to use "big models". Such models tend to grow with the complexity and size of the training data, and do not make strong parametric assumptions upfront about the nature of the underlying statistical dependencies. Kernel methods fit this need well, as they constitute a versatile and principled statistical methodology for solving a wide range of non-parametric modelling problems. However, their high computational costs (in storage and time) pose a significant barrier to their widespread adoption in big data applications. We propose an algorithmic framework and high-performance implementation for massive-scale training of kernel-based statistical models, based on combining two key technical ingredients: (i) distributed general-purpose convex optimization, and (ii) the use of randomization to improve the scalability of kernel methods. Our approach is based on a block-splitting variant of the Alternating Direction Method of Multipliers (ADMM), carefully reconfigured to handle very large random feature matrices, while exploiting the hybrid parallelism typically found in modern clusters of multicore machines. Our implementation supports a variety of statistical learning tasks by enabling several loss functions, regularization schemes, kernels, and layers of randomized approximations for both dense and sparse datasets, in a highly extensible framework. We evaluate the ability of our framework to learn models on data from applications, and provide a comparison against existing sequential and parallel libraries.
    Comment: Work presented at MMDS 2014 (June 2014) and JSM 201
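
    The randomization ingredient (ii) can be illustrated compactly. Below is a minimal sketch, assuming random Fourier features (Rahimi-Recht) for an RBF kernel followed by a closed-form ridge-regression solve; the dimensions, bandwidth, and penalty are illustrative assumptions, and this is not the authors' distributed ADMM implementation, which would replace the direct solve with the block-splitting optimizer.

```python
import numpy as np

# Sketch of the randomization ingredient: random Fourier features
# approximating an RBF kernel, then a ridge-regression solve on the
# explicit feature map. All sizes below are illustrative assumptions.
rng = np.random.default_rng(0)
n, d, D = 1000, 20, 512          # samples, input dim, number of random features
sigma, lam = 1.0, 1e-2           # RBF bandwidth, ridge penalty

X = rng.standard_normal((n, d))  # synthetic inputs
y = rng.standard_normal(n)       # synthetic targets

# z(x) = sqrt(2/D) * cos(W^T x + b), with W ~ N(0, sigma^-2 I), b ~ U[0, 2*pi)
W = rng.standard_normal((d, D)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Closed-form ridge solve on the random features: (Z^T Z + lam * I) w = Z^T y
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
y_hat = Z @ w                    # kernel-ridge-style predictions
```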

    A Geometric View on Constrained M-Estimators

    We study the estimation error of constrained M-estimators and derive explicit upper bounds on the expected estimation error in terms of the Gaussian width of the constraint set. We consider both the case where the true parameter lies on the boundary of the constraint set (matched constraint) and the case where it lies strictly inside the constraint set (mismatched constraint). For both cases, we derive novel universal estimation error bounds for regression in a generalized linear model with the canonical link function. For Gaussian linear regression with the Lasso, our error bound for the mismatched constraint case is minimax optimal in its dependence on the sample size.
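
    To make the role of the Gaussian width concrete, here is the generic shape such a bound takes for a constrained estimator; this is an illustrative form with n samples in dimension p, not the paper's exact statement or constants.

```latex
% Illustrative Gaussian-width bound for a constrained estimator
% \hat{\theta} over a set K containing the true parameter \theta^*
% (assumed form, not the paper's theorem):
\[
  \mathbb{E}\,\lVert \hat{\theta} - \theta^{*} \rVert_{2}
    \;\lesssim\; \frac{w\!\left(K - \theta^{*}\right)}{\sqrt{n}},
  \qquad
  w(T) \;:=\; \mathbb{E}\,\sup_{u \in T} \langle g, u \rangle,
  \quad g \sim \mathcal{N}(0, I_{p}).
\]
```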

    Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs

    A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical examples include the total variation and the Laplacian regularizations over the graph. When the proximal gradient algorithm is applied to this problem, there exist quite affordable methods for implementing the proximity operator (the backward step) in the special case where the graph is a simple path without loops. In this paper, an algorithm referred to as "Snake" is proposed to solve such regularized problems over general graphs by taking advantage of these fast methods. The algorithm consists of properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm, whose convergence is proven. Applications to trend filtering and graph inpainting, among others, are provided. Numerical experiments are conducted over large graphs.
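
    The building block that Snake exploits, a cheap proximity operator along a simple path, can be sketched as follows. This is a minimal illustration on a single path with the Laplacian (squared-difference) regularizer, whose prox reduces to a small linear solve; it is not the paper's full algorithm, which samples random simple paths in a general graph.

```python
import numpy as np

# Proximal gradient on a path graph with the Laplacian regularizer
# (lam/2) * ||D x||^2. The backward step's prox solves
# (I + gamma*lam * D^T D) x = v, which is fast on a path. The sizes,
# step size, and quadratic data term are illustrative assumptions.

def path_laplacian_prox(v, gamma_lam):
    n = len(v)
    D = np.diff(np.eye(n), axis=0)                 # (n-1) x n difference operator
    return np.linalg.solve(np.eye(n) + gamma_lam * D.T @ D, v)

rng = np.random.default_rng(0)
n = 50
noisy = np.concatenate([np.zeros(25), np.ones(25)]) + 0.1 * rng.standard_normal(n)

x, gamma, lam = np.zeros(n), 0.5, 1.0
for _ in range(100):
    grad = x - noisy                               # gradient of (1/2)||x - noisy||^2
    x = path_laplacian_prox(x - gamma * grad, gamma * lam)  # backward step
```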

    Robust Phase Unwrapping by Convex Optimization

    The 2-D phase unwrapping problem aims at retrieving a "phase" image from its modulo-2π observations. Many applications, such as interferometry and synthetic aperture radar imaging, encounter this problem since they proceed by recording complex or modulated data from which a "wrapped" phase is extracted. Although 1-D phase unwrapping is trivial, a challenge remains in higher dimensions: overcoming two common problems, noise and discontinuities in the true phase image. In contrast to state-of-the-art techniques, this work aims to simultaneously unwrap and denoise the phase image. We propose a robust convex optimization approach that enforces data fidelity constraints expressed in the corrupted phase derivative domain while promoting a sparse phase prior. The resulting optimization problem is solved by the Chambolle-Pock primal-dual scheme. We show that under different observation noise levels, our approach compares favorably to those that perform the unwrapping and denoising in two separate steps.
    Comment: 6 pages, 4 figures, submitted in ICIP1
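
    The abstract's remark that 1-D unwrapping is trivial can be made concrete: wrap successive phase differences into the principal interval, then integrate them back. This sketch shows only that 1-D baseline, not the paper's 2-D convex formulation or the Chambolle-Pock solver; the test signal is an illustrative assumption.

```python
import numpy as np

# Trivial 1-D phase unwrapping: take principal-value differences of the
# wrapped signal, then cumulatively sum them back into a phase profile.
# (Equivalent in spirit to numpy.unwrap.)

def unwrap_1d(wrapped):
    d = np.diff(wrapped)
    d = (d + np.pi) % (2.0 * np.pi) - np.pi        # wrap differences into [-pi, pi)
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

t = np.linspace(0.0, 4.0 * np.pi, 200)
true_phase = 1.5 * t                               # grows well past 2*pi
wrapped = np.angle(np.exp(1j * true_phase))        # modulo-2*pi observation
recovered = unwrap_1d(wrapped)                     # matches true_phase up to a constant offset
```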