49 research outputs found

    Parameter Selection and Pre-Conditioning for a Graph Form Solver

    In a recent paper, Parikh and Boyd describe a method for solving a convex optimization problem, where each iteration involves evaluating a proximal operator and projecting onto a subspace. In this paper we address the critical practical issues of how to select the proximal parameter in each iteration, and how to scale the original problem variables, so as to achieve reliable practical performance. The resulting method has been implemented as an open-source software package called POGS (Proximal Graph Solver), which targets multi-core and GPU-based systems and has been tested on a wide variety of practical problems. Numerical results show that POGS can solve very large problems (with, say, more than a billion coefficients in the data) to modest accuracy in a few tens of seconds. As just one example, a radiation treatment planning problem with around 100 million coefficients in the data can be solved in a few seconds, compared to around one hour with an interior-point method. Comment: 28 pages, 1 figure, 1 open-source implementation
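    The graph-form iteration referenced above alternates evaluation of proximal operators with a projection onto the graph subspace {(x, y) : y = Ax}. The sketch below shows one way such an iteration can look in NumPy; the function names, the fixed iteration count, and the dense Cholesky-based projection are illustrative assumptions made here, not the POGS implementation, which adds the adaptive parameter selection and scaling discussed in the abstract.

```python
import numpy as np

def graph_form_admm(A, prox_f, prox_g, rho=1.0, iters=200):
    """Sketch of a graph-form splitting iteration: alternate proximal
    steps on f and g with a projection onto the graph {(x, y): y = A x}.
    prox_f(v, rho) and prox_g(v, rho) are user-supplied proximal operators."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)        # primal variables
    xt, yt = np.zeros(n), np.zeros(m)      # scaled dual variables
    # Cache the factorization used by the graph projection.
    L = np.linalg.cholesky(np.eye(n) + A.T @ A)
    for _ in range(iters):
        # Proximal steps on g (acting on x) and f (acting on y).
        x_half = prox_g(x - xt, rho)
        y_half = prox_f(y - yt, rho)
        # Project (x_half + xt, y_half + yt) onto {(x, y): y = A x}.
        rhs = (x_half + xt) + A.T @ (y_half + yt)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        y = A @ x
        # Dual updates.
        xt = xt + x_half - x
        yt = yt + y_half - y
    return x, y
```

    For a lasso-type instance of the graph form, prox_f would be the proximal operator of the squared loss f(y) = 0.5||y - b||^2 and prox_g would be soft-thresholding for the l1 penalty on x.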

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of these priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all of the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
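    The forward-backward proximal splitting scheme mentioned in item (iii) takes a particularly simple form for the sparsity prior, where the proximal map is soft-thresholding. Below is a minimal sketch of that instance (ISTA for l1-regularized least squares); the function names and the fixed step size derived from the Lipschitz constant are assumptions made here for illustration, not code from the chapter.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_l1(Phi, y, lam, iters=500):
    """Sketch of forward-backward splitting for the l1-regularized
    least-squares problem  min_x 0.5*||Phi x - y||^2 + lam*||x||_1,
    one instance of the low-complexity regularizers surveyed above."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # 1/L with L = ||Phi||_2^2
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)                # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    return x
```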

    Frontiers in Nonparametric Statistics

    The goal of this workshop was to discuss recent developments in nonparametric statistical inference. A particular focus was on high-dimensional statistics, semiparametrics, adaptation, nonparametric Bayesian statistics, shape-constrained estimation and statistical inverse problems. The close interaction of these topics with optimization, machine learning and inverse problems was addressed as well.

    Unsupervised Representative Selection and Signal Unmixing

    This thesis presents unsupervised machine learning algorithms to tackle two related problems: selecting representatives in a dataset and identifying constituent components in mixture data. In both problems, we aim to reveal a few key hidden features that sufficiently explain the data. The main intuition behind our algorithms is that, in an appropriately constructed dictionary, a sparse representation of the data corresponds to selecting these unknown features. Our goal is to efficiently seek such sparse representations under suitable conditions. In the representative selection problem, our objective is to pick a few representative data points that capture the distinguishing characteristics of a dataset. This corresponds to identifying the vertices of the polytope generated by the data. To do so, we start by modeling each data point as a convex combination of the polytope vertices. Then, in the dictionary formed by the dataset itself, we look for sparse representations of the data, which subsequently imply the vertices. To seek such sparse representations, we propose a greedy pursuit algorithm and a non-convex entropy minimization algorithm. We theoretically justify our proposed algorithms and demonstrate their vertex recovery performance on both synthetic and real data. In the unmixing problem, we assume that each data point is a mixture of a few unknown components, and we wish to decompose the data into these underlying constituents. We consider a highly under-sampled regime in which the number of measurements is far less than the data dimension. Furthermore, we solve an even more challenging unmixing problem in which the under-sampled mixtures are observed only indirectly, via a nonlinear operator such as the sigmoid or ReLU. To find the unknown constituents, we form a dictionary with atoms resembling the constituents and seek the sparse representations corresponding to them. We propose a fast and robust greedy algorithm, called UnmixMP, to find such sparse representations. We prove its robust unmixing performance and support our theoretical analysis with various experiments on both synthetic and real image data. Our algorithms are fast and robust, and supported by rigorous theoretical analysis. Our experimental results show that the proposed algorithms are significantly more robust than state-of-the-art representative selection and unmixing algorithms in the aforementioned settings.
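    To illustrate the self-dictionary idea described above (sparse representations in the dictionary formed by the dataset itself revealing the polytope vertices), here is a generic greedy vertex-selection sketch in the style of successive projection. It is an illustration under the stated assumptions only, not the thesis's greedy pursuit, entropy minimization, or UnmixMP algorithms.

```python
import numpy as np

def greedy_vertex_selection(X, k):
    """Illustrative greedy selection of k candidate vertices from the
    columns of X (one data point per column): repeatedly pick the column
    with the largest residual norm, then project the data onto the
    orthogonal complement of the selected column."""
    R = X.astype(float).copy()
    selected = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))   # most "extreme" remaining column
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])           # unit direction of the pick
        R = R - np.outer(u, u @ R)                      # deflate: project it out
    return selected
```

    The returned indices point to columns of X that can then serve as dictionary atoms against which the remaining points are sparsely coded as (approximately) convex combinations.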

    The Convex Geometry of Linear Inverse Problems

    In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered consists of those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors and low-rank matrices, as well as several others including sums of a few permutation matrices, low-rank tensors, orthogonal matrices, and atomic measures. The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure, the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery. Thus this work extends the catalog of simple models that can be recovered from limited linear information via tractable convex programming.
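    For concreteness, the atomic norm and the recovery program described above can be written as follows; the symbols $\mathcal{A}$, $\Phi$, and $y$ are notation chosen here for illustration rather than taken from the paper.

```latex
% Atomic norm induced by an atomic set A (the gauge of conv(A)):
\|x\|_{\mathcal{A}}
  = \inf\{\, t > 0 : x \in t\,\operatorname{conv}(\mathcal{A}) \,\}
  = \inf\Big\{ \sum_{a \in \mathcal{A}} c_a \;:\; x = \sum_{a \in \mathcal{A}} c_a\, a,\; c_a \ge 0 \Big\}.

% Convex recovery of a simple model from linear measurements y = \Phi x^\star:
\hat{x} = \operatorname*{argmin}_{x} \;\|x\|_{\mathcal{A}}
\quad \text{subject to} \quad \Phi x = y.
```

    For the atomic set of signed standard basis vectors this gauge reduces to the $\ell^1$ norm, and for the set of unit-norm rank-one matrices it reduces to the nuclear norm, recovering two of the well-studied cases listed above.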