
    High-Dimensional Screening Using Multiple Grouping of Variables

    Full text link
    Screening is the problem of finding a superset of the set of non-zero entries in an unknown p-dimensional vector \beta* given n noisy observations. Naturally, we want this superset to be as small as possible. We propose a novel framework for screening, which we refer to as Multiple Grouping (MuG), that groups variables, performs variable selection over the groups, and repeats this process multiple times to estimate a sequence of sets that contain the non-zero entries in \beta*. Screening is done by taking the intersection of all these estimated sets. The MuG framework can be used in conjunction with any group-based variable selection algorithm. In the high-dimensional setting, where p >> n, we show that when MuG is used with the group Lasso estimator, screening can be consistently performed without using any tuning parameter. Our numerical simulations clearly show the merits of using the MuG framework in practice.
    Comment: This paper will appear in the IEEE Transactions on Signal Processing. See http://www.ima.umn.edu/~dvats/MuGScreening.html for more details.
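    To make the MuG recipe concrete, here is a minimal sketch of how random groupings, group selection, and support intersection fit together. This is our illustration, not the authors' code: group_lasso_support is a bare-bones proximal-gradient group-lasso solver with a fixed regularization weight lam, whereas the paper's point is that screening can be done without tuning such a parameter.

```python
import numpy as np

def group_lasso_support(X, y, groups, lam=0.1, n_iter=500, tol=1e-8):
    """Bare-bones proximal-gradient group lasso; returns the estimated support."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the (1/2n)||y - X beta||^2 loss
    beta = np.zeros(p)
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y)) / n   # gradient step
        for g in groups:                   # block soft-thresholding, group by group
            norm_g = np.linalg.norm(z[g])
            shrink = step * lam * np.sqrt(len(g))
            beta[g] = z[g] * max(0.0, 1.0 - shrink / norm_g) if norm_g > 0 else 0.0
    return {j for j in range(p) if abs(beta[j]) > tol}

def mug_screen(X, y, group_size=5, n_repeats=10, lam=0.1, seed=0):
    """MuG sketch: repeat (random grouping -> group lasso -> support), intersect."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    keep = set(range(p))
    for _ in range(n_repeats):
        perm = rng.permutation(p)          # a fresh random partition into groups
        groups = [perm[i:i + group_size] for i in range(0, p, group_size)]
        keep &= group_lasso_support(X, y, groups, lam=lam)
    return sorted(keep)                    # screened superset of supp(beta*)
```

    A true non-zero entry survives every grouping, so it stays in the intersection; a spurious entry only has to be dropped in one of the repeats to be screened out.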

    A Primal-dual Framework For Mixtures Of Regularisers

    Get PDF
    Effectively solving many inverse problems in engineering requires leveraging all available prior information about the structure of the signal to be estimated. This often leads to constrained optimization problems with mixtures of regularizers. Providing a general-purpose optimization algorithm for these problems, with both a guaranteed convergence rate and a fast implementation, remains an important challenge. In this paper, we describe how a recent primal-dual algorithm for non-smooth constrained optimization can be successfully used to tackle them. Its simple iterations can be easily parallelized, allowing very efficient computations. Furthermore, the algorithm is guaranteed to achieve an optimal convergence rate for this class of problems. We illustrate its performance on two problems: a compressive magnetic resonance imaging application and an approach for improving the quality of analog-to-digital conversion of amplitude-modulated signals.
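    The abstract does not spell out the iterations. For orientation, here is a minimal sketch of the standard primal-dual (Chambolle-Pock) template for min_x g(x) + h(Kx) that methods of this kind build on; it is an assumed stand-in for illustration, not the paper's exact algorithm, and the example prox operators at the end are our choices.

```python
import numpy as np

def primal_dual(K, prox_g, prox_h_conj, x0, n_iter=200):
    """Chambolle-Pock iterations for min_x g(x) + h(K x)."""
    L = np.linalg.norm(K, 2)               # operator norm of K
    tau = sigma = 1.0 / L                  # step sizes satisfying tau*sigma*L^2 <= 1
    x, x_bar = x0.copy(), x0.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        y = prox_h_conj(y + sigma * (K @ x_bar), sigma)   # dual ascent step
        x_new = prox_g(x - tau * (K.T @ y), tau)          # primal descent step
        x_bar, x = 2 * x_new - x, x_new                   # extrapolation
    return x

# Example pieces for min_x ||x||_1 + (1/2)||Kx - b||_2^2:
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1
resid_conj = lambda b: (lambda u, s: (u - s * b) / (1.0 + s))    # prox of s*h*
# Usage: x = primal_dual(K, soft, resid_conj(b), np.zeros(K.shape[1]))
```

    The two prox evaluations and the matrix-vector products are the only per-iteration work, which is why iterations of this type parallelize so easily.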

    Tractability of interpretability via selection of group-sparse models

    Get PDF
    Group-based sparsity models have proven instrumental in linear regression problems for recovering signals from far fewer measurements than standard compressive sensing. A promise of these models is to lead to “interpretable” signals, for which we can identify the constituent groups. However, we show that, in general, a claim of correctly identifying the groups with convex relaxations would yield polynomial-time algorithms for an NP-hard problem. Instead, leveraging a graph-based understanding of group models, we describe group structures that enable correct model identification in polynomial time via dynamic programming. We also show that group structures leading to totally unimodular constraints have tractable relaxations. Finally, we highlight the non-convexity of the Pareto frontier of group-sparse approximations and what it means for tractability.
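    As we read the abstract, the discrete problem underlying these hardness and tractability results is group-sparse model selection, which can be stated as follows (our paraphrase, with \mathfrak{G} the given group collection and G a budget on the number of selected groups):

```latex
\hat{z} \;=\; \arg\min_{z \in \mathbb{R}^p} \;\|x - z\|_2^2
\quad \text{s.t.} \quad
\operatorname{supp}(z) \subseteq \bigcup_{g \in \mathcal{S}} g,
\qquad \mathcal{S} \subseteq \mathfrak{G},\; |\mathcal{S}| \le G
```

    Choosing \mathcal{S} amounts to a coverage-type problem over the groups, which, as we understand it, is the source of NP-hardness in the general case; the dynamic program and the totally unimodular relaxations apply to structured special cases.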

    Hybrid approximate message passing

    Full text link
    Gaussian and quadratic approximations of message passing algorithms on graphs have attracted considerable recent attention due to their computational simplicity, analytic tractability, and wide applicability in optimization and statistical inference problems. This paper presents a systematic framework for incorporating such approximate message passing (AMP) methods in general graphical models. The key concept is a partition of the dependencies of a general graphical model into strong and weak edges, with the weak edges representing interactions through aggregates of small, linearizable couplings of variables. AMP approximations based on the Central Limit Theorem can be readily applied to aggregates of many weak edges and integrated with standard message passing updates on the strong edges. The resulting algorithm, which we call hybrid generalized approximate message passing (HyGAMP), can yield significantly simpler implementations of sum-product and max-sum loopy belief propagation. By varying the partition of strong and weak edges, a performance-complexity trade-off can be achieved. Group sparsity and multinomial logistic regression problems are studied as examples of the proposed methodology.
    The work of S. Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336, and 1547332, and in part by the industrial affiliates of NYU WIRELESS. The work of A. K. Fletcher was supported in part by the National Science Foundation under Grants 1254204 and 1738286, and in part by the Office of Naval Research under Grant N00014-15-1-2677. The work of V. K. Goyal was supported in part by the National Science Foundation under Grant 1422034. The work of E. Byrne and P. Schniter was supported in part by the National Science Foundation under Grant CCF-1527162.
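    For context, on a plain linear model y = A x + w the "weak edge" machinery reduces to scalar GAMP, the building block that HyGAMP embeds in a general graphical model. The following is a minimal sketch of that building block only, not the full hybrid algorithm; the soft-threshold denoiser is our illustrative choice of prior, not something fixed by the paper.

```python
import numpy as np

def soft_threshold(r, tau, lam=0.1):
    """Illustrative soft-threshold denoiser and its variance update."""
    x = np.sign(r) * np.maximum(np.abs(r) - lam * tau, 0.0)
    return x, tau * (np.abs(r) > lam * tau)

def gamp(A, y, denoise=soft_threshold, tau_w=1e-2, n_iter=50):
    """Scalar sum-product GAMP for y = A x + N(0, tau_w)."""
    m, n = A.shape
    A2 = A ** 2
    x_hat, tau_x = np.zeros(n), np.ones(n)
    s = np.zeros(m)
    for _ in range(n_iter):
        tau_p = A2 @ tau_x                 # variances at the output nodes
        p = A @ x_hat - tau_p * s          # Onsager-corrected output estimate
        s = (y - p) / (tau_p + tau_w)      # AWGN output update
        tau_s = 1.0 / (tau_p + tau_w)
        tau_r = 1.0 / (A2.T @ tau_s)       # variances at the input nodes
        r = x_hat + tau_r * (A.T @ s)      # pseudo-data passed to the denoiser
        x_hat, tau_x = denoise(r, tau_r)   # prior-dependent scalar denoising
    return x_hat
```

    The CLT enters where the many weak couplings A_ij x_j are aggregated into the effectively Gaussian quantities p and r; strong edges would instead carry full belief propagation messages.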

    Sparse Group Covers and Greedy Tree Approximations

    Get PDF
    We consider the problem of finding a K-sparse approximation of a signal such that the support of the approximation is a union of sets from a given collection, a.k.a. a group structure. This problem subsumes that of finding K-sparse tree approximations. We discuss the tractability of this problem, present a polynomial-time dynamic program for special group structures, and propose two novel greedy algorithms with efficient implementations. The first is based on submodular function maximization with knapsack constraints. For the case of tree sparsity, its approximation ratio of 1 - 1/e is better than that of current state-of-the-art approximate algorithms. The second algorithm leverages ideas from the greedy algorithm for the Budgeted Maximum Coverage problem and obtains excellent empirical performance, shown by computing the full Pareto frontier of the tree approximations of the wavelet coefficients of an image.
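    In the spirit of the second algorithm, here is a minimal greedy sketch (our illustration, omitting the paper's refinements): it repeatedly picks the group with the best covered-energy gain per newly covered coefficient, subject to a budget of K covered coefficients.

```python
import numpy as np

def greedy_group_cover(x, groups, K):
    """Greedily select groups covering at most K coefficients of x,
    ranking candidates by captured energy per newly covered coefficient."""
    covered, chosen = set(), []
    while True:
        best, best_ratio = None, 0.0
        for gi, g in enumerate(groups):
            new = set(g) - covered
            if not new or len(covered) + len(new) > K:
                continue                   # no gain, or budget would be exceeded
            ratio = sum(x[j] ** 2 for j in new) / len(new)
            if ratio > best_ratio:
                best, best_ratio = gi, ratio
        if best is None:                   # no feasible group improves the cover
            break
        chosen.append(best)
        covered |= set(groups[best])
    return chosen, sorted(covered)
```

    A K-sparse group approximation is then x restricted to the covered indices; sweeping K traces out an (approximate) Pareto frontier of sparsity versus approximation error.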

    Structured Sparsity: Discrete and Convex approaches

    Full text link
    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated {\em structured} sparsity models, which describe the interdependency between the nonzero components of a signal, allowing increased interpretability of the results and better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group-sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications.
    Comment: 30 pages, 18 figures
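    The discrete-versus-convex dichotomy the chapter analyzes can be summarized in one pair of formulations (our condensed paraphrase): a combinatorial constraint on the support versus a convex group-norm penalty,

```latex
\min_{x} \;\|y - Ax\|_2^2 \;\;\text{s.t.}\;\; \operatorname{supp}(x) \in \mathfrak{M}
\qquad \text{vs.} \qquad
\min_{x} \;\|y - Ax\|_2^2 + \lambda \sum_{g \in \mathfrak{G}} w_g \|x_g\|_2
```

    where \mathfrak{M} is the allowed collection of structured supports and the weighted sum of group norms is the standard convex proxy built from the group collection \mathfrak{G}.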

    A Simple Gaussian Measurement Bound for Exact Recovery of Block-Sparse Signals

    Get PDF
    We present a probabilistic analysis of conditions for the exact recovery of block-sparse signals whose nonzero elements appear in fixed blocks. Our main result is a simple lower bound on the number of Gaussian measurements necessary for exact recovery of such block-sparse signals via the mixed l2/lq (0 < q ≤ 1) norm minimization method. In addition, we present numerical examples that partially support the correctness of the theoretical results. The obtained results extend those known for the standard lq minimization and the mixed l2/l1 minimization methods to the mixed l2/lq (0 < q ≤ 1) minimization method in the context of block-sparse signal recovery.
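    Written out, the mixed l2/lq minimization in question takes the following form (our transcription from the abstract, with x partitioned into blocks x_{[1]}, ..., x_{[M]}, and assuming the usual noiseless measurement model A x = y implied by "exact recovery"):

```latex
\min_{x} \; \sum_{i=1}^{M} \big\| x_{[i]} \big\|_2^q
\quad \text{s.t.} \quad A x = y, \qquad 0 < q \le 1
```

    For blocks of size one this reduces to standard lq minimization, and for q = 1 it reduces to the mixed l2/l1 program, which is how the stated extension relates to the earlier results.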