
    Limits on Sparse Support Recovery via Linear Sketching with Random Expander Matrices

    Linear sketching is a powerful tool for the problem of sparse signal recovery, with numerous applications such as compressive sensing, data stream computing, graph sketching, and routing. Motivated by applications where the \emph{positions} of the non-zero entries in a sparse vector are of primary interest, we consider the problem of \emph{support recovery} from a linear sketch taking the form $\mathbf{Y} = \mathbf{X}\beta + \mathbf{Z}$. We focus on a widely-used expander-based construction in which the columns of the measurement matrix $\mathbf{X} \in \mathbb{R}^{n \times p}$ are random permutations of a sparse binary vector containing $d \ll n$ ones and $n - d$ zeros. We provide a sharp characterization of the number of measurements required for an information-theoretically optimal decoder, thus permitting a precise comparison to the i.i.d. Gaussian construction. Our findings reveal both positive and negative results, showing that the performance nearly matches that of the Gaussian construction at moderate-to-high noise levels, while being worse by an arbitrarily large factor at low noise levels.
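    As a rough illustration (not code from the paper), the following Python sketch builds a measurement matrix of the kind the abstract describes: each column is an independent random permutation of a fixed binary vector with $d$ ones, i.e., $d$ positions chosen uniformly at random. The dimensions, sparsity level, and noise level below are assumed values for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters for illustration:
# n measurements, ambient dimension p, column weight d, signal sparsity k.
n, p, d, k = 128, 512, 8, 5

# Expander-style construction: each column has d ones in positions chosen
# uniformly at random (equivalently, a random permutation of a fixed
# binary vector with d ones and n - d zeros).
X = np.zeros((n, p))
for j in range(p):
    X[rng.choice(n, size=d, replace=False), j] = 1.0

# k-sparse signal: random support with unit-magnitude non-zero entries.
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = rng.choice([-1.0, 1.0], size=k)

# Noisy linear sketch Y = X beta + Z (sigma is an assumed noise level).
sigma = 0.1
Y = X @ beta + sigma * rng.standard_normal(n)
print("true support:", np.sort(support))
```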

    Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework

    The support recovery problem consists of determining a sparse subset of a set of variables that is relevant in generating a set of observations, and arises in a diverse range of settings such as compressive sensing, subset selection in regression, and group testing. In this paper, we take a unified approach to support recovery problems, considering general probabilistic models relating a sparse data vector to an observation vector. We study the information-theoretic limits of both exact and partial support recovery, taking a novel approach motivated by thresholding techniques in channel coding. We provide general achievability and converse bounds characterizing the trade-off between the error probability and the number of measurements, and we specialize these to the linear, 1-bit, and group testing models. In several cases, our bounds not only provide matching scaling laws in the necessary and sufficient number of measurements, but also sharp thresholds with matching constant factors. Our approach has several advantages over previous approaches: for the achievability part, we obtain sharp thresholds under broader scalings of the sparsity level and other parameters (e.g., signal-to-noise ratio) compared to several previous works, and for the converse part, we not only provide conditions under which the error probability fails to vanish, but also conditions under which it tends to one.
    Comment: Accepted to IEEE Transactions on Information Theory; presented in part at ISIT 2015 and SODA 201
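    To make the three specializations concrete, the following Python sketch shows assumed standard forms of the linear, 1-bit, and group testing observation models, each relating a $k$-sparse vector to an observation vector; the specific matrix distributions and parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters: n observations, dimension p, sparsity k.
n, p, k = 64, 256, 4
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = 1.0

X = rng.standard_normal((n, p))  # i.i.d. Gaussian measurement matrix

# Linear model: Y = X beta + Z with additive Gaussian noise.
Y_linear = X @ beta + 0.1 * rng.standard_normal(n)

# 1-bit model: only the sign of each noisy linear measurement is observed.
Y_1bit = np.sign(X @ beta + 0.1 * rng.standard_normal(n))

# Noiseless group testing: binary test-inclusion matrix; a test is positive
# iff it includes at least one item from the support.
X_gt = rng.random((n, p)) < (1.0 / k)
Y_group = (X_gt.astype(int) @ (beta > 0).astype(int)) > 0
```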