
    On the reconstruction of block-sparse signals with an optimal number of measurements

    Let A be an M × N matrix (M < N) drawn from a real random Gaussian ensemble. In compressed sensing we are interested in finding the sparsest solution to the system of equations Ax = y for a given y. In general, whenever the sparsity of x is smaller than half the dimension of y, the sparsest solution is unique with overwhelming probability over A, and for any y it can be found by an exhaustive search over x with exponential time complexity. The recent work of Candès, Donoho, and Tao shows that minimizing the ℓ_1 norm of x subject to Ax = y yields the sparsest solution provided the sparsity of x, say K, is smaller than a certain threshold for a given number of measurements. Specifically, if the dimension of y approaches the dimension of x, the sparsity of x should satisfy K < 0.239 N. Here, we consider the case where x is d-block sparse, i.e., x consists of n = N/d blocks, each of which is either a zero vector or a nonzero vector. Instead of the ℓ_1-norm relaxation, we consider the relaxation min_x ‖X_1‖_2 + ‖X_2‖_2 + ... + ‖X_n‖_2 subject to Ax = y, where X_i = (x_{(i-1)d+1}, x_{(i-1)d+2}, ..., x_{id}) for i = 1, 2, ..., n. Our main result is that as n → ∞, the minimization finds the sparsest solution to Ax = y, with overwhelming probability in A, for any x whose block sparsity is k/n < 1/2 - O(ε), provided M/N > 1 - 1/d and d = Ω(log(1/ε)/ε). The relaxation can be solved in polynomial time using semidefinite programming.
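    The relaxation above is in fact a second-order cone program, so off-the-shelf convex solvers handle it directly. Below is a minimal sketch in Python using the cvxpy modeling library (not part of the paper); the dimensions, number of nonzero blocks, and signal are illustrative values chosen only to roughly satisfy the stated conditions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, d = 60, 4                  # signal length and block size (illustrative)
n = N // d                    # number of blocks
M = 48                        # measurements; M/N = 0.8 > 1 - 1/d = 0.75

# Ground-truth block-sparse signal: 3 of the n blocks are nonzero.
x_true = np.zeros(N)
for i in rng.choice(n, size=3, replace=False):
    x_true[i * d:(i + 1) * d] = rng.standard_normal(d)

A = rng.standard_normal((M, N))   # real Gaussian measurement ensemble
y = A @ x_true

# min_x sum_i ||X_i||_2  subject to  A x = y
x = cp.Variable(N)
objective = sum(cp.norm(x[i * d:(i + 1) * d], 2) for i in range(n))
problem = cp.Problem(cp.Minimize(objective), [A @ x == y])
problem.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```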

    Analyzing Weighted ℓ_1 Minimization for Sparse Recovery With Nonuniform Sparse Models

    In this paper, we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted ℓ_1 minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with the entries of each set having a specific probability of being nonzero. We propose a weighted ℓ_1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the sizes of the two sets, and the probabilities of being nonzero) under which, when i.i.d. random Gaussian measurement matrices are used, weighted ℓ_1 minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that when the support of the signal can be divided into two subclasses with unequal sparsity fractions, weighted ℓ_1 minimization substantially outperforms regular ℓ_1 minimization. We also generalize our results to signal vectors with an arbitrary number of sparsity subclasses.
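    As a rough illustration of the recovery step only (not of the weight optimization, which is the paper's contribution), the following Python/cvxpy sketch minimizes a weighted ℓ_1 norm over the two-set model; the set sizes, nonzero probabilities, and weights are assumed example values, not the optimal ones derived in the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N, M = 200, 100
k1, k2 = 100, 100            # sizes of the two entry sets (assumed)
p1, p2 = 0.25, 0.05          # per-set probabilities of being nonzero (assumed)

# Signal drawn from the nonuniform sparsity model.
support = np.concatenate([rng.random(k1) < p1, rng.random(k2) < p2])
x_true = support * rng.standard_normal(N)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian measurements
y = A @ x_true

# Heavier weight on the sparser set, so its entries are penalized more.
w = np.concatenate([np.ones(k1), 3.0 * np.ones(k2)])  # illustrative weights
x = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)), [A @ x == y])
problem.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```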

    Distributed Data Aggregation for Sparse Recovery in Wireless Sensor Networks

    We consider the approximate sparse recovery problem in Wireless Sensor Networks (WSNs) using Compressed Sensing/Compressive Sampling (CS). The goal is to recover the n-dimensional data values by querying only m ≪ n sensors, based on a linear projection of the sensor readings. To solve this problem, a two-tiered sampling model is considered and a novel distributed compressive sparse sampling (DCSS) algorithm is proposed, based on a sparse binary CS measurement matrix. In the two-tiered sampling model, each sensor first samples the environment independently. Then the fusion center (FC), acting as a pseudo-sensor, samples the sensor network to select a subset of sensors (m out of n) that respond directly to the FC for data recovery. The sparse binary matrix is designed using an unbalanced expander graph, which achieves state-of-the-art performance among CS schemes. This binary matrix can be interpreted as a sensor selection matrix, whose fairness we analyze. Extensive experiments on both synthetic and real data sets show that by querying only m sensors with the DCSS algorithm, the CS recovery accuracy can be as good as with dense measurement matrices (e.g., Gaussian, scrambled Fourier). We also show that the sparse binary measurement matrix works well on compressible data, for which the recovery result is closest to the best known k-term approximation, and that the recovery is robust against noisy measurements. The sparsity and binary properties of the measurement matrix contribute, to a great extent, to the reduction of both the in-network communication cost and the computational burden.
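    To make the measurement-matrix construction concrete, here is a toy Python sketch of a sparse binary matrix with a fixed number of ones per column, which is the standard randomized stand-in for the adjacency matrix of an unbalanced expander; the sizes and per-column weight are made-up parameters, and the recovery step itself is omitted.

```python
import numpy as np

def sparse_binary_matrix(m, n, c, rng):
    """m x n binary matrix with exactly c ones per column."""
    Phi = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        rows = rng.choice(m, size=c, replace=False)
        Phi[rows, j] = 1
    return Phi

rng = np.random.default_rng(2)
n, m, c = 500, 80, 8          # sensors, queried subset size, ones per column
Phi = sparse_binary_matrix(m, n, c, rng)

readings = rng.standard_normal(n)    # stand-in for the n sensor readings
y = Phi @ readings                   # m aggregated measurements at the FC
print(Phi.sum(axis=0)[:5], y.shape)  # per-column weights and measurement size
```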

    Provable Sparse Tensor Decomposition

    Full text link
    We propose a novel sparse tensor decomposition method, the Tensor Truncated Power (TTP) method, which incorporates variable selection into the estimation of the decomposition components. Sparsity is achieved via an efficient truncation step embedded in the tensor power iteration. Our method applies to a broad family of high-dimensional latent variable models, including high-dimensional Gaussian mixtures and mixtures of sparse regressions. A thorough theoretical investigation is conducted: in particular, we show that the final decomposition estimator is guaranteed to achieve a local statistical rate, and we strengthen this to a global statistical rate by introducing a proper initialization procedure. In high-dimensional regimes, the obtained statistical rate significantly improves on those of existing non-sparse decomposition methods. The empirical advantages of TTP are confirmed in extensive simulations and in two real applications: click-through rate prediction and high-dimensional gene clustering.
    Comment: To appear in JRSS-
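    A bare-bones Python sketch of the core update, in the simplest possible setting (a rank-one, symmetric third-order tensor): each iteration applies one tensor power step followed by hard truncation to the s largest-magnitude entries. The helper name, the naive random initialization, and all parameters are illustrative; the paper's actual initialization procedure is what yields the global rate.

```python
import numpy as np

def truncate(v, s):
    """Keep the s largest-magnitude entries of v, zero the rest, renormalize."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out / np.linalg.norm(out)

rng = np.random.default_rng(3)
p, s = 50, 5                                          # dimension, sparsity
u_true = truncate(rng.standard_normal(p), s)          # sparse unit vector
T = np.einsum('i,j,k->ijk', u_true, u_true, u_true)   # rank-1 sparse tensor
T += 0.01 * rng.standard_normal(T.shape)              # additive noise

u = truncate(rng.standard_normal(p), s)               # naive initialization
for _ in range(50):
    u = truncate(np.einsum('ijk,j,k->i', T, u, u), s) # power step + truncation
print("alignment with truth:", abs(u @ u_true))
```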