
    Sparse Recovery of Nonnegative Signals With Minimal Expansion

    We investigate the problem of reconstructing a high-dimensional nonnegative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes can be more efficient with respect to both signal sensing and reconstruction complexity. Known constructions use the adjacency matrices of expander graphs, which often lead to recovery algorithms much more efficient than l_1 minimization. However, prior constructions of sparse measurement matrices rely on expander graphs with very high expansion coefficients, which makes such graphs difficult to construct and the recoverable sets very small. In this paper, we introduce sparse measurement matrices for the recovery of nonnegative vectors, using perturbations of the adjacency matrices of expander graphs requiring much smaller expansion coefficients, hereafter referred to as minimal expanders. We show that when l_1 minimization is used as the reconstruction method, these constructions allow the recovery of signals almost three orders of magnitude larger than existing theoretical results for sparse measurement matrices permit. We provide, for the first time, tight upper bounds for the so-called weak and strong recovery thresholds when l_1 minimization is used. We further show that the success of l_1 optimization is equivalent to the existence of a “unique” vector in the set of solutions to the linear equations, which enables alternative algorithms for l_1 minimization. We also show that the defined minimal expansion property is necessary for all measurement matrices for compressive sensing (even when the nonnegativity assumption is removed), therefore implying that our construction is tight. We finally present a novel recovery algorithm that exploits expansion and is much more computationally efficient than l_1 minimization.
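    The reconstruction step described above is a linear program. Below is a minimal sketch of nonnegative l_1 recovery with a sparse 0/1 measurement matrix, using SciPy's LP solver; the random left-regular matrix and all dimensions are illustrative stand-ins, not the paper's perturbed-expander construction.

```python
# A minimal sketch (not the paper's construction): l_1 recovery of a
# nonnegative sparse vector from a random sparse 0/1 measurement matrix.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k, d = 200, 80, 8, 6     # illustrative: dimension, measurements, sparsity, column weight

# Each column has d ones: the adjacency matrix of a d-left-regular bipartite
# graph (no expansion property is verified here).
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# k-sparse nonnegative ground truth and its measurements.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x

# l_1 minimization for nonnegative signals: min sum(x) s.t. Ax = y, x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None))
print("recovery error:", np.linalg.norm(res.x - x))
```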

    Sparse Recovery of Positive Signals with Minimal Expansion

    We investigate the sparse recovery problem of reconstructing a high-dimensional nonnegative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications, such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than \ell_1 minimization. However, to date, constructions based on expanders have required very high expansion coefficients, which can make the construction of such graphs difficult and the size of the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of nonnegative vectors, using perturbations of the adjacency matrix of an expander graph with a much smaller expansion coefficient. We present a necessary and sufficient condition for \ell_1 optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a "unique" vector in the constraint set, which opens the door to alternative algorithms to \ell_1 minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible, and that our construction is therefore tight. We also present a novel recovery algorithm that exploits expansion and is much faster than \ell_1 optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity. Comment: 25 pages, submitted for publication.
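    One way to see the "unique vector" equivalence mentioned above: when every column of a 0/1 measurement matrix contains the same number d of ones, every nonnegative solution of Ax = y has the same \ell_1 norm, so \ell_1 minimization succeeds exactly when the constraint set is a singleton. A sketch under those assumptions (the random d-regular matrix and sizes below are illustrative, not the paper's construction):

```python
# Sketch of the "unique vector in the constraint set" observation: with
# constant column sums, all nonnegative solutions of Ax = y share one l_1
# norm, so any feasible-point finder solves the l_1 problem.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, m, k, d = 120, 60, 5, 6

A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0   # every column sums to d

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x

# For any feasible z >= 0: sum(y) = sum(Az) = d * sum(z), so ||z||_1 =
# sum(y) / d is constant over the whole constraint set.
print("sum(y)/d =", y.sum() / d, " ||x||_1 =", x.sum())

# Hence any method returning a feasible nonnegative point solves the l_1
# problem; nonnegative least squares is one LP-free alternative.
z, resid = nnls(A, y)
print("feasibility residual:", resid, " error:", np.linalg.norm(z - x))
```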

    A Simple Message-Passing Algorithm for Compressed Sensing

    We consider the recovery of a nonnegative vector x from measurements y = Ax, where A is an m-by-n matrix whose entries are in {0, 1}. We establish that when A corresponds to the adjacency matrix of a bipartite graph with sufficient expansion, a simple message-passing algorithm produces an estimate \hat{x} of x satisfying ||x-\hat{x}||_1 \leq O(n/k) ||x-x(k)||_1, where x(k) is the best k-sparse approximation of x. The algorithm performs O(n (log(n/k))^2 log(k)) computation in total, and the number of measurements required is m = O(k log(n/k)). In the special case when x is k-sparse, the algorithm recovers x exactly in time O(n log(n/k) log(k)). Ultimately, this work is a further step toward more formally developing the broader role of message-passing algorithms in solving compressed sensing problems.
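    The paper's exact update rule is not reproduced here; the following is a simplified iteration in the same spirit, under assumed graph parameters: each variable repeatedly re-estimates itself as the median of its neighboring residual measurements, with nonnegativity enforced by clipping.

```python
# Simplified message-passing sketch (in the spirit of, but not identical to,
# the paper's algorithm) on a random d-left-regular bipartite graph.
import numpy as np

rng = np.random.default_rng(2)
n, m, k, d = 250, 100, 5, 7             # illustrative graph parameters

cols = [rng.choice(m, size=d, replace=False) for _ in range(n)]  # neighbors of each variable
A = np.zeros((m, n))
for j, rows in enumerate(cols):
    A[rows, j] = 1.0

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x

xh = np.zeros(n)
for t in range(10):
    r = y - A @ xh                      # residual on each measurement
    u = np.array([xh[j] + np.median(r[cols[j]]) for j in range(n)])
    xh = np.maximum(u, 0.0)             # nonnegativity of the signal
    print(f"iter {t}: l1 error = {np.abs(xh - x).sum():.3e}")
```

    The median is what exploits expansion: when most of a variable's neighboring measurements are untouched by other support elements, the median of their residuals is a robust estimate of that variable's error.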

    Compressed sensing performance bounds under Poisson noise

    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical \ell_2--\ell_1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition. Comment: 12 pages, 3 pdf figures; accepted for publication in IEEE Transactions on Signal Processing.
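    A minimal sketch of the setup described above: the flux-preserving matrix here is just a random 0/1 matrix with columns normalized to sum to one, a stand-in for the paper's construction, and the penalty weight tau and the plain \ell_1 penalty are assumptions rather than the paper's choices.

```python
# Sketch of the Poisson CS setting: nonnegative flux-preserving sensing
# matrix (assumed form) plus a penalized Poisson negative log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, m, k, intensity = 100, 40, 4, 1e4

A = rng.integers(0, 2, size=(m, n)).astype(float)
A /= A.sum(axis=0)                       # column sums = 1: photon flux preserved

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0
x *= intensity / x.sum()                 # fix the total intensity of the scene
y = rng.poisson(A @ x)                   # Poisson-corrupted measurements

tau, eps = 1.0, 1e-12                    # assumed penalty weight, log safeguard
def objective(z):
    lam = A @ z + eps                    # predicted photon rates
    return np.sum(lam - y * np.log(lam)) + tau * np.sum(z)  # Poisson NLL + penalty

res = minimize(objective, x0=np.full(n, intensity / n),
               bounds=[(0, None)] * n, method="L-BFGS-B")
print("relative error:", np.linalg.norm(res.x - x) / np.linalg.norm(x))
```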

    Critical Parameter Values and Reconstruction Properties of Discrete Tomography: Application to Experimental Fluid Dynamics

    We analyze representative ill-posed scenarios of tomographic PIV with a focus on conditions for unique volume reconstruction. Based on sparse random seedings of a region of interest with small particles, the corresponding systems of linear projection equations are probabilistically analyzed in order to determine (i) the possibility of unique reconstruction in terms of the imaging geometry and the critical sparsity parameter, and (ii) the sharpness of the transition to non-unique reconstruction with ghost particles when the sparsity parameter is chosen improperly. The sparsity parameter directly relates to the seeding density used for PIV in experimental fluid dynamics, which to date has been chosen empirically. Our results provide a basic mathematical characterization of the PIV volume reconstruction problem that is an essential prerequisite for any algorithm used to actually compute the reconstruction. Moreover, we connect the sparse volume function reconstruction problem from few tomographic projections to major developments in compressed sensing. Comment: 22 pages, submitted to Fundamenta Informaticae. arXiv admin note: text overlap with arXiv:1208.589
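    The uniqueness question above can be probed numerically on a toy instance (this is an illustrative test, not the paper's probabilistic analysis): seed a small 2D grid with a few particles, measure row and column sums as a stand-in two-camera geometry, and check whether the nonnegative feasible set is a singleton by optimizing a random linear objective in both directions.

```python
# Toy uniqueness test for discrete tomography: if the nonnegative feasible
# set {v >= 0 : Pv = y} is a singleton, minimizing and maximizing a random
# linear objective over it must give the same point.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
s, k = 12, 6                              # grid side length, number of particles

vol = np.zeros((s, s))
vol.flat[rng.choice(s * s, size=k, replace=False)] = 1.0

# Projection matrix: s row-sum equations stacked on s column-sum equations.
P = np.zeros((2 * s, s * s))
for i in range(s):
    for j in range(s):
        P[i, i * s + j] = 1.0             # row sums
        P[s + j, i * s + j] = 1.0         # column sums
y = P @ vol.ravel()

c = rng.normal(size=s * s)
lo = linprog(c,  A_eq=P, b_eq=y, bounds=(0, None)).x
hi = linprog(-c, A_eq=P, b_eq=y, bounds=(0, None)).x
print("unique nonnegative reconstruction:", np.allclose(lo, hi, atol=1e-8))
```

    With only two viewing directions this test usually reports non-uniqueness (ghost particles); adding viewing directions or lowering the seeding density pushes the instance toward the unique regime, which is the transition the paper quantifies.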

    Summary Based Structures with Improved Sublinear Recovery for Compressed Sensing

    We introduce a new class of measurement matrices for compressed sensing, using low-order summaries over binary sequences of a given length. We prove recovery guarantees for three reconstruction algorithms using the proposed measurements, including \ell_1 minimization and two combinatorial methods. In particular, one of the algorithms recovers k-sparse vectors of length N in sublinear time \text{poly}(k \log N), and requires at most \Omega(k \log N \log\log N) measurements. The empirical oversampling constant of the algorithm is significantly better than that of existing sublinear recovery algorithms such as Chaining Pursuit and Sudocodes. In particular, for 10^3 \leq N \leq 10^8 and k = 100, the oversampling factor is between 3 and 8. We provide preliminary insight into how the proposed constructions and the fast recovery scheme can be used in a number of practical applications, such as market basket analysis and real-time compressed sensing implementation.
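    The paper's summary-based measurements are not reproduced here; as a hedged illustration of the class of combinatorial decoders it is compared against (e.g., Sudocodes), the sketch below runs a peeling decoder on a random sparse 0/1 matrix: zero measurements zero out all their unresolved neighbors (safe here because the signal is nonnegative), and measurements with exactly one unresolved neighbor determine that entry directly.

```python
# Sudocodes-style peeling sketch (illustrative of combinatorial decoding in
# general, not the paper's summary-based construction).
import numpy as np

rng = np.random.default_rng(5)
n, m, k, d = 300, 120, 6, 8

A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x

xh = np.full(n, np.nan)                  # NaN marks an unresolved entry
r = y.copy()
for _ in range(50):                      # crude fixed number of sweeps
    for i in range(m):
        nbrs = np.flatnonzero(A[i])
        open_ = nbrs[np.isnan(xh[nbrs])]
        if len(open_) == 0:
            continue
        if abs(r[i]) < 1e-9:             # zero summary: all open neighbors are 0
            xh[open_] = 0.0
        elif len(open_) == 1:            # singleton: entry equals the residual
            val = r[i]
            xh[open_[0]] = val
            r -= A[:, open_[0]] * val    # peel its contribution everywhere
print("recovered exactly:", not np.isnan(xh).any() and np.allclose(xh, x))
```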