20,523 research outputs found

    Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property

    Full text link
    Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard Compressed Sensing paradigm, the m x n measurement matrix A is required to act as a near isometry on the set of all k-sparse signals (the Restricted Isometry Property, or RIP). Although certain probabilistic processes are known to generate m x n matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix A has this property, which is crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying them acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. We require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods whose expected performance is sublinear in n and only quadratic in m; the focus on expected performance is more typical of mainstream signal processing than the worst-case analysis that prevails in standard Compressed Sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes. Comment: 16 pages, 2 figures, to appear in IEEE Journal of Selected Topics in Signal Processing, special issue on Compressed Sensing.
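    The group criterion above is easy to check numerically. Below is a minimal sketch, assuming the discrete-chirp family named in the abstract (parameters are illustrative, not the paper's): for prime m, the columns of the m x m^2 chirp matrix multiply pointwise, up to the 1/sqrt(m) normalization, by adding their indices modulo m, and a quick Monte Carlo run illustrates the statistical (signal-model) near-isometry.

    import numpy as np

    # Minimal sketch: the m x m^2 discrete chirp matrix for prime m, with
    # columns phi_{a,b}(t) = exp(2*pi*i*(a*t^2 + b*t)/m) / sqrt(m).
    m = 7  # any prime

    def chirp(a, b):
        t = np.arange(m)
        return np.exp(2j * np.pi * (a * t**2 + b * t) / m) / np.sqrt(m)

    A = np.column_stack([chirp(a, b) for a in range(m) for b in range(m)])

    # Group property: up to the sqrt(m) rescaling, the pointwise product of
    # two columns is the column whose indices add modulo m.
    a1, b1, a2, b2 = 2, 5, 3, 1
    prod = np.sqrt(m) * chirp(a1, b1) * chirp(a2, b2)
    assert np.allclose(prod, chirp((a1 + a2) % m, (b1 + b2) % m))

    # Statistical near-isometry: for *random* k-sparse x (the probability is
    # in the signal model, not the matrix), ||Ax|| concentrates around ||x||.
    rng = np.random.default_rng(0)
    k, trials = 3, 1000
    ratios = []
    for _ in range(trials):
        x = np.zeros(m * m)
        x[rng.choice(m * m, size=k, replace=False)] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
    print("mean, std of ||Ax||/||x||:", np.mean(ratios), np.std(ratios))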

    Compressive Sensing

    Get PDF
    Compressive sensing is a novel paradigm for acquiring signals and has a wide range of applications. The basic assumption is that one can recover a sparse or compressible signal from far fewer measurements than traditional methods require. The difficulty lies in the construction of efficient recovery algorithms. In this thesis, we review the two main approaches for solving the sparse recovery problem in compressive sensing: l1-minimization methods and greedy methods. Our contribution is to look at compressive sensing from a different point of view by connecting it with sparse interpolation. We introduce a new algorithm for compressive sensing called generalized eigenvalues (GE). GE uses the first m consecutive rows of the discrete Fourier matrix as its measurement matrix. GE solves for the support of a sparse signal directly by considering generalized eigenvalues of Hankel systems. Under Fourier measurements, we compare GE with iterative hard thresholding (IHT), one of the state-of-the-art greedy algorithms. Our experiments show that GE has a much higher probability of success than IHT when the number of measurements is small, while GE is somewhat more sensitive to signals with clustered entries. To address this problem, we give some observations from the experiments that suggest GE can potentially be improved by taking adaptive Fourier measurements. In addition, most greedy algorithms assume that the sparsity k is known, yet the sparsity depends on the signal and may not be available without prior information. GE, however, needs no prior information on the sparsity and can determine it by simply computing the rank of the Hankel system.
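    A minimal sketch of the Hankel/generalized-eigenvalue idea, reconstructed from the abstract rather than taken from the thesis: with the first m rows of the DFT, the measurements of a k-sparse signal form a sum of k complex exponentials, so the support can be read off from the generalized eigenvalues of a Hankel pencil, in the spirit of Prony's method.

    import numpy as np
    from scipy.linalg import eigvals, hankel

    # With measurements y_j = sum_s x_s * z_s^j, z_s = exp(-2*pi*i*s/n), the
    # support indices s appear as generalized eigenvalues of a Hankel pencil.
    rng = np.random.default_rng(1)
    n, k = 128, 4
    support = np.sort(rng.choice(n, size=k, replace=False))
    x = np.zeros(n)
    x[support] = rng.uniform(1.0, 2.0, size=k)

    m = 2 * k                           # first m consecutive Fourier rows
    F = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(n)) / n)
    y = F @ x

    # Hankel pencil: H0[i,l] = y[i+l], H1[i,l] = y[i+l+1]; the generalized
    # eigenvalues of (H1, H0) are exactly the z_s on the support. The sparsity
    # itself could be read off as the numerical rank of a larger Hankel
    # matrix, so k need not be known in advance.
    H0 = hankel(y[:k], y[k - 1:2 * k - 1])
    H1 = hankel(y[1:k + 1], y[k:2 * k])
    z = eigvals(H1, H0)
    est = np.sort(np.mod(np.round(-np.angle(z) * n / (2 * np.pi)), n).astype(int))
    print("true support:", support, " estimated:", est)

    # Amplitudes follow from one least-squares solve on the recovered support.
    c = np.linalg.lstsq(F[:, est], y, rcond=None)[0]
    print("max amplitude error:", np.max(np.abs(c.real - x[est])))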

    Efficient and Robust Compressed Sensing Using Optimized Expander Graphs

    Get PDF
    Expander graphs have recently been proposed for constructing efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper, we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue, with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to obtain explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and simplicity of recovery. Finally, we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal close to the best k-term approximation of the original signal.
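    The flavor of the recovery iteration can be sketched as follows; the majority-gap rule and all parameters below are illustrative stand-ins, not the paper's exact algorithm or expansion constants.

    import numpy as np
    from collections import Counter

    # Simplified gap-correction decoding over a sparse bipartite graph.
    rng = np.random.default_rng(2)
    n, m, d, k = 200, 80, 8, 3      # signal length, measurements, left degree, sparsity

    # d-left-regular bipartite graph: signal node j connects to checks nbrs[j].
    nbrs = np.array([rng.choice(m, size=d, replace=False) for _ in range(n)])

    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
    y = np.zeros(m)
    for j in range(n):              # each check sums its incident signal values
        y[nbrs[j]] += x[j]

    # Iterative correction: when a clear majority of a node's checks report
    # the same nonzero gap, commit that value; each commit zeroes its pure
    # checks, so with good expansion only O(k) corrections are needed overall.
    xhat = np.zeros(n)
    gaps = y.copy()                 # gaps = y - A @ xhat, kept incrementally
    for _ in range(2 * k + 10):
        if np.allclose(gaps, 0):
            break
        for j in range(n):
            val, cnt = Counter(np.round(gaps[nbrs[j]], 9)).most_common(1)[0]
            if cnt > d // 2 and val != 0:
                xhat[j] += val
                gaps[nbrs[j]] -= val
    print("exact recovery:", np.allclose(xhat, x))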

    Algorithmic linear dimension reduction in the l_1 norm for sparse vectors

    Get PDF
    This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in l_1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm. These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the l_1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)). Furthermore, this reconstruction is stable and robust under small perturbations
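    The embedding claim can be illustrated with a generic random sparse binary matrix (the RIP-1/expander route, related in spirit but not this paper's construction): for m-sparse x in dimension d, ||Ax||_1 stays within a modest factor of deg * ||x||_1 even though the target dimension is only on the order of m log d.

    import numpy as np

    # Generic illustration of l_1 dimension reduction for sparse vectors.
    rng = np.random.default_rng(3)
    d, m, deg = 4096, 8, 12             # ambient dimension, sparsity, column degree
    rows = 4 * m * int(np.log2(d))      # target dimension ~ O(m log d)

    A = np.zeros((rows, d))
    for col in range(d):                # `deg` ones per column at random rows
        A[rng.choice(rows, size=deg, replace=False), col] = 1.0

    lo, hi = np.inf, 0.0
    for _ in range(200):
        x = np.zeros(d)
        x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
        r = np.abs(A @ x).sum() / (deg * np.abs(x).sum())
        lo, hi = min(lo, r), max(hi, r)
    print("distortion range of ||Ax||_1 / (deg * ||x||_1):", lo, hi)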

    Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays

    Get PDF
    Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots translate directly to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods and that can also recover signals that are less sparse.
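    The measurement model is easy to simulate. The sketch below pools several probes per spot with far fewer spots than targets and recovers the sparse differential-expression vector using the standard linear-programming baseline (basis pursuit) that the paper's faster decoder is compared against; all names and parameters are hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    # Compressed-microarray model: sparse 0/1 pooling matrix, sparse signal.
    rng = np.random.default_rng(4)
    n_targets, n_spots, spots_per_target, k = 120, 40, 4, 4

    A = np.zeros((n_spots, n_targets))
    for t in range(n_targets):          # each target's probe appears in a few spots
        A[rng.choice(n_spots, size=spots_per_target, replace=False), t] = 1.0

    x = np.zeros(n_targets)             # few genes are differentially expressed
    x[rng.choice(n_targets, size=k, replace=False)] = rng.uniform(1.0, 3.0, size=k)
    y = A @ x                           # observed spot intensities

    # Basis pursuit: min ||x||_1 s.t. Ax = y, posed as an LP with x = u - v.
    res = linprog(np.ones(2 * n_targets), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None))
    xhat = res.x[:n_targets] - res.x[n_targets:]
    print("max recovery error:", np.max(np.abs(xhat - x)))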