
    Two-Part Reconstruction with Noisy-Sudocodes

    We develop a two-part reconstruction framework for signal recovery in compressed sensing (CS), where a fast algorithm is applied to provide partial recovery in Part 1, and a CS algorithm is applied to complete the residual problem in Part 2. Partitioning the reconstruction process into two complementary parts provides a natural trade-off between runtime and reconstruction quality. To exploit the advantages of the two-part framework, we propose a Noisy-Sudocodes algorithm that performs two-part reconstruction of sparse signals in the presence of measurement noise. Specifically, we design a fast algorithm for Part 1 of Noisy-Sudocodes that identifies the zero coefficients of the input signal from its noisy measurements. Many existing CS algorithms could be applied to Part 2, and we investigate approximate message passing (AMP) and binary iterative hard thresholding (BIHT). For Noisy-Sudocodes with AMP in Part 2, we provide a theoretical analysis that characterizes the trade-off between runtime and reconstruction quality. In a 1-bit CS setting where a new 1-bit quantizer is constructed for Part 1 and BIHT is applied to Part 2, numerical results show that the Noisy-Sudocodes algorithm improves over BIHT in both runtime and reconstruction quality.
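
    A minimal sketch of the two-part idea is given below, assuming a sparse 0/1 measurement matrix and a simple magnitude test for Part 1; the threshold rule and the cs_solver interface are illustrative assumptions, not the paper's exact Noisy-Sudocodes construction, and any off-the-shelf CS algorithm can stand in for AMP or BIHT in Part 2.

        import numpy as np

        def part1_identify_zeros(A, y, tau):
            # Part 1 (illustrative): a measurement whose magnitude falls below the
            # noise-dependent threshold tau is treated as consistent with all of its
            # neighboring coefficients being zero, so every coefficient touched by
            # such a measurement is declared zero.  A is a dense 0/1 matrix here.
            small = np.abs(y) <= tau
            zero_mask = A[small].sum(axis=0) > 0
            return zero_mask

        def two_part_reconstruct(A, y, tau, cs_solver):
            # Part 2: hand the unresolved coefficients to any CS solver
            # (AMP and BIHT are the choices studied in the paper;
            # cs_solver(A_sub, y) -> coefficient estimates is a generic stand-in).
            zero_mask = part1_identify_zeros(A, y, tau)
            x_hat = np.zeros(A.shape[1])
            keep = ~zero_mask
            if keep.any():
                x_hat[keep] = cs_solver(A[:, keep], y)
            return x_hat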

    Two-Part Reconstruction in Compressed Sensing

    Two-part reconstruction is a framework for signal recovery in compressed sensing (CS), in which the advantages of two different algorithms are combined. Our framework allows us to accelerate the reconstruction procedure without compromising the reconstruction quality. To illustrate the efficacy of our two-part approach, we extend the authors' previous Sudocodes algorithm and make it robust to measurement noise. In a 1-bit CS setting, promising numerical results indicate that our algorithm offers both a reduction in run-time and an improvement in reconstruction quality.

    Compressed sensing using sparse binary measurements: a rateless coding perspective

    Compressed Sensing (CS) methods using sparse binary measurement matrices and iterative message-passing recovery procedures have been recently investigated due to their low computational complexity and excellent performance. Drawing much of their inspiration from sparse-graph codes such as Low-Density Parity-Check (LDPC) codes, these studies use analytical tools from modern coding theory to analyze CS solutions. In this paper, we consider and systematically analyze the CS setup inspired by a class of efficient, popular and flexible sparse-graph codes called rateless codes. The proposed rateless CS setup is asymptotically analyzed using tools such as Density Evolution and EXIT charts and fine-tuned using degree distribution optimization techniques.
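
    As a rough illustration of the rateless viewpoint, the sketch below draws sparse binary measurement rows one at a time from a degree distribution, the way an LT-style encoder would; the particular toy distribution and the on-the-fly row generation are assumptions for illustration, not the degree-optimized design analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def draw_rateless_measurements(x, degree_dist, num_measurements):
            # LT-code style encoding: sample a row degree, pick that many random
            # signal positions, and record their sum.  degree_dist maps
            # degree -> probability (an illustrative stand-in for an optimized
            # degree distribution).
            n = len(x)
            degrees = np.array(list(degree_dist.keys()))
            probs = np.array(list(degree_dist.values()))
            rows, y = [], []
            for _ in range(num_measurements):
                d = rng.choice(degrees, p=probs)
                support = rng.choice(n, size=d, replace=False)
                row = np.zeros(n)
                row[support] = 1.0
                rows.append(row)
                y.append(row @ x)          # one more measurement, generated on demand
            return np.array(rows), np.array(y)

        # The rateless property: more rows can be appended later without
        # redesigning the measurement matrix.
        x = np.zeros(1000)
        x[rng.choice(1000, 10, replace=False)] = rng.standard_normal(10)
        A, y = draw_rateless_measurements(x, {1: 0.1, 2: 0.5, 3: 0.3, 4: 0.1}, 200)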

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    A robust parallel algorithm for combinatorial compressed sensing

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k < n$ nonzeros can be recovered from an expander sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A)\log k)$ operations via the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.
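
    The sketch below is a heavily simplified noiseless peeling decoder in the spirit of combinatorial expander-sketch recovery; it is not the authors' Parallel-$\ell_0$ or Robust-$\ell_0$ (in particular it carries no posterior model for the noise), and the two peeling rules are assumptions chosen only to convey the flavor of the approach.

        import numpy as np

        def peel_decode(A, y, max_iters=50):
            # Toy noiseless peeling for a sparse binary sketch y = A @ x.
            # Rule 1: a (near-)zero measurement certifies that all of its
            #         unresolved neighbors are zero.
            # Rule 2: a measurement with exactly one unresolved neighbor
            #         determines that neighbor's value.
            m, n = A.shape
            x_hat = np.full(n, np.nan)                 # nan marks an unresolved coefficient
            residual = y.astype(float).copy()
            for _ in range(max_iters):
                before = np.isnan(x_hat).sum()
                unresolved = np.isnan(x_hat)
                for j in np.where(np.isclose(residual, 0.0))[0]:
                    x_hat[(A[j] > 0) & unresolved] = 0.0          # Rule 1
                for j in range(m):
                    unresolved = np.isnan(x_hat)
                    nbrs = np.where((A[j] > 0) & unresolved)[0]
                    if len(nbrs) == 1:                            # Rule 2
                        i = nbrs[0]
                        x_hat[i] = residual[j]
                        residual -= A[:, i] * x_hat[i]
                if np.isnan(x_hat).sum() in (0, before):          # done, or no progress
                    break
            return np.nan_to_num(x_hat)                # unresolved entries default to zero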

    Topics in Compressed Sensing

    Compressed sensing has a wide range of applications that include error correction, imaging, radar and many more. Given a sparse signal in a high dimensional space, one wishes to reconstruct that signal accurately and efficiently from a number of linear measurements much less than its actual dimension. Although in theory it is clear that this is possible, the difficulty lies in the construction of algorithms that perform the recovery efficiently, as well as determining which kind of linear measurements allow for the reconstruction. There have been two distinct major approaches to sparse recovery that each present different benefits and shortcomings. The first, L1-minimization methods such as Basis Pursuit, use a linear optimization problem to recover the signal. This method provides strong guarantees and stability, but relies on Linear Programming, whose methods do not yet have strong polynomially bounded runtimes. The second approach uses greedy methods that compute the support of the signal iteratively. These methods are usually much faster than Basis Pursuit, but until recently had not been able to provide the same guarantees. This gap between the two approaches was bridged when we developed and analyzed the greedy algorithm Regularized Orthogonal Matching Pursuit (ROMP). ROMP provides similar guarantees to Basis Pursuit as well as the speed of a greedy algorithm. Our more recent algorithm Compressive Sampling Matching Pursuit (CoSaMP) improves upon these guarantees, and is optimal in every important aspect.
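
    For concreteness, a compact CoSaMP-style iteration is sketched below; the stopping rule and parameter choices are simplified assumptions, and the least-squares step uses a plain call to numpy's lstsq rather than an optimized solver. Each pass needs only a proxy computation, a support merge, and a least-squares solve over at most 3k columns, which is broadly why greedy methods run faster than LP-based Basis Pursuit.

        import numpy as np

        def cosamp(A, y, k, max_iters=30, tol=1e-6):
            # CoSaMP-style greedy recovery of a k-sparse x from y = A @ x:
            # proxy -> merge support -> least squares -> prune to k.
            n = A.shape[1]
            x_hat = np.zeros(n)
            residual = y.copy()
            for _ in range(max_iters):
                proxy = A.T @ residual                              # signal proxy
                omega = np.argsort(np.abs(proxy))[-2 * k:]          # 2k largest proxy entries
                support = np.union1d(omega, np.flatnonzero(x_hat))  # merge with current support
                b = np.zeros(n)
                b[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
                x_hat = np.zeros(n)
                top_k = np.argsort(np.abs(b))[-k:]                  # prune to k largest entries
                x_hat[top_k] = b[top_k]
                residual = y - A @ x_hat
                if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
                    break
            return x_hat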

    Further Results on Performance Analysis for Compressive Sensing Using Expander Graphs

    Compressive sensing is an emerging technology which can recover a sparse signal vector of dimension n via a much smaller number of measurements than n. In this paper, we will give further results on the performance bounds of compressive sensing. We consider the newly proposed expander graph based compressive sensing schemes and show that, similar to the l_1 minimization case, we can exactly recover any k-sparse signal using only O(k log(n)) measurements, where k is the number of nonzero elements. The number of computational iterations is of order O(k log(n)), while each iteration involves very simple computational steps.

    Ultra Low-Complexity Detection of Spectrum Holes in Compressed Wideband Spectrum Sensing

    Wideband spectrum sensing is a significant challenge in cognitive radios (CRs) because it requires very high-speed analog-to-digital converters (ADCs) operating at or above the Nyquist rate. Here, we propose a very low-complexity zero-block detection scheme that can detect a large fraction of spectrum holes from the sub-Nyquist samples, even when the undersampling ratio is very small. The scheme is based on a block sparse sensing matrix, which is implemented through the design of a novel analog-to-information converter (AIC). The proposed scheme identifies some measurements as being zero and then verifies the sub-channels associated with them as being vacant. Analytical and simulation results are presented that demonstrate the effectiveness of the proposed method in reliable detection of spectrum holes with complexity much lower than that of existing schemes. This work also introduces a new paradigm in compressed sensing, where one is interested in reliable detection of (some of the) zero blocks rather than the recovery of the whole block sparse signal. Comment: 7 pages, 5 figures
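
    A toy version of the zero-block idea follows, assuming each compressed measurement aggregates exactly one contiguous block of sub-channels and that a simple energy threshold separates noise-only measurements; the aggregation pattern and threshold are illustrative stand-ins for the paper's AIC design, not a description of it.

        import numpy as np

        def detect_vacant_blocks(y, block_size, threshold):
            # Each entry of y is assumed to aggregate one block of sub-channels.
            # A measurement below the threshold is declared noise-only, and its
            # whole block of sub-channels is marked as a spectrum hole.
            vacant_blocks = np.abs(y) < threshold
            return np.repeat(vacant_blocks, block_size)   # per-sub-channel flags

        # Illustrative use: 32 blocks of 8 sub-channels, measured at a sub-Nyquist rate.
        rng = np.random.default_rng(1)
        occupied = rng.random(32) < 0.2                   # roughly 20% of blocks carry a signal
        y = occupied * rng.uniform(1.0, 2.0, 32) + 0.05 * rng.standard_normal(32)
        holes = detect_vacant_blocks(y, block_size=8, threshold=0.3)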

    Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays

    Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and that can also recover signals with less sparsity.
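
    A minimal sketch of the compressed-microarray measurement model: each spot pools a few probes, so its readout is a sparse binary combination of target abundances. The pooling pattern, noise level, and the generic sparse solver mentioned at the end are assumptions for illustration, not the paper's recovery algorithm.

        import numpy as np

        rng = np.random.default_rng(2)
        n_targets, n_spots, probes_per_spot = 500, 120, 4

        # Sparse binary "pooling" matrix: spot i responds to the few targets
        # whose probes it contains.
        A = np.zeros((n_spots, n_targets))
        for i in range(n_spots):
            A[i, rng.choice(n_targets, probes_per_spot, replace=False)] = 1.0

        # Differential expression is sparse: only a few targets differ between
        # the test and reference samples.
        x = np.zeros(n_targets)
        x[rng.choice(n_targets, 15, replace=False)] = rng.standard_normal(15)

        y = A @ x + 0.01 * rng.standard_normal(n_spots)   # noisy spot readouts

        # Any sparse solver can stand in for the paper's low-complexity algorithm,
        # e.g. the CoSaMP sketch given earlier in this list:
        # x_hat = cosamp(A, y, k=15)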