
    Algorithms for Sparse Signal Recovery in Compressed Sensing

    Compressed sensing and sparse signal modeling have attracted considerable research interest in recent years. The basic idea of compressed sensing is that by exploiting the sparsity of a signal, one can accurately represent the signal using fewer samples than traditional sampling requires. This thesis reviews the fundamental theoretical results in compressed sensing regarding the required number of measurements and the structure of the measurement system. The main focus of the thesis is on algorithms that accurately recover the original sparse signal from its compressed set of measurements. A number of greedy algorithms for sparse signal recovery are reviewed and numerically evaluated, along with the convergence properties and error bounds of some of these algorithms. The greedy approach to sparse signal recovery is then extended to the multichannel sparse signal model. A widely used non-Bayesian greedy algorithm for the joint recovery of multichannel sparse signals is reviewed. In cases where accurate prior information about the unknown sparse signals is available, Bayesian estimators are expected to outperform non-Bayesian estimators. A Bayesian minimum mean-squared error (MMSE) estimator of the multichannel sparse signals with a Gaussian prior is derived in closed form. Since computing the exact MMSE estimator is infeasible due to its combinatorial complexity, a novel algorithm for approximating the multichannel MMSE estimator is developed in this thesis. In comparison to the widely used non-Bayesian algorithm, the developed Bayesian algorithm shows better performance in terms of mean-squared error and probability of exact support recovery. The algorithm is applied to direction-of-arrival estimation with sensor arrays and to image denoising, and is shown to provide accurate results in both applications.
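
    The abstract does not name the specific greedy algorithms it reviews; as a hedged illustration, the NumPy sketch below implements Orthogonal Matching Pursuit (OMP), one of the standard greedy recovery algorithms in this literature. All function names, sizes and values are our own illustrative choices, not the thesis's code.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    n = A.shape[1]
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit by least squares on the chosen support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Demo: recover a 3-sparse signal from 40 random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x = np.zeros(100)
x[[5, 17, 60]] = [1.0, -2.0, 0.5]
print("recovery error:", np.linalg.norm(omp(A, A @ x, 3) - x))
```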

    Hierarchical isometry properties of hierarchical measurements

    Compressed sensing studies linear recovery problems under structure assumptions. We introduce a new class of measurement operators, coined hierarchical measurement operators, and prove results guaranteeing the efficient, stable and robust recovery of hierarchically structured signals from such measurements. We derive bounds on their hierarchical restricted isometry properties based on the restricted isometry constants of their constituent matrices, generalizing and extending prior work on Kronecker-product measurements. As an exemplary application, we apply the theory to two communication scenarios. The fast and scalable HiHTP algorithm is shown to be suitable for solving these types of problems, and its performance is evaluated numerically in terms of sparse signal recovery and block detection capability.
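
    As a hedged sketch of the key step in HiHTP-type algorithms, the code below implements a hierarchical (s, sigma)-thresholding operator: within each block keep the sigma largest-magnitude entries, then keep the s most energetic blocks. The block layout and values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hierarchical_threshold(x, num_blocks, s, sigma):
    """Project x onto (s, sigma)-hierarchically sparse vectors:
    at most s active blocks, each with at most sigma nonzero entries."""
    blocks = x.reshape(num_blocks, -1).copy()
    for b in blocks:
        # Within each block, zero all but the sigma largest-magnitude entries.
        if sigma < b.size:
            b[np.argsort(np.abs(b))[:-sigma]] = 0.0
    # Keep only the s blocks with the largest l2 energy.
    energy = np.linalg.norm(blocks, axis=1)
    if s < num_blocks:
        blocks[np.argsort(energy)[:-s]] = 0.0
    return blocks.ravel()

# Demo: 12 entries in 3 blocks of 4; keep 2 blocks with 2 entries each.
x = np.array([0.1, 3.0, -0.2, 0.0,
              2.0, -2.5, 0.3, 0.1,
              0.2, 0.1, -0.1, 0.0])
print(hierarchical_threshold(x, num_blocks=3, s=2, sigma=2))
```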

    Compressed Sensing in the Presence of Side Information

    Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, so processing continuous signals requires discretization. After discretization, the possibility of uniquely reconstructing the source signal from its samples is crucial. Classical sampling theory provides bounds on the sampling rate for unique source reconstruction, known as the Nyquist sampling rate. Recently a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals. CS is an active area of research in signal processing. It has revolutionized classical sampling theory by providing a scheme to sample and reconstruct sparse signals uniquely below the Nyquist sampling rate. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices. Interesting instances of CS arise when, apart from sparsity, side information is available about the source signal, for example about its structure or distribution. Such cases can be viewed as extensions of classical CS, in which we are interested in incorporating the side information either to improve the quality of the source reconstruction or to decrease the number of samples required for accurate reconstruction. A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases the uniqueness and stability of the equivalent optimization problem still hold. An efficient reconstruction method is then proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction from the gradient field. Experimental results are provided to further investigate and confirm the effectiveness and usefulness of the proposed scheme.
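
    As a hedged illustration of exploiting feasible-region side information, the sketch below runs projected ISTA: soft-thresholded gradient steps for l1-regularized least squares, followed by projection onto a known box standing in for the side information (for a separable box, clipping the soft-thresholded iterate is the exact proximal step). The bounds, step size and regularization weight are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def projected_ista(A, y, lam, lo, hi, iters=500):
    """min_x 0.5*||A @ x - y||^2 + lam*||x||_1  subject to  lo <= x <= hi."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        x = np.clip(x, lo, hi)             # project onto the known feasible box
    return x

# Demo: a nonnegative sparse signal with the side information 0 <= x <= 1.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x_true = np.zeros(80)
x_true[[3, 40]] = [0.8, 0.5]
x_hat = projected_ista(A, A @ x_true, lam=1e-3, lo=0.0, hi=1.0)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```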

    The troublesome kernel: why deep learning for inverse problems is typically unstable

    There is overwhelming empirical evidence that Deep Learning (DL) leads to unstable methods in applications ranging from image classification and computer vision to voice recognition and automated diagnosis in medicine. Recently, a similar instability phenomenon has been discovered when DL is used to solve certain problems in computational science, namely inverse problems in imaging. In this paper we present a comprehensive mathematical analysis explaining the many facets of the instability phenomenon in DL for inverse problems. Our main results not only explain why this phenomenon occurs, but also shed light on why finding a cure for instabilities is so difficult in practice. Additionally, these theorems show that instabilities are typically not rare events: they can occur even when the measurements are subject to completely random noise, and consequently it can be easy to destabilise certain trained neural networks. We also examine the delicate balance between reconstruction performance and stability, and in particular how DL methods may outperform state-of-the-art sparse regularization methods, but at the cost of instability. Finally, we demonstrate a counterintuitive phenomenon: training a neural network may generically not yield an optimal reconstruction method for an inverse problem.
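
    The paper's analysis concerns trained neural networks; as a hedged linear stand-in, the sketch below measures the worst-case noise amplification of a linear reconstruction map via its largest singular value, which is the kind of quantity such a stability analysis bounds. The Tikhonov-regularized map and problem sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50)) / np.sqrt(20)  # underdetermined forward model

# Linear stand-in for a trained reconstruction map: Tikhonov-regularized inverse.
alpha = 1e-4
R = np.linalg.solve(A @ A.T + alpha * np.eye(20), A).T  # maps y -> x

# The worst-case noise amplification of y -> R @ y is its largest singular value;
# the matching right singular vector is the most destabilising noise direction.
U, s, Vt = np.linalg.svd(R)
e = Vt[0]                                # unit-norm worst-case perturbation of y
print("amplification factor:", s[0])
print("||R(y + e) - R(y)|| :", np.linalg.norm(R @ e))  # equals s[0]; R is linear
```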

    Breaking the Coherence Barrier: A New Theory for Compressed Sensing

    This paper presents a framework for compressed sensing that bridges a gap between existing theory and the current use of compressed sensing in many real-world applications. In doing so, it introduces a new sampling method that yields substantially improved recovery over existing techniques. In many applications of compressed sensing, including medical imaging, the standard principles of incoherence and sparsity are lacking. Whilst compressed sensing is often used successfully in such applications, it is done largely without mathematical explanation. The framework introduced in this paper provides such a justification, by replacing these standard principles with three more general concepts: asymptotic sparsity, asymptotic incoherence and multilevel random subsampling. Moreover, not only does this work provide a theoretical justification, it also explains several key phenomena witnessed in practice. In particular, and unlike the standard theory, it demonstrates the dependence of optimal sampling strategies on both the incoherence structure of the sampling operator and the structure of the signal to be recovered. Another key consequence of the framework is a new structured sampling method that exploits these phenomena to achieve significant improvements over current state-of-the-art techniques.
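
    A hedged sketch of multilevel random subsampling: divide the ordered (e.g., Fourier) coefficients into levels and sample each level at its own rate, densest at the low frequencies where coherence is highest. The level boundaries and rates below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def multilevel_mask(n, level_edges, fractions, rng):
    """Multilevel random subsampling mask over n ordered coefficients
    (e.g., Fourier frequencies, low to high). Level k spans
    level_edges[k]:level_edges[k+1] and is sampled at rate fractions[k]."""
    mask = np.zeros(n, dtype=bool)
    for k, frac in enumerate(fractions):
        lo, hi = level_edges[k], level_edges[k + 1]
        size = int(round(frac * (hi - lo)))
        mask[rng.choice(np.arange(lo, hi), size=size, replace=False)] = True
    return mask

rng = np.random.default_rng(3)
# Fully sample the lowest frequencies, progressively thin the higher levels.
mask = multilevel_mask(256, level_edges=[0, 16, 64, 256],
                       fractions=[1.0, 0.5, 0.1], rng=rng)
print(mask.sum(), "of 256 frequencies sampled")
```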