Over the last decades, information has been gathered and processed at an explosive rate. This raises an important question: how can the information content of a source signal, or an ensemble of source signals, be described effectively and precisely so that it can be stored, processed, or transmitted within the limitations and capabilities of digital devices? For decades, one of the fundamental principles of signal processing has been the Nyquist-Shannon sampling theorem, which states that the minimum number of samples needed to reconstruct a signal without error is dictated by its bandwidth. In many everyday applications, however, sampling at the Nyquist rate produces an excessive amount of data, demanding increased processing power and storage. A mathematical theory that emerged recently provides the background for a novel sensing/sampling paradigm that goes against this common tenet of data acquisition. Compressed sensing (CS), also known as compressive sensing, compressive sampling, or sparse sampling, is a technique for acquiring and reconstructing a signal using the prior knowledge that it is sparse or compressible, which yields a sub-Nyquist sampling criterion. Sparsity expresses the fact that the “information rate” of a continuous-time signal may be much smaller than its bandwidth suggests via Nyquist’s theorem, or that a discrete-time signal depends on a number of degrees of freedom that is much smaller than its (finite) length. Several deterministic and probabilistic approaches have been proposed over the last years, confronting the problem of sparse signal reconstruction from distinct viewpoints. The majority of these methods rely on solving constrained optimization problems that employ various vector norms in the design of appropriate objective functions.
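As a minimal, hypothetical sketch of the sub-Nyquist recovery idea described above (not an algorithm from this thesis), the following example recovers a k-sparse signal of length n from only m < n random linear measurements using Orthogonal Matching Pursuit, one standard greedy reconstruction method. All dimensions and the sensing-matrix choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal: length n, but only k nonzero entries (degrees of freedom << n).
n, k, m = 128, 5, 50          # m < n: sub-Nyquist number of measurements
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix and compressed measurements y = A x.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Greedy sparse recovery via Orthogonal Matching Pursuit."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
# Reconstruction error is near machine precision when the support is identified.
print(float(np.max(np.abs(x_hat - x))))
```

The point of the sketch is the counting: 50 generic linear samples suffice for a length-128 signal because only 5 coefficients carry information, which is exactly the sparsity premise that CS exploits.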
Only recently has the problem of CS reconstruction been studied in a probabilistic (Bayesian) framework, resulting in several advantages over the norm-based techniques. However, both classes of algorithms rely on a Gaussian assumption to characterize the statistics of the sparse signal. This thesis introduces the class of heavy-tailed distributions, and particularly the family of alpha-Stable distributions, as a suitable modeling tool for designing efficient CS reconstruction algorithms that exploit the sparsity of the received signal in an appropriate transform domain. More specifically, the first of the proposed methods exploits the prior knowledge of a sparse coefficient vector by modeling the statistics of its components with a Gaussian Scale Mixture (GSM). The reconstruction of the sparse signal is then reduced to estimating the parameters of the GSM model, which is in turn carried out by developing a Bayesian technique. Furthermore, there are applications, for instance sensor networks, where the acquisition process results in a set of multiple observations of the unknown sparse signal. For this purpose, we extend the previous method to account for the fact that, with high probability, the set of multiple observations shares a common sparsity structure, yielding an efficient CS method amenable to a distributed implementation.
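To illustrate the Gaussian Scale Mixture idea, the sketch below draws samples x = sqrt(z)·g, where g is standard Gaussian and z is a random positive scale. The particular mixing density chosen here (inverse-gamma, which makes x Student-t) is an assumption for illustration only; the thesis itself works with heavy-tailed, alpha-Stable-related models. The empirical excess kurtosis shows why such mixtures promote sparsity: compared with a Gaussian, they place more mass both near zero and in the tails.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Gaussian Scale Mixture: x = sqrt(z) * g with g ~ N(0, 1).
# Mixing choice (illustrative): z ~ InvGamma(nu/2, nu/2), so x is Student-t.
nu = 5.0
z = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu, size=N)
g = rng.standard_normal(N)
x = np.sqrt(z) * g

def excess_kurtosis(v):
    """Empirical excess kurtosis; 0 for a Gaussian, > 0 for heavy tails."""
    v = v - v.mean()
    return float((v**4).mean() / (v**2).mean() ** 2 - 3.0)

gk = excess_kurtosis(rng.standard_normal(N))   # close to 0
xk = excess_kurtosis(x)                        # clearly positive
print(gk, xk)
```

Modeling each sparse coefficient with such a mixture turns reconstruction into inference over the per-component scales, which is the parameter-estimation problem the Bayesian technique in the thesis addresses.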