A typical reconstruction limit of compressed sensing based on Lp-norm minimization
We consider the problem of reconstructing an N-dimensional continuous
vector \bx from P constraints which are generated by its linear
transformation, under the assumption that the number of non-zero elements of
\bx is typically limited to \rho N (0 \le \rho \le 1). Problems of this
type can be solved by minimizing a cost function with respect to the L_p-norm
||\bx||_p=\lim_{\epsilon \to +0}\sum_{i=1}^N |x_i|^{p+\epsilon}, subject to
the constraints under an appropriate condition. For several p, we assess a
typical case limit \alpha_c(\rho), which represents a critical relation
between \alpha = P/N and \rho for successfully reconstructing the original
vector by L_p-norm minimization in typical situations in the limit N, P \to \infty
with \alpha kept finite, utilizing the replica method. For p = 1,
\alpha_c(\rho) is considerably smaller than its worst-case counterpart, which
has been rigorously derived in the existing information-theory literature.

Comment: 12 pages, 2 figures
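The p = 1 case of the minimization above is ordinary basis pursuit, which can be sketched as a linear program by splitting \bx into its positive and negative parts. The problem sizes, the SciPy LP solver, and all variable names below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, P, k = 40, 25, 3  # signal dimension, constraints, non-zeros (illustrative)

# Ground-truth sparse vector and a Gaussian measurement matrix
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((P, N)) / np.sqrt(P)
y = A @ x_true

# min ||x||_1  s.t.  A x = y, as an LP with x = u - v, u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

# Should be ~0 when L1 recovery succeeds for this (alpha, rho) pair
print(np.max(np.abs(x_hat - x_true)))
```

Here alpha = P/N = 0.625 and rho N = 3 non-zeros, a regime comfortably inside the typical L1 recovery region, so the LP returns the original vector up to solver tolerance.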
Sparsity Order Estimation from a Single Compressed Observation Vector
We investigate the problem of estimating the unknown degree of sparsity from
compressive measurements without the need to carry out a sparse recovery step.
While the sparsity order can be directly inferred from the effective rank of
the observation matrix in the multiple snapshot case, this appears to be
impossible in the more challenging single snapshot case. We show that specially
designed measurement matrices allow the measurement vector to be rearranged into
a matrix whose effective rank coincides with the effective sparsity order. In
fact, we prove that matrices composed of a Khatri-Rao product of smaller
matrices generate measurements from which the sparsity order can be inferred.
Moreover, if some samples are used more than once, one of the
matrices needs to be Vandermonde. These structural constraints reduce the
degrees of freedom in choosing the measurement matrix, which may incur a
degradation in the achievable coherence. We thus also address suitable choices
of the measurement matrices. In particular, we analyze Khatri-Rao and
Vandermonde matrices in terms of their coherence and provide a new design for
Vandermonde matrices that achieves a low coherence.
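The rank-based rearrangement can be illustrated numerically: if the measurement matrix is a column-wise Khatri-Rao product, the single measurement vector reshapes into a matrix of the form B diag(x) A^T, whose rank equals the number of non-zeros in x. The sketch below uses generic Gaussian sub-matrices (the paper's Vandermonde condition applies when samples are reused); all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, N, k = 6, 6, 30, 3  # sub-matrix rows, signal dimension, true sparsity

A = rng.standard_normal((m1, N))
B = rng.standard_normal((m2, N))
# Column-wise Khatri-Rao product: n-th column is kron(A[:, n], B[:, n])
KR = np.einsum('in,jn->ijn', A, B).reshape(m1 * m2, N)

x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = KR @ x  # a single compressed snapshot of length m1 * m2

# Rearranged, y is the matrix A @ diag(x) @ B.T, whose rank equals
# the number of non-zero entries of x (for generic A, B with k <= min(m1, m2))
Y = y.reshape(m1, m2)
k_hat = np.linalg.matrix_rank(Y)
print(k_hat)  # effective rank = sparsity order, no sparse recovery step needed
```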
Statistical Compressed Sensing of Gaussian Mixture Models
A novel framework of compressed sensing, namely statistical compressed
sensing (SCS), that aims at efficiently sampling a collection of signals that
follow a statistical distribution, and achieving accurate reconstruction on
average, is introduced. SCS based on Gaussian models is investigated in depth.
For signals that follow a single Gaussian model, with Gaussian or Bernoulli
sensing matrices of O(k) measurements, considerably smaller than the O(k
log(N/k)) required by conventional CS based on sparse models, where N is the
signal dimension, and with an optimal decoder implemented via linear filtering,
significantly faster than the pursuit decoders applied in conventional CS, the
error of SCS is shown to be tightly upper bounded by a constant times the best k-term
approximation error, with overwhelming probability. The failure probability is
also significantly smaller than that of conventional sparsity-oriented CS.
Stronger yet simpler results further show that for any sensing matrix, the
error of Gaussian SCS is upper bounded by a constant times the best k-term
approximation with probability one, and the bound constant can be efficiently
calculated. For Gaussian mixture models (GMMs), that assume multiple Gaussian
distributions and that each signal follows one of them with an unknown index, a
piecewise linear estimator is introduced to decode SCS. The accuracy of model
selection, at the heart of the piecewise linear decoder, is analyzed in terms
of the properties of the Gaussian distributions and the number of sensing
measurements. A maximum a posteriori expectation-maximization algorithm that
iteratively estimates the Gaussian model parameters and each signal's model
selection, and decodes the signals, is presented for GMM-based SCS. In real
image sensing applications, GMM-based SCS is shown to lead to improved results
compared to conventional CS, at a considerably lower computational cost.
On the SNR Variability in Noisy Compressed Sensing
Compressed sensing (CS) is a sampling paradigm that allows signals that are
sparse or compressible in some domain to be simultaneously measured and compressed.
The choice of a sensing matrix that carries out the measurement has a defining
impact on the system performance and it is often advocated to draw its elements
randomly. It has been noted that in the presence of input (signal) noise, the
application of the sensing matrix causes SNR degradation due to the noise
folding effect. In fact, it might also result in variations of the output
SNR in compressive measurements over the support of the input signal,
potentially resulting in unexpected non-uniform system performance. In this
work, we study the impact of a distribution from which the elements of a
sensing matrix are drawn on the spread of the output SNR. We derive analytic
expressions for several common types of sensing matrices and show that the SNR
spread grows with the decrease of the number of measurements. This makes its
negative effect especially pronounced for high compression rates that are often
of interest in CS.

Comment: 4 pages + reference
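The growth of the SNR spread as the number of measurements shrinks can be checked with a small Monte Carlo experiment. The sketch below measures a 1-sparse unit signal in input noise with random Gaussian sensing matrices and takes the output SNR as signal power over folded noise power; this SNR definition, the parameters, and the names are illustrative assumptions, not the paper's analytic expressions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, sigma2, trials = 256, 0.01, 2000  # signal dim, input-noise variance

def snr_spread(m):
    """Std of the output SNR over random Gaussian sensing matrices, for a
    1-sparse unit signal x = e_0 measured in input noise (noise folding)."""
    snrs = np.empty(trials)
    for t in range(trials):
        A = rng.standard_normal((m, N)) / np.sqrt(m)
        # y = A(x + n): signal power ||a_0||^2,
        # folded noise power sigma2 * ||A||_F^2
        snrs[t] = (A[:, 0] @ A[:, 0]) / (sigma2 * np.sum(A ** 2))
    return snrs.std()

spread_small_m, spread_large_m = snr_spread(8), snr_spread(64)
print(spread_small_m, spread_large_m)  # spread grows as m shrinks
```

Intuitively, ||a_0||^2 is a chi-squared variable whose relative fluctuation scales like sqrt(2/m), so fewer measurements (higher compression) mean a wider SNR spread, matching the trend described in the abstract.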