Info-Greedy sequential adaptive compressed sensing
We present an information-theoretic framework for sequential adaptive
compressed sensing, Info-Greedy Sensing, where measurements are chosen to
maximize the extracted information conditioned on the previous measurements. We
show that the widely used bisection approach is Info-Greedy for a family of
k-sparse signals by connecting compressed sensing and the blackbox complexity of
sequential query algorithms, and present Info-Greedy algorithms for Gaussian
sequential query algorithms, and present Info-Greedy algorithms for Gaussian
and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse
Info-Greedy measurements. Numerical examples demonstrate the good performance
of the proposed algorithms using simulated and real data: Info-Greedy Sensing
shows significant improvement over random projection for signals with sparse
and low-rank covariance matrices, and adaptivity brings robustness when there
is a mismatch between the assumed and the true distributions.
Comment: Preliminary results presented at the Allerton Conference 2014. To appear in the IEEE Journal of Selected Topics in Signal Processing.
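A minimal sketch of the Gaussian case mentioned above: for a Gaussian prior, the mutual-information-maximizing measurement direction is the leading eigenvector of the current posterior covariance, followed by a standard conditional-Gaussian update. This is only one illustrative reading of the abstract (unit-norm scalar measurements, no power allocation); the names info_greedy_gaussian, measure, and noise_var are ours, not the paper's.

```python
import numpy as np

def info_greedy_gaussian(mu, Sigma, measure, noise_var=1e-2, num_measurements=5):
    """Hypothetical sketch of Info-Greedy Sensing for a Gaussian signal.

    At each step the unit-norm measurement vector is the leading eigenvector
    of the current posterior covariance (the information-maximizing direction
    for a Gaussian prior). `measure(a)` is assumed to return y = a @ x + noise.
    """
    mu, Sigma = mu.copy(), Sigma.copy()
    for _ in range(num_measurements):
        # Leading eigenvector of the posterior covariance (eigh sorts ascending).
        _, eigvecs = np.linalg.eigh(Sigma)
        a = eigvecs[:, -1]
        y = measure(a)
        # Standard Gaussian posterior update (rank-one correction).
        s = Sigma @ a
        denom = a @ s + noise_var
        mu = mu + s * (y - a @ mu) / denom
        Sigma = Sigma - np.outer(s, s) / denom
    return mu, Sigma

# Toy usage: a low-rank Gaussian signal sensed with the sketch above.
rng = np.random.default_rng(0)
n, noise_var = 20, 1e-2
U = rng.standard_normal((n, 3))
Sigma0 = U @ U.T                      # low-rank covariance
x = rng.multivariate_normal(np.zeros(n), Sigma0)
measure = lambda a: a @ x + np.sqrt(noise_var) * rng.standard_normal()
mu_hat, _ = info_greedy_gaussian(np.zeros(n), Sigma0, measure, noise_var)
print("recovery error:", np.linalg.norm(mu_hat - x))
```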
Sequential Sensing with Model Mismatch
We characterize the performance of sequential information guided sensing,
Info-Greedy Sensing, when there is a mismatch between the true signal model and
the assumed model, which may be a sample estimate. In particular, we consider a
setup where the signal is low-rank Gaussian and the measurements are taken in
the directions of eigenvectors of the covariance matrix in a decreasing order
of eigenvalues. We establish a set of performance bounds when a mismatched
covariance matrix is used, in terms of the gap of signal posterior entropy, as
well as the additional amount of power required to achieve the same signal
recovery precision. Based on this, we further study how to choose an
initialization for Info-Greedy Sensing using the sample covariance matrix, or
using an efficient covariance sketching scheme.
Comment: Submitted to IEEE for publication.
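The entropy gap described above can be pictured numerically, though only as a toy proxy for the paper's analytical bounds: sense along the top eigenvectors of an assumed covariance while the signal actually follows the true one, then compare the resulting Gaussian posterior entropies. Everything below (posterior_entropy_after_sensing, the perturbation model standing in for a sample estimate) is a hypothetical sketch, not the paper's construction.

```python
import numpy as np

def posterior_entropy_after_sensing(Sigma_true, Sigma_assumed, m, noise_var=1e-2):
    """Posterior differential entropy after m unit-norm measurements.

    Measurements are taken along the top-m eigenvectors of the *assumed*
    covariance (decreasing eigenvalues), while the posterior is propagated
    under the *true* covariance. Returns 0.5 * logdet(2*pi*e * Sigma_post).
    """
    Sigma_post = Sigma_true.copy()
    _, V = np.linalg.eigh(Sigma_assumed)       # eigenvalues ascending
    for k in range(1, m + 1):
        a = V[:, -k]                            # k-th largest eigenvector
        s = Sigma_post @ a
        Sigma_post -= np.outer(s, s) / (a @ s + noise_var)
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * Sigma_post)
    return 0.5 * logdet

# Entropy gap between sensing with the true covariance and a perturbed estimate.
rng = np.random.default_rng(1)
n, m = 10, 4
A = rng.standard_normal((n, n))
Sigma_true = A @ A.T / n + 0.1 * np.eye(n)
Sigma_sample = Sigma_true + 0.2 * rng.standard_normal((n, n))
Sigma_sample = (Sigma_sample + Sigma_sample.T) / 2   # symmetrize for eigh
h_matched = posterior_entropy_after_sensing(Sigma_true, Sigma_true, m)
h_mismatched = posterior_entropy_after_sensing(Sigma_true, Sigma_sample, m)
print("posterior entropy gap:", h_mismatched - h_matched)
```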
Discrimination on the Grassmann Manifold: Fundamental Limits of Subspace Classifiers
We present fundamental limits on the reliable classification of linear and
affine subspaces from noisy, linear features. Drawing an analogy between
discrimination among subspaces and communication over vector wireless channels,
we propose two Shannon-inspired measures to characterize asymptotic classifier
performance. First, we define the classification capacity, which characterizes
necessary and sufficient conditions for the misclassification probability to
vanish as the signal dimension, the number of features, and the number of
subspaces to be discerned all approach infinity. Second, we define the
diversity-discrimination tradeoff which, by analogy with the
diversity-multiplexing tradeoff of fading vector channels, characterizes
relationships between the number of discernible subspaces and the
misclassification probability as the noise power approaches zero. We derive
upper and lower bounds on these measures which are tight in many regimes.
Numerical results, including a face recognition application, validate the
results in practice.
Comment: 19 pages, 4 figures. Revised submission to IEEE Transactions on Information Theory.
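Well short of the capacity and diversity analysis above, the underlying task can be pictured with a toy nearest-subspace decision rule on noisy compressive features. The sketch below uses a noise model and names of our choosing and should not be read as the paper's classifier.

```python
import numpy as np

def nearest_subspace_classifier(y, Phi, subspaces):
    """Toy nearest-subspace classifier for noisy linear features (our sketch).

    y ~= Phi @ x + noise, where x lies in one of the candidate subspaces,
    each given as an n x d orthonormal basis. The decision is the subspace
    whose image under Phi best explains y (smallest projection residual).
    """
    residuals = []
    for U in subspaces:
        B = Phi @ U                           # feature-domain basis of the subspace
        Q, _ = np.linalg.qr(B)                # orthonormalize the image
        residuals.append(np.linalg.norm(y - Q @ (Q.T @ y)))
    return int(np.argmin(residuals))

# Toy experiment: discriminate a few random subspaces from noisy features.
rng = np.random.default_rng(2)
n, d, num_features, num_classes, noise_std = 50, 3, 12, 4, 0.05
subspaces = [np.linalg.qr(rng.standard_normal((n, d)))[0] for _ in range(num_classes)]
Phi = rng.standard_normal((num_features, n)) / np.sqrt(num_features)
true_class = 2
x = subspaces[true_class] @ rng.standard_normal(d)
y = Phi @ x + noise_std * rng.standard_normal(num_features)
print("predicted class:", nearest_subspace_classifier(y, Phi, subspaces))
```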
- …