Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons
In this letter, we propose a sparsity promoting feedback acquisition and
reconstruction scheme for sensing, encoding and subsequent reconstruction of
spectrally sparse signals. In the proposed scheme, the spectral components are
estimated utilizing a sparsity-promoting, sliding-window algorithm in a
feedback loop. Utilizing the estimated spectral components, a level signal is
predicted and sign measurements of the prediction error are acquired. The
sparsity promoting algorithm can then estimate the spectral components
iteratively from the sign measurements. Unlike many batch-based Compressive
Sensing (CS) algorithms, our proposed algorithm gradually estimates and follows
slow changes in the sparse components utilizing a sliding-window technique. We
also consider the scenario in which possible flipping errors in the sign bits
propagate along iterations (due to the feedback loop) during reconstruction. We
propose an iterative error correction algorithm to cope with this error
propagation phenomenon considering a binary-sparse occurrence model on the
error sequence. Simulation results show the effective performance of the
proposed scheme in comparison with existing methods in the literature.
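The feedback acquisition idea above can be illustrated with a deliberately simplified sketch: the predictor below is a plain previous-level tracker (i.e., classic delta modulation) rather than the paper's sparsity-promoting, sliding-window spectral estimator, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def sign_feedback_acquire(x, step=0.01):
    """One-bit feedback acquisition: at each sample, predict a level,
    keep only the sign of the prediction error, and update the prediction.
    Simplified stand-in: the predictor is a previous-level tracker (delta
    modulation), not the paper's sliding-window spectral estimator."""
    level = 0.0
    bits = np.empty(len(x), dtype=np.int8)
    for n, sample in enumerate(x):
        bits[n] = 1 if sample >= level else -1   # sign of prediction error
        level += step * bits[n]                  # feedback update
    return bits

def sign_feedback_reconstruct(bits, step=0.01):
    """The decoder mirrors the encoder's feedback loop, so the level
    signal can be re-run from the sign bits alone."""
    return np.cumsum(step * bits.astype(float))

# Usage: track a slowly varying (spectrally sparse) signal.
t = np.arange(2000)
x = 0.5 * np.sin(2 * np.pi * 2 * t / len(t))
bits = sign_feedback_acquire(x)
x_hat = sign_feedback_reconstruct(bits)
```

Because the decoder replays the same feedback loop, only the one-bit sign stream needs to be transmitted; the tracking error stays within a few step sizes as long as the step exceeds the signal's per-sample slope.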
One-bit compressive sensing with norm estimation
Consider the recovery of an unknown signal x from quantized linear
measurements. In the one-bit compressive sensing setting, one typically
assumes that x is sparse, and that the measurements are of the form
y_i = sign(⟨a_i, x⟩). Since such measurements give no information on the
norm of x, recovery methods from such measurements typically assume that
‖x‖₂ = 1. We show that if one allows more generally for quantized affine
measurements of the form y_i = sign(⟨a_i, x⟩ + b_i), and if the vectors
a_i are random, an appropriate choice of the affine shifts b_i allows
norm recovery to be easily incorporated into existing methods for one-bit
compressive sensing. Additionally, we show that for arbitrary fixed x in
the annulus r ≤ ‖x‖₂ ≤ R, one may estimate the norm ‖x‖₂ up to additive
error from such binary measurements through a single evaluation of the
inverse Gaussian error function. Finally, all of our recovery guarantees
can be made universal over sparse vectors, in the sense that with high
probability, one set of measurement vectors and thresholds can
successfully estimate all sparse vectors within a Euclidean ball of known
radius.
Comment: 20 pages, 2 figures
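The norm-estimation step admits a short numerical sketch. With Gaussian vectors a_i and a fixed shift τ, the inner product ⟨a_i, x⟩ is N(0, ‖x‖²), so P(sign(⟨a_i, x⟩ + τ) = +1) = Φ(τ/‖x‖); inverting the Gaussian CDF at the empirical fraction of +1 bits recovers the norm. The function name and parameters below are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def estimate_norm(x, m=100_000, tau=1.0, rng=None):
    """Estimate ||x||_2 from one-bit affine measurements
    y_i = sign(<a_i, x> + tau) with i.i.d. Gaussian a_i.
    Since <a_i, x> ~ N(0, ||x||^2), P(y_i = +1) = Phi(tau / ||x||),
    so one inverse-Gaussian-CDF evaluation recovers the norm.
    (norm.ppf is the probit, i.e. sqrt(2) * erfinv(2p - 1).)"""
    rng = np.random.default_rng(rng)
    A = rng.standard_normal((m, len(x)))
    y = np.sign(A @ x + tau)
    p_hat = np.mean(y > 0)          # empirical P(y = +1)
    return tau / norm.ppf(p_hat)

x = np.array([1.2, 0.0, -1.6, 0.0])   # sparse vector, ||x||_2 = 2.0
est = estimate_norm(x, rng=0)          # close to 2.0
```

The additive error shrinks as the number of measurements m grows, since the only statistical quantity involved is the binomial estimate p_hat.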
Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing
For the problem of binary linear classification and feature selection, we
propose algorithmic approaches to classifier design based on the generalized
approximate message passing (GAMP) algorithm, recently proposed in the context
of compressive sensing. We are particularly motivated by problems where the
number of features greatly exceeds the number of training examples, but where
only a few features suffice for accurate classification. We show that
sum-product GAMP can be used to (approximately) minimize the classification
error rate and max-sum GAMP can be used to minimize a wide variety of
regularized loss functions. Furthermore, we describe an
expectation-maximization (EM)-based scheme to learn the associated model
parameters online, as an alternative to cross-validation, and we show that
GAMP's state-evolution framework can be used to accurately predict the
misclassification rate. Finally, we present a detailed numerical study
that confirms the accuracy, speed, and flexibility afforded by our
GAMP-based approaches to binary linear classification and feature
selection.
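The regularized-loss formulation that max-sum GAMP targets can be sketched with a much simpler solver. Below, proximal gradient descent (ISTA) minimizes ℓ1-regularized logistic loss; this is a plain stand-in for GAMP, which instead passes approximate messages and additionally yields posterior uncertainty. All names, data shapes, and parameters are illustrative assumptions.

```python
import numpy as np

def l1_logistic_ista(X, y, lam=0.05, eta=None, iters=500):
    """Minimize (1/m) * sum_i log(1 + exp(-y_i <x_i, w>)) + lam * ||w||_1
    by proximal gradient descent (ISTA). A simple stand-in for the
    regularized-loss minimization performed by max-sum GAMP."""
    m, n = X.shape
    if eta is None:
        # Step size 1/L, with L = ||X||_2^2 / (4m) a Lipschitz
        # bound for the gradient of the logistic loss.
        eta = 4 * m / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(n)
    for _ in range(iters):
        z = y * (X @ w)
        grad = -(X.T @ (y / (1 + np.exp(z)))) / m          # logistic-loss gradient
        w = w - eta * grad                                 # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0)  # soft-threshold
    return w

# Regime from the abstract: features greatly outnumber training
# examples, but only a few features matter.
rng = np.random.default_rng(0)
m, n = 100, 500
w_true = np.zeros(n)
w_true[:3] = [2.0, -2.0, 1.5]
X = rng.standard_normal((m, n))
y = np.sign(X @ w_true)                # binary labels in {-1, +1}
w_hat = l1_logistic_ista(X, y)
```

The ℓ1 penalty plays the role of the sparsifying prior: most coefficients of w_hat are driven exactly to zero, performing feature selection alongside classification.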