
    Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons

    In this letter, we propose a sparsity-promoting feedback acquisition and reconstruction scheme for sensing, encoding, and subsequent reconstruction of spectrally sparse signals. In the proposed scheme, the spectral components are estimated using a sparsity-promoting, sliding-window algorithm in a feedback loop. From the estimated spectral components, a level signal is predicted, and sign measurements of the prediction error are acquired. The sparsity-promoting algorithm can then estimate the spectral components iteratively from the sign measurements. Unlike many batch-based Compressive Sensing (CS) algorithms, our proposed algorithm gradually estimates and tracks slow changes in the sparse components using a sliding-window technique. We also consider the scenario in which possible flipping errors in the sign bits propagate along iterations (due to the feedback loop) during reconstruction. We propose an iterative error correction algorithm to cope with this error propagation phenomenon, assuming a binary-sparse occurrence model on the error sequence. Simulation results show the effective performance of the proposed scheme in comparison with the literature.
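The acquisition loop described above can be illustrated with a minimal sketch. This is not the letter's exact algorithm (which uses a sparsity-promoting sliding-window estimator); as a stand-in, it uses a simple sign-driven (sign-LMS-style) update of assumed Fourier coefficients over a known candidate frequency grid `freqs` (all names here are illustrative):

```python
import math

def feedback_acquire(x, freqs, mu=0.05):
    """Illustrative one-bit feedback acquisition (not the paper's exact method).

    At each sample, predict the level from the current spectral estimate,
    keep only the sign of the prediction error (the one-bit measurement),
    and nudge the estimated Fourier coefficients with a sign-driven step.
    """
    a = [0.0] * len(freqs)  # estimated cosine amplitudes (assumed signal model)
    b = [0.0] * len(freqs)  # estimated sine amplitudes
    bits = []
    for n, xn in enumerate(x):
        # level prediction from the current spectral estimate
        pred = sum(ai * math.cos(2 * math.pi * f * n) +
                   bi * math.sin(2 * math.pi * f * n)
                   for ai, bi, f in zip(a, b, freqs))
        s = 1.0 if xn - pred >= 0 else -1.0  # one-bit sign measurement
        bits.append(s)
        # sign-LMS-style feedback update toward reducing the prediction error
        for k, f in enumerate(freqs):
            a[k] += mu * s * math.cos(2 * math.pi * f * n)
            b[k] += mu * s * math.sin(2 * math.pi * f * n)
    return bits, a, b
```

With a single tone at a frequency on the grid, the estimated amplitude settles near the true value with a ripple on the order of the step size `mu`, mirroring how the feedback loop lets sign bits alone track slowly varying spectral content.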

    One-bit compressive sensing with norm estimation

    Consider the recovery of an unknown signal $x$ from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that $x$ is sparse and that the measurements are of the form $\operatorname{sign}(\langle a_i, x \rangle) \in \{\pm 1\}$. Since such measurements give no information on the norm of $x$, recovery methods from such measurements typically assume that $\|x\|_2 = 1$. We show that if one allows more generally for quantized affine measurements of the form $\operatorname{sign}(\langle a_i, x \rangle + b_i)$, and if the vectors $a_i$ are random, an appropriate choice of the affine shifts $b_i$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. Additionally, we show that for arbitrary fixed $x$ in the annulus $r \leq \|x\|_2 \leq R$, one may estimate the norm $\|x\|_2$ up to additive error $\delta$ from $m \gtrsim R^4 r^{-2} \delta^{-2}$ such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors, in the sense that, with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors $x$ within a Euclidean ball of known radius.
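The norm-estimation step can be sketched directly. Assuming standard Gaussian vectors $a_i$ and a constant shift $b_i = \tau$ (an illustrative choice; the paper treats general affine shifts), $\langle a_i, x \rangle$ is Gaussian with mean $0$ and standard deviation $\|x\|_2$, so the fraction of negative sign bits estimates $\Phi(-\tau/\|x\|_2)$; a single inversion of the Gaussian CDF then recovers the norm:

```python
import random
import statistics

def estimate_norm(x, m=20000, tau=2.0, seed=0):
    """Estimate ||x||_2 from one-bit affine measurements sign(<a_i, x> + tau).

    With a_i ~ N(0, I), <a_i, x> ~ N(0, ||x||^2), so
    P(sign(<a_i, x> + tau) = -1) = Phi(-tau / ||x||_2).
    Inverting the Gaussian CDF on the empirical fraction of -1 bits
    recovers the norm. Illustrative sketch, not the paper's exact estimator.
    """
    rng = random.Random(seed)
    neg = 0
    for _ in range(m):
        dot = sum(rng.gauss(0.0, 1.0) * xj for xj in x)
        if dot + tau < 0:
            neg += 1
    p = neg / m
    # clamp away from {0, 1} so the inverse CDF stays finite
    p = min(max(p, 1.0 / m), 1.0 - 1.0 / m)
    return -tau / statistics.NormalDist().inv_cdf(p)
```

Note the trade-off visible in the code: the number of bits `m` controls the accuracy of the empirical probability, which is consistent with the $m \gtrsim R^4 r^{-2} \delta^{-2}$ sample bound quoted in the abstract.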

    Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing

    For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing. We are particularly motivated by problems where the number of features greatly exceeds the number of training examples, but where only a few features suffice for accurate classification. We show that sum-product GAMP can be used to (approximately) minimize the classification error rate, and that max-sum GAMP can be used to minimize a wide variety of regularized loss functions. Furthermore, we describe an expectation-maximization (EM)-based scheme to learn the associated model parameters online, as an alternative to cross-validation, and we show that GAMP's state-evolution framework can be used to accurately predict the misclassification rate. Finally, we present a detailed numerical study to confirm the accuracy, speed, and flexibility afforded by our GAMP-based approaches to binary linear classification and feature selection.
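GAMP itself is an iterative message-passing scheme and too long to reproduce here. As a stand-in, the objective targeted by max-sum GAMP, minimizing a sparsity-regularized classification loss, can be sketched with proximal gradient descent (ISTA) on $\ell_1$-regularized logistic regression. This is a different solver for the same kind of objective, not the paper's algorithm, and all names below are illustrative:

```python
import math
import random

def soft_threshold(v, t):
    """Proximal operator of the l1 penalty: shrink v toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def sparse_logistic(X, y, lam=0.05, step=0.1, iters=500):
    """l1-regularized logistic regression via proximal gradient (ISTA).

    Minimizes (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)) + lam * ||w||_1,
    a regularized loss of the kind max-sum GAMP is designed to handle.
    Labels y_i are in {-1, +1}.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            g = -yi / (1.0 + math.exp(yi * z))  # d/dz log(1 + exp(-y z))
            for j in range(d):
                grad[j] += g * xi[j] / n
        # gradient step on the smooth loss, then the l1 proximal step
        w = [soft_threshold(wj - step * gj, step * lam)
             for wj, gj in zip(w, grad)]
    return w
```

On data where only a couple of features are relevant, the soft-thresholding drives the remaining weights toward zero, illustrating the joint classification-and-feature-selection behavior the abstract describes (here in the many-features, few-relevant regime the paper is motivated by).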