
    One-bit compressive sensing with norm estimation

    Consider the recovery of an unknown signal $x$ from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that $x$ is sparse, and that the measurements are of the form $\operatorname{sign}(\langle a_i, x \rangle) \in \{\pm 1\}$. Since such measurements give no information on the norm of $x$, recovery methods from such measurements typically assume that $\|x\|_2 = 1$. We show that if one allows more generally for quantized affine measurements of the form $\operatorname{sign}(\langle a_i, x \rangle + b_i)$, and if the vectors $a_i$ are random, an appropriate choice of the affine shifts $b_i$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. Additionally, we show that for arbitrary fixed $x$ in the annulus $r \leq \|x\|_2 \leq R$, one may estimate the norm $\|x\|_2$ up to additive error $\delta$ from $m \gtrsim R^4 r^{-2} \delta^{-2}$ such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors, in the sense that with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors $x$ within a Euclidean ball of known radius.
    Comment: 20 pages, 2 figures
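    To make the norm-estimation step concrete, here is a minimal sketch assuming Gaussian vectors $a_i$ and a single fixed affine shift $b_i = \tau$ (the paper's actual choice of shifts may differ): since $\langle a_i, x \rangle \sim \mathcal{N}(0, \|x\|_2^2)$, the fraction of $+1$ bits concentrates around $\Phi(\tau/\|x\|_2)$, so one inverse-CDF evaluation recovers the norm.

```python
import numpy as np
from scipy.stats import norm

def estimate_norm(x, m=10_000, tau=1.0, seed=0):
    """Estimate ||x||_2 from m one-bit affine measurements sign(<a_i, x> + tau).

    Hypothetical sketch: with a_i ~ N(0, I), <a_i, x> ~ N(0, ||x||_2^2), so the
    fraction of +1 bits concentrates around Phi(tau / ||x||_2); a single fixed
    shift tau is assumed here, which may differ from the paper's choice of b_i.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, len(x)))
    y = np.sign(A @ x + tau)        # binary measurements in {-1, +1}
    p_hat = np.mean(y > 0)          # empirical fraction of +1 bits
    return tau / norm.ppf(p_hat)    # single inverse Gaussian CDF evaluation

x = np.zeros(100); x[[2, 40]] = [3.0, 4.0]   # sparse, ||x||_2 = 5
print(estimate_norm(x))                       # ~5.0 up to sampling error
```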

    Quantization and Compressive Sensing

    Quantization is an essential step in digitizing signals and, therefore, an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta ($\Sigma\Delta$) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, properly accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
    Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing and Its Applications", 201
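    As a concrete illustration of the $\Sigma\Delta$ idea the chapter surveys, here is a minimal sketch of the standard first-order 1-bit $\Sigma\Delta$ loop (the generic textbook greedy rule, not code from the chapter):

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta quantization of a measurement sequence y.

    Greedy rule: q[i] = sign(u[i-1] + y[i]),  u[i] = u[i-1] + y[i] - q[i].
    If |y[i]| <= 1 for all i, the state u stays bounded by 1, so quantization
    error is pushed into a sequence with small running sums (noise shaping),
    which reconstruction algorithms can exploit.
    """
    u, q = 0.0, np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0   # 1-bit alphabet {-1, +1}
        u = u + yi - q[i]                     # state update (error accumulation)
    return q

y = 0.5 * np.sin(np.linspace(0, 4 * np.pi, 64))   # toy measurements, |y| <= 1
q = sigma_delta_1bit(y)
print(np.abs(np.cumsum(y - q)).max())  # running sums equal the state u: bounded by 1
```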

    Small Width, Low Distortions: Quantized Random Embeddings of Low-complexity Sets

    Under which conditions, and with which distortions, can we preserve the pairwise distances of low-complexity vectors, e.g., for structured sets such as the set of sparse vectors or that of low-rank matrices, when these are mapped into a finite set of vectors? This work addresses this general question through the specific use of a quantized and dithered random linear mapping which combines, in the following order, a sub-Gaussian random projection into $\mathbb{R}^M$ of vectors in $\mathbb{R}^N$, a random translation, or "dither", of the projected vectors, and a uniform scalar quantizer of resolution $\delta > 0$ applied componentwise. Thanks to this quantized mapping we are first able to show that, with high probability, an embedding of a bounded set $\mathcal{K} \subset \mathbb{R}^N$ in $\delta \mathbb{Z}^M$ can be achieved when distances in the quantized and in the original domains are measured with the $\ell_1$- and $\ell_2$-norm, respectively, provided the number of quantized observations $M$ is large compared to the square of the "Gaussian mean width" of $\mathcal{K}$. In this case, we show that the embedding is actually "quasi-isometric" and suffers only multiplicative and additive distortions whose magnitudes decrease as $M^{-1/5}$ for general sets, and as $M^{-1/2}$ for structured sets, as $M$ increases. Second, when one is only interested in characterizing the maximal distance separating two elements of $\mathcal{K}$ mapped to the same quantized vector, i.e., the "consistency width" of the mapping, we show that for a similar number of measurements and with high probability this width decays as $M^{-1/4}$ for general sets and as $1/M$ for structured ones as $M$ increases. Finally, as an important aspect of our work, we also establish how the non-Gaussianity of the mapping impacts the class of vectors that can be embedded or whose consistency width provably decays as $M$ increases.
    Comment: Keywords: quantization, restricted isometry property, compressed sensing, dimensionality reduction. 31 pages, 1 figure
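    For intuition, a toy sketch of the mapping $A(x) = Q_\delta(\Phi x + \xi)$ described above, with a Gaussian (hence sub-Gaussian) $\Phi$, uniform dither $\xi$, and a floor-grid quantizer; the dimensions and resolution are arbitrary, and the $\sqrt{\pi/2}$ rescaling of the $\ell_1$ distance (from $\mathbb{E}|\mathcal{N}(0,\sigma^2)| = \sigma\sqrt{2/\pi}$) is an illustrative normalization, not the paper's constant:

```python
import numpy as np

def quantized_embedding(X, M, delta=0.5, seed=0):
    """Dithered, uniformly quantized random mapping A(x) = Q_delta(Phi x + xi).

    Phi is an M x N Gaussian projection, xi a uniform dither in [0, delta)^M,
    and Q_delta the componentwise uniform scalar quantizer of resolution delta.
    """
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, X.shape[1]))
    xi = rng.uniform(0.0, delta, size=M)
    return delta * np.floor((X @ Phi.T + xi) / delta)   # values in delta * Z^M

# l2 distance of two sparse vectors vs. the rescaled l1 distance of their
# quantized images, as in the embedding statement above.
x = np.zeros(200); x[[3, 50]] = [1.0, -2.0]
y = np.zeros(200); y[[3, 90]] = [0.5, 1.5]
M = 2000
Ax, Ay = quantized_embedding(np.vstack([x, y]), M)
print(np.linalg.norm(x - y))                           # l2 distance in R^N
print(np.abs(Ax - Ay).sum() * np.sqrt(np.pi / 2) / M)  # l1 proxy (approx. equal)
```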

    Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons

    In this letter, we propose a sparsity-promoting feedback acquisition and reconstruction scheme for sensing, encoding, and subsequent reconstruction of spectrally sparse signals. In the proposed scheme, the spectral components are estimated utilizing a sparsity-promoting, sliding-window algorithm in a feedback loop. Utilizing the estimated spectral components, a level signal is predicted and sign measurements of the prediction error are acquired. The sparsity-promoting algorithm can then estimate the spectral components iteratively from the sign measurements. Unlike many batch-based Compressive Sensing (CS) algorithms, our proposed algorithm gradually estimates and follows slow changes in the sparse components utilizing a sliding-window technique. We also consider the scenario in which possible sign-flip errors propagate across iterations (due to the feedback loop) during reconstruction. We propose an iterative error correction algorithm to cope with this error propagation phenomenon, considering a binary-sparse occurrence model on the error sequence. Simulation results show effective performance of the proposed scheme in comparison with the literature.
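    The letter's algorithm is not reproduced here, but the basic sign-feedback idea can be sketched as follows: predict each sample from a running sparse spectral estimate, keep only the sign of the prediction error, and use that bit to update the estimate. This is a hypothetical sign-LMS-style loop with soft-thresholding, not the authors' predictive-level-comparison method; `mu` and `lam` are made-up, untuned parameters.

```python
import numpy as np

def soft_threshold(c, lam):
    """Complex soft-thresholding, promoting sparsity of the spectral estimate."""
    mag = np.abs(c)
    return np.where(mag > lam, (1 - lam / np.maximum(mag, lam)) * c, 0)

def sign_feedback_loop(x, mu=0.5, lam=0.01):
    """Acquire one sign bit per sample of the prediction error and update a
    sparse DFT-domain estimate with a sign-driven step plus soft-thresholding."""
    n = len(x)
    F = np.fft.ifft(np.eye(n), axis=0)  # row t synthesizes sample t: x_hat[t] = Re(F[t] @ c)
    c = np.zeros(n, dtype=complex)      # running estimate of the sparse spectrum
    bits = np.empty(n)
    for t in range(n):
        pred = np.real(F[t] @ c)        # predicted level from current spectral estimate
        bits[t] = np.sign(x[t] - pred)  # the only stored measurement: one sign bit
        c = soft_threshold(c + mu * bits[t] * np.conj(F[t]) * n, lam)
    return bits, np.real(F @ c)         # sign stream and reconstructed signal

t = np.arange(256)
x = np.cos(2 * np.pi * 10 * t / 256)    # spectrally sparse test signal
bits, x_hat = sign_feedback_loop(x)
print(np.mean((x - x_hat) ** 2))        # crude single-pass reconstruction error
```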