2,037 research outputs found

    Hamming Compressed Sensing

    Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce \textit{Hamming compressed sensing} (HCS), which directly recovers a $k$-bit quantized signal of dimension $n$ from its 1-bit measurements by invoking $n$ Kullback-Leibler divergence based nearest neighbor searches. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time, and requires substantially fewer measurements ($\mathcal{O}(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and an "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency, and consistency of HCS. Comment: 33 pages, 8 figures.
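
    The abstract does not spell out the decoding rule, so the following is a minimal sketch of one plausible reading of the per-coordinate KL nearest-neighbor search, not the authors' algorithm: for Gaussian measurement vectors, the frequency with which a 1-bit measurement agrees with the sign of the corresponding matrix entry pins down the angle between the signal and that coordinate axis, and each candidate quantization level is scored by the KL divergence between the Bernoulli distribution it predicts and the empirical one. The function name hcs_sketch, the candidate level grid, and the agreement statistic are all assumptions.

        import numpy as np

        def hcs_sketch(y, A, levels):
            """Hedged sketch of a per-coordinate KL nearest-neighbor decoder.

            y      : (m,) 1-bit measurements, sign(A @ x), entries in {-1, +1}
            A      : (m, n) Gaussian measurement matrix
            levels : (K,) candidate quantization levels in [-1, 1] for x_j / ||x||_2
            """
            m, n = A.shape
            eps = 1e-9
            # agreement probability predicted by each candidate level:
            # P(sign(<a, x>) == sign(a_j)) = 1 - theta_j / pi, cos(theta_j) = x_j / ||x||_2
            p = 1.0 - np.arccos(np.clip(levels, -1.0, 1.0)) / np.pi
            p = np.clip(p, eps, 1.0 - eps)

            x_hat = np.empty(n)
            for j in range(n):
                # empirical agreement frequency for coordinate j
                f = np.clip(np.mean(y == np.sign(A[:, j])), eps, 1.0 - eps)
                # KL(Bernoulli(f) || Bernoulli(p_k)) for every candidate level
                kl = f * np.log(f / p) + (1.0 - f) * np.log((1.0 - f) / (1.0 - p))
                x_hat[j] = levels[np.argmin(kl)]
            return x_hat  # k-bit quantized estimate of x / ||x||_2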

    Quantization and Compressive Sensing

    Quantization is an essential step in digitizing signals and therefore an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta ($\Sigma\Delta$) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system. Comment: 35 pages, 20 figures, to appear in the Springer book "Compressed Sensing and Its Applications", 201
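
    As a concrete, hedged illustration of two of the quantizer families surveyed here (not code from the chapter), the sketch below implements a midrise uniform scalar quantizer and a first-order Sigma-Delta loop that noise-shapes the quantization error across the measurement sequence; the step size and the midrise convention are arbitrary choices.

        import numpy as np

        def uniform_quantize(y, delta):
            """Midrise uniform scalar quantizer with step size delta."""
            return delta * (np.floor(np.asarray(y, dtype=float) / delta) + 0.5)

        def sigma_delta_first_order(y, delta):
            """First-order Sigma-Delta quantization of a measurement sequence y.

            The running state u accumulates past quantization error, so the error
            is noise-shaped across measurements instead of being independent.
            """
            y = np.asarray(y, dtype=float)
            q = np.empty_like(y)
            u = 0.0
            for i, yi in enumerate(y):
                q[i] = uniform_quantize(yi + u, delta)   # quantize input plus error state
                u = u + yi - q[i]                        # first-order state update
            return q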

    Variational Bayesian algorithm for quantized compressed sensing

    Compressed sensing (CS) concerns the recovery of high-dimensional signals from their low-dimensional linear measurements under a sparsity prior, and digital quantization of the measurement data is inevitable in any practical implementation of CS algorithms. In the existing literature, the quantization error is typically modeled as additive noise, and the multi-bit and 1-bit quantized CS problems are dealt with separately using different treatments and procedures. In this paper, a novel variational Bayesian inference based CS algorithm is presented, which unifies multi-bit and 1-bit CS processing and is applicable to various cases of noiseless/noisy environments and unsaturated/saturated quantizers. By decoupling the quantization error from the measurement noise, the quantization error is modeled as a random variable and estimated jointly with the signal being recovered. This novel characterization of the quantization error results in superior performance of the algorithm, which is demonstrated by extensive simulations in comparison with state-of-the-art methods for both multi-bit and 1-bit CS problems. Comment: Accepted by IEEE Trans. Signal Processing. 10 pages, 6 figures.
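
    The central modeling idea, treating the quantization error as its own random variable rather than folding it into the measurement noise, can be illustrated with a short simulation. The dimensions, noise level, and uniform midrise quantizer below are illustrative assumptions, not the paper's setup; the sketch only demonstrates the decoupled measurement model y = Ax + noise + e that such an algorithm would then exploit.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, s = 256, 100, 8          # signal dimension, measurements, sparsity (illustrative)
        delta = 0.25                   # quantizer step size (illustrative)

        # sparse signal and noisy linear measurements
        x = np.zeros(n)
        x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        noise = 0.01 * rng.standard_normal(m)
        z = A @ x + noise

        # multi-bit uniform (midrise) quantization of the measurements
        y = delta * (np.floor(z / delta) + 0.5)

        # decoupled view: y = A x + noise + e, with e the quantization error
        e = y - z
        assert np.allclose(y, A @ x + noise + e)
        print("quantization error range:", e.min(), e.max())   # stays within (-delta/2, delta/2]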

    One-bit compressive sensing with norm estimation

    Consider the recovery of an unknown signal $x$ from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that $x$ is sparse and that the measurements are of the form $\operatorname{sign}(\langle a_i, x \rangle) \in \{\pm 1\}$. Since such measurements give no information on the norm of $x$, recovery methods from such measurements typically assume that $\|x\|_2 = 1$. We show that if one allows more generally for quantized affine measurements of the form $\operatorname{sign}(\langle a_i, x \rangle + b_i)$, and if the vectors $a_i$ are random, an appropriate choice of the affine shifts $b_i$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. Additionally, we show that for an arbitrary fixed $x$ in the annulus $r \leq \|x\|_2 \leq R$, one may estimate the norm $\|x\|_2$ up to additive error $\delta$ from $m \gtrsim R^4 r^{-2} \delta^{-2}$ such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors, in the sense that with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors $x$ within a Euclidean ball of known radius. Comment: 20 pages, 2 figures.
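
    The norm-estimation step admits a short numerical illustration. The sketch below fixes a single common shift $b_i = c$ for simplicity (an assumption; the paper allows more general affine shifts): because $\langle a_i, x \rangle$ is Gaussian with standard deviation $\|x\|_2$, the fraction of $+1$ measurements estimates $\Phi(c / \|x\|_2)$, and a single inverse-error-function evaluation recovers the norm.

        import numpy as np
        from scipy.special import erfinv

        rng = np.random.default_rng(1)
        n, m = 128, 20000
        x = rng.standard_normal(n)
        x *= 2.0 / np.linalg.norm(x)            # ground-truth norm ||x||_2 = 2

        c = 1.0                                  # common affine shift b_i = c (assumption)
        A = rng.standard_normal((m, n))
        signs = np.sign(A @ x + c)               # one-bit affine measurements

        # <a_i, x> ~ N(0, ||x||_2^2), so P(sign = +1) = Phi(c / ||x||_2)
        p_hat = np.clip(np.mean(signs > 0), 1e-6, 1.0 - 1e-6)
        phi_inv = np.sqrt(2.0) * erfinv(2.0 * p_hat - 1.0)   # standard normal quantile
        norm_hat = c / phi_inv
        print(norm_hat)                          # approaches 2.0 as m grows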

    Dictionary Learning for Blind One Bit Compressed Sensing

    This letter proposes a dictionary learning algorithm for blind one-bit compressed sensing. In the blind one-bit compressed sensing framework, the original signal to be reconstructed from one-bit linear random measurements is sparse in an unknown domain. In this context, the product of the measurement matrix $\mathbf{A}$ and the sparse domain matrix $\Phi$, i.e., $\mathbf{D} = \mathbf{A}\Phi$, should be learned. Hence, we use dictionary learning to train this matrix. Towards that end, an appropriate continuous convex cost function is suggested for one-bit compressed sensing, and a simple steepest-descent method is exploited to learn the rows of the matrix $\mathbf{D}$. Experimental results show the effectiveness of the proposed algorithm against the case of no dictionary learning, especially as the number of training signals and the number of sign measurements increase. Comment: 5 pages, 3 figures.
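
    The letter's exact convex cost is not given in this abstract, so the sketch below uses a hypothetical one-sided hinge-type sign-consistency cost and performs a single steepest-descent update on $\mathbf{D}$; the function names, the step size, and the alternation with a sparse-coding stage (not shown) are all assumptions.

        import numpy as np

        def hinge_cost(D, S, Y):
            """Sign-consistency cost: penalize measurements whose predicted sign
            disagrees with the observed one-bit measurement (hypothetical choice)."""
            margins = Y * (D @ S)                  # elementwise, positive when consistent
            return np.sum(np.maximum(0.0, -margins))

        def dictionary_gradient_step(D, S, Y, step=1e-3):
            """One steepest-descent update on D for the hinge-type cost above.

            D : (m, n) learned matrix D = A @ Phi
            S : (n, T) current sparse codes of the T training signals
            Y : (m, T) observed one-bit measurements in {-1, +1}
            """
            margins = Y * (D @ S)
            active = (margins < 0).astype(float)   # measurements currently inconsistent
            grad = -(active * Y) @ S.T             # subgradient of the hinge cost w.r.t. D
            return D - step * grad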