
    Hamming Compressed Sensing

    Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce \textit{Hamming compressed sensing} (HCS), which directly recovers a $k$-bit quantized signal of dimension $n$ from its 1-bit measurements via $n$ invocations of a Kullback-Leibler-divergence-based nearest-neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time, and requires substantially fewer measurements ($\mathcal{O}(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and an "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency, and consistency of HCS.
    Comment: 33 pages, 8 figures
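
    As a rough illustration of the quantized-recovery step this abstract describes, the sketch below estimates each coordinate of a unit-norm signal from 1-bit Gaussian measurements and snaps it to the nearest $k$-bit level under a Bernoulli Kullback-Leibler divergence. The codebook, the signal model, and the agreement-rate estimator are all illustrative assumptions; this is a minimal sketch of the idea, not the paper's exact HCS algorithm.

        import numpy as np

        def kl_bernoulli(p, q, eps=1e-12):
            # KL divergence between Bernoulli(p) and Bernoulli(q)
            p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
            return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

        rng = np.random.default_rng(0)
        n, m, k = 32, 4000, 3                       # illustrative sizes only
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)                      # unit-norm signal
        A = rng.standard_normal((m, n))
        y = np.sign(A @ x)                          # 1-bit measurements

        # For unit vectors and Gaussian rows, P[sign(a_ji) = sign(a_j . x)]
        # = 1 - arccos(x_i) / pi, so each k-bit level maps to a Bernoulli mean.
        levels = np.linspace(-1, 1, 2 ** k)         # assumed uniform codebook
        probs = 1 - np.arccos(levels) / np.pi

        x_hat = np.empty(n)
        for i in range(n):
            p_hat = np.mean(np.sign(A[:, i]) == y)  # empirical agreement rate
            x_hat[i] = levels[np.argmin(kl_bernoulli(p_hat, probs))]

    Each coordinate costs one KL-based nearest-neighbor search over the codebook, which is what makes this style of recovery linear in $n$.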

    Dictionary Learning for Blind One Bit Compressed Sensing

    This letter proposes a dictionary learning algorithm for blind one-bit compressed sensing. In the blind one-bit compressed sensing framework, the original signal to be reconstructed from one-bit linear random measurements is sparse in an unknown domain. In this context, the product of the measurement matrix $A$ and the sparse-domain matrix $\Phi$, i.e., $D = A\Phi$, should be learned. Hence, we use dictionary learning to train this matrix. Toward that end, an appropriate continuous convex cost function is suggested for one-bit compressed sensing, and a simple steepest-descent method is exploited to learn the rows of the matrix $D$. Experimental results show the effectiveness of the proposed algorithm against the case of no dictionary learning, especially as the number of training signals and the number of sign measurements increase.
    Comment: 5 pages, 3 figures
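
    To make the learning step concrete, here is a minimal sketch of steepest descent on the rows of $D$ under a convex hinge-style one-bit loss. The specific loss, the margin, the row normalization, and the assumption that the sparse codes of the training signals are already known are illustrative choices, not the letter's exact cost function; a full blind algorithm would alternate this update with sparse coding.

        import numpy as np

        def learn_dictionary(S, Y, n_iters=200, lr=0.1, margin=1.0):
            # S: (k, T) sparse codes of the training signals (assumed known
            #    here; the full algorithm would re-estimate them as well)
            # Y: (m, T) one-bit sign measurements, entries in {-1, +1}
            # Minimizes the convex hinge-style one-bit loss
            #     L(D) = sum_{t,j} max(0, margin - y_tj * (d_j . s_t))
            # by steepest descent on the rows d_j of D.
            m, (k, T) = Y.shape[0], S.shape
            D = np.random.default_rng(0).standard_normal((m, k)) / np.sqrt(k)
            for _ in range(n_iters):
                Z = Y * (D @ S)                      # per-measurement margins
                active = (Z < margin).astype(float)  # violated constraints
                D -= lr * (-(active * Y) @ S.T / T)  # subgradient step
                # renormalize rows: sign measurements are scale-invariant
                D /= np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1e-12)
            return D

    Row normalization is used here because the sign operator discards scale, so without it the hinge loss could be driven down by trivially shrinking $D$.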

    Compressive Sensing Using Iterative Hard Thresholding with Low Precision Data Representation: Theory and Applications

    Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of the data is an intriguing solution but comes with its own drawbacks, such as potential signal loss and the need for careful optimization of the compression ratio. In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance? Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector, are quantized aggressively. We present a variant of low-precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations, achieving up to a 9x speed-up with negligible loss of recovery quality.
    Comment: 19 pages, 5 figures, 1 table; in IEEE Transactions on Signal Processing
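
    For reference, below is a minimal sketch of normalized IHT run on aggressively quantized inputs. The uniform quantizer, the problem sizes, and the support used for the normalized step size are illustrative assumptions; the paper's low-precision format and the conditions of its analysis may differ.

        import numpy as np

        def quantize(M, bits=8):
            # uniform symmetric quantization to the given bit width
            # (an illustrative scheme, not necessarily the paper's format)
            scale = np.max(np.abs(M)) / (2 ** (bits - 1) - 1)
            return np.round(M / scale) * scale

        def hard_threshold(x, s):
            # keep the s largest-magnitude entries of x, zero out the rest
            out = np.zeros_like(x)
            idx = np.argpartition(np.abs(x), -s)[-s:]
            out[idx] = x[idx]
            return out

        def normalized_iht(A, y, s, n_iters=100):
            # normalized IHT: gradient step with an adaptive step size,
            # followed by hard thresholding onto s-sparse vectors
            x = np.zeros(A.shape[1])
            for _ in range(n_iters):
                g = A.T @ (y - A @ x)      # gradient of 0.5 * ||y - A x||^2
                gs = hard_threshold(g, s)  # gradient restricted to a support
                mu = (gs @ gs) / max((A @ gs) @ (A @ gs), 1e-12)
                x = hard_threshold(x + mu * g, s)
            return x

        # toy run with 4-bit data (hypothetical sizes)
        rng = np.random.default_rng(1)
        m, n, s = 128, 256, 8
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        y = A @ x_true
        x_hat = normalized_iht(quantize(A, 4), quantize(y, 4), s)

    The point of the toy run is that both the measurement matrix and the observation vector are quantized before recovery, which is the regime the paper analyzes.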