
    Channel-Optimized Vector Quantizer Design for Compressed Sensing Measurements

    We consider vector-quantized (VQ) transmission of compressed sensing (CS) measurements over noisy channels. Adopting the mean-square error (MSE) criterion to measure the distortion between a sparse vector and its reconstruction, we derive channel-optimized quantization principles for encoding the CS measurement vector and reconstructing the sparse source vector. The resulting necessary conditions for optimality are used to develop an algorithm for training a channel-optimized vector quantizer (COVQ) of CS measurements that takes the end-to-end distortion into account.
    Comment: Published in ICASSP 201
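
    To make the training procedure concrete, below is a minimal Python sketch of a Lloyd-style COVQ training loop of the kind the abstract describes. Everything in it is an illustrative assumption rather than the authors' exact method: a binary symmetric channel acting on the index bits, a training set of known (source, measurement) pairs, and hypothetical names throughout (`covq_train`, `eps`, and so on).

```python
import numpy as np

rng = np.random.default_rng(0)

def covq_train(X, Y, n_bits=3, eps=0.05, iters=30):
    """Lloyd-style alternation for a channel-optimized VQ over a BSC.

    X : (N, n) training sparse source vectors
    Y : (N, m) the corresponding CS measurements to be encoded
    eps : bit-flip probability of a memoryless binary symmetric channel
    """
    K = 2 ** n_bits
    # Channel transition matrix: P[j, i] = Pr(receive index j | send i),
    # with independent bit flips on the n_bits index bits.
    idx = np.arange(K)
    ham = np.array([[bin(i ^ j).count("1") for i in idx] for j in idx])
    P = (eps ** ham) * ((1 - eps) ** (n_bits - ham))

    # Decoder codevectors live in the *source* domain, since the design
    # criterion is the end-to-end MSE between x and its reconstruction.
    C = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(iters):
        # Encoder update: pick the send-index minimizing the *expected*
        # distortion over channel errors, sum_j P(j|i) * ||x - c_j||^2.
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (N, K)
        assign = np.argmin(d @ P, axis=1)
        # Decoder update: c_j = E[x | index j received], a conditional mean
        # over both the encoding regions and the channel transitions.
        for j in range(K):
            w = P[j, assign]
            C[j] = (w[:, None] * X).sum(0) / max(w.sum(), 1e-12)
    # A simple deployable encoder: region centroids in measurement space.
    Q = np.array([Y[assign == i].mean(0) if (assign == i).any() else Y.mean(0)
                  for i in range(K)])
    return Q, C, P

# Toy data: 2-sparse sources and Gaussian CS measurements (illustrative).
n, m, N, s = 20, 8, 2000, 2
A = rng.normal(size=(m, n)) / np.sqrt(m)
X = np.zeros((N, n))
for row in X:
    row[rng.choice(n, s, replace=False)] = rng.normal(size=s)
Y = X @ A.T
Q, C, P = covq_train(X, Y)
```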

    Analysis-by-Synthesis-based Quantization of Compressed Sensing Measurements

    We consider a resource-constrained scenario in which a compressed sensing (CS) based sensor takes a small number of measurements that are quantized at a low rate before transmission or storage. For this scenario, we develop a new quantizer design that aims to attain high-quality reconstruction of a sparse source signal within an analysis-by-synthesis framework. Through simulations, we compare the performance of the proposed quantization algorithm with that of existing quantization methods.
    Comment: 5 pages, Published in ICASSP 201
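
    A minimal sketch of the analysis-by-synthesis idea, under illustrative assumptions: the encoder searches a small codebook of candidate quantized measurement vectors, synthesizes a reconstruction for each candidate with a stand-in CS decoder (iterative hard thresholding here, not necessarily the decoder used in the paper), and keeps the codeword whose reconstruction best explains the unquantized measurements. All names are hypothetical.

```python
import numpy as np

def iht(q, A, s, iters=50, step=1.0):
    """Iterative hard thresholding: a simple stand-in CS decoder."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (q - A @ x)
        keep = np.argsort(np.abs(x))[-s:]      # keep the s largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x = x * mask
    return x

def abs_quantize(y, A, codebook, s):
    """Analysis-by-synthesis selection: choose the codeword whose *decoded*
    signal best explains the measurements, rather than the codeword that is
    merely closest to y in the measurement domain."""
    best_i, best_err = 0, np.inf
    for i, c in enumerate(codebook):
        x_hat = iht(c, A, s)                       # synthesis step
        err = np.linalg.norm(y - A @ x_hat) ** 2   # analysis step
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```

    The key departure from conventional quantizer design is that the selection criterion is evaluated after decoding, so the encoder directly targets reconstruction quality instead of measurement-domain fidelity.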

    Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing

    This paper focuses on the estimation of low-complexity signals when they are observed through $M$ uniformly quantized compressive observations. Among such signals, we consider 1-D sparse vectors, low-rank matrices, or compressible signals that are well approximated by one of these two models. In this context, we prove the estimation efficiency of a variant of Basis Pursuit Denoise, called Consistent Basis Pursuit (CoBP), which enforces consistency between the observations and the re-observed estimate while promoting its low-complexity nature. We show that the reconstruction error of CoBP decays like $M^{-1/4}$ when all parameters but $M$ are fixed. Our proof is connected to recent bounds on the proximity of vectors or matrices when (i) they belong to a set of small intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they share the same quantized (dithered) random projections. By solving CoBP with a proximal algorithm, we provide extensive numerical observations that confirm the theoretical bound as $M$ is increased, displaying even faster error decay than predicted. The same phenomenon is observed in the special, yet important, case of 1-bit CS.
    Comment: Keywords: quantized compressed sensing, quantization, consistency, error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this version: title change, typo corrections, clarification of the context, adding a comparison with BPDN
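
    As a rough illustration of the consistency idea, the sketch below solves an unconstrained proxy of CoBP with a proximal-gradient loop: soft-thresholding for the $\ell_1$ term, plus a penalty whose gradient is zero exactly when the re-observed estimate $Ax + \xi$ falls inside the observed (dithered) quantization cells. This is an assumption-laden stand-in for the paper's proximal algorithm, and the names (`cobp`, `delta`, ...) are hypothetical.

```python
import numpy as np

def quantize(z, delta):
    """Uniform mid-rise quantizer with bin width delta."""
    return delta * (np.floor(z / delta) + 0.5)

def cobp(q, A, dither, delta, lam=0.05, iters=500, step=None):
    """Proximal-gradient sketch of Consistent Basis Pursuit:
    minimize lam*||x||_1 + (1/2) * dist(A x + dither, cells(q))^2."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    lo, hi = q - delta / 2, q + delta / 2     # consistency cells
    for _ in range(iters):
        z = A @ x + dither
        # Gradient of the squared distance to the cells:
        # zero inside each cell, linear in the violation outside.
        r = np.where(z < lo, z - lo, np.where(z > hi, z - hi, 0.0))
        x = x - step * (A.T @ r)
        # Soft-thresholding: the proximal operator of lam * ||x||_1.
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

# Toy usage: dithered uniform quantization of Gaussian CS measurements.
rng = np.random.default_rng(1)
m, n, s, delta = 200, 64, 4, 0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.normal(size=s)
dither = rng.uniform(-delta / 2, delta / 2, size=m)
q = quantize(A @ x0 + dither, delta)
x_hat = cobp(q, A, dither, delta)
```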

    Distributed Quantization for Compressed Sensing

    We study distributed coding of compressed sensing (CS) measurements using vector quantization (VQ). We develop a distributed framework for realizing optimized quantizers that encode CS measurements of correlated sparse sources, followed by joint decoding at a fusion center. The optimality of the VQ encoder-decoder pairs is addressed by minimizing the sum of the mean-square errors between the sparse sources and their reconstructions at the fusion center. We derive a lower bound on the end-to-end performance of the studied distributed system, and propose a practical encoder-decoder design through an iterative algorithm.
    Comment: 5 Pages, Accepted for presentation in ICASSP 201
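
    A toy sketch of the pipeline, with loud simplifications: each of two sensors trains a local VQ on its own measurements, and the fusion center learns a joint decoder table E[x1, x2 | i1, i2] from training data, which is where the correlation between the sparse sources is exploited. This is not the paper's iterative optimal design (and it ignores channel noise); all names are illustrative.

```python
import numpy as np

def train_distributed(Y1, Y2, X1, X2, K=8, seed=0):
    """Local VQ per sensor plus a joint fusion-center decoder table."""
    rng = np.random.default_rng(seed)

    def lloyd(Y, iters=20):
        # Plain Lloyd iterations in each sensor's measurement domain.
        C = Y[rng.choice(len(Y), K, replace=False)].copy()
        a = np.zeros(len(Y), dtype=int)
        for _ in range(iters):
            a = np.argmin(((Y[:, None] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(K):
                if (a == k).any():
                    C[k] = Y[a == k].mean(0)
        return C, a

    C1, a1 = lloyd(Y1)
    C2, a2 = lloyd(Y2)
    # Fusion-center decoder: conditional mean of both sources given the
    # *pair* of received indices (falls back to the marginal mean for
    # index pairs never seen in training).
    D = np.tile(np.concatenate([X1.mean(0), X2.mean(0)]), (K, K, 1))
    for i in range(K):
        for j in range(K):
            sel = (a1 == i) & (a2 == j)
            if sel.any():
                D[i, j] = np.concatenate([X1[sel].mean(0), X2[sel].mean(0)])
    return C1, C2, D
```

    At run time, sensor t would send argmin_k ||y_t - C_t[k]||^2 and the fusion center would output the table entry D[i1, i2]; joint decoding is what lets one sensor's index reduce the uncertainty about the other source.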

    Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps

    Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets. Prior to the inference stage, the approach suggests performing dimensionality reduction by first multiplying each data vector by a random Gaussian matrix, and then computing an element-wise sinusoid. Theoretical analysis shows that collecting a sufficient number of such features can be reliably used for subsequent inference in kernel classification and regression. In this work, we demonstrate that with a mild increase in the dimension of the embedding, it is also possible to reconstruct the data vector from such random sinusoidal features, provided that the underlying data is sparse enough. In particular, we propose a numerically stable algorithm for reconstructing the data vector given the nonlinear features, and analyze its sample complexity. Our algorithm can be extended to other types of structured inverse problems, such as demixing a pair of sparse (but incoherent) vectors. We support the efficacy of our approach via numerical experiments.
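
    To make the reconstruction claim concrete, here is a toy sketch under a strong simplifying assumption: every entry of A @ x lies in [-pi/2, pi/2], so the element-wise sinusoid can be inverted by arcsin, after which any standard sparse solver applies (iterative hard thresholding below). The paper's algorithm is designed to be numerically stable beyond this regime; the helper name is hypothetical.

```python
import numpy as np

def recover_from_sin_features(y, A, s, iters=100):
    """Invert the element-wise sinusoid, then run sparse recovery.

    Assumes each entry of A @ x is in [-pi/2, pi/2], so arcsin is the
    exact inverse of the feature nonlinearity (a toy regime only).
    """
    z = np.arcsin(np.clip(y, -1.0, 1.0))     # linearize the features
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):                    # iterative hard thresholding
        x = x + step * A.T @ (z - A @ x)
        keep = np.argsort(np.abs(x))[-s:]     # keep the s largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x = x * mask
    return x
```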