2,009 research outputs found
Variational Bayesian algorithm for quantized compressed sensing
Compressed sensing (CS) concerns the recovery of high-dimensional signals from their
low-dimensional linear measurements under a sparsity prior, and digital
quantization of the measurement data is inevitable in any practical implementation
of CS algorithms. In the existing literature, the quantization error is
typically modeled as additive noise, and the multi-bit and 1-bit quantized CS
problems are dealt with separately using different treatments and procedures. In this
paper, a novel variational Bayesian inference based CS algorithm is presented,
which unifies the multi- and 1-bit CS processing and is applicable to various
cases of noiseless/noisy environment and unsaturated/saturated quantizer. By
decoupling the quantization error from the measurement noise, the quantization
error is modeled as a random variable and estimated jointly with the signal
being recovered. Such a novel characterization of the quantization error
results in superior performance of the algorithm which is demonstrated by
extensive simulations in comparison with state-of-the-art methods for both
multi-bit and 1-bit CS problems.
Comment: Accepted by IEEE Trans. Signal Processing. 10 pages, 6 figures.
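As a minimal illustration of the measurement model described above (noisy linear measurements passed through either a saturating multi-bit quantizer or a 1-bit quantizer), the following NumPy sketch may help; all dimensions, the noise level, and the quantizer parameters are made-up illustrative values, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n-dimensional signal, m < n measurements, k nonzeros.
n, m, k = 256, 100, 10

# Sparse signal with k nonzero entries.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix and noisy linear measurements z = A x + noise.
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = A @ x + 0.01 * rng.standard_normal(m)

# Multi-bit uniform (midrise) quantizer with step `delta`, saturating at
# +/- 4*delta, and the 1-bit quantizer as the sign of the measurement.
delta = 0.1
y_multibit = np.clip(delta * np.floor(z / delta) + delta / 2, -4 * delta, 4 * delta)
y_onebit = np.sign(z)

# The quantization error that the paper models as a random variable and
# estimates jointly with the signal, rather than lumping into the noise.
e = y_multibit - z
```

The point of the decoupling is visible here: `e` is a deterministic function of `z` given the quantizer, so treating it as just more additive noise discards structure that a joint estimate can exploit.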
Approximate Message Passing-based Compressed Sensing Reconstruction with Generalized Elastic Net Prior
In this paper, we study the compressed sensing reconstruction problem with a generalized elastic net prior (GENP), where a sparse signal is sampled via a noisy underdetermined linear observation system, and an additional initial estimate of the signal (the GENP) is available during reconstruction. We first incorporate the GENP into the LASSO and the approximate message passing (AMP) frameworks, denoted GENP-LASSO and GENP-AMP, respectively. We then focus on GENP-AMP and investigate its parameter selection, state evolution, and noise-sensitivity analysis. A practical parameterless version of GENP-AMP is also developed, which does not need to know the sparsity of the unknown signal or the variance of the GENP. Simulation results with 1-D data and two different imaging applications are presented to demonstrate the efficiency of the proposed schemes.
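The GENP-LASSO objective augments the usual LASSO cost with a quadratic penalty pulling the estimate toward the initial guess x0. As a rough illustration of that objective only (a proximal-gradient/ISTA sketch, not the AMP recursion the paper develops), with illustrative penalty weights `lam1` and `lam2`:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding operator prox of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def genp_ista(A, y, x0, lam1=0.05, lam2=0.5, n_iter=200):
    """Proximal-gradient sketch of the GENP-LASSO objective
        0.5*||y - A x||^2 + lam1*||x||_1 + lam2*||x - x0||^2,
    treating the quadratic GENP term as part of the smooth component."""
    # Lipschitz constant of the smooth part: ||A||_2^2 + 2*lam2.
    L = np.linalg.norm(A, 2) ** 2 + 2 * lam2
    t = 1.0 / L
    x = x0.copy()  # start from the initial estimate
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + 2 * lam2 * (x - x0)
        x = soft_threshold(x - t * grad, t * lam1)
    return x
```

With step size 1/L this iteration monotonically decreases the objective, so the returned estimate is never worse (in objective value) than the GENP itself.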
Message-Passing Estimation from Quantized Samples
Estimation of a vector from quantized linear measurements is a common problem
for which simple linear techniques are suboptimal -- sometimes greatly so. This
paper develops generalized approximate message passing (GAMP) algorithms for
minimum mean-squared error estimation of a random vector from quantized linear
measurements, notably allowing the linear expansion to be overcomplete or
undercomplete and the scalar quantization to be regular or non-regular. GAMP is
a recently developed class of algorithms that uses Gaussian approximations in
belief propagation and allows arbitrary separable input and output channels.
Scalar quantization of measurements is incorporated into the output channel
formalism, leading to the first tractable and effective method for
high-dimensional estimation problems involving non-regular scalar quantization.
Non-regular quantization is empirically demonstrated to greatly improve
rate-distortion performance in some problems with oversampling or with
undersampling combined with a sparsity-inducing prior. Under the assumption of
a Gaussian measurement matrix with i.i.d. entries, the asymptotic error
performance of GAMP can be accurately predicted and tracked through the state
evolution formalism. We additionally use state evolution to design MSE-optimal
scalar quantizers for GAMP signal reconstruction and empirically demonstrate
the superior error performance of the resulting quantizers.
Comment: 12 pages, 8 figures.
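Under GAMP's Gaussian approximation, the quantized output channel reduces to scalar computations on a Gaussian truncated to the quantizer cell that produced the observation. A minimal sketch of that scalar posterior-mean step is below; the function name and the closed-form truncated-Gaussian moments are an illustrative reduction, not the full GAMP recursion:

```python
import math

def norm_pdf(u):
    """Standard normal density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def norm_cdf(u):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def quantizer_posterior_mean(p, tau, a, b):
    """Posterior mean E[z | z in [a, b]] for z ~ N(p, tau): the scalar
    output-channel computation when the measurement is only known to lie
    in the quantizer cell [a, b] (b may be math.inf for a saturating cell)."""
    s = math.sqrt(tau)
    alpha, beta = (a - p) / s, (b - p) / s
    Z = norm_cdf(beta) - norm_cdf(alpha)  # probability mass of the cell
    return p + s * (norm_pdf(alpha) - norm_pdf(beta)) / Z
```

For example, a cell symmetric about the prior mean leaves the estimate unchanged, while a one-sided cell such as [0, inf) pulls the estimate into that half-line, which is exactly how sign (1-bit) measurements convey information in this formalism.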