11 research outputs found

    Lower Bound on the Mean-Squared Error in Oversampled Quantization of Periodic Signals Using Vector Quantization Analysis

    Get PDF
    Oversampled analog-to-digital conversion is a technique which permits high conversion resolution using coarse quantization. Classically, by lowpass filtering the quantized oversampled signal, it is possible to reduce the quantization error power in proportion to the oversampling ratio R. In other words, the reconstruction mean-squared error (MSE) is in O(R^{-1}).

    LMMSE Estimation and Interpolation of Continuous-Time Signals from Discrete-Time Samples Using Factor Graphs

    Full text link
    The factor graph approach to discrete-time linear Gaussian state space models is well developed. The paper extends this approach to continuous-time linear systems/filters that are driven by white Gaussian noise. By Gaussian message passing, we then obtain MAP/MMSE/LMMSE estimates of the input signal, or of the state, or of the output signal from noisy observations of the output signal. These estimates may be obtained with arbitrary temporal resolution. The proposed input signal estimation does not seem to have appeared in the prior Kalman filtering literature.
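
    The paper's continuous-time message-passing machinery is not reproduced here, but the flavor of LMMSE interpolation from noisy discrete-time samples can be illustrated by direct Gaussian conditioning on a first-order Gauss-Markov (Ornstein-Uhlenbeck) process; the process parameters, sample times, and grid below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# LMMSE interpolation of a zero-mean Ornstein-Uhlenbeck process x(t) with
# covariance k(s, t) = (q / (2a)) * exp(-a |s - t|), observed as y_k = x(t_k) + noise.
# Closed-form Gaussian conditioning stands in for the message-passing recursions.
a, q, sigma2 = 1.0, 2.0, 0.01            # assumed process and noise parameters

def cov(s, t):
    return (q / (2 * a)) * np.exp(-a * np.abs(s[:, None] - t[None, :]))

t_obs = np.array([0.0, 1.0, 2.5, 4.0])   # discrete-time sample instants (assumed)
t_fine = np.linspace(0.0, 4.0, 401)      # arbitrary temporal resolution

rng = np.random.default_rng(0)
K_oo = cov(t_obs, t_obs)
x_obs = rng.multivariate_normal(np.zeros(len(t_obs)), K_oo)   # one realization
y = x_obs + np.sqrt(sigma2) * rng.standard_normal(len(t_obs))

# LMMSE / posterior mean:  x_hat(t) = K(t, t_obs) (K_oo + sigma2 I)^{-1} y
G = np.linalg.solve(K_oo + sigma2 * np.eye(len(t_obs)), y)
x_hat = cov(t_fine, t_obs) @ G
print(x_hat[::100])                      # estimate at a few interior points
```

    The message-passing recursions compute the same posterior mean with cost linear in the number of samples, whereas this direct solve scales cubically.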

    Sampling and Reconstruction of Spatial Fields using Mobile Sensors

    Get PDF
    Spatial sampling is traditionally studied in a static setting where static sensors scattered around space take measurements of the spatial field at their locations. In this paper we study the emerging paradigm of sampling and reconstructing spatial fields using sensors that move through space. We show that mobile sensing offers some unique advantages over static sensing in sensing time-invariant bandlimited spatial fields. Since a moving sensor encounters such a spatial field along its path as a time-domain signal, a time-domain anti-aliasing filter can be employed prior to sampling the signal received at the sensor. Such a filtering procedure, when used by a configuration of sensors moving at constant speeds along equispaced parallel lines, leads to a complete suppression of spatial aliasing in the direction of motion of the sensors. We analytically quantify the advantage of using such a sampling scheme over a static sampling scheme by computing the reduction in sampling noise due to the filter. We also analyze the effects of non-uniform sensor speeds on the reconstruction accuracy. Using simulation examples we demonstrate the advantages of mobile sampling over static sampling in practical problems. We extend our analysis to sampling and reconstruction schemes for monitoring time-varying bandlimited fields using mobile sensors. We demonstrate that in some situations we require a lower density of sensors when using a mobile sensing scheme instead of the conventional static sensing scheme. The exact advantage is quantified for a problem of sampling and reconstructing an audio field. Comment: Submitted to IEEE Transactions on Signal Processing May 2012; revised Oct 201
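
    A rough numerical sketch of the core filtering idea, with a made-up field and made-up sensor parameters: the field seen along the sensor path contains a component above the sensor's sampling rate, and a time-domain anti-aliasing filter applied before sampling removes the resulting aliasing error.

```python
import numpy as np
from scipy import signal

# The sensor's motion converts spatial frequency along its path into temporal
# frequency, so the field it "sees" is a time signal. Illustrative components:
fs_fine = 1000.0                                # fine grid standing in for continuous time
t = np.arange(0.0, 4.0, 1.0 / fs_fine)
in_band = np.sin(2 * np.pi * 3.0 * t)           # within the reconstruction bandwidth
out_band = 0.7 * np.sin(2 * np.pi * 47.0 * t)   # beyond the sensor's Nyquist rate
seen = in_band + out_band

fs_samp = 20.0                                  # sampling rate of the moving sensor
step = int(fs_fine / fs_samp)

# (a) sample directly: the 47 Hz component aliases into the band
raw_samples = seen[::step]

# (b) time-domain anti-aliasing filter applied before sampling
sos = signal.butter(8, (fs_samp / 2) / (fs_fine / 2), output="sos")
filtered_samples = signal.sosfiltfilt(sos, seen)[::step]

target = in_band[::step]                        # what an alias-free sensor would record
print("MSE without filter:", np.mean((raw_samples - target) ** 2))
print("MSE with filter:   ", np.mean((filtered_samples - target) ** 2))
```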

    Message-Passing Estimation from Quantized Samples

    Full text link
    Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal -- sometimes greatly so. This paper develops generalized approximate message passing (GAMP) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. GAMP is a recently-developed class of algorithms that uses Gaussian approximations in belief propagation and allows arbitrary separable input and output channels. Scalar quantization of measurements is incorporated into the output channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving non-regular scalar quantization. Non-regular quantization is empirically demonstrated to greatly improve rate-distortion performance in some problems with oversampling or with undersampling combined with a sparsity-inducing prior. Under the assumption of a Gaussian measurement matrix with i.i.d. entries, the asymptotic error performance of GAMP can be accurately predicted and tracked through the state evolution formalism. We additionally use state evolution to design MSE-optimal scalar quantizers for GAMP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers. Comment: 12 pages, 8 figures
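
    The full GAMP recursion is too long to reproduce here, but its quantization-specific ingredient is a scalar output-channel denoiser: the posterior mean of a Gaussian variable given only the quantization cell into which it fell. A minimal sketch of that step for a uniform scalar quantizer (assuming noiseless quantization; the cell convention and parameters are illustrative assumptions) is:

```python
import numpy as np
from scipy.stats import norm

def quantizer_cell(y_index, delta):
    """Lower/upper edges of the uniform-quantizer cell with index y_index (assumed convention)."""
    return y_index * delta, (y_index + 1) * delta

def output_denoiser(y_index, p, tau, delta):
    """E[z | z in cell], with z ~ N(p, tau): the truncated-Gaussian mean used as the
    output-channel step of a GAMP-style iteration with quantized measurements."""
    lo, hi = quantizer_cell(y_index, delta)
    s = np.sqrt(tau)
    alpha, beta = (lo - p) / s, (hi - p) / s
    Z = norm.cdf(beta) - norm.cdf(alpha)          # probability mass of the cell
    return p + s * (norm.pdf(alpha) - norm.pdf(beta)) / Z

# example: measurement quantized into the cell [0.5, 1.0) with pseudo-prior N(0.2, 0.3)
print(output_denoiser(y_index=1, p=0.2, tau=0.3, delta=0.5))
```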

    Frame Permutation Quantization

    Full text link
    Frame permutation quantization (FPQ) is a new vector quantization technique using finite frames. In FPQ, a vector is encoded using a permutation source code to quantize its frame expansion. This means that the encoding is a partial ordering of the frame expansion coefficients. Compared to ordinary permutation source coding, FPQ produces a greater number of possible quantization rates and a higher maximum rate. Various representations for the partitions induced by FPQ are presented, and reconstruction algorithms based on linear programming, quadratic programming, and recursive orthogonal projection are derived. Implementations of the linear and quadratic programming algorithms for uniform and Gaussian sources show performance improvements over entropy-constrained scalar quantization for certain combinations of vector dimension and coding rate. Monte Carlo evaluation of the recursive algorithm shows that mean-squared error (MSE) decays as 1/M^4 for an M-element frame, which is consistent with previous results on optimal decay of MSE. Reconstruction using the canonical dual frame is also studied, and several results relate properties of the analysis frame to whether linear reconstruction techniques provide consistent reconstructions. Comment: 29 pages, 5 figures; detail added to proof of Theorem 4.3 and a few minor corrections
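
    A minimal sketch of the encode/decode idea, assuming a random analysis frame, a single coefficient per rank group (so the transmitted code is just the ordering), and plain linear reconstruction with the canonical dual; the paper's LP/QP and recursive decoders are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 8                                 # signal dimension, frame size (assumed)
F = rng.standard_normal((M, N))             # analysis frame (rows = frame vectors)
F_dual = np.linalg.pinv(F)                  # canonical dual, used for linear decoding

# Fixed codeword levels, one per rank group (an illustrative choice): every
# transmitted codeword is a permutation of this decreasing vector.
mu = np.linspace(1.0, -1.0, M)

def fpq_encode(x):
    """Encode x by the ordering (permutation) of its frame coefficients."""
    y = F @ x
    return np.argsort(-y)                   # index of largest, second largest, ...

def fpq_decode(order):
    """Linear reconstruction: place the fixed levels according to the received
    ordering, then apply the canonical dual frame."""
    y_hat = np.empty(M)
    y_hat[order] = mu                       # k-th largest coefficient gets mu[k]
    return F_dual @ y_hat

x = rng.standard_normal(N)
x /= np.linalg.norm(x)
x_hat = fpq_decode(fpq_encode(x))
x_hat /= np.linalg.norm(x_hat)              # only the direction is conveyed by the ordering
print(np.linalg.norm(x - x_hat))
```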

    Small Width, Low Distortions: Quantized Random Embeddings of Low-complexity Sets

    Full text link
    Under which conditions and with which distortions can we preserve the pairwise distances of low-complexity vectors, e.g., for structured sets such as the set of sparse vectors or that of low-rank matrices, when these are mapped into a finite set of vectors? This work addresses this general question through the specific use of a quantized and dithered random linear mapping which combines, in the following order, a sub-Gaussian random projection in R^M of vectors in R^N, a random translation, or "dither", of the projected vectors, and a uniform scalar quantizer of resolution δ > 0 applied componentwise. Thanks to this quantized mapping we are first able to show that, with high probability, an embedding of a bounded set K ⊂ R^N in δZ^M can be achieved when distances in the quantized and in the original domains are measured with the ℓ1- and ℓ2-norm, respectively, and provided the number of quantized observations M is large compared to the square of the "Gaussian mean width" of K. In this case, we show that the embedding is actually "quasi-isometric" and only suffers from both multiplicative and additive distortions whose magnitudes decrease as M^{-1/5} for general sets, and as M^{-1/2} for structured sets, as M increases. Second, when one is only interested in characterizing the maximal distance separating two elements of K mapped to the same quantized vector, i.e., the "consistency width" of the mapping, we show that for a similar number of measurements and with high probability this width decays as M^{-1/4} for general sets and as 1/M for structured ones as M increases. Finally, as an important aspect of our work, we also establish how the non-Gaussianity of the mapping impacts the class of vectors that can be embedded or whose consistency width provably decays when M increases. Comment: Keywords: quantization, restricted isometry property, compressed sensing, dimensionality reduction. 31 pages, 1 figure
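
    A quick empirical sketch of the quantized mapping itself (the dimensions, sparsity level, step size, and Gaussian choice of projection are assumptions made for illustration): project, dither, quantize componentwise, and compare the per-measurement ℓ1 distance in the quantized domain with the ℓ2 distance in the original domain.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, delta = 256, 4096, 0.5                 # ambient dim, measurements, step (assumed)
A = rng.standard_normal((M, N))              # sub-Gaussian (here Gaussian) projection
u = rng.uniform(0, delta, size=M)            # random dither

def q_map(x):
    """Quantized, dithered random mapping  x -> delta * floor((A x + u) / delta)."""
    return delta * np.floor((A @ x + u) / delta)

def sparse_vec(k):
    """A random k-sparse vector, standing in for a 'low-complexity' signal."""
    x = np.zeros(N)
    x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
    return x

x1, x2 = sparse_vec(8), sparse_vec(8)
l1_quantized = np.mean(np.abs(q_map(x1) - q_map(x2)))   # (1/M) * ell_1 distance
l2_original = np.linalg.norm(x1 - x2)
# With Gaussian A and uniform dither, the expected per-measurement l1 distance equals
# sqrt(2/pi) * ||x1 - x2||_2; concentration over M gives the quasi-isometric embedding.
print(l1_quantized, np.sqrt(2 / np.pi) * l2_original)
```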

    Lower bound on the mean-squared error in oversampled quantization of periodic signals using vector quantization analysis

    No full text
    Oversampled analog-to-digital conversion is a technique which permits high conversion resolution using coarse quantization. Classically, by lowpass filtering the quantized oversampled signal, it is possible to reduce the quantization error power in proportion to the oversampling ratio R. In other words, the reconstruction mean-squared error (MSE) is in O(R^{-1}). It was recently found that this error reduction is not optimal. Under certain conditions, it was shown on periodic bandlimited signals that an upper bound on the MSE of optimal reconstruction is in O(R^{-2}) instead of O(R^{-1}). In the present paper, we prove on the same type of signals that the order O(R^{-2}) is the theoretical limit of reconstruction, as an MSE lower bound. The proof is based on a vector-quantization approach with an analysis of partition cell density.
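
    A small simulation in the spirit of this setup (random periodic bandlimited test signals, coarse uniform quantization, ideal lowpass reconstruction; all parameters are illustrative) exhibits the classical O(R^{-1}) behavior of linear reconstruction against which the O(R^{-2}) bound is contrasted.

```python
import numpy as np

rng = np.random.default_rng(3)

def linear_recon_mse(R, trials=200, K=8, delta=0.25):
    """Oversample a random periodic bandlimited signal by factor R, quantize it
    coarsely with step delta, lowpass-filter back to the signal band, and
    return the average reconstruction MSE."""
    n = 2 * K * R                                 # samples per period at oversampling R
    mses = []
    for _ in range(trials):
        # random real periodic signal with K harmonics
        c = rng.standard_normal(K) + 1j * rng.standard_normal(K)
        t = np.arange(n) / n
        x = np.real(np.exp(2j * np.pi * np.outer(t, np.arange(1, K + 1))) @ c)
        xq = delta * np.round(x / delta)          # coarse uniform quantization
        # ideal lowpass: keep only the in-band harmonics of the quantized signal
        X = np.fft.fft(xq)
        mask = np.zeros(n)
        mask[: K + 1] = 1.0
        mask[-K:] = 1.0
        x_hat = np.real(np.fft.ifft(X * mask))
        mses.append(np.mean((x - x_hat) ** 2))
    return np.mean(mses)

for R in (4, 8, 16, 32):
    print(R, linear_recon_mse(R))                 # MSE roughly halves as R doubles
```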

    Quantized Overcomplete Expansions in R^N: Analysis, Synthesis, and Algorithms

    Get PDF
    Coefficient quantization has peculiar qualitative effects on representations of vectors in R^N with respect to overcomplete sets of vectors. These effects are investigated in two settings: frame expansions (representations obtained by forming inner products with each element of the set) and matching pursuit expansions (approximations obtained by greedily forming linear combinations). In both cases, based on the concept of consistency, it is shown that traditional linear reconstruction methods are suboptimal, and better consistent reconstruction algorithms are given. The proposed consistent reconstruction algorithms were implemented in each case, and experimental results are included. For frame expansions, results are proven that bound distortion as a function of frame redundancy r and quantization step size for linear, consistent, and optimal reconstruction methods. Taken together, these suggest that optimal reconstruction methods will yield O(1/r^2) mean-squared error (MSE), and that consistency is sufficient to ensure this asymptotic behavior. A result on the asymptotic tightness of random frames is also proven. Applicability of quantized matching pursuit to lossy vector compression is explored. Experiments demonstrate the likelihood that a linear reconstruction is inconsistent, the MSE reduction obtained with a nonlinear (consistent) reconstruction algorithm, and generally competitive performance at low bit rates
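
    A minimal sketch of consistent reconstruction from a quantized frame expansion, using cyclic projections onto the quantization-cell constraints rather than the specific algorithms of the paper (the frame, step size, and iteration count are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, delta = 4, 16, 0.2                     # dimension, frame size, step (assumed)
F = rng.standard_normal((M, N))
F /= np.linalg.norm(F, axis=1, keepdims=True)

x = rng.standard_normal(N)
q = delta * (np.floor(F @ x / delta) + 0.5)  # quantized frame coefficients (cell midpoints)

# linear reconstruction with the pseudo-inverse (canonical dual frame)
x_lin = np.linalg.pinv(F) @ q

# consistent reconstruction: cyclically project onto the slabs
# { v : |<f_i, v> - q_i| <= delta/2 } defined by each quantized coefficient
x_hat = x_lin.copy()
for _ in range(200):
    for i in range(M):
        r = F[i] @ x_hat - q[i]
        if abs(r) > delta / 2:               # outside the i-th cell: project onto it
            x_hat -= (r - np.sign(r) * delta / 2) * F[i] / (F[i] @ F[i])

# the consistent estimate is usually (though not on every draw) closer to x
print("linear squared error:    ", np.sum((x - x_lin) ** 2))
print("consistent squared error:", np.sum((x - x_hat) ** 2))
```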

    Quantization and Compressive Sensing

    Get PDF
    Quantization is an essential step in digitizing signals, and, therefore, an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system. Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing and Its Applications", 201
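
    As one concrete piece of the toolbox surveyed in the chapter, a first-order ΣΔ quantizer can be sketched in a few lines; the 1-bit alphabet, test signal, and averaging reconstruction below are illustrative assumptions rather than the chapter's recommended design.

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta quantization of a sequence y with |y| < 1:
    q_i = sign(y_i + u_{i-1}),  u_i = u_{i-1} + y_i - q_i,
    so the state u stays bounded and the quantization error is noise-shaped."""
    u, q = 0.0, np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = 1.0 if yi + u >= 0 else -1.0
        u += yi - q[i]
    return q

# oversampled measurements of a slowly varying signal
t = np.linspace(0, 1, 2000)
y = 0.5 * np.sin(2 * np.pi * 3 * t)
q = sigma_delta_1bit(y)

# simple reconstruction: a running average pushes the shaped noise out of band
w = 64
y_hat = np.convolve(q, np.ones(w) / w, mode="same")
print("MSE after averaging:", np.mean((y - y_hat) ** 2))
```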