    Suboptimality of the Karhunen-Loève transform for transform coding

    We examine the performance of the Karhunen-Loève transform (KLT) for transform coding applications. The KLT has long been viewed as the best available block transform for a system that orthogonally transforms a vector source, scalar quantizes the components of the transformed vector using optimal bit allocation, and then inverse transforms the vector. This paper treats fixed-rate and variable-rate transform codes of non-Gaussian sources. The fixed-rate approach uses an optimal fixed-rate scalar quantizer to describe the transform coefficients; the variable-rate approach uses a uniform scalar quantizer followed by an optimal entropy code, with each quantized component encoded separately. Earlier work shows that in the variable-rate case there exist sources on which the KLT is not unique and for which the optimal quantization and coding stage matched to a "worst" KLT yields performance as much as 1.5 dB worse than that matched to a "best" KLT. In this paper, we strengthen that result to show that in both the fixed-rate and the variable-rate coding frameworks there exist sources for which the performance penalty for using a "worst" KLT can be made arbitrarily large. Further, we demonstrate in both frameworks that there exist sources for which even a best KLT gives suboptimal performance. Finally, we show that even for vector sources where the KLT yields independent coefficients, the KLT can be suboptimal for fixed-rate coding.
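
    As a quick illustration of the coding chain this abstract analyzes (orthogonal transform, per-coefficient bit allocation, scalar quantization, inverse transform), the Python sketch below builds a KLT from a sample covariance and applies the standard high-rate bit-allocation rule. All names and parameter values are illustrative, not taken from the paper.

        # Sketch of a fixed-rate transform coder: KLT + high-rate optimal
        # bit allocation + uniform scalar quantization. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        n, dim, R = 100_000, 4, 3.0              # samples, block size, bits/component

        A = rng.standard_normal((dim, dim))
        x = rng.standard_normal((n, dim)) @ A.T  # correlated vector source
        C = np.cov(x, rowvar=False)

        _, U = np.linalg.eigh(C)                 # KLT basis: eigenvectors of C
        y = x @ U                                # decorrelated coefficients

        var = y.var(axis=0)
        geo = np.exp(np.log(var).mean())         # geometric mean of variances
        b = np.clip(R + 0.5 * np.log2(var / geo), 0, None)  # bits per coefficient

        # Uniform quantizers covering +/- 4 sigma with 2^b levels each
        step = 8 * np.sqrt(var) / 2 ** np.round(b)
        x_hat = (np.round(y / step) * step) @ U.T  # quantize, inverse transform

        print("MSE:", np.mean((x - x_hat) ** 2))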

    Spatial Whitening Framework for Distributed Estimation

    Designing resource allocation strategies for power-constrained sensor networks in the presence of correlated data often gives rise to intractable problem formulations. In such situations, applying well-known strategies derived from a conditional-independence assumption may turn out to be fairly suboptimal. In this paper, we address this issue by proposing an adjacency-based spatial whitening scheme, where each sensor exchanges its observation with its neighbors prior to encoding its own private information and transmitting it to the fusion center. We comment on the computational limitations of obtaining the optimal whitening transformation and propose an iterative optimization scheme to achieve the same for large networks. We demonstrate the efficacy of the whitening framework by considering the example of bit allocation for distributed estimation.
    Comment: 4 pages, 2 figures; presented at the 4th Intl. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2011), San Juan, Puerto Rico, Dec 13-16, 2011.
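
    For intuition about the whitening step, here is a minimal sketch. The assumptions are mine: it uses the unconstrained Cholesky whitener on a synthetic covariance, whereas the paper restricts the transformation to the network adjacency and optimizes it iteratively.

        # Whitening correlated sensor observations so that strategies built
        # on conditional independence become better matched to the data.
        import numpy as np

        rng = np.random.default_rng(1)
        m = 5                                    # number of sensors
        B = rng.standard_normal((m, m))
        C = B @ B.T + m * np.eye(m)              # observation covariance

        L = np.linalg.cholesky(C)
        W = np.linalg.inv(L)                     # whitener: Cov(W x) = I
        x = rng.multivariate_normal(np.zeros(m), C, size=50_000)
        z = x @ W.T

        print(np.round(np.cov(z, rowvar=False), 2))  # ~ identity
        # In the adjacency-constrained version, row i of W could only
        # weight sensor i and its graph neighbors.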

    One-Bit Quantizer Parametrization for Arbitrary Laplacian Sources

    In this paper we present an exact formula for the total distortion of a one-bit quantizer for an arbitrary Laplacian probability density function (pdf). The formula extends beyond the normalized case of zero mean and unit variance, which is the most widely applied case not only in traditional quantization but also in contemporary solutions that involve quantization. Additionally, the symmetrical quantizer's representation levels are calculated from the minimum-distortion criterion. Note that one-bit quantization is the most sensitive to accuracy degradation and quantization error, which increases the importance of the suggested parametrization of the one-bit quantizer.
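
    The closed form is easy to sanity-check numerically. Below is a sketch under my own notation (scale parameter b, not necessarily the paper's): for the Laplacian pdf f(x) = exp(-|x|/b)/(2b), the MSE-optimal symmetric one-bit quantizer maps x to sign(x)·b, the conditional mean of each half, and its distortion is b², i.e. half the source variance 2b².

        # Monte Carlo check of the one-bit Laplacian quantizer.
        import numpy as np

        rng = np.random.default_rng(2)
        b = 1.7                                  # arbitrary Laplacian scale
        x = rng.laplace(scale=b, size=1_000_000)

        xq = np.where(x >= 0, b, -b)             # representation levels +/- b
        print("empirical D :", np.mean((x - xq) ** 2))
        print("closed form :", b * b)            # D = b^2 = variance / 2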

    Optimal Causal Rate-Constrained Sampling of the Wiener Process

    We consider the following communication scenario. An encoder causally observes the Wiener process and decides when and what to transmit about it. A decoder makes real-time estimates of the process using causally received codewords. We determine the causal encoding and decoding policies that jointly minimize the mean-square estimation error, under the long-term communication rate constraint of R bits per second. We show that an optimal encoding policy can be implemented as a causal sampling policy followed by a causal compressing policy. We prove that the optimal encoding policy samples the Wiener process once the innovation passes either √(1/R) or −√(1/R), and compresses the sign of the innovation (SOI) using a 1-bit codeword. The SOI coding scheme achieves the operational distortion-rate function, which is equal to D^(op)(R) = 1/(6R). Surprisingly, this is significantly better than the distortion-rate tradeoff achieved in the limit of infinite delay by the best non-causal code. This is because the SOI coding scheme leverages the free timing information supplied by the zero-delay channel between the encoder and the decoder. The key to unlocking that gain is the event-triggered nature of the SOI sampling policy. In contrast, the distortion-rate tradeoffs achieved with deterministic sampling policies are much worse: we prove that the causal informational distortion-rate function in that scenario is as high as D_(DET)(R) = 5/(6R). It is achieved by the uniform sampling policy with sampling interval 1/R. In either case, the optimal strategy is to sample the process as fast as possible and to transmit 1-bit codewords to the decoder without delay.
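
    A small simulation makes the SOI scheme concrete (a sketch; the discretization step, horizon, and seed are mine):

        # Simulate the sign-of-innovation (SOI) scheme: sample the Wiener
        # process when the innovation exits +/- sqrt(1/R), send one bit,
        # and let the decoder add sign * sqrt(1/R) to its estimate.
        import numpy as np

        rng = np.random.default_rng(3)
        R, dt, T = 2.0, 2e-4, 200.0              # bits/s, time step, horizon
        a = np.sqrt(1.0 / R)                     # innovation threshold

        w, ref, bits, sq_err = 0.0, 0.0, 0, 0.0
        steps = int(T / dt)
        for _ in range(steps):
            w += np.sqrt(dt) * rng.standard_normal()
            if abs(w - ref) >= a:                # crossing triggers a sample
                ref += a * np.sign(w - ref)      # decoder applies 1-bit SOI codeword
                bits += 1
            sq_err += (w - ref) ** 2

        print(f"rate ~ {bits / T:.2f} bits/s (target {R})")
        print(f"MSE  ~ {sq_err / steps:.4f} (theory 1/(6R) = {1 / (6 * R):.4f})")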

    Concentric Permutation Source Codes

    Permutation codes are a class of structured vector quantizers with a computationally simple encoding procedure based on sorting the scalar components. Using a codebook comprising several permutation codes as subcodes preserves the simplicity of encoding while increasing the number of rate-distortion operating points, improving the convex hull of operating points, and increasing design complexity. We show that when the subcodes are designed with the same composition, optimization of the codebook reduces to a lower-dimensional vector quantizer design within a single cone. Heuristics for reducing design complexity are presented, including an optimization of the rate allocation in a shape-gain vector quantizer with a gain-dependent wrapped spherical shape codebook.
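
    The sorting-based encoder is simple enough to state in a few lines. The sketch below uses an illustrative composition (values mu with multiplicities n, both mine): the nearest codeword in a permutation code assigns the largest values of the initial codeword to the positions of the largest source components.

        # Permutation-code encoding by sorting (a sketch).
        import numpy as np

        mu = np.array([1.2, 0.0, -1.2])          # codeword values, descending
        n = np.array([2, 4, 2])                  # multiplicities; block length 8

        def encode(x):
            # Give mu[0] to the n[0] largest components of x, mu[1] to the
            # next n[1], and so on; this minimizes the distortion over all
            # permutations of the initial codeword.
            order = np.argsort(-x)               # indices, largest first
            cw = np.empty_like(x)
            cw[order] = np.repeat(mu, n)
            return cw

        x = np.random.default_rng(4).standard_normal(n.sum())
        print(np.round(x, 2))
        print(np.round(encode(x), 2))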

    The Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions

    We characterize the rate-distortion function for zero-mean stationary Gaussian sources under the MSE fidelity criterion and subject to the additional constraint that the distortion is uncorrelated with the input. The solution is given by two equations coupled through a single scalar parameter. This has a structure similar to the well-known water-filling solution obtained without the uncorrelated-distortion restriction. Our results fully characterize the unique statistics of the optimal distortion. We also show that, for all positive distortions, the minimum achievable rate subject to the uncorrelation constraint is strictly larger than that given by the unconstrained rate-distortion function. This gap increases with the distortion, tending to zero as the distortion tends to zero and to infinity as the distortion tends to infinity.
    Comment: Revised version, presented at the Data Compression Conference 2008.
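
    For reference, the unconstrained benchmark the abstract compares against is the classical reverse water-filling solution, sketched below. This is not the paper's constrained solution (which shares the single-scalar-parameter structure but uses different coupled equations); the spectrum here is an arbitrary example.

        # Reverse water-filling for a stationary Gaussian source: find the
        # water level theta giving average distortion D, then average the
        # per-frequency rate 0.5*log2(S/theta) over bins where S > theta.
        import numpy as np

        def rdf_waterfill(S, D):
            lo, hi = 0.0, S.max()
            for _ in range(100):                 # bisection on theta
                theta = 0.5 * (lo + hi)
                if np.minimum(theta, S).mean() < D:
                    lo = theta
                else:
                    hi = theta
            rate = np.where(S > theta, 0.5 * np.log2(S / theta), 0.0).mean()
            return theta, rate

        w = np.linspace(0, np.pi, 512)
        S = 1.0 / (1.25 - np.cos(w))             # example rational spectrum
        theta, rate = rdf_waterfill(S, D=0.3)
        print(f"water level {theta:.3f}, rate {rate:.3f} bits/sample")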

    Source Coding Optimization for Distributed Average Consensus

    Consensus is a common method for computing a function of the data distributed among the nodes of a network. Of particular interest is distributed average consensus, whereby the nodes iteratively compute the sample average of the data stored at all the nodes of the network using only near-neighbor communications. In real-world scenarios, these communications must undergo quantization, which introduces distortion into the internode messages. In this thesis, a model for the evolution of the network state statistics at each iteration is developed under the assumptions of Gaussian data and additive quantization error. It is shown that minimization of the communication load, in terms of aggregate source coding rate, can be posed as a generalized geometric program, for which an equivalent convex optimization can efficiently solve for the global minimum. Optimization procedures are developed for rate-distortion-optimal vector quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform quantization. Numerical results demonstrate the performance of these approaches. For small numbers of iterations, the fixed-rate optimizations are verified using exhaustive search. Comparison to the prior art suggests competitive performance under certain circumstances but strongly motivates the incorporation of more sophisticated coding strategies, such as differential, predictive, or Wyner-Ziv coding.
    Comment: Master's Thesis, Electrical Engineering, North Carolina State University.
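
    To make the setting concrete, here is a toy simulation of average consensus with quantized internode messages. The assumptions are mine: a ring topology, Metropolis-style weights, and a fixed quantizer step, rather than the optimized rate schedules developed in the thesis.

        # Quantized distributed average consensus on a ring of m nodes.
        import numpy as np

        rng = np.random.default_rng(5)
        m, iters, step = 8, 50, 0.05             # nodes, iterations, quantizer step

        W = np.eye(m)                            # doubly stochastic mixing matrix
        for i in range(m):
            for j in ((i - 1) % m, (i + 1) % m):
                W[i, j] = 1 / 3
                W[i, i] -= 1 / 3

        x = rng.standard_normal(m)
        target = x.mean()
        for _ in range(iters):
            q = np.round(x / step) * step        # each node quantizes its state
            x = x + W @ q - q                    # update uses quantized messages only

        print("target :", round(target, 4))
        print("states :", np.round(x, 3))        # agree up to quantization error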