Hardware-Limited Task-Based Quantization
Quantization plays a critical role in digital signal processing systems.
Quantizers are typically designed to obtain an accurate digital representation
of the input signal, operating independently of the system task, and are
commonly implemented using serial scalar analog-to-digital converters (ADCs).
In this work, we study hardware-limited task-based quantization, where a system
utilizing a serial scalar ADC is designed to provide a suitable representation
in order to allow the recovery of a parameter vector underlying the input
signal. We propose hardware-limited task-based quantization systems for a fixed
and finite quantization resolution, and characterize their achievable
distortion. We then apply the analysis to the practical setups of channel
estimation and eigen-spectrum recovery from quantized measurements. Our results
illustrate that properly designed hardware-limited systems can approach the
optimal performance achievable with vector quantizers, and that by taking the
underlying task into account, the quantization error can be made negligible
with a relatively small number of bits.
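The pipeline this abstract describes can be sketched in a few lines: a linear analog combiner compresses the observation down to the dimension of the parameter vector *before* a low-resolution serial scalar ADC, so the bit budget is spent only on the task-relevant subspace. The linear model, the pseudo-inverse combiner, and all dimensions below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(x, n_bits, dyn_range):
    """Mid-rise uniform scalar quantizer on [-dyn_range, dyn_range]."""
    step = 2 * dyn_range / (2 ** n_bits)
    q = np.floor(x / step) * step + step / 2
    return np.clip(q, -dyn_range + step / 2, dyn_range - step / 2)

# Linear model x = H @ theta + noise; the task is recovering theta, not x.
k, n = 4, 32                      # parameter dimension << observation dimension
H = rng.standard_normal((n, k))
theta = rng.standard_normal(k)
x = H @ theta + 0.05 * rng.standard_normal(n)

# Task-based analog pre-processing: reduce to k streams *before* the serial
# scalar ADC, so every quantized sample carries task-relevant information.
A = np.linalg.pinv(H)             # illustrative choice of analog combiner
theta_hat = uniform_quantize(A @ x, n_bits=6, dyn_range=4.0)

task_mse = float(np.mean((theta_hat - theta) ** 2))
```

With only 6 bits per sample and k rather than n quantized streams, the task error is dominated by the (small) quantization step, illustrating how a task-aware front end can make the quantization error nearly negligible.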
Concentric Permutation Source Codes
Permutation codes are a class of structured vector quantizers with a
computationally-simple encoding procedure based on sorting the scalar
components. Using a codebook comprising several permutation codes as subcodes
preserves the simplicity of encoding while increasing the number of
rate-distortion operating points, improving the convex hull of operating
points, and increasing design complexity. We show that when the subcodes are
designed with the same composition, optimization of the codebook reduces to a
lower-dimensional vector quantizer design within a single cone. Heuristics for
reducing design complexity are presented, including an optimization of the rate
allocation in a shape-gain vector quantizer with gain-dependent wrapped
spherical shape codebook.
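The sort-based encoding and the idea of several same-composition subcodes at different radii ("concentric" cones) can be illustrated as follows. The template values and the gain set are hypothetical choices for the sketch, not taken from the paper.

```python
import numpy as np

def permutation_encode(x, mu):
    """Variant-I permutation code: the codeword is the permutation of the
    template mu (given in descending order) that matches the rank order of
    the source vector x. Encoding reduces to a single sort."""
    order = np.argsort(-np.asarray(x))     # positions of x, largest first
    codeword = np.empty(len(mu))
    codeword[order] = mu                   # largest template value -> largest x
    return codeword

def concentric_encode(x, templates):
    """Concentric subcodes: several permutation codes with the same
    composition but different radii; pick the subcode codeword with the
    smallest squared error."""
    cands = [permutation_encode(x, mu) for mu in templates]
    return min(cands, key=lambda c: float(np.sum((np.asarray(x) - c) ** 2)))

cw = permutation_encode([0.2, -1.0, 0.9], [1.0, 0.0, -1.0])
templates = [g * np.array([1.0, 0.0, -1.0]) for g in (0.5, 1.0, 2.0)]
best = concentric_encode([0.6, -0.4, 0.1], templates)
```

Because all subcodes share one composition, each candidate is obtained from the same sort of the input, which is the simplicity-preserving property the abstract highlights.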
An optimal design procedure for intraband vector quantized subband coding
Subband coding with vector quantization is addressed in this paper. Data vectors may be formed either across subbands or within them; the former scheme is referred to as interband coding and the latter as intraband coding. Interband coder design is relatively straightforward, since the design of the single codebook involved follows readily from a representative set of interband data vectors. Intraband coder design is more complicated, since it entails the selection of a vector dimension and a bit rate for each subband. The main contribution of this work is an optimal methodology for intraband subband vector quantizer design. The problem formulation includes constraints on the bit rate and the encoding complexity and is solved with nonlinear programming methods. Subband vector quantization image coding in conjunction with a human visual system model is thoroughly investigated. Results of a large number of experiments indicate that the optimal intraband coder yields superior results, from quantitative as well as subjective points of view, to those of the interband coder at comparable bit rates. This improvement becomes more pronounced as the computational complexity of the intraband encoder is allowed to increase.
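The core of per-subband bit-rate selection can be illustrated with the classical closed-form high-resolution allocation; the paper itself solves a richer constrained problem with nonlinear programming, so the formula below is only a simplified stand-in.

```python
import numpy as np

def allocate_bits(variances, total_rate):
    """Closed-form high-resolution allocation: minimize
    sum_i var_i * 2**(-2*R_i) subject to sum_i R_i = total_rate.
    Rates are real-valued and can go negative for very skewed variances;
    a nonlinear-programming formulation (as in the paper) is needed to
    enforce integrality and complexity constraints."""
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))
    return total_rate / v.size + 0.5 * np.log2(v / geo_mean)

rates = allocate_bits([4.0, 1.0], total_rate=4.0)
```

Each subband gets the average rate plus half a bit per factor-of-two that its variance exceeds the geometric mean, so energetic subbands receive more of the budget.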
One-pass adaptive universal vector quantization
The authors introduce a one-pass adaptive universal quantization technique for real, bounded-alphabet, stationary sources. The algorithm operates online without any prior knowledge of the statistics of the sources it might encounter and asymptotically achieves ideal performance on all sources that it sees. The system consists of an encoder and a decoder. At increasing intervals, the encoder refines its codebook using knowledge of the incoming data symbols; the refined codebook is then described to the decoder in the form of updates on the previous codebook. The accuracy to which the codebook is described, and thus the accuracy to which it is known at the decoder, increases as the number of symbols seen grows.
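The encoder/decoder interplay described above, with refinement at increasing intervals, can be sketched for a scalar codebook. The online update rule, learning rate, and doubling schedule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

class OnePassVQ:
    """Sketch of one-pass adaptive VQ: the encoder refines a working codebook
    online and, at doubling intervals, "transmits" the refinement to the
    decoder as an update of the previously shared codebook."""

    def __init__(self, codebook, lr=0.05):
        self.codebook = np.asarray(codebook, dtype=float)  # encoder's working copy
        self.synced = self.codebook.copy()                 # decoder's copy
        self.lr = lr
        self.n_seen = 0
        self.next_sync = 1

    def encode(self, x):
        idx = int(np.argmin(np.abs(self.synced - x)))      # encode with shared book
        # online k-means-style refinement of the encoder's working codebook
        j = int(np.argmin(np.abs(self.codebook - x)))
        self.codebook[j] += self.lr * (x - self.codebook[j])
        self.n_seen += 1
        if self.n_seen >= self.next_sync:                  # intervals 1, 2, 4, 8, ...
            self.synced = self.codebook.copy()             # codebook update "sent"
            self.next_sync *= 2
        return idx

vq = OnePassVQ(codebook=[-1.0, 1.0])
indices = [vq.encode(0.9) for _ in range(16)]
```

Encoding always uses the decoder-synchronized codebook, so both sides stay consistent while the working codebook drifts toward the observed data between synchronization points.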
On Predictive Coding for Erasure Channels Using a Kalman Framework
We present a new design method for robust low-delay coding of autoregressive (AR) sources for transmission across erasure channels. Rather than following existing designs, the method treats the encoder as a mechanism that produces signal measurements from which the decoder estimates the original signal. It is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model, so that the encoder output acts as the Kalman measurement at the decoder. The encoder and decoder are designed offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. When the transmitted quantized prediction errors are subject to loss, the method provides considerable signal-to-noise ratio (SNR) gains over the same coding framework optimized for the lossless case. The design method applies to stationary AR sources of any order. We demonstrate it in a framework based on a generalized differential pulse code modulation (DPCM) encoder; the presented principles can also be applied to more complicated coding systems that incorporate predictive coding.
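The decoder-side idea, a Kalman filter that treats the quantized data as measurements and falls back to pure prediction on erasures, can be stripped to a minimal sketch for an AR(1) source. All constants (AR coefficient, step size, loss rate) are illustrative, and the quantized sample is treated directly as the measurement rather than going through the paper's DPCM encoder state-space representation.

```python
import numpy as np

rng = np.random.default_rng(1)

a, q_var = 0.9, 1.0            # AR(1) source: s[t] = a*s[t-1] + w[t]
step = 0.25                    # uniform quantizer step size
r_var = step ** 2 / 12         # linear (additive-noise) model of quantization

def kalman_step(x, P, z, received):
    """One decoder step: time update, then a measurement update only when
    the packet survived the erasure channel."""
    x, P = a * x, a * a * P + q_var
    if received:
        K = P / (P + r_var)                    # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P
    return x, P

s, x, P, sq_errs = 0.0, 0.0, 1.0, []
for t in range(2000):
    s = a * s + rng.normal(scale=np.sqrt(q_var))
    z = np.round(s / step) * step              # quantized observation of s
    received = rng.random() > 0.1              # 10% packet loss
    x, P = kalman_step(x, P, z, received)
    sq_errs.append((s - x) ** 2)

mse = float(np.mean(sq_errs))
source_var = q_var / (1 - a * a)               # stationary variance of s
```

On a lost packet the error covariance P grows and the next received sample is weighted more heavily, which is exactly the mechanism that makes the Kalman view robust to erasures.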
Efficient compression of motion compensated residuals
EThOS - Electronic Theses Online Service, United Kingdom
Image coding using entropy-constrained residual vector quantization
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages: it can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements, and it can be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
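The two ingredients, residual (multi-stage) quantization and an entropy constraint folded into the encoding rule, can be shown in a small sketch. The toy codebooks and codeword lengths are invented for illustration; the paper's design algorithm additionally optimizes the codebooks themselves.

```python
import numpy as np

def rvq_encode(x, codebooks, lengths, lam=0.0):
    """Multi-stage residual VQ: each stage quantizes the residual left by
    the previous one. With lam > 0, stage selection minimizes the Lagrangian
    distortion + lam * codeword_length, which is the entropy-constrained
    idea (lengths standing in for -log2 of codeword probabilities)."""
    residual = np.asarray(x, dtype=float).copy()
    recon = np.zeros_like(residual)
    idxs = []
    for cb, lens in zip(codebooks, lengths):
        costs = [float(np.sum((residual - c) ** 2)) + lam * l
                 for c, l in zip(cb, lens)]
        j = int(np.argmin(costs))
        idxs.append(j)
        recon = recon + cb[j]
        residual = residual - cb[j]
    return idxs, recon

cb1 = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
cb2 = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
       np.array([0.0, 1.0]), np.array([1.0, 1.0])]
lengths = [[1, 1], [1, 2, 2, 2]]

idxs, recon = rvq_encode([4.9, 4.1], [cb1, cb2], lengths, lam=0.0)
idxs_ec, recon_ec = rvq_encode([4.9, 4.1], [cb1, cb2], lengths, lam=1.0)
```

With lam = 0 the second stage picks the closest refinement; with lam > 0 the cheaper one-bit codeword wins, trading a little distortion for rate, which is how the entropy constraint trims the bit cost.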