A Low-Complexity and Asymptotically Optimal Coding Strategy for Gaussian Vector Sources
In this paper, we present a low-complexity coding strategy to encode (compress) finite-length data blocks of Gaussian vector sources. We show that for large enough data blocks of a Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of the coding strategy tends to the lowest possible rate. Besides being low-complexity, the strategy does not require knowledge of the correlation matrix of such data blocks. We also show that this coding strategy is appropriate for encoding the most relevant Gaussian vector sources, namely wide sense stationary (WSS), moving average (MA), autoregressive (AR), and ARMA vector sources.
Theory of optimal orthonormal subband coders
The theory of the orthogonal transform coder and methods for its optimal design have been known for a long time. We derive a set of necessary and sufficient conditions for the coding-gain optimality of an orthonormal subband coder for given input statistics. We also show how these conditions can be satisfied by the construction of a sequence of optimal compaction filters one at a time. Several theoretical properties of optimal compaction filters and optimal subband coders are then derived, especially pertaining to behavior as the number of subbands increases. Significant theoretical differences between optimum subband coders, transform coders, and predictive coders are summarized. Finally, conditions are presented under which optimal orthonormal subband coders yield as much coding gain as biorthogonal ones for a fixed number of subbands.
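As a rough illustration of the coding-gain criterion this abstract optimizes (a standard high-rate result, not the paper's filter construction): for an orthonormal transform, the coding gain is the ratio of the arithmetic to the geometric mean of the transform-coefficient variances. A minimal NumPy sketch, assuming an AR(1)-style covariance and using the Karhunen-Loeve transform (KLT) as the optimal orthonormal transform:

```python
import numpy as np

def coding_gain(cov, transform):
    """High-rate coding gain of an orthonormal transform coder:
    arithmetic mean over geometric mean of the coefficient variances."""
    var = np.diag(transform @ cov @ transform.T)  # variances of y = T x
    return var.mean() / np.exp(np.log(var).mean())

# Example: AR(1)-style Toeplitz covariance with correlation 0.9.
rho = 0.9
idx = np.arange(8)
cov = rho ** np.abs(np.subtract.outer(idx, idx))

# The KLT (rows = eigenvectors of the covariance) maximizes the
# coding gain over all orthonormal transforms.
_, vecs = np.linalg.eigh(cov)
klt = vecs.T
identity = np.eye(8)

print(coding_gain(cov, identity))  # 1.0: no decorrelation, no gain
print(coding_gain(cov, klt))       # > 1: gain from decorrelation
```

The subband coders of the paper generalize this picture: as the number of subbands grows, ideal compaction filters play the role the KLT plays here for a fixed block size.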
On the asymptotic optimality of a low-complexity coding strategy for WSS, MA, and AR vector sources
In this paper, we study the asymptotic optimality of a low-complexity coding strategy for
Gaussian vector sources. Specifically, we study the convergence speed of the rate of such a coding
strategy when it is used to encode the most relevant vector sources, namely wide sense stationary
(WSS), moving average (MA), and autoregressive (AR) vector sources. We also study how the coding
strategy considered performs when it is used to encode perturbed versions of those relevant sources.
More precisely, we give a sufficient condition for such perturbed versions so that the convergence
speed of the rate remains unaltered.
Rate-distortion function upper bounds for Gaussian vectors and their applications in coding AR sources
source coding; rate-distortion function (RDF); Gaussian vector; autoregressive (AR)
source; discrete Fourier transform (DFT)
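The Gaussian-vector rate-distortion function (RDF) that such upper bounds target has a standard closed form via reverse water-filling over the covariance eigenvalues. A minimal sketch of that textbook result (not the paper's bounds), with the water level found by bisection:

```python
import numpy as np

def gaussian_vector_rdf(eigvals, distortion):
    """RDF (in bits) of a zero-mean Gaussian vector with covariance
    eigenvalues `eigvals`, via reverse water-filling: each component
    gets distortion min(lambda_k, theta), with theta chosen so the
    total distortion equals the budget."""
    eigvals = np.asarray(eigvals, dtype=float)
    lo, hi = 0.0, eigvals.max()
    for _ in range(200):                    # bisect on the water level
        theta = (lo + hi) / 2
        if np.minimum(eigvals, theta).sum() < distortion:
            lo = theta
        else:
            hi = theta
    theta = (lo + hi) / 2
    return 0.5 * np.log2(np.maximum(eigvals / theta, 1.0)).sum()

eig = np.array([4.0, 1.0, 0.25])
print(gaussian_vector_rdf(eig, 0.75))  # 3.0 bits (theta = 0.25)
```

For an AR source, the eigenvalues above would come from the source covariance matrix; DFT-based bounds replace them with computable surrogates.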
Results on lattice vector quantization with dithering
The statistical properties of the error in uniform scalar quantization have been analyzed by a number of authors in the past, and the topic is well understood today. The analysis has also been extended to the case of dithered quantizers, and the advantages and limitations of dithering have been studied and well documented in the literature. Lattice vector quantization is a natural extension into multiple dimensions of uniform scalar quantization. Accordingly, there is a natural extension of the analysis of the quantization error. It is the purpose of this paper to present this extension and to elaborate on some of the new aspects that come with multiple dimensions. We show that, analogous to the one-dimensional case, the quantization error vector can be rendered independent of the input in subtractive vector dithering. In this case, the total mean square error is a function of only the underlying lattice, and there are lattices that minimize this error. We give a necessary condition on such lattices. In nonsubtractive vector dithering, we show how to render moments of the error vector independent of the input by using appropriate dither random vectors. These results can readily be applied to the case of wide sense stationary (WSS) vector random processes, by use of iid dither sequences. We consider the problem of pre- and post-filtering around a dithered lattice quantizer, and show how these filters should be designed in order to minimize the overall quantization error in the mean square sense. For the special case where the WSS vector process is obtained by blocking a WSS scalar process, the optimum prefilter matrix reduces to the blocked version of the well-known scalar half-whitening filter.
Asymptotic Task-Based Quantization with Application to Massive MIMO
Quantizers take part in nearly every digital signal processing system which
operates on physical signals. They are commonly designed to accurately
represent the underlying signal, regardless of the specific task to be
performed on the quantized data. In systems working with high-dimensional
signals, such as massive multiple-input multiple-output (MIMO) systems, it is
beneficial to utilize low-resolution quantizers, due to cost, power, and memory
constraints. In this work we study quantization of high-dimensional inputs,
aiming at improving performance under resolution constraints by accounting for
the system task in the quantizer's design. We focus on the task of recovering a
desired signal statistically related to the high-dimensional input, and analyze
two quantization approaches: We first consider vector quantization, which is
typically computationally infeasible, and characterize the optimal performance
achievable with this approach. Next, we focus on practical systems which
utilize hardware-limited scalar uniform analog-to-digital converters (ADCs),
and design a task-based quantizer under this model. The resulting system
accounts for the task by linearly combining the observed signal into a lower
dimension prior to quantization. We then apply our proposed technique to
channel estimation in massive MIMO networks. Our results demonstrate that a
system utilizing low-resolution scalar ADCs can approach the optimal channel
estimation performance by properly accounting for the task in the system
design.
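The task-based architecture described above (a linear combination down to the task dimension, followed by scalar uniform ADCs) can be sketched as follows. This is an illustrative pipeline with an LMMSE combiner and arbitrary dimensions, not the paper's optimized design:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 40, 4, 20_000        # input dim, task dim, samples
sigma2 = 0.01                  # observation noise variance (0.1**2)

H = rng.normal(size=(n, k))                       # known linear model
s = rng.normal(size=(k, m))                       # desired low-dim signal
x = H @ s + np.sqrt(sigma2) * rng.normal(size=(n, m))  # high-dim input

# LMMSE combiner A (k x n): reduce x to the task dimension *before*
# the ADCs, so only k low-resolution converters are needed.
A = np.linalg.solve(H @ H.T + sigma2 * np.eye(n), H).T
z = A @ x

def uniform_adc(v, step):
    """Hardware-limited scalar uniform (mid-tread) quantizer."""
    return step * np.round(v / step)

s_hat = uniform_adc(z, step=0.05)                 # quantized task estimate
mse = np.mean((s_hat - s) ** 2)
print(mse)    # close to the unquantized LMMSE error for a fine step
```

The design choice this illustrates: quantizing the k-dimensional combined signal spends the resolution budget on what the task needs, rather than on faithfully representing all n input coordinates.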
Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source
Random binning is an efficient, yet complex, coding technique for the
symmetric L-description source coding problem. We propose an alternative
approach, that uses the quantized samples of a bandlimited source as
"descriptions". By the Nyquist condition, the source can be reconstructed if
enough samples are received. We examine a coding scheme that combines sampling
and noise-shaped quantization for a scenario in which only K < L descriptions
or all L descriptions are received. Some of the received K-sets of descriptions
correspond to uniform sampling while others to non-uniform sampling. This
scheme achieves the optimum rate-distortion performance for uniform-sampling
K-sets, but suffers noise amplification for nonuniform-sampling K-sets. We then
show that by increasing the sampling rate and adding a random-binning stage,
the optimal operation point is achieved for any K-set.

Comment: Presented at ITW'13. 5 pages, two-column mode, 3 figures.
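The idea of samples as descriptions can be illustrated with a periodic signal bandlimited to B harmonics: any K >= 2B+1 distinct samples per period, uniform or not, determine the signal. A minimal sketch (illustrative only; it omits the paper's noise-shaped quantization and random-binning stage) that recovers the signal from a non-uniform subset of the L descriptions by least squares on a real Fourier basis:

```python
import numpy as np

rng = np.random.default_rng(2)
B = 5          # highest harmonic: 2*B + 1 real degrees of freedom
L = 16         # total "descriptions" = samples per period

def fourier_basis(t, B):
    """Real Fourier basis (DC, cos, sin up to harmonic B) at times t."""
    cols = [np.ones_like(t)]
    for k in range(1, B + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.stack(cols, axis=1)

coeffs = rng.normal(size=2 * B + 1)          # the bandlimited source
t_all = np.arange(L) / L                     # L uniform sample times
samples = fourier_basis(t_all, B) @ coeffs   # one description per sample

# Receive only K = 12 of the L descriptions (a non-uniform subset):
keep = np.sort(rng.choice(L, size=12, replace=False))
recovered, *_ = np.linalg.lstsq(fourier_basis(t_all[keep], B),
                                samples[keep], rcond=None)

print(np.max(np.abs(recovered - coeffs)))    # ~ 0: exact recovery
```

With quantized samples, the least-squares inverse amplifies the quantization noise by the conditioning of the non-uniform sampling pattern, which is exactly the noise-amplification effect the scheme above counters by oversampling and random binning.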