295 research outputs found

    Statistically optimum pre- and postfiltering in quantization

    We consider the optimization of pre- and postfilters surrounding a quantization system. The goal is to optimize the filters so that the mean square error is minimized under the key constraint that the quantization noise variance is directly proportional to the variance of the quantization system input. Unlike some previous work, the postfilter is not restricted to be the inverse of the prefilter. With no order constraint on the filters, we present closed-form solutions for the optimum pre- and postfilters when the quantization system is a uniform quantizer. Using these optimum solutions, we obtain a coding gain expression for the system under study. The coding gain expression clearly indicates that, at high bit rates, there is no loss of generality in restricting the postfilter to be the inverse of the prefilter. We then repeat the same analysis with first-order pre- and postfilters of the form 1 + αz^-1 and 1/(1 + γz^-1). Specifically, we study two cases: 1) FIR prefilter, IIR postfilter and 2) IIR prefilter, FIR postfilter. For each case, we obtain a mean square error expression, optimize the coefficients α and γ, and provide examples comparing the coding gain performance with the case α = γ. In the last section, we assume that the quantization system is an orthonormal perfect reconstruction filter bank. To apply the optimum pre- and postfilters derived earlier, the output of the filter bank must be wide-sense stationary (WSS), which, in general, is not the case. We provide two theorems, each under a different set of assumptions, that guarantee the wide-sense stationarity of the filter bank output. We then propose a suboptimum procedure to increase the coding gain of the orthonormal filter bank.
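
    Below is a minimal numerical sketch (not the paper's derivation) of the first-order case: an AR(1) source is passed through a prefilter 1 + αz^-1, a uniform quantizer whose step size is tied to the input standard deviation (so the noise variance scales with the input variance, as assumed above), and a postfilter 1/(1 + γz^-1). The source model, the quantizer loading factor, and the example values of α and γ are illustrative assumptions, not the closed-form optima derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_source(n, rho=0.9):
    """Zero-mean, unit-variance AR(1) source x[k] = rho*x[k-1] + w[k]."""
    x = np.zeros(n)
    w = rng.standard_normal(n) * np.sqrt(1 - rho**2)
    for k in range(1, n):
        x[k] = rho * x[k - 1] + w[k]
    return x

def uniform_quantize(v, bits=4, loading=4.0):
    """Uniform quantizer whose step (and hence noise variance) scales with the input std."""
    step = 2 * loading * np.std(v) / 2**bits
    return step * np.round(v / step)

def prepost_mse(x, alpha, gamma, bits=4):
    """MSE of prefilter (1 + alpha z^-1) -> uniform quantizer -> postfilter 1/(1 + gamma z^-1)."""
    v = x + alpha * np.concatenate(([0.0], x[:-1]))   # FIR prefilter
    vq = uniform_quantize(v, bits)                    # quantization stage
    xhat = np.zeros_like(vq)                          # IIR postfilter 1/(1 + gamma z^-1)
    for k in range(len(vq)):
        xhat[k] = vq[k] - gamma * (xhat[k - 1] if k else 0.0)
    return np.mean((x - xhat) ** 2)

x = ar1_source(50_000)
print("alpha = gamma = 0.5     :", prepost_mse(x, 0.5, 0.5))   # postfilter inverse of prefilter
print("alpha = 0.6, gamma = 0.4:", prepost_mse(x, 0.6, 0.4))   # separately chosen pair
```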

    On Predictive Coding for Erasure Channels Using a Kalman Framework

    We present a new design method for robust low-delay coding of autoregressive (AR) sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts: the encoder is viewed as a mechanism that produces signal measurements from which the decoder estimates the original signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model, so that the encoder appears at the decoder as the Kalman measurement. The encoder and decoder are designed offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. When the transmitted quantized prediction errors are subject to loss, the method is shown to provide considerable gains in signal-to-noise ratio (SNR) compared to the same coding framework optimized for the loss-free case. The design method applies to stationary AR sources of any order. We demonstrate the method in a framework based on a generalized differential pulse code modulation (DPCM) encoder. The presented principles can also be applied to more complicated coding systems that incorporate predictive coding.
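
    As a rough illustration of the coding framework only (not the iterative offline design described above), the toy sketch below runs a DPCM encoder on an AR(1) source, drops the quantized prediction errors at random, and lets the decoder track the source with a scalar Kalman filter that treats the mirrored DPCM reconstruction as a noisy measurement and skips the update step on erased samples. The AR coefficient, quantizer step, and loss probability are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(1)

a, n = 0.95, 20_000        # AR(1) coefficient and number of samples (illustrative)
q_step = 0.25              # quantizer step for the prediction error
p_loss = 0.1               # erasure probability

# AR(1) source driven by unit-variance white noise
x = np.zeros(n)
w = rng.standard_normal(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + w[k]

# DPCM encoder: quantize the one-step prediction error
eq, xq_enc = np.zeros(n), 0.0
for k in range(n):
    e = x[k] - a * xq_enc
    eq[k] = q_step * np.round(e / q_step)
    xq_enc = a * xq_enc + eq[k]

received = rng.random(n) > p_loss        # i.i.d. erasure channel

# Decoder: scalar Kalman filter on x[k] = a*x[k-1] + w[k]; the mirrored DPCM
# reconstruction acts as a measurement z[k] = x[k] + v[k] with quantization noise v.
Q, R = 1.0, q_step**2 / 12.0             # process and quantization-noise variances
xhat, P, zrec = 0.0, 1.0, 0.0
xdec = np.zeros(n)
for k in range(n):
    zrec = a * zrec + (eq[k] if received[k] else 0.0)   # naive loss concealment
    xhat, P = a * xhat, a * a * P + Q                   # Kalman predict
    if received[k]:                                     # Kalman update only when data arrives
        K = P / (P + R)
        xhat, P = xhat + K * (zrec - xhat), (1 - K) * P
    xdec[k] = xhat

snr = 10 * np.log10(np.var(x) / np.mean((x - xdec) ** 2))
print(f"decoder SNR with {p_loss:.0%} erasures: {snr:.1f} dB")
```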

    Asymptotic Task-Based Quantization with Application to Massive MIMO

    Quantizers are a part of nearly every digital signal processing system that operates on physical signals. They are commonly designed to accurately represent the underlying signal, regardless of the specific task to be performed on the quantized data. In systems working with high-dimensional signals, such as massive multiple-input multiple-output (MIMO) systems, it is beneficial to utilize low-resolution quantizers due to cost, power, and memory constraints. In this work we study quantization of high-dimensional inputs, aiming to improve performance under resolution constraints by accounting for the system task in the quantizer design. We focus on the task of recovering a desired signal statistically related to the high-dimensional input and analyze two quantization approaches. We first consider vector quantization, which is typically computationally infeasible, and characterize the optimal performance achievable with this approach. Next, we focus on practical systems which utilize hardware-limited scalar uniform analog-to-digital converters (ADCs), and design a task-based quantizer under this model. The resulting system accounts for the task by linearly combining the observed signal into a lower dimension prior to quantization. We then apply our proposed technique to channel estimation in massive MIMO networks. Our results demonstrate that a system utilizing low-resolution scalar ADCs can approach the optimal channel estimation performance by properly accounting for the task in the system design.
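
    The sketch below is a simplified stand-in for the combine-before-quantize idea (it omits the optimized digital post-processing stage and uses a plain LMMSE combiner rather than the paper's task-based design): a Gaussian signal s is observed through x = Hs + n, and estimating s from a few coarsely quantized linear combinations of x is compared with quantizing the same number of raw observation entries and estimating afterwards. All dimensions, the noise level sigma2, and the ADC model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_obs, n_task, bits, n_mc = 64, 4, 3, 5_000    # observation dim, task dim, ADC bits, trials
sigma2 = 0.1                                    # observation noise variance

H = rng.standard_normal((n_obs, n_task)) / np.sqrt(n_obs)
W = H.T @ np.linalg.inv(H @ H.T + sigma2 * np.eye(n_obs))   # LMMSE combiner (n_task x n_obs)

def adc(V, bits):
    """Scalar uniform mid-rise ADC spanning roughly +/- 3 standard deviations."""
    step = 6 * V.std() / 2**bits
    idx = np.clip(np.floor(V / step), -(2**(bits - 1)), 2**(bits - 1) - 1)
    return step * (idx + 0.5)

# Monte Carlo: rows are independent trials
S = rng.standard_normal((n_mc, n_task))
X = S @ H.T + np.sqrt(sigma2) * rng.standard_normal((n_mc, n_obs))

# (a) task-aware: combine down to n_task dimensions before the ADCs
S_task = adc(X @ W.T, bits)

# (b) task-ignorant: quantize n_task raw observation entries, then LMMSE-estimate s from them
Hs = H[:n_task]
Wsub = Hs.T @ np.linalg.inv(Hs @ Hs.T + sigma2 * np.eye(n_task))
S_naive = adc(X[:, :n_task], bits) @ Wsub.T

print("combine-then-quantize MSE :", np.mean((S - S_task) ** 2))
print("quantize-then-estimate MSE:", np.mean((S - S_naive) ** 2))
```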

    New techniques in signal coding


    Optimization of Coding of AR Sources for Transmission Across Channels with Loss


    New adaptive pixel decimation for block motion vector estimation


    Linear Precoding with Low-Resolution DACs for Massive MU-MIMO-OFDM Downlink

    We consider the downlink of a massive multiuser (MU) multiple-input multiple-output (MIMO) system in which the base station (BS) is equipped with low-resolution digital-to-analog converters (DACs). In contrast to most existing results, we assume that the system operates over a frequency-selective wideband channel and uses orthogonal frequency-division multiplexing (OFDM) to simplify equalization at the user equipments (UEs). Furthermore, we consider the practically relevant case of oversampling DACs. We theoretically analyze the uncoded bit error rate (BER) performance with linear precoders (e.g., zero forcing) and quadrature phase-shift keying using Bussgang's theorem. We also develop a lower bound on the information-theoretic sum-rate throughput achievable with Gaussian inputs, which can be evaluated in closed form for the case of 1-bit DACs. For the case of multi-bit DACs, we derive approximate, yet accurate, expressions for the distortion caused by low-precision DACs, which can be used to establish lower bounds on the corresponding sum-rate throughput. Our results demonstrate that, for a massive MU-MIMO-OFDM system with a 128-antenna BS serving 16 UEs, only 3-4 DAC bits are required to achieve an uncoded BER of 10^-4 with negligible performance loss compared to the infinite-resolution case, at the cost of additional out-of-band emissions. Furthermore, our results highlight the importance of taking into account the inherent spatial and temporal correlations caused by low-precision DACs.
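
    As a much-reduced illustration of the DAC effect (single-carrier flat fading rather than wideband OFDM, no oversampling, and no thermal noise), the sketch below applies zero-forcing precoding to QPSK symbols, passes the precoded signal through per-antenna uniform DACs of 1-4 bits, and measures the per-user signal-to-distortion ratio via a Bussgang-style linear gain fit. The antenna and user counts and the DAC model are illustrative; this is not the BER or sum-rate analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

B, U, n_sym = 128, 16, 10_000        # BS antennas, single-antenna users, channel uses

def dac(v, bits):
    """Uniform mid-rise DAC applied separately to the real and imaginary parts."""
    if bits is None:                 # infinite-resolution reference
        return v
    step = 6 * v.real.std() / 2**bits
    def q(u):
        idx = np.clip(np.floor(u / step), -(2**(bits - 1)), 2**(bits - 1) - 1)
        return step * (idx + 0.5)
    return q(v.real) + 1j * q(v.imag)

# i.i.d. Rayleigh flat-fading channel and zero-forcing precoder
H = (rng.standard_normal((U, B)) + 1j * rng.standard_normal((U, B))) / np.sqrt(2)
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# unit-energy QPSK payload for every user
s = ((2 * rng.integers(0, 2, (U, n_sym)) - 1)
     + 1j * (2 * rng.integers(0, 2, (U, n_sym)) - 1)) / np.sqrt(2)

# the unquantized reference (bits=None) is limited only by numerical precision
for bits in (1, 2, 3, 4, None):
    y = H @ dac(P @ s, bits)                        # noiseless received signal
    g = np.sum(y * s.conj(), axis=1) / n_sym        # per-user Bussgang-style linear gain (|s|^2 = 1)
    d = y - g[:, None] * s                          # residual quantization distortion
    sdr = np.mean(np.abs(g) ** 2) / np.mean(np.abs(d) ** 2)
    label = "inf" if bits is None else f"{bits}"
    print(f"{label:>3}-bit DACs: signal-to-distortion ratio = {10 * np.log10(sdr):6.1f} dB")
```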
