
    Goal-Oriented Quantization: Analysis, Design, and Application to Resource Allocation

    In this paper, the situation in which a receiver has to execute a task from a quantized version of the information source of interest is considered. The task is modeled by the minimization problem of a general goal function f(x; g) for which the decision x has to be taken from a quantized version of the parameters g. This problem is relevant in many applications, e.g., radio resource allocation (RA), high spectral efficiency communications, controlled systems, or data clustering in the smart grid. By resorting to high-resolution (HR) analysis, it is shown how to design a quantizer that minimizes the gap between the minimum of f (which would be reached by knowing g perfectly) and what is effectively reached with a quantized g. The conducted formal analysis provides both quantization strategies in the HR regime and insights for the general regime, and allows a practical algorithm to be designed. The analysis also provides some elements on the new and fundamental problem of the relationship between the regularity properties of the goal function and the hardness of quantizing its parameters. The derived results are discussed and supported by a rich numerical performance analysis in which known RA goal functions are studied, exhibiting very significant improvements obtained by tailoring the quantization operation to the final task.
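    A minimal numerical sketch of the idea described above (not the paper's actual design): quantize the parameter g, take the decision that is optimal for the quantized value, and measure the resulting optimality gap against the decision taken with perfect knowledge of g. The quadratic goal function, the uniform quantizer, and all numbers below are illustrative assumptions.

```python
# Illustrative goal-oriented quantization experiment (assumed toy goal function).
import numpy as np

def goal(x, g):
    # Toy goal f(x; g); the paper treats a general goal function.
    return (x - g) ** 2 + 0.1 * x ** 2

def best_decision(g):
    # Closed-form minimizer of the toy goal for a given parameter g.
    return g / 1.1

def uniform_quantizer(g, n_bits, lo=0.0, hi=1.0):
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((g - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
g_samples = rng.uniform(0.0, 1.0, 10_000)

for n_bits in (1, 2, 4, 8):
    g_hat = uniform_quantizer(g_samples, n_bits)
    gap = goal(best_decision(g_hat), g_samples) - goal(best_decision(g_samples), g_samples)
    print(f"{n_bits} bits: mean optimality gap = {gap.mean():.3e}")
```

    The gap, rather than the mean squared error on g itself, is the quantity a goal-oriented quantizer would be tailored to reduce.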

    On Predictive Coding for Erasure Channels Using a Kalman Framework

    We present a new design method for robust low-delay coding of autoregressive (AR) sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts: the encoder is treated as a mechanism that produces signal measurements from which the decoder estimates the original signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model, so that the encoder is represented by the Kalman measurement equation at the decoder. The presented method designs the encoder and decoder offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. When the transmitted quantized prediction errors are subject to loss, the design method is shown to provide considerable gains in signal-to-noise ratio (SNR) compared to the same coding framework optimized for no loss. The design method applies to stationary AR sources of any order. We demonstrate the method in a framework based on a generalized differential pulse code modulation (DPCM) encoder. The presented principles can also be applied to more complicated coding systems that incorporate predictive coding.
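    A hedged sketch of the decoder-side idea, heavily simplified: model the quantized transmission as a noisy linear measurement of an AR(1) state and let the decoder run a Kalman filter that simply skips its update step on erased packets. The DPCM prediction loop and the paper's iterative offline encoder/decoder optimization are left out; all parameters below are assumptions.

```python
# Kalman estimation at the decoder with an additive quantization-noise model
# and Bernoulli packet erasures (illustrative first-order setup).
import numpy as np

rng = np.random.default_rng(1)
a, sigma_w = 0.9, 1.0            # AR(1) coefficient and innovation std (assumed)
q_step = 0.5                     # quantizer step size (assumed)
R = q_step ** 2 / 12.0           # linear (uniform) quantization-noise variance
p_loss = 0.2                     # packet erasure probability (assumed)
n = 5_000

x = np.zeros(n)
x_hat = np.zeros(n)
P = sigma_w ** 2 / (1 - a ** 2)  # stationary prior variance

for t in range(1, n):
    x[t] = a * x[t - 1] + sigma_w * rng.standard_normal()
    y = q_step * np.round(x[t] / q_step)      # quantized "measurement" of the state
    # Kalman predict step.
    x_pred = a * x_hat[t - 1]
    P = a * a * P + sigma_w ** 2
    if rng.random() >= p_loss:                # packet received: Kalman update
        K = P / (P + R)
        x_hat[t] = x_pred + K * (y - x_pred)
        P = (1.0 - K) * P
    else:                                     # packet erased: keep the prediction
        x_hat[t] = x_pred

snr = 10 * np.log10(np.var(x[1:]) / np.mean((x[1:] - x_hat[1:]) ** 2))
print(f"decoder SNR ~ {snr:.1f} dB with {p_loss:.0%} packet loss")
```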

    Multiantenna Wireless Architectures with Low Precision Converters

    One of the key technology enablers of the next generation of wireless communications is massive multiple-input multiple-output (MIMO), in which the number of antennas at the base station (BS) is scaled up to the order of tens or hundreds. It provides considerable energy and spectral efficiency by spatial multiplexing, which enables serving multiple user equipments (UEs) on the same time and frequency resource. However, the deployment of such large-scale systems can be challenging, and this thesis studies one of the challenges in the optimal implementation of such systems. More specifically, we consider a fully digital setup, in which each antenna at the BS is connected to a pair of data converters through a radio-frequency (RF) chain, all located at the remote radio head (RRH), and there is a limitation on the capacity of the fronthaul link, which connects the RRH to the baseband unit (BBU), where digital signal processing is performed. The fronthaul capacity limitation calls for a trade-off between some of the design parameters, including the number of antennas, the resolution of the data converters, and the over-sampling ratio. In this thesis, we study the aforementioned trade-off considering the first two design parameters. First, we consider a quasi-static scenario, in which the fading coefficients do not change throughout the transmission of a codeword. The channel state information (CSI) is assumed to be unknown at the BS and is acquired through pilot transmission. We develop a framework based on the mismatched decoding rule to find lower bounds on the achievable rates. The bi-directional rate at 10% outage probability is selected as the performance metric to determine the recommended architecture in terms of the number of antennas and the resolution of the data converters. Second, we adapt our framework to the finite-blocklength regime, considering a realistic mm-wave multi-user clustered MIMO channel model and a well-suited channel estimation algorithm. We start our derivations from the random coding union bound with parameter s (RCUs) and apply approximations to derive the corresponding normal approximation and, further, an easy-to-compute outage-with-correction bound. We illustrate the accuracy of our approximations, and use the outage-with-correction bound to investigate the optimal architecture in terms of the number of antennas and the resolution of the data converters. Our results show that in the low signal-to-noise ratio (SNR) regime, we benefit from lowering the resolution of the data converters and increasing the number of antennas, while at high SNR, for a practical scenario, the optimal architecture can move to 3 or 4 bits of resolution since a large array gain is no longer needed.
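    A back-of-envelope sketch of the fronthaul trade-off discussed above: with a fixed fronthaul budget, the per-antenna converter resolution limits how many antennas can be supported. The rate model (two converters per antenna for I/Q, one sample stream each) and all numbers are illustrative assumptions, not figures from the thesis.

```python
# Hypothetical fronthaul budget vs. converter resolution trade-off.
FRONTHAUL_GBPS = 100.0     # assumed fronthaul capacity
SAMPLE_RATE_GSPS = 1.0     # assumed per-converter sample rate (oversampling folded in)

for bits in (1, 2, 3, 4, 6, 8, 12):
    rate_per_antenna = 2 * bits * SAMPLE_RATE_GSPS   # Gbit/s for the I/Q converter pair
    max_antennas = int(FRONTHAUL_GBPS // rate_per_antenna)
    print(f"{bits:2d}-bit converters -> at most {max_antennas:3d} antennas "
          f"within a {FRONTHAUL_GBPS:.0f} Gbit/s fronthaul")
```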

    Optimization of Coding of AR Sources for Transmission Across Channels with Loss


    Operational Rate-Distortion Performance of Single-source and Distributed Compressed Sensing

    We consider correlated and distributed sources without cooperation at the encoder. For these sources, we derive the best achievable performance in the rate-distortion sense of any distributed compressed sensing scheme, under the constraint of high-rate quantization. Moreover, under this model we derive a closed-form expression for the rate gain achieved by taking into account the correlation of the sources at the receiver and a closed-form expression for the average performance of the oracle receiver for independent and joint reconstruction. Finally, we show experimentally that exploiting the correlation between the sources performs close to optimal and that the only penalty is due to the missing knowledge of the sparsity support, as in (non-distributed) compressed sensing. Even though the derivation is performed in the large-system regime, where signal and system parameters tend to infinity, numerical results show that the equations match simulations for parameter values of practical interest.
    Comment: To appear in IEEE Transactions on Communications
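    A minimal sketch of the "oracle receiver" baseline referenced above: the receiver is told the true sparsity support and reconstructs by least squares on that support from uniformly quantized random measurements. Dimensions, the quantizer, and the Gaussian sensing matrix are illustrative assumptions.

```python
# Oracle-receiver reconstruction from quantized compressed-sensing measurements.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 96, 8                           # signal length, measurements, sparsity (assumed)
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x

for n_bits in (2, 4, 6, 8):
    step = (y.max() - y.min()) / (2 ** n_bits)
    y_q = step * np.round(y / step)            # uniform scalar quantization of measurements
    # Oracle reconstruction: least squares restricted to the known support.
    x_hat = np.zeros(n)
    x_hat[support], *_ = np.linalg.lstsq(A[:, support], y_q, rcond=None)
    mse = np.mean((x - x_hat) ** 2)
    print(f"{n_bits} bits/measurement: oracle MSE = {mse:.2e}")
```

    In the distributed setting, the paper's point is how much of the rate can be saved by also exploiting the correlation between sources at the receiver; the oracle above is the performance reference.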

    Structural Results for Coding Over Communication Networks

    We study the structure of optimality-achieving codes in network communications. The thesis consists of two parts. In the first part, we investigate the role of algebraic structure in the performance of communication strategies. In chapter two, we provide a linear coding scheme for the multiple-descriptions source coding problem which improves upon the performance of the best known unstructured coding scheme. In chapter three, we propose a new method for lattice-based codebook generation. The new method leads to a simplification in the analysis of the performance of lattice codes in continuous-alphabet communication. In chapter four, we show that although linear codes are necessary to achieve optimality in certain problems, loosening the closure restriction on the codebook leads to gains in other network communication settings. We introduce a new class of structured codes called quasi-linear codes (QLCs). These codes cover the whole spectrum between unstructured codes and linear codes. We develop coding strategies for the interference channel and the multiple-descriptions problem using QLCs which outperform the previous schemes. In the second part, which includes the last two chapters, we consider a different structural restriction on codes used in network communication: we limit the 'effective length' of these codes. First, we consider an arbitrary pair of Boolean functions which operate on two sequences of correlated random variables. We derive a new upper bound on the correlation between the outputs of these functions. The upper bound is presented as a function of the 'dependency spectrum' of the corresponding Boolean functions. Next, we investigate binary block-codes (BBCs). A BBC is defined as a vector of Boolean functions. We consider BBCs that are generated randomly using single-letter distributions. We characterize the vector of dependency spectra of these BBCs. This gives an upper bound on the correlation between the outputs of two distributed BBCs. Finally, the upper bound is used to show that the large-blocklength single-letter coding schemes in the literature are sub-optimal in various multiterminal communication settings.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137059/1/fshirani_1.pd
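    A small Monte Carlo illustration of the quantity bounded in the second part of the thesis: the empirical correlation between the outputs of two Boolean functions acting on a pair of correlated binary sequences. The specific functions and the binary symmetric correlation model are illustrative assumptions, not the thesis's bound.

```python
# Empirical correlation between Boolean function outputs on correlated sequences.
import numpy as np

rng = np.random.default_rng(3)
n_blocks, blocklen, eps = 100_000, 8, 0.1            # samples, block length, flip prob. (assumed)

X = rng.integers(0, 2, size=(n_blocks, blocklen))
Y = X ^ (rng.random((n_blocks, blocklen)) < eps)     # Y is X passed through a BSC(eps)

def parity(block):
    return np.bitwise_xor.reduce(block, axis=1)      # depends on every coordinate

def first_bit(block):
    return block[:, 0]                               # depends on a single coordinate

pairs = (("parity vs parity", parity, parity),
         ("parity vs first bit", parity, first_bit))
for name, fx, fy in pairs:
    u = 2.0 * fx(X) - 1.0                            # map {0,1} -> {-1,+1}
    v = 2.0 * fy(Y) - 1.0
    print(f"{name}: empirical correlation = {np.mean(u * v):+.3f}")
```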