16 research outputs found

    New Directions in Lattice Based Lossy Compression


    DSP implementation of a fast indexing method for algebraic vector quantization

    The choice of a compression method is often constrained by its computational and memory costs. Algebraic (lattice) vector quantization has the advantage of being very fast, and it requires neither the construction nor the storage of a codebook, unlike classification-based methods. These characteristics make it particularly well suited to low-bit-rate coding applications. The method is nevertheless delicate to implement, in particular with regard to the indexing of the quantized vectors. We recently proposed a new indexing algorithm based on an effective trade-off between computational cost and memory cost. The goal of this article is to define a DSP architecture adapted to this algorithm, making it possible to envisage low-bit-rate applications with near-real-time transmission.
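    The fast nearest-lattice-point search that such indexing schemes build on can be sketched in a few lines. Below is the classic Conway-Sloane rounding rule for the D_n lattice, given as an illustrative example of algebraic vector quantization; it is not the indexing algorithm of the article itself.

```python
import numpy as np

def quantize_Dn(x):
    """Nearest point in the D_n lattice (integer vectors with even
    coordinate sum): round every coordinate, and if the sum comes out
    odd, re-round the worst coordinate the other way."""
    f = np.round(x)                          # nearest point in Z^n
    if int(f.sum()) % 2 != 0:
        # coordinate whose rounding error is largest
        k = int(np.argmax(np.abs(x - f)))
        # push it to the second-nearest integer
        f[k] += 1.0 if x[k] > f[k] else -1.0
    return f
```

    Because no codebook is stored, the cost is O(n) per vector, which is the property the article exploits for DSP implementation.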

    Optimization of Coding of AR Sources for Transmission Across Channels with Loss


    New algebraic vector quantization techniques based on Voronoi coding: application to AMR-WB+ coding

    This thesis studies lattice (vector) quantization and its application to the multi-mode ACELP/TCX audio coding model. The ACELP/TCX model is one possible solution to the problem of universal audio coding---by universal coding, we mean a unified, good-quality representation of speech and music signals at different bit rates and sampling frequencies. The applications considered here are the quantization of linear prediction coefficients and, above all, transform coding within the TCX model; the TCX application is of strong practical interest, since the TCX model largely determines the universal character of ACELP/TCX coding. Lattice quantization is a constrained quantization technique that exploits the linear structure of regular lattices. Compared with unstructured vector quantization, it has always been regarded as a promising technique because of its reduced complexity (in storage and computation). We show here that it has further important advantages: it makes it possible to construct efficient codes in relatively high dimension and at arbitrarily high bit rates, suited to multi-rate coding (transform-based or otherwise); moreover, it allows the distortion to be reduced to the granular error alone, at the cost of variable-rate coding. Several lattice quantization techniques are presented in this thesis, all built on Voronoi coding. Quasi-ellipsoidal Voronoi coding is suited to coding a Gaussian vector source in the context of parametric coding of linear prediction coefficients with a Gaussian mixture model. Multi-rate vector quantization by Voronoi extension or by Voronoi coding with adaptive truncation is suited to multi-rate transform audio coding.
The application of multi-rate vector quantization to TCX coding is studied in particular detail. A new algebraic coding technique for the TCX target is thus designed, based on the principle of bit allocation by reverse water-filling.
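    The reverse water-filling principle invoked for the TCX bit allocation can be sketched for parallel Gaussian sources. This is the textbook rule with an assumed bisection search on the water level, not the AMR-WB+ implementation: each component receives rate only if its variance exceeds the level.

```python
import numpy as np

def reverse_waterfill(variances, total_rate):
    """Reverse water-filling for parallel Gaussian sources: component i
    gets R_i = max(0, 0.5*log2(var_i / theta)), with the water level
    theta chosen by bisection so that sum(R_i) == total_rate."""
    v = np.asarray(variances, float)
    lo, hi = 1e-12, float(v.max())
    for _ in range(200):                      # bisection on theta
        theta = 0.5 * (lo + hi)
        rates = np.maximum(0.0, 0.5 * np.log2(v / theta))
        if rates.sum() > total_rate:
            lo = theta                        # too much rate: raise the level
        else:
            hi = theta
    return rates, theta
```

    Components whose variance falls below the water level are allocated zero bits, which is what confines the distortion to the granular error discussed above.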

    Voronoi Constellations for Coherent Fiber-Optic Communication Systems

    The increasing demand for higher data rates is driving the adoption of high-spectral-efficiency (SE) transmission in communication systems. The well-known 1.53 dB gap between Shannon's capacity and the mutual information (MI) of uniform quadrature amplitude modulation (QAM) formats indicates the importance of power efficiency, particularly in high-SE transmission scenarios such as fiber-optic communication systems and wireless backhaul links. Shaping techniques are the only way to close this gap, by adapting the uniform input distribution to the capacity-achieving distribution. The two categories of shaping are probabilistic shaping (PS) and geometric shaping (GS). Various methods have been proposed for performing PS and GS, each with distinct implementation complexity and performance characteristics. In general, the complexity of these methods grows dramatically with the SE and the number of dimensions. Among the different methods, multidimensional Voronoi constellations (VCs) provide a good trade-off between high shaping gains and low-complexity encoding/decoding algorithms thanks to their favourable geometric structure. However, VCs with high shaping gains are usually very large, and their huge cardinality makes system analysis and design cumbersome, which motivates this thesis. In this thesis, we develop a set of methods to make VCs applicable to communication systems at low complexity. The encoding and decoding, labeling, and coded modulation schemes of VCs are investigated. Various system performance metrics, including uncoded/coded bit error rate, MI, and generalized mutual information (GMI), are studied and compared with QAM formats for both the additive white Gaussian noise (AWGN) channel and nonlinear fiber channels. We show that the proposed methods preserve the high shaping gains of VCs, enabling significant improvements in system performance for high-SE transmission in both the AWGN channel and nonlinear fiber channels.
In addition, we propose general algorithms for estimating the MI and GMI, and for approximating the log-likelihood ratios in soft-decision forward error correction codes for very large constellations.
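    A generic Monte Carlo MI estimator for an equiprobable constellation over an AWGN channel, in the spirit of the metrics compared above, can be sketched as follows. The one-dimensional constellation and sample size are illustrative choices, not the thesis' algorithms for very large VCs.

```python
import numpy as np

def mi_awgn(points, snr_db, n=200_000, seed=0):
    """Monte Carlo estimate of I(X;Y) for equiprobable symbols 'points'
    sent over a real AWGN channel at the given SNR (Es/N0 in dB)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(points, float)
    M = len(x)
    sigma2 = np.mean(x**2) / 10**(snr_db / 10)   # noise variance
    idx = rng.integers(0, M, n)
    y = x[idx] + rng.normal(0.0, np.sqrt(sigma2), n)
    # log-domain distances to every candidate symbol, normalized
    # by the transmitted one to keep exp() well conditioned
    d = -(y[:, None] - x[None, :])**2 / (2 * sigma2)
    num = d[np.arange(n), idx]
    lse = np.log(np.exp(d - num[:, None]).sum(axis=1))
    return np.log2(M) - np.mean(lse) / np.log(2)
```

    For constellations with huge cardinality, the sum over all symbols becomes the bottleneck, which is exactly the problem the thesis' estimation algorithms address.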

    Structural Results for Coding Over Communication Networks

    We study the structure of optimality-achieving codes in network communications. The thesis consists of two parts: in the first part, we investigate the role of algebraic structure in the performance of communication strategies. In chapter two, we provide a linear coding scheme for the multiple-descriptions source coding problem which improves upon the performance of the best known unstructured coding scheme. In chapter three, we propose a new method for lattice-based codebook generation. The new method leads to a simplification in the analysis of the performance of lattice codes in continuous-alphabet communication. In chapter four, we show that although linear codes are necessary to achieve optimality in certain problems, loosening the closure restriction in the codebook leads to gains in other network communication settings. We introduce a new class of structured codes called quasi-linear codes (QLC). These codes cover the whole spectrum between unstructured codes and linear codes. We develop coding strategies in the interference channel and the multiple-descriptions problems using QLCs which outperform the previous schemes. In the second part, which includes the last two chapters, we consider a different structural restriction on codes used in network communication. Namely, we limit the 'effective length' of these codes. First, we consider an arbitrary pair of Boolean functions which operate on two sequences of correlated random variables. We derive a new upper-bound on the correlation between the outputs of these functions. The upper-bound is presented as a function of the 'dependency spectrum' of the corresponding Boolean functions. Next, we investigate binary block-codes (BBC). A BBC is defined as a vector of Boolean functions. We consider BBCs which are generated randomly, using single-letter distributions. We characterize the vector of dependency spectrums of these BBCs.
This gives an upper-bound on the correlation between the outputs of two distributed BBCs. Finally, the upper-bound is used to show that the large-blocklength single-letter coding schemes in the literature are sub-optimal in various multiterminal communication settings.
PHD. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies.
https://deepblue.lib.umich.edu/bitstream/2027.42/137059/1/fshirani_1.pd
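    The closure restriction that linear codebooks satisfy can be made concrete with a toy binary linear code: the codeword set {uG mod 2} is closed under addition. This is an illustrative example of the codebook structure discussed above, not one of the thesis' coding schemes.

```python
import numpy as np

def linear_codebook(G):
    """All 2^k codewords of the binary linear code with k x n generator
    matrix G, i.e. the set {uG mod 2} over all k-bit messages u."""
    k, n = G.shape
    msgs = (np.arange(2**k)[:, None] >> np.arange(k)) & 1  # all k-bit messages
    return (msgs @ G) % 2
```

    Quasi-linear codes, as described above, relax exactly this closure property: the codebook no longer needs to contain the sum of every pair of its codewords.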

    Quantization in acquisition and computation networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165).
    In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems.
Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation.
by John Z. Sun. Ph.D.

    Centralized Cell-Free Massive MIMO with Low-Resolution Fronthaul

    The ever more data-hungry applications of our digital society may no longer be handled efficiently by current cellular networks. Cell-free massive MIMO rethinks the traditional way of deploying wireless networks by blurring the cell boundaries. The network comprises a large number of access points (APs) that connect the users to a central processing unit (CPU) via fronthaul links for coherent transmission and reception. Such a network is expected to provide a uniformly high data rate per user and per unit area. In this thesis, we study a centralized approach to cell-free massive MIMO that can further exploit its potential while accounting for the practical issue of limited-capacity fronthauls. We develop several schemes and strategies that make the centralized approach feasible. In particular, we propose the use of low-resolution fronthauls and analyse their performance using the Bussgang theorem. The first part of this thesis considers a cell-free network with single-antenna APs, where a coarse scalar uniform quantizer serves as the interface to the fronthauls. In the second part, we extend the network to the case of multi-antenna APs, where two processing schemes at the APs are studied: individual processing and joint processing. For each part, two strategies for acquiring the channel state information (CSI) under the low-resolution fronthaul constraint are developed: estimate-and-quantize (EQ) and quantize-and-estimate (QE). We analyse the performance of both strategies and account for them in deriving the achievable rates of the systems. Moreover, the scalability of the centralized approach is discussed in terms of fronthaul load and AP processing. In the last part, we propose the use of a lattice vector quantizer at multi-antenna APs for high-mobility, high-density scenarios, for which two procedures for constructing the lattice codebook are developed.
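    The Bussgang theorem used in this analysis decomposes the output of a nonlinearity driven by a Gaussian input as Q(x) = G*x + d, with the distortion d uncorrelated with x. A Monte Carlo sketch of the decomposition is given below; the 3-bit uniform quantizer is an assumed example, not the thesis' design.

```python
import numpy as np

def bussgang_gain(quantizer, sigma=1.0, n=500_000, seed=1):
    """Estimate the Bussgang gain G = E[x Q(x)] / E[x^2] for a zero-mean
    Gaussian input, and verify that the residual d = Q(x) - G*x is
    uncorrelated with x (returns G and the sample correlation E[x d])."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n)
    q = quantizer(x)
    G = np.mean(x * q) / np.mean(x * x)
    d = q - G * x                        # Bussgang distortion term
    return G, np.mean(x * d)             # second value should be ~0

# assumed example: 3-bit midrise uniform quantizer, step 0.5, clipped
step = 0.5
uq = lambda x: np.clip(np.floor(x / step) + 0.5, -3.5, 3.5) * step
G, corr = bussgang_gain(uq)
```

    The gain G < 1 reflects the combined effect of granular error and clipping; treating d as uncorrelated additive noise is what makes the achievable-rate derivations above tractable.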