
    Vector quantization

    During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. This survey sketches the basic ideas behind the design of vector quantizers and comments on the state of the art and current research efforts.
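
    To make the core operation concrete, here is a minimal sketch of memoryless VQ under a squared-error distortion measure: the encoder maps each input vector to the index of its nearest codeword, and the decoder is a table lookup. The codebook and data below are illustrative, not from the survey.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codeword (squared error)."""
    # Pairwise squared distances between inputs (N, d) and codewords (K, d).
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    """Decoding is just a table lookup of the chosen codewords."""
    return codebook[indices]

# Illustrative 2-D example: 4 codewords, i.e. a rate of log2(4) = 2 bits per vector.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 2))
data = rng.normal(size=(8, 2))
indices = vq_encode(data, codebook)
reconstruction = vq_decode(indices, codebook)
```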

    Perceptually-Driven Video Coding with the Daala Video Codec

    Daala is a royalty-free video codec that attempts to compete with the best patent-encumbered codecs. Part of our strategy is to replace core tools of traditional video codecs with alternative approaches, many of them designed to take perceptual aspects into account rather than optimizing for simple metrics like PSNR. This paper documents some of our experiences with these tools: which ones worked and which did not. We evaluate which tools are easy to integrate into a more traditional codec design, and show results in the context of the codec being developed by the Alliance for Open Media.
    Comment: 19 pages, Proceedings of SPIE Workshop on Applications of Digital Image Processing (ADIP), 201
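
    For reference, the "simple metric" the abstract contrasts against perceptual tuning is PSNR = 10 log10(peak^2 / MSE). A minimal sketch of the metric (the psnr helper below is illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak**2 / mse)
```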

    Rate-distortion adaptive vector quantization for wavelet image coding

    We propose a wavelet image coding scheme using rate-distortion adaptive tree-structured residual vector quantization. Wavelet transform coefficient coding is based on the pyramid hierarchy (zero-tree), but rather than determining the zero-tree relation from the coarsest subband to the finest by hard thresholding, the prediction in our scheme is achieved by rate-distortion optimization with adaptive vector quantization on the wavelet coefficients from the finest subband to the coarsest. The proposed method involves only integer operations and can be implemented with very low computational complexity. Preliminary experiments have shown encouraging results: a PSNR of 30.93 dB is obtained at 0.174 bpp on the 512×512 test image LENA.
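
    Rate-distortion optimization of this kind is conventionally done by minimizing a Lagrangian cost J = D + λR over the candidate codings. A minimal sketch of that selection step, with hypothetical candidates and λ (the paper's actual candidate set and multiplier schedule are not reproduced here):

```python
def rd_select(candidates, lam):
    """Pick the candidate minimizing the Lagrangian cost J = D + lam * R.

    candidates: iterable of (distortion, rate_in_bits, label) tuples.
    lam: Lagrange multiplier trading distortion against rate.
    """
    return min(candidates, key=lambda c: c[0] + lam * c[1])

# Hypothetical choices for one block of wavelet coefficients.
choices = [
    (10.0, 0.5, "prune subtree (zero-tree)"),  # high distortion, very cheap
    (4.0, 2.0, "coarse codeword"),
    (1.0, 6.0, "fine codeword"),
]
best = rd_select(choices, lam=1.5)  # -> (4.0, 2.0, 'coarse codeword')
```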

    Polarization of the Rényi Information Dimension with Applications to Compressed Sensing

    In this paper, we show that the Hadamard matrix acts as an extractor over the reals of the Rényi information dimension (RID), in an analogous way to how it acts as an extractor of the discrete entropy over finite fields. More precisely, we prove that the RID of an i.i.d. sequence of mixture random variables polarizes to the extremal values of 0 and 1 (corresponding to discrete and continuous distributions) when transformed by a Hadamard matrix. Further, we prove that the polarization pattern of the RID admits a closed-form expression and follows exactly the Binary Erasure Channel (BEC) polarization pattern in the discrete setting. We also extend the results from the single- to the multi-terminal setting, obtaining a Slepian-Wolf counterpart of the RID polarization. We discuss applications of the RID polarization to Compressed Sensing of i.i.d. sources. In particular, we use the RID polarization to construct a family of deterministic ±1-valued sensing matrices for Compressed Sensing. We run numerical simulations to compare the performance of the resulting matrices with that of random Gaussian and random Hadamard matrices. The results indicate that the proposed matrices afford competitive performance while being explicitly constructed.
    Comment: 12 pages, 2 figures
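
    The ±1-valued building block behind such constructions, a Sylvester-type Hadamard matrix, can be built recursively. A minimal sketch; note that the row-selection step, which in the paper follows the RID polarization pattern, is only stubbed here:

```python
import numpy as np

def sylvester_hadamard(n):
    """Return the 2**n x 2**n Sylvester-type Hadamard matrix with +/-1 entries."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])  # H_{2m} = [[H_m, H_m], [H_m, -H_m]]
    return H

H = sylvester_hadamard(4)  # 16 x 16, entries in {+1, -1}
# Hypothetical sensing matrix: keep m rows of H. In the paper, which rows to
# keep is dictated by the RID polarization pattern; here the choice is a stub.
A = H[:6, :]
```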

    Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) in which intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than entropy-constrained residual vector quantization, with lower computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder, as well as to the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
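
    To illustrate the multistage residual VQ structure itself (without the conditional entropy coding of the stage indices, which is the paper's contribution), a minimal two-stage sketch with illustrative codebooks:

```python
import numpy as np

def nearest(v, codebook):
    """Index of the codeword closest to v in squared error."""
    return int(((codebook - v) ** 2).sum(axis=1).argmin())

def rvq_encode(v, stage_codebooks):
    """Quantize v stage by stage; each stage codes the residual of the previous one."""
    indices, residual = [], v.copy()
    for cb in stage_codebooks:
        i = nearest(residual, cb)
        indices.append(i)
        residual = residual - cb[i]  # pass the remaining error to the next stage
    return indices

def rvq_decode(indices, stage_codebooks):
    """Reconstruction is the sum of the selected codewords across all stages."""
    return sum(cb[i] for i, cb in zip(indices, stage_codebooks))

rng = np.random.default_rng(1)
stages = [rng.normal(size=(8, 4)), 0.25 * rng.normal(size=(8, 4))]  # coarse, then fine
v = rng.normal(size=4)
v_hat = rvq_decode(rvq_encode(v, stages), stages)
```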

    Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks

    Quantized Neural Networks (QNNs), which use low-bitwidth numbers to represent parameters and perform computations, have been proposed to reduce computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, so that multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, and uniform quantization determined from the extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures a balanced distribution of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the overhead this step introduces. Overall, our method improves the prediction accuracy of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both Convolutional Neural Networks and Recurrent Neural Networks. Experiments on standard datasets including ImageNet and Penn Treebank confirm the effectiveness of our method. On ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, surpassing the state of the art for QNNs.
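
    A minimal sketch of the percentile-balancing idea: bin edges are quantiles of the weights (recursive median splits to the target depth produce exactly these edges), so every quantization level receives roughly the same number of values. The per-bin mean reconstruction below is only illustrative; the paper maps the balanced bins onto uniformly spaced values to keep arithmetic bitwise-friendly, and uses cheaper percentile approximations.

```python
import numpy as np

def balanced_quantize(w, bits):
    """Quantize w into 2**bits bins holding roughly equal numbers of values.

    Bin edges are quantiles of the data; recursive median splits to depth
    `bits` produce exactly these edges, so skewed or heavy-tailed weight
    distributions still occupy every quantization level.
    """
    k = 2**bits
    edges = np.quantile(w, np.linspace(0.0, 1.0, k + 1)[1:-1])  # k - 1 interior edges
    idx = np.searchsorted(edges, w)  # bin index in [0, k)
    levels = np.array([w[idx == b].mean() for b in range(k)])  # per-bin reconstruction
    return idx, levels

rng = np.random.default_rng(2)
w = rng.standard_normal(10_000) ** 3  # deliberately heavy-tailed "weights"
idx, levels = balanced_quantize(w, bits=2)
w_hat = levels[idx]  # each of the 4 bins holds ~2,500 values
```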

    Introduction to Transformers: an NLP Perspective

    Transformers have dominated empirical machine learning models of natural language processing. In this paper, we introduce basic concepts of Transformers and present the key techniques that form the recent advances of these models. This includes a description of the standard Transformer architecture, a series of model refinements, and common applications. Given that Transformers and related deep learning techniques are evolving in ways we have never seen before, we cannot dive into all the model details or cover all the technical areas. Instead, we focus on just those concepts that are helpful for gaining a good understanding of Transformers and their variants. We also summarize the key ideas that impact this field, thereby yielding some insights into the strengths and limitations of these models.
    Comment: 119 pages and 21 figures
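
    As a concrete anchor for the standard architecture, the operation at the core of every Transformer layer is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal single-head sketch with illustrative shapes:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(3)
Q = rng.normal(size=(5, 8))  # 5 query positions, head dimension 8
K = rng.normal(size=(7, 8))  # 7 key/value positions
V = rng.normal(size=(7, 8))
out = attention(Q, K, V)  # shape (5, 8): one context vector per query
```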