448 research outputs found

    Higher order conditional entropy-constrained trellis-coded RVQ with application to pyramid image coding

    This paper introduces an extension of conditional entropy-constrained residual vector quantization (CEC-RVQ) to include quantization cell shape gain. The method is referred to as conditional entropy-constrained trellis-coded RVQ (CEC-TCRVQ). The new design is based on coding image vectors by taking into account their 2D correlation and employing a higher order entropy model with a trellis structure. We employed CEC-TCRVQ to code image subbands at low bit rate. The CEC-TCRVQ coded images do well in terms of preserving low-magnitude textures present in some image
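    For readers unfamiliar with the RVQ core of this design, a plain two-stage residual vector quantizer can be sketched as below. The conditional entropy constraint and trellis structure the paper adds on top (roughly, a rate penalty in the codeword selection and a trellis search across stages) are omitted for brevity, and the codebooks are illustrative:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Multi-stage residual VQ: each stage quantizes the residual
    left over by the previous stages."""
    residual = x.astype(float)
    indices = []
    for cb in codebooks:                       # cb: (K, d) stage codebook
        dists = np.sum((cb - residual) ** 2, axis=1)
        j = int(np.argmin(dists))              # nearest stage codeword
        indices.append(j)
        residual = residual - cb[j]            # pass residual onward
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected stage codewords."""
    return sum(cb[j] for j, cb in zip(indices, codebooks))
```

    Each added stage refines the reconstruction, which is why RVQ reaches fine resolution with small per-stage codebooks.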

    Deep Multiple Description Coding by Learning Scalar Quantization

    In this paper, we propose a deep multiple description coding framework whose quantizers are adaptively learned by minimizing a multiple description compressive loss. Firstly, our framework is built upon auto-encoder networks, comprising a multiple description multi-scale dilated encoder network and multiple description decoder networks. Secondly, two entropy estimation networks are learned to estimate the information content of the quantized tensors, which further supervises the learning of the multiple description encoder network so that it represents the input image faithfully. Thirdly, a pair of scalar quantizers accompanied by two importance-indicator maps is automatically learned in an end-to-end self-supervised way. Finally, in addition to the multiple description reconstruction loss, a multiple description structural dissimilarity distance loss is imposed on the multiple description decoded images in the pixel domain, rather than on feature tensors in the feature domain, to diversify the generated descriptions. Testing on two commonly used datasets verifies that our method outperforms several state-of-the-art multiple description coding approaches in terms of coding efficiency.
    Comment: 8 pages, 4 figures. (DCC 2019: Data Compression Conference). Testing datasets for "Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning" can be found at https://github.com/mdcnn/Deep-Multiple-Description-Codin
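    The paper learns its pair of scalar quantizers end to end; as a hand-crafted stand-in, the classical staggered-quantizer view of multiple description scalar quantization can be sketched in a few lines (the unit step size and half-step offset here are illustrative choices, not the learned quantizers):

```python
def mdsq_encode(x, delta=1.0):
    """Two staggered uniform scalar quantizers: description 2 lives on
    a grid offset by half a step, so the two descriptions refine each
    other when both arrive."""
    i1 = round(x / delta)                  # description 1
    i2 = round(x / delta - 0.5)            # description 2 (offset grid)
    return i1, i2

def mdsq_decode(i1=None, i2=None, delta=1.0):
    """Central decoder averages both side reconstructions; a side
    decoder falls back to whichever description arrived."""
    r1 = None if i1 is None else i1 * delta
    r2 = None if i2 is None else (i2 + 0.5) * delta
    if r1 is not None and r2 is not None:
        return 0.5 * (r1 + r2)             # central reconstruction
    return r1 if r1 is not None else r2    # side reconstruction
```

    Losing either description still yields a usable (coarser) reconstruction, which is the point of multiple description coding over lossy channels.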

    Multiple Description Trellis-Coded Quantization of Sinusoidal Parameters


    A Study of trellis coded quantization for image compression

    Trellis coded quantization has recently evolved as a powerful quantization technique in the world of lossy image compression. The aim of this thesis is to investigate the potential of trellis coded quantization in conjunction with two of the most popular image transforms today: the discrete cosine transform and the discrete wavelet transform. Trellis coded quantization is compared with traditional scalar quantization. The 4-state and the 8-state trellis coded quantizers are compared in an attempt to establish a quantifiable difference in their performance. The use of pdf-optimized quantizers for trellis coded quantization is also studied. Results for the simulations performed on two gray-scale images at an uncoded bit rate of 0.48 bits/pixel are presented by way of reconstructed images and the respective peak signal-to-noise ratios. The results make it evident that trellis coded quantization outperforms scalar quantization in both the discrete cosine transform and the discrete wavelet transform domains. The reconstructed images suggest that there is no considerable gain in going from a 4-state to an 8-state trellis coded quantizer. Results also suggest that considerable gain can be had by employing pdf-optimized quantizers for trellis coded quantization instead of uniform quantizers.
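    As a rough illustration of the technique the thesis studies, the sketch below runs a Viterbi search over a 4-state trellis whose branches are labelled with subsets of a scalar codebook, in the spirit of the Marcellin-Fischer construction. The trellis tables, the 8-level codebook, and the partition by index mod 4 are illustrative choices, not the thesis's exact configuration:

```python
import numpy as np

# 4-state trellis: from each state two branches, each labelled with
# one of four codebook subsets D0..D3 (illustrative tables).
NEXT   = [[0, 1], [2, 3], [0, 1], [2, 3]]   # NEXT[state][bit]
SUBSET = [[0, 2], [1, 3], [2, 0], [3, 1]]   # SUBSET[state][bit]

def tcq_encode(x, levels):
    """Viterbi search over the trellis; `levels` is partitioned into
    four subsets by index mod 4 (D_j = {levels[j], levels[j+4], ...})."""
    subsets = [levels[j::4] for j in range(4)]
    cost = [0.0, np.inf, np.inf, np.inf]     # start in state 0
    paths = [[] for _ in range(4)]
    for sample in x:
        new_cost = [np.inf] * 4
        new_paths = [None] * 4
        for s in range(4):
            if cost[s] == np.inf:
                continue
            for b in (0, 1):                  # two branches per state
                d = SUBSET[s][b]
                k = int(np.argmin((subsets[d] - sample) ** 2))
                c = cost[s] + (subsets[d][k] - sample) ** 2
                ns = NEXT[s][b]
                if c < new_cost[ns]:          # keep the survivor path
                    new_cost[ns] = c
                    new_paths[ns] = paths[s] + [subsets[d][k]]
        cost, paths = new_cost, new_paths
    best = int(np.argmin(cost))
    return np.array(paths[best]), cost[best]
```

    Each sample costs only one bit of path information (plus the within-subset index), yet the search can still reach codewords from the full fine codebook, which is where TCQ's gain over plain scalar quantization comes from.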

    Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping

    In this paper, we provide for the first time a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short-blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. As the blocklength decreases, however, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths and over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM), enumerative sphere shaping (ESS) and shell mapping (SM) are reviewed as energy-efficient shaping techniques. Numerical results show that MPDM and SpSh have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize energy efficiency, is shown to have the minimum rate loss amongst all. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspective of latency, storage and computations.
    Comment: 18 pages, 10 figure
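    The CCDM rate loss discussed above can be computed directly: a constant-composition codebook is the set of sequences with a fixed symbol composition, so its rate is the log of a multinomial coefficient divided by the blocklength, and the loss is the gap to the composition's empirical entropy. A minimal sketch (the compositions below are illustrative, not those used in the paper):

```python
from math import comb, log2

def ccdm_rate(counts):
    """Rate (bits/symbol) of a constant-composition code: the number of
    sequences with composition `counts` is a multinomial coefficient."""
    n = sum(counts)
    seqs, rem = 1, n
    for c in counts:            # multinomial via a product of binomials
        seqs *= comb(rem, c)
        rem -= c
    return log2(seqs) / n

def rate_loss(counts):
    """Gap between the empirical entropy of the composition and the
    achievable rate; it vanishes as the blocklength grows."""
    n = sum(counts)
    h = -sum((c / n) * log2(c / n) for c in counts if c)
    return h - ccdm_rate(counts)
```

    For example, the composition (3, 1) at blocklength 4 admits only 4 sequences (rate 0.5 bit/symbol against an entropy of about 0.811), while scaling the same composition to (30, 10) already shrinks the loss markedly, matching the paper's point that CCDM is penalized at short blocklengths.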

    Multiple Description Quantization of Sinusoidal Parameters
