
    Trellis-coded quantization with unequal distortion.

    Kwong Cheuk Fai. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 72-74). Abstracts in English and Chinese. Contents: Acknowledgements; Abstract; Table of Contents;
    Chapter 1, Introduction: 1.1 Quantization; 1.2 Trellis-Coded Quantization; 1.3 Thesis Organization.
    Chapter 2, Trellis-Coded Modulation: 2.1 Convolutional Codes (2.1.1 Generator Polynomials and Generator Matrix; 2.1.2 Circuit Diagram; 2.1.3 State Transition Diagram; 2.1.4 Trellis Diagram); 2.2 Trellis-Coded Modulation (2.2.1 Uncoded Transmission versus TCM; 2.2.2 Trellis Representation; 2.2.3 Ungerboeck Codes; 2.2.4 Set Partitioning; 2.2.5 Decoding for TCM).
    Chapter 3, Trellis-Coded Quantization: 3.1 Scalar Trellis-Coded Quantization; 3.2 Trellis-Coded Vector Quantization (3.2.1 Set Partitioning in TCVQ; 3.2.2 Codebook Optimization; 3.2.3 Numerical Data and Discussions).
    Chapter 4, Trellis-Coded Quantization with Unequal Distortion: 4.1 Design Procedures; 4.2 Fine and Coarse Codebooks; 4.3 Set Partitioning; 4.4 Codebook Optimization; 4.5 Decoding for Unequal Distortion TCVQ.
    Chapter 5, Unequal Distortion TCVQ on Memoryless Gaussian Source: 5.1 Memoryless Gaussian Source; 5.2 Set Partitioning of Codewords of Memoryless Gaussian Source; 5.3 Numerical Results and Discussions.
    Chapter 6, Unequal Distortion TCVQ on Markov Gaussian Source: 6.1 Markov Gaussian Source; 6.2 Set Partitioning of Codewords of Markov Gaussian Source; 6.3 Numerical Results and Discussions.
    Chapter 7, Conclusions. Bibliography.
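    The core mechanism behind this thesis, trellis-coded quantization, can be illustrated compactly. Below is a minimal Python sketch of scalar TCQ: an expanded codebook is split into subsets by set partitioning, each trellis branch is labelled with a subset, and the Viterbi algorithm selects the minimum-distortion path. The 4-state trellis, branch labelling, and uniform 8-level codebook are illustrative assumptions, not the thesis's optimized design.

```python
import numpy as np

# Minimal scalar TCQ sketch: 8 uniform levels split into 4 subsets
# (set partitioning by index mod 4); Viterbi picks the distortion-
# minimizing trellis path. All parameters are illustrative assumptions.
LEVELS = np.linspace(-3.5, 3.5, 8)
SUBSETS = [LEVELS[j::4] for j in range(4)]   # D0..D3, two levels each

def next_state(s, b):                        # 4-state shift-register trellis
    return ((s << 1) | b) & 3

def branch_subset(s, b):                     # assumed Ungerboeck-style labels
    return 2 * b + (s & 1)

def tcq_encode(x):
    T, S = len(x), 4
    cost = np.full(S, np.inf); cost[0] = 0.0
    prev = np.zeros((T, S), dtype=int)       # surviving predecessor state
    recon = np.zeros((T, S))                 # level chosen on surviving branch
    for t in range(T):
        new = np.full(S, np.inf)
        for s in range(S):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                lv = SUBSETS[branch_subset(s, b)]
                k = int(np.argmin((x[t] - lv) ** 2))   # best level in subset
                c = cost[s] + (x[t] - lv[k]) ** 2
                ns = next_state(s, b)
                if c < new[ns]:
                    new[ns], prev[t, ns], recon[t, ns] = c, s, lv[k]
        cost = new
    s = int(np.argmin(cost))                 # trace back the best path
    xq = np.zeros(T)
    for t in range(T - 1, -1, -1):
        xq[t] = recon[t, s]
        s = prev[t, s]
    return xq

x = np.random.randn(1000)
xq = tcq_encode(x)
print("MSE:", np.mean((x - xq) ** 2))   # ~2 bits/sample: 1 path bit + 1 level bit
```

    The step size and labelling here are deliberately untuned; codebook optimization, covered in Chapters 3 and 4 of the thesis, is what closes the remaining gap.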

    Higher order conditional entropy-constrained trellis-coded RVQ with application to pyramid image coding

    This paper introduces an extension of conditional entropy-constrained residual vector quantization (CEC-RVQ) that includes quantization cell shape gain. The method is referred to as conditional entropy-constrained trellis-coded RVQ (CEC-TCRVQ). The new design codes image vectors by taking into account their 2-D correlation and by employing a higher-order entropy model with a trellis structure. We employed CEC-TCRVQ to code image subbands at low bit rates; the coded images do well in terms of preserving the low-magnitude textures present in some images.
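    At the encoder, the entropy-constrained part of such designs reduces to a Lagrangian codeword choice: minimize distortion plus lambda times the conditional self-information of the index. Here is a minimal sketch of that selection rule, assuming a toy random codebook and a first-order conditional index model standing in for the paper's higher-order 2-D context model.

```python
import numpy as np

# Entropy-constrained codeword selection: J = distortion + lambda * rate,
# where rate is -log2 P(index | context). Codebook, probabilities, and the
# scalar context model are assumptions for illustration only.

def ec_quantize(x, codebook, cond_prob, lam):
    """codebook: (N, d) array; cond_prob: (N,) conditional index probabilities."""
    dist = np.sum((codebook - x) ** 2, axis=1)      # squared error per codeword
    rate = -np.log2(np.maximum(cond_prob, 1e-12))   # bits under the context model
    return int(np.argmin(dist + lam * rate))

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 4))
# toy conditional model: the previous index biases the distribution (the paper
# conditions on neighbouring image blocks to capture 2-D correlation instead)
P = rng.random((16, 16)); P /= P.sum(axis=1, keepdims=True)

prev = 0
for _ in range(5):
    x = rng.standard_normal(4)
    prev = ec_quantize(x, codebook, P[prev], lam=0.3)
    print(prev)
```

    Raising lambda trades distortion for rate; a trellis-coded variant additionally runs this rule per branch inside a Viterbi search, as in the TCQ sketch above.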

    Near-capacity dirty-paper code design: a source-channel coding approach

    This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and of the target dirty-paper coding rate (or SNR). We then examine practical designs that combine trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; unlike previous approaches, we emphasize the role of strong source coding in achieving as much granular gain as possible with TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs combine TCQ with IRA codes synergistically, so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding: at 0.25 bit per symbol (b/s), for example, our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
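    The essence of dirty-paper coding, and the origin of the modulo loss the abstract mentions, shows up already in a one-dimensional, uncoded stand-in for the paper's TCQ + IRA construction. The sketch below uses a scalar modulo lattice with MMSE (Costa) scaling; the lattice spacing, powers, and interference strength are illustrative assumptions.

```python
import numpy as np

# Scalar modulo-lattice sketch of dirty-paper coding: the transmitter
# pre-subtracts the known interference s inside a modulo operation, so the
# receiver sees (almost) an interference-free channel. Uncoded; a real
# design replaces the scalar lattice with TCQ and adds IRA channel coding.

def mod_centered(v, delta):                 # fold into (-delta/2, delta/2]
    return v - delta * np.round(v / delta)

rng = np.random.default_rng(1)
delta, sigma2 = 8.0, 1.0
P = delta ** 2 / 12                         # transmit power of the modulo output
alpha = P / (P + sigma2)                    # MMSE (Costa) scaling factor

bits = rng.integers(0, 2, 100_000)
v = (2 * bits - 1) * delta / 4              # two message cosets at +/- delta/4
s = 20.0 * rng.standard_normal(bits.size)   # strong interference, known at TX only
x = mod_centered(v - alpha * s, delta)      # precoding; power stays ~ delta^2/12
y = x + s + np.sqrt(sigma2) * rng.standard_normal(bits.size)

r = mod_centered(alpha * y, delta)          # RX modulo removes s entirely
ber = np.mean((r > 0).astype(int) != bits)
print(f"BER despite 20-sigma interference: {ber:.4f}")
```

    The residual error comes from the effective noise (1 - alpha) * x + alpha * n folded by the modulo: that folding is the modulo loss, and the gap between the scalar lattice and an ideal quantizer is the granular loss that the paper attacks with strong TCQ.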

    Deep Multiple Description Coding by Learning Scalar Quantization

    In this paper, we propose a deep multiple description coding framework whose quantizers are adaptively learned via the minimization of a multiple description compressive loss. First, our framework is built upon auto-encoder networks, comprising a multiple description multi-scale dilated encoder network and multiple description decoder networks. Second, two entropy estimation networks are learned to estimate the information content of the quantized tensors, which further supervises the multiple description encoder network to represent the input image faithfully. Third, a pair of scalar quantizers accompanied by two importance-indicator maps is learned automatically in an end-to-end, self-supervised way. Finally, a multiple description structural dissimilarity distance loss is imposed on the multiple description decoded images in the pixel domain, in addition to the multiple description reconstruction loss, so that diversity among the descriptions is encouraged at the output rather than on feature tensors in the feature domain. Testing on two commonly used datasets verifies that our method outperforms several state-of-the-art multiple description coding approaches in terms of coding efficiency.
    Comment: 8 pages, 4 figures (DCC 2019: Data Compression Conference). Testing datasets for "Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning" can be found at https://github.com/mdcnn/Deep-Multiple-Description-Codin
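    Learning a scalar quantizer end-to-end requires a differentiable stand-in for rounding during training. The sketch below shows the generic trick used throughout learned compression (additive uniform noise as a proxy, hard rounding at test time, with a trainable step size); it is a minimal illustration of the idea, not the paper's exact network or loss.

```python
import numpy as np

# Learnable scalar quantization, training vs. test behaviour. During
# training, rounding is replaced by additive uniform noise of one step
# width, which keeps the rate-distortion loss differentiable in both the
# latent z and the step size; at test time, real rounding is applied.

def quantize(z, step, training, rng=None):
    if training:                        # differentiable proxy for rounding
        return z + step * rng.uniform(-0.5, 0.5, z.shape)
    return step * np.round(z / step)    # hard scalar quantization

rng = np.random.default_rng(0)
z = rng.standard_normal(8)
print(quantize(z, step=0.5, training=True, rng=rng))   # noisy surrogate
print(quantize(z, step=0.5, training=False))           # actual codes
```

    In a multiple description setup, two such quantizers (one per description) can be given different learned step sizes, and the importance-indicator maps described above modulate the effective step per spatial location.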

    Iterative Equalization and Source Decoding for Vector Quantized Sources

    In this contribution, an iterative (turbo) channel equalization and source decoding scheme is considered. In our investigations the source is modelled as a Gaussian-Markov source, which is compressed with the aid of vector quantization. The communication channel is modelled as a time-invariant channel contaminated by intersymbol interference (ISI). Since the ISI channel can be viewed as a rate-1 encoder, and since the redundancy of the source cannot be perfectly removed by source encoding, a joint channel equalization and source decoding scheme may be employed for enhancing the achievable performance. In our study, channel equalization and source decoding operate iteratively on a bit-by-bit basis under the maximum a posteriori (MAP) criterion. The channel equalizer accepts the a priori information provided by the source decoder and extracts extrinsic information, which in turn acts as a priori information for improving the source decoding performance. Simulation results characterize the achievable performance of the scheme: iterative channel equalization and source decoding achieves improved performance by efficiently exploiting the residual redundancy of the vector-quantization-aided source coding.
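    The turbo loop itself is simple to demonstrate with exact (brute-force) MAP components over short blocks: an equalizer for a 2-tap ISI channel and a "source decoder" that exploits residual Markov redundancy exchange extrinsic log-likelihood ratios (LLRs). The channel taps, the bit-level Markov correlation standing in for the residual redundancy of a VQ bitstream, the block length, and the iteration count below are all assumptions for illustration.

```python
import numpy as np

# Toy turbo equalization: exact symbol-wise MAP by enumerating all 2^N bit
# blocks. Equalizer and Markov source decoder swap extrinsic LLRs.
rng = np.random.default_rng(2)
N, RHO, SIGMA = 10, 0.9, 0.8                       # block, P(b_t = b_{t-1}), noise
H0, H1 = 1.0, 0.5                                  # assumed 2-tap ISI channel
SEQS = (np.arange(2 ** N)[:, None] >> np.arange(N)) & 1
SYMS = 1.0 - 2.0 * SEQS                            # BPSK: bit 0 -> +1
PREV = np.hstack([np.ones((2 ** N, 1)), SYMS[:, :-1]])

def logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def extrinsic(logw, llr_in):
    w = logw + SYMS @ (llr_in / 2)                 # add a priori sequence term
    post = np.array([logsumexp(w[SEQS[:, t] == 0]) -
                     logsumexp(w[SEQS[:, t] == 1]) for t in range(N)])
    return post - llr_in                           # strip the a priori part

agree = (SEQS[:, 1:] == SEQS[:, :-1]).sum(axis=1)  # Markov log-prior per block
MARKOV_LOGW = agree * np.log(RHO) + (N - 1 - agree) * np.log(1 - RHO)

errs, blocks = np.zeros(3), 100
for _ in range(blocks):
    b = np.zeros(N, dtype=int); b[0] = rng.integers(0, 2)
    for t in range(1, N):                          # correlated source bits
        b[t] = b[t - 1] if rng.random() < RHO else 1 - b[t - 1]
    a = 1.0 - 2.0 * b
    y = H0 * a + H1 * np.concatenate([[1.0], a[:-1]]) \
        + SIGMA * rng.standard_normal(N)
    chan_logw = -np.sum((y - (H0 * SYMS + H1 * PREV)) ** 2, axis=1) \
        / (2 * SIGMA ** 2)
    llr_dec = np.zeros(N)
    for it in range(3):                            # the turbo iterations
        llr_eq = extrinsic(chan_logw, llr_dec)     # equalizer extrinsic
        llr_dec = extrinsic(MARKOV_LOGW, llr_eq)   # source-decoder extrinsic
        errs[it] += np.sum(((llr_eq + llr_dec) < 0) != b)
print("BER per iteration:", errs / (blocks * N))
```

    A practical scheme replaces the enumeration with BCJR-style trellis recursions on both sides, but the information flow, extrinsic LLRs out of one component becoming a priori LLRs into the other, is exactly the loop shown here.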