
    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
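    To make the embedded-description idea concrete, the toy Python sketch below (an illustration only, not the paper's quantizer design algorithm) encodes a sample by binary successive approximation, so that decoding any prefix of the bit string yields a reproduction whose resolution grows with the prefix length; all names and parameter values are assumptions for the example.

        def embedded_encode(x, lo=-1.0, hi=1.0, nbits=8):
            # Each emitted bit halves the current interval around x, so any prefix
            # of the output identifies a (coarser) reproduction interval.
            bits = []
            for _ in range(nbits):
                mid = 0.5 * (lo + hi)
                if x >= mid:
                    bits.append(1)
                    lo = mid
                else:
                    bits.append(0)
                    hi = mid
            return bits

        def embedded_decode(bits, lo=-1.0, hi=1.0):
            # Decoding a longer prefix narrows the interval and so yields a
            # higher-resolution reproduction (midpoint of the final interval).
            for b in bits:
                mid = 0.5 * (lo + hi)
                if b:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        bits = embedded_encode(0.3173)
        for k in (1, 2, 4, 8):
            print(k, embedded_decode(bits[:k]))   # reproductions refine as k grows

    A multiresolution vector quantizer plays the same prefix-decoding game with codebook indices rather than interval-halving bits, which is what the design algorithms in the paper optimize.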

    Trellis-Coded Non-Orthogonal Multiple Access

    In this letter, we propose a trellis-coded non-orthogonal multiple access (NOMA) scheme. The signals for different users are produced by trellis-coded modulation (TCM) and then superimposed at different power levels. By interpreting the encoding process via the tensor product of trellises, we introduce a joint detection method based on the Viterbi algorithm. We then determine the optimal power allocation between the two users by maximizing the free distance of the tensor-product trellis. Finally, we show that trellis-coded NOMA outperforms uncoded NOMA at high signal-to-noise ratio (SNR).
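    A minimal sketch of the power-domain superposition step, assuming two uncoded BPSK users over AWGN with an illustrative power split `alpha` and SNR; the letter's actual scheme superimposes TCM signals and detects them with the Viterbi algorithm on the tensor-product trellis, choosing the power split by maximizing the free distance. The symbol-by-symbol joint ML detector below stands in for that trellis detector.

        import numpy as np

        rng = np.random.default_rng(0)
        alpha, n, snr_db = 0.8, 100_000, 10.0         # power split, symbols, SNR (toy values)

        b1 = rng.integers(0, 2, n)                    # user-1 bits
        b2 = rng.integers(0, 2, n)                    # user-2 bits
        s = np.sqrt(alpha) * (2 * b1 - 1) + np.sqrt(1 - alpha) * (2 * b2 - 1)
        y = s + rng.normal(scale=10 ** (-snr_db / 20), size=n)   # AWGN channel

        # Joint ML detection over the four composite symbols (a symbol-by-symbol
        # stand-in for Viterbi decoding on the tensor-product trellis).
        cands = np.array([np.sqrt(alpha) * a + np.sqrt(1 - alpha) * b
                          for a in (-1, 1) for b in (-1, 1)])
        idx = np.argmin((y[:, None] - cands[None, :]) ** 2, axis=1)
        b1_hat, b2_hat = idx // 2, idx % 2            # recover each user's bit from the pair
        print("user-1 BER:", np.mean(b1_hat != b1), "user-2 BER:", np.mean(b2_hat != b2))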

    Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

    We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. An example of such a trade-off as a function of the block length n is the following: with computational resource (space or time) per source sample of O((n/\log n)^2), for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in n. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates. (14 pages; to appear in IEEE Transactions on Information Theory.)
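    As a rough illustration of the sequential column-selection idea (not the paper's exact algorithm or parameter choices), the sketch below builds a Gaussian design matrix with L sections of M columns each and, in every section, picks the column most aligned with the current residual. The per-section coefficients `c` use an arbitrary geometric schedule here, not the allocation analyzed in the paper, and all sizes are toy values.

        import numpy as np

        rng = np.random.default_rng(1)
        n, L, M = 64, 16, 32                           # block length, sections, columns per section
        A = rng.normal(size=(n, L * M)) / np.sqrt(n)   # Gaussian design matrix
        c = 0.9 ** np.arange(L)                        # per-section coefficients (toy schedule)

        x = rng.normal(size=n)                         # i.i.d. Gaussian source block
        r = x.copy()
        for l in range(L):                             # successively approximate the residual
            sec = A[:, l * M:(l + 1) * M]
            j = int(np.argmax(sec.T @ r))              # column best aligned with the residual
            r = r - c[l] * sec[:, j]                   # subtract the chosen scaled column

        rate = L * np.log2(M) / n                      # bits per source sample
        print("rate:", rate, "distortion:", np.mean(r ** 2))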

    General embedded quantization for wavelet-based lossy image coding

    Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach in wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance, but the quantizer established by this scheme allows little variation. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates, achieving optimal coding performance. Practical GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated through experimental results obtained within the framework of modern image coding systems.
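    For reference, the USDQ+BPC baseline that GEQ generalizes can be sketched as follows: deadzone quantization of the coefficient magnitudes, where decoding only the upper bitplanes is equivalent to having quantized with a step size doubled once per undecoded plane. This is an illustrative sketch of the baseline only, with assumed toy values; GEQ itself instead lets the step sizes of the multistage quantizer vary freely across stages.

        import numpy as np

        def usdq(w, step):
            # Uniform scalar deadzone quantization: sign plus magnitude index.
            return np.sign(w), np.floor(np.abs(w) / step).astype(int)

        def dequant(sign, q, step):
            # Mid-point reconstruction; indices equal to 0 stay in the deadzone.
            return sign * np.where(q > 0, (q + 0.5) * step, 0.0)

        w = np.array([7.3, -0.4, 2.9, -12.1, 0.9])     # toy wavelet coefficients
        sign, q = usdq(w, step=1.0)

        # Decoding only the bitplanes down to plane p is equivalent to having
        # quantized with the coarser step size 2**p.
        for p in (3, 2, 1, 0):
            print("plane", p, dequant(sign, q >> p, step=2.0 ** p))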

    2-step scalar deadzone quantization for bitplane image coding

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
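    A minimal sketch of the two-step-size idea, under assumed values: magnitudes below a threshold T are quantized with a fine step D1 and magnitudes above it with a coarser step D2, so the sparse high-magnitude coefficients need fewer magnitude bits and fewer bitplane passes. The threshold, steps, and index mapping below are illustrative and are not the paper's exact 2SDQ definition or its rate-distortion adjustment.

        import numpy as np

        T, D1, D2 = 8.0, 1.0, 4.0              # threshold and the two step sizes (toy values)
        QT = int(np.floor(T / D1))             # index at which the coarse step takes over

        def quantize(w):
            # Fine step D1 below the threshold, coarser step D2 above it.
            a = np.abs(w)
            q = np.where(a < T, np.floor(a / D1), QT + np.floor((a - T) / D2))
            return (np.sign(w) * q).astype(int)

        def dequantize(q):
            # The region is inferable from the index: |q| < QT means the fine region.
            a = np.abs(q)
            rec = np.where(a < QT,
                           np.where(a > 0, (a + 0.5) * D1, 0.0),   # fine region (with deadzone)
                           T + (a - QT + 0.5) * D2)                # coarse region
            return np.sign(q) * rec

        w = np.array([0.3, 2.7, -5.1, 9.4, -40.2])
        print(dequantize(quantize(w)))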

    Trellis-coded quantization with unequal distortion.

    Kwong Cheuk Fai. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 72-74). Abstracts in English and Chinese.
    Contents:
    Chapter 1 Introduction: Quantization; Trellis-Coded Quantization; Thesis Organization
    Chapter 2 Trellis-Coded Modulation: Convolutional Codes (Generator Polynomials and Generator Matrix; Circuit Diagram; State Transition Diagram; Trellis Diagram); Trellis-Coded Modulation (Uncoded Transmission versus TCM; Trellis Representation; Ungerboeck Codes; Set Partitioning; Decoding for TCM)
    Chapter 3 Trellis-Coded Quantization: Scalar Trellis-Coded Quantization; Trellis-Coded Vector Quantization (Set Partitioning in TCVQ; Codebook Optimization; Numerical Data and Discussions)
    Chapter 4 Trellis-Coded Quantization with Unequal Distortion: Design Procedures; Fine and Coarse Codebooks; Set Partitioning; Codebook Optimization; Decoding for Unequal Distortion TCVQ
    Chapter 5 Unequal Distortion TCVQ on Memoryless Gaussian Source: Memoryless Gaussian Source; Set Partitioning of Codewords of Memoryless Gaussian Source; Numerical Results and Discussions
    Chapter 6 Unequal Distortion TCVQ on Markov Gaussian Source: Markov Gaussian Source; Set Partitioning of Codewords of Markov Gaussian Source; Numerical Results and Discussions
    Chapter 7 Conclusions
    Bibliography
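    The record above lists only the thesis outline; as background for it, a toy scalar trellis-coded quantizer in the spirit of Chapter 3 can be sketched as follows. The 4-state trellis, set partition, and 8-level codebook are illustrative choices for the sketch, not the specific Ungerboeck code or the fine/coarse codebooks designed in the thesis.

        import numpy as np

        codebook = np.arange(-3.5, 4.0, 1.0)            # 8 levels for 2 bits/sample
        subsets = [codebook[i::4] for i in range(4)]    # subsets D0..D3 by cyclic partition

        # Branches per state: (next_state, subset index) for input bits 0 and 1.
        trellis = {0: [(0, 0), (1, 2)],
                   1: [(2, 1), (3, 3)],
                   2: [(0, 2), (1, 0)],
                   3: [(2, 3), (3, 1)]}

        def tcq_encode(x):
            # Viterbi search: the path metric is accumulated squared error, and each
            # branch uses the best codeword from its subset for the current sample.
            n = len(x)
            cost = np.full(4, np.inf)
            cost[0] = 0.0
            back = np.zeros((n, 4, 2))                  # (previous state, chosen codeword)
            for t in range(n):
                new_cost = np.full(4, np.inf)
                for s in range(4):
                    if not np.isfinite(cost[s]):
                        continue
                    for ns, d in trellis[s]:
                        c = subsets[d][np.argmin((subsets[d] - x[t]) ** 2)]
                        m = cost[s] + (x[t] - c) ** 2
                        if m < new_cost[ns]:
                            new_cost[ns] = m
                            back[t, ns] = (s, c)
                cost = new_cost
            s = int(np.argmin(cost))                    # traceback from the best end state
            x_hat = np.zeros(n)
            for t in range(n - 1, -1, -1):
                ps, x_hat[t] = back[t, s]
                s = int(ps)
            return x_hat, cost.min() / n

        x = np.random.default_rng(2).normal(size=2000)
        x_hat, mse = tcq_encode(x)
        print("MSE per sample:", mse)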