    Fast quantizing and decoding algorithms for lattice quantizers and codes

    Better Lattice Quantizers Constructed from Complex Integers

    Real-valued lattices and complex-valued lattices are mutually convertible, so we can take advantage of algebraic integers to define good lattice quantizers in the real-valued domain. In this paper, we adopt complex integers to define generalized checkerboard lattices, especially $\mathcal{E}_{m}$ and $\mathcal{E}_{m}^+$ defined by Eisenstein integers. Using $\mathcal{E}_{m}^+$, we report the best lattice quantizers in dimensions 14, 18, 20, and 22. Their product lattices with the integers $\mathbb{Z}$ also yield better quantizers in dimensions 15, 19, 21, and 23. Conway-Sloane-type fast decoding algorithms for $\mathcal{E}_{m}$ and $\mathcal{E}_{m}^+$ are given. Comment: 7 pages
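
    As a concrete illustration of the real/complex conversion this abstract relies on, the sketch below maps a lattice defined over the Eisenstein integers to an equivalent real-valued generator matrix and estimates its normalized second moment by Monte Carlo. It is not the paper's $\mathcal{E}_{m}$ construction; the toy generator, the Babai-plus-offset quantizer, and all names are illustrative assumptions.

```python
import itertools
import numpy as np

omega = np.exp(2j * np.pi / 3)          # primitive Eisenstein unit, w^2 + w + 1 = 0

def real_generator(Gc):
    """Real (2m x 2m) generator for the Z[omega]-lattice with complex generator Gc.
    Each complex basis vector g contributes two real rows: embed(g) and embed(omega*g)."""
    def embed(v):                        # C^m -> R^(2m), interleaving Re/Im parts
        return np.column_stack([v.real, v.imag]).ravel()
    return np.array([row for g in Gc for row in (embed(g), embed(omega * g))])

def nsm_estimate(G, n_samples=20000, seed=0):
    """Monte-Carlo estimate of the normalized second moment of the lattice with
    generator G, using an offset search around Babai rounding as the quantizer
    (adequate for small, well-conditioned bases; not a general CVP solver)."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    Ginv = np.linalg.inv(G)
    X = rng.random((n_samples, n)) @ G            # uniform over one fundamental cell
    base = np.rint(X @ Ginv)                      # Babai (round-off) coefficients
    best = np.full(n_samples, np.inf)
    for off in itertools.product((-1, 0, 1), repeat=n):
        d = np.sum((X - (base + np.array(off)) @ G) ** 2, axis=1)
        best = np.minimum(best, d)
    vol = abs(np.linalg.det(G))
    return best.mean() / n / vol ** (2.0 / n)

# Toy check: the Eisenstein integers themselves (complex dimension 1) map to the
# hexagonal lattice A2, whose NSM is 5/(36*sqrt(3)) ~ 0.0802.
print(nsm_estimate(real_generator(np.array([[1.0 + 0.0j]]))))
```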

    Multiple Description Vector Quantization with Lattice Codebooks: Design and Analysis

    The problem of designing a multiple description vector quantizer with lattice codebook $\Lambda$ is considered. A general solution is given to a labeling problem which plays a crucial role in the design of such quantizers. Numerical performance results are obtained for quantizers based on the lattices $A_2$ and $\mathbb{Z}^i$, $i=1,2,4,8$, that make use of this labeling algorithm. The high-rate squared-error distortions for this family of $L$-dimensional vector quantizers are then analyzed for a memoryless source with probability density function $p$ and differential entropy $h(p) < \infty$. For any $a \in (0,1)$ and rate pair $(R,R)$, it is shown that the two-channel distortion $d_0$ and the channel 1 (or channel 2) distortion $d_s$ satisfy $\lim_{R \to \infty} d_0\, 2^{2R(1+a)} = \tfrac{1}{4} G(\Lambda)\, 2^{2h(p)}$ and $\lim_{R \to \infty} d_s\, 2^{2R(1-a)} = G(S_L)\, 2^{2h(p)}$, where $G(\Lambda)$ is the normalized second moment of a Voronoi cell of the lattice $\Lambda$ and $G(S_L)$ is the normalized second moment of a sphere in $L$ dimensions. Comment: 46 pages, 14 figures
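
    The two limits above can be read off numerically. The sketch below evaluates the corresponding high-rate approximations for a unit-variance Gaussian source (so $2^{2h(p)} = 2\pi e$) with the hexagonal lattice $A_2$ as $\Lambda$ and $L = 2$; the rate and exponent values are arbitrary illustrations, and at finite rate these are approximations rather than the stated limits.

```python
import math

def G_sphere(L):
    # Normalized second moment of an L-dimensional ball:
    # G(S_L) = Gamma(L/2 + 1)^(2/L) / ((L + 2) * pi); for L = 1 this gives 1/12.
    return math.gamma(L / 2 + 1) ** (2 / L) / ((L + 2) * math.pi)

G_A2 = 5 / (36 * math.sqrt(3))                   # known NSM of the hexagonal lattice A2
h_bits = 0.5 * math.log2(2 * math.pi * math.e)   # h(p) for a unit-variance Gaussian, in bits
pow_2h = 2 ** (2 * h_bits)                       # equals 2*pi*e here

R, a, L = 4.0, 0.5, 2                            # illustrative rate pair (R, R) and exponent a

d0 = 0.25 * G_A2 * pow_2h * 2 ** (-2 * R * (1 + a))   # central (two-channel) distortion
ds = G_sphere(L) * pow_2h * 2 ** (-2 * R * (1 - a))   # side (single-channel) distortion
print(f"d0 ~ {d0:.3e}, d1 = d2 ~ {ds:.3e}")
```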

    Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

    We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. An example of such a trade-off as a function of the block length $n$ is the following. With computational resource (space or time) per source sample of $O((n/\log n)^2)$, for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in $n$. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates. Comment: 14 pages, to appear in IEEE Transactions on Information Theory
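
    The encoder described above admits a compact greedy implementation: the design matrix is split into L sections of M columns, the codeword takes one scaled column per section, and each step picks the column best aligned with the current residual. The sketch below follows that outline; the coefficient schedule (a successive-refinement-style geometric decay) and all parameter values are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

n, L, M = 64, 16, 32                    # block length, sections, columns per section
R = L * np.log2(M) / n                  # rate in bits per source sample
A = rng.standard_normal((n, L * M)) / np.sqrt(n)   # design matrix, columns of norm ~1

sigma2 = 1.0
x = rng.standard_normal(n) * np.sqrt(sigma2)       # i.i.d. Gaussian source block

# Illustrative per-section coefficients (assumption, not the paper's schedule):
# each section ideally carves off a 2^(-2R/L) fraction of the residual variance.
coeffs = np.sqrt(n * sigma2 * (1 - 2 ** (-2 * R / L))) * 2 ** (-R * np.arange(L) / L)

r = x.copy()
chosen = []
for i in range(L):
    sec = A[:, i * M:(i + 1) * M]       # columns of section i
    j = int(np.argmax(sec.T @ r))       # column most aligned with the residual
    r = r - coeffs[i] * sec[:, j]       # peel off that column's contribution
    chosen.append(j)                    # (j, i) pairs index the codeword

x_hat = x - r                           # reconstruction = sum of chosen scaled columns
D = np.mean(r ** 2)
print(f"R = {R:.2f} bits/sample, empirical distortion {D:.3f}, "
      f"Gaussian D(R) = {sigma2 * 2 ** (-2 * R):.3f}")
```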

    Designing Voronoi Constellations to Minimize Bit Error Rate

    In a classical 1983 paper, Conway and Sloane presented fast encoding and decoding algorithms for a special case of Voronoi constellations (VCs), for which the shaping lattice is a scaled copy of the coding lattice. Feng generalized their encoding and decoding methods to arbitrary VCs. Less general algorithms were also proposed by Kurkoski and Ferdinand, respectively, for VCs with some constraints on their coding and shaping lattices. In this work, we design VCs with a cubic coding lattice based on Kurkoski's encoding and decoding algorithms. The designed VCs achieve up to 1.03 dB shaping gains with a lower complexity than Conway and Sloane's scaled VCs. To minimize the bit error rate (BER), pseudo-Gray labeling of constellation points is applied. In uncoded systems, the designed VCs reduce the required SNR by up to 1.1 dB at the same BER, compared with the same VCs using Feng's and Ferdinand's algorithms. In coded systems, the designed VCs are able to achieve lower BER than the scaled VCs at the same SNR. In addition, a Gray penalty estimation method for such VCs of very large size is introduced.
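
    At the core of any Voronoi constellation is the modulo-shaping step: a coding-lattice point is reduced into the Voronoi region of the shaping lattice. The sketch below shows this step with the cubic coding lattice $\mathbb{Z}^n$ and a scaled $D_n$ shaping lattice, chosen only because $D_n$ has a simple exact nearest-point rule; the integer-to-point labeling and the pseudo-Gray relabeling that the paper designs are not shown.

```python
import numpy as np

def quantize_Dn(y):
    """Exact nearest point of the D_n lattice (integer vectors with even sum),
    via Conway & Sloane's rule: round every coordinate, and if the coordinate
    sum is odd, re-round the coordinate with the largest rounding error the
    other way."""
    f = np.rint(y)
    if int(f.sum()) % 2 == 0:
        return f
    err = y - f
    k = int(np.argmax(np.abs(err)))
    f[k] += 1.0 if err[k] > 0 else -1.0
    return f

def vc_point(c, scale):
    """Map a coding-lattice point c in Z^n into the Voronoi region of scale*D_n,
    i.e. x = c - Q_s(c), so x is congruent to c modulo the shaping lattice."""
    return c - scale * quantize_Dn(c / scale)

rng = np.random.default_rng(1)
n, scale = 4, 4                                    # dimension and shaping-lattice scaling
c = rng.integers(-50, 50, size=n).astype(float)    # an arbitrary Z^n coding-lattice point
x = vc_point(c, scale)
print(c, "->", x)                                  # x lies in the shaping Voronoi region
```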