
    Efficient Universal Noiseless Source Codes

    Although the existence of universal noiseless variable-rate codes for the class of discrete stationary ergodic sources has previously been established, very few practical universal encoding methods are available. Efficient, implementable universal source coding techniques are discussed in this paper. Results are presented on source codes for which a small value of the maximum redundancy is achieved with a relatively short block length. A constructive proof of the existence of universal noiseless codes for discrete stationary sources is first presented. The proof is shown to provide a method for obtaining efficient universal noiseless variable-rate codes for various classes of sources. For memoryless sources, upper and lower bounds are obtained for the minimax redundancy as a function of the block length of the code. Several techniques for constructing universal noiseless source codes for memoryless sources are presented and their redundancies are compared with the bounds. Consideration is given to possible applications to data compression for certain nonstationary sources.
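    As a rough, self-contained illustration of the redundancy quantity discussed above (not a method from the paper), the sketch below builds a Huffman code for a memoryless source with known statistics and reports its expected length minus the source entropy; the paper's universal setting instead bounds the worst-case (minimax) redundancy when the source distribution is unknown.

    ```python
    import heapq
    from math import log2

    def huffman_lengths(probs):
        """Return codeword lengths of a binary Huffman code for the given pmf."""
        heap = [(p, [i]) for i, p in enumerate(probs)]
        lengths = [0] * len(probs)
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, s1 = heapq.heappop(heap)
            p2, s2 = heapq.heappop(heap)
            for i in s1 + s2:          # every symbol in the merged subtree gets one more bit
                lengths[i] += 1
            heapq.heappush(heap, (p1 + p2, s1 + s2))
        return lengths

    def redundancy(probs):
        """Expected codeword length minus entropy (bits/symbol) for a Huffman code."""
        lengths = huffman_lengths(probs)
        avg_len = sum(p * l for p, l in zip(probs, lengths))
        entropy = -sum(p * log2(p) for p in probs if p > 0)
        return avg_len - entropy

    # Example memoryless sources
    print(redundancy([0.5, 0.25, 0.125, 0.125]))  # 0.0 -- dyadic pmf, zero redundancy
    print(redundancy([0.6, 0.2, 0.1, 0.1]))       # small positive redundancy
    ```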

    Uniquely decodable multiple access source codes

    The Slepian-Wolf bound raises interest in lossless code design for multiple access networks. Previous work treats instantaneous codes. We generalize the Sardinas-Patterson test and bound the achievable rate region for uniquely decodable codes. The Kraft inequality is also generalized to give necessary conditions on the codeword lengths of uniquely decodable side-information source codes.
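    For context, here is a minimal sketch of the classical single-encoder Sardinas-Patterson test that the abstract generalizes: a code is uniquely decodable iff no dangling suffix is itself a codeword. This is the textbook version, not the paper's multiple-access extension.

    ```python
    def is_uniquely_decodable(codewords):
        """Classical Sardinas-Patterson test for a single variable-length code."""
        codeset = set(codewords)

        def dangling_suffixes(a, b):
            # Suffixes left over when a string in `a` is a proper prefix of a string in `b`.
            return {y[len(x):] for x in a for y in b if y.startswith(x) and len(y) > len(x)}

        # S1: dangling suffixes from comparing codewords with each other.
        s = dangling_suffixes(codeset, codeset)
        seen = set()
        while s and not (s & codeset):
            seen |= s
            # S_{i+1}: compare the current suffix set against the codebook in both directions.
            s = dangling_suffixes(s, codeset) | dangling_suffixes(codeset, s)
            s -= seen                      # stop once no new suffixes appear
        return not (s & codeset)

    print(is_uniquely_decodable(["0", "01", "11"]))   # True (uniquely decodable, not prefix-free)
    print(is_uniquely_decodable(["0", "1", "01"]))    # False ("01" parses as "01" or "0"+"1")
    ```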

    On the rate loss of multiple description source codes

    The rate loss of a multiresolution source code (MRSC) describes the difference between the rate needed to achieve distortion $D_i$ in resolution $i$ and the rate-distortion function $R(D_i)$. This paper generalizes the rate loss definition to multiple description source codes (MDSCs) and bounds the MDSC rate loss for arbitrary memoryless sources. For a two-description MDSC (2DSC), the rate loss of description $i$ with distortion $D_i$ is defined as $L_i = R_i - R(D_i)$, $i = 1, 2$, where $R_i$ is the rate of the $i$th description; the joint rate loss associated with decoding the two descriptions together to achieve central distortion $D_0$ is measured either as $L_0 = R_1 + R_2 - R(D_0)$ or as $L_{12} = L_1 + L_2$. We show that for any memoryless source with variance $\sigma^2$, there exists a 2DSC for that source with $L_1 \le 1/2$ or $L_2 \le 1/2$ and a) $L_0 \le 1$ if $D_0 \le D_1 + D_2 - \sigma^2$, b) $L_{12} \le 1$ if $1/D_0 \le 1/D_1 + 1/D_2 - 1/\sigma^2$, c) $L_0 \le L_{G0} + 1.5$ and $L_{12} \le L_{G12} + 1$ otherwise, where $L_{G0}$ and $L_{G12}$ are the joint rate losses of a Gaussian source with variance $\sigma^2$.
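    A small numerical illustration of these definitions (not a result from the paper), using the Gaussian rate-distortion function $R(D) = \frac{1}{2}\log_2(\sigma^2/D)$ as the reference and purely hypothetical operating points $(R_1, D_1)$, $(R_2, D_2)$, $D_0$:

    ```python
    from math import log2

    def R_gauss(D, var=1.0):
        """Rate-distortion function of a Gaussian source, in bits per sample."""
        return 0.5 * log2(var / D) if D < var else 0.0

    # Hypothetical two-description operating points (illustrative numbers only).
    R1, D1 = 1.0, 0.30   # rate and distortion of description 1
    R2, D2 = 1.2, 0.25   # rate and distortion of description 2
    D0 = 0.10            # central distortion when both descriptions are received

    L1 = R1 - R_gauss(D1)        # rate loss of description 1
    L2 = R2 - R_gauss(D2)        # rate loss of description 2
    L0 = R1 + R2 - R_gauss(D0)   # joint rate loss, first definition
    L12 = L1 + L2                # joint rate loss, second definition

    print(f"L1={L1:.3f}, L2={L2:.3f}, L0={L0:.3f}, L12={L12:.3f}")
    ```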

    Iterative joint design of source codes and multiresolution channel codes

    We propose an iterative design algorithm for jointly optimizing source and channel codes. The joint design combines channel-optimized vector quantization (COVQ) for the source code with rate-compatible punctured convolutional (RCPC) coding for the channel code. Our objective is to minimize the average end-to-end distortion. For a given channel SNR and transmission rate, our joint source and channel code design achieves an optimal allocation of bits between the source and channel coders. This optimal allocation can reduce distortion by up to 6 dB over suboptimal allocations for the source data set considered. We also compare the distortion of our joint iterative design with that of two suboptimal design techniques: COVQ optimized for a given channel bit-error probability, and RCPC channel coding optimized for a given vector quantizer. We conclude by relaxing the fixed transmission rate constraint and jointly optimizing the transmission rate, source code, and channel code.
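    A minimal sketch of the bit-allocation idea described above, assuming hypothetical stand-ins `covq_distortion` and `end_to_end_distortion` for a trained COVQ and a simulated RCPC-coded channel; it simply sweeps the split of a fixed total rate and keeps the allocation with the smallest end-to-end distortion, rather than reproducing the paper's iterative design.

    ```python
    # Sketch only: the distortion functions below are hypothetical interfaces, not a
    # real COVQ/RCPC implementation.
    def best_allocation(total_rate_bits, channel_snr_db,
                        covq_distortion, end_to_end_distortion):
        best = (float("inf"), None)
        for source_bits in range(1, total_rate_bits):
            channel_bits = total_rate_bits - source_bits       # redundancy left for the RCPC code
            d = end_to_end_distortion(
                covq_distortion(source_bits),                  # quantizer distortion at this rate
                channel_bits, channel_snr_db)                  # plus channel-induced distortion
            if d < best[0]:
                best = (d, source_bits)
        return best  # (distortion, source bits); the remaining bits go to the channel code
    ```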

    Improved bounds for the rate loss of multiresolution source codes

    We present new bounds for the rate loss of multiresolution source codes (MRSCs). Considering an $M$-resolution code, the rate loss at the $i$th resolution with distortion $D_i$ is defined as $L_i = R_i - R(D_i)$, where $R_i$ is the rate achievable by the MRSC at stage $i$. This rate loss describes the performance degradation of the MRSC compared to the best single-resolution code with the same distortion. For two-resolution source codes, there are three scenarios of particular interest: (i) when both resolutions are equally important; (ii) when the rate loss at the first resolution is 0 ($L_1 = 0$); (iii) when the rate loss at the second resolution is 0 ($L_2 = 0$). The work of Lastras and Berger (see ibid., vol. 47, p. 918-26, Mar. 2001) gives constant upper bounds for the rate loss of an arbitrary memoryless source in scenarios (i) and (ii) and an asymptotic bound for scenario (iii) as $D_2$ approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii) $L_1 < 1.1610$ for all $D_2 < 0.7250$; (c) tighten the Lastras-Berger bound for scenario (i) from $L_i \le 1/2$ to $L_i < 0.3802$, $i \in \{1, 2\}$; and (d) generalize the bounds for scenarios (ii) and (iii) to $M$-resolution codes with $M \ge 2$. We also present upper bounds for the rate losses of additive MRSCs (AMRSCs). An AMRSC is a special MRSC where each resolution describes an incremental reproduction and the $k$th-resolution reconstruction equals the sum of the first $k$ incremental reproductions. We obtain two bounds on the rate loss of AMRSCs: one primarily good for low-rate coding and another which depends on the source entropy.
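    To make the AMRSC structure concrete, here is a toy additive-refinement sketch (illustration only, with arbitrary uniform quantizers, not the codes analyzed in the paper): each stage quantizes the residual left by the previous stages, and the $k$th reconstruction is the sum of the first $k$ incremental reproductions.

    ```python
    import numpy as np

    def uniform_quantize(x, step):
        """Scalar uniform quantizer used as a stand-in for each stage's codebook."""
        return step * np.round(x / step)

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)        # unit-variance memoryless Gaussian source
    steps = [1.0, 0.5, 0.25]           # finer quantizer step at each successive resolution

    recon = np.zeros_like(x)
    for k, step in enumerate(steps, start=1):
        increment = uniform_quantize(x - recon, step)   # quantize the current residual
        recon += increment                              # additive reconstruction
        print(f"resolution {k}: distortion = {np.mean((x - recon) ** 2):.4f}")
    ```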

    Concentric Permutation Source Codes

    Permutation codes are a class of structured vector quantizers with a computationally simple encoding procedure based on sorting the scalar components. Using a codebook comprising several permutation codes as subcodes preserves the simplicity of encoding while increasing the number of rate-distortion operating points, improving the convex hull of operating points, and increasing design complexity. We show that when the subcodes are designed with the same composition, optimization of the codebook reduces to a lower-dimensional vector quantizer design within a single cone. Heuristics for reducing design complexity are presented, including an optimization of the rate allocation in a shape-gain vector quantizer with a gain-dependent wrapped spherical shape codebook.
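    A minimal sketch of ordinary (Variant I) permutation-code encoding, the sorting-based procedure the abstract builds on; the initial-vector values `mu` and multiplicities `n` below are arbitrary illustrative choices, and the concentric multi-subcode construction of the paper is not shown.

    ```python
    import numpy as np

    def permutation_encode(x, mu, n):
        """Nearest-codeword encoding under squared error: the n[0] largest
        components of x get value mu[0], the next n[1] get mu[1], and so on.

        Assumes mu is sorted in decreasing order and sum(n) == len(x).
        """
        x = np.asarray(x, dtype=float)
        order = np.argsort(-x)            # component positions, largest value first
        values = np.repeat(mu, n)         # mu[0] repeated n[0] times, mu[1] n[1] times, ...
        codeword = np.empty_like(x)
        codeword[order] = values          # assign codebook values by rank
        return codeword

    x = np.array([0.3, -1.2, 2.1, 0.0, -0.4, 1.1])
    mu = [1.5, 0.0, -1.5]                 # illustrative codebook values
    n = [2, 2, 2]                         # how many components take each value
    print(permutation_encode(x, mu, n))   # -> [ 0.  -1.5  1.5  0.  -1.5  1.5]
    ```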