
    Network vector quantization

    We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, here we present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
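    As context for the design procedure described above, here is a minimal sketch of the classical generalized Lloyd (LBG) iteration on which such locally optimal quantizer designs are built, assuming squared-error distortion; the network-specific encoders, decoders, and distortion terms treated in the paper are not modeled, and all names are illustrative.

```python
import numpy as np

def lloyd_vq(samples, k, iters=50, seed=0):
    """Classical generalized Lloyd (LBG) design of a k-codeword vector quantizer.

    samples: (n, d) training vectors.  Returns a (k, d) codebook that is
    locally optimal for mean-squared error on the training set.
    """
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), size=k, replace=False)].copy()
    for _ in range(iters):
        # Nearest-neighbor (encoder) step: map each vector to its closest codeword.
        d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        # Centroid (decoder) step: move each codeword to the mean of its cell.
        for j in range(k):
            cell = samples[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

# Toy usage: an 8-codeword quantizer for 2-D Gaussian data.
data = np.random.default_rng(1).normal(size=(5000, 2))
codebook = lloyd_vq(data, k=8)
```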

    Deep Multiple Description Coding by Learning Scalar Quantization

    In this paper, we propose a deep multiple description coding framework whose quantizers are adaptively learned by minimizing a multiple description compressive loss. Firstly, the framework is built on auto-encoder networks, consisting of a multiple description multi-scale dilated encoder network and multiple description decoder networks. Secondly, two entropy estimation networks are learned to estimate the information content of the quantized tensors, which further supervises the learning of the multiple description encoder network so that it represents the input image faithfully. Thirdly, a pair of scalar quantizers, accompanied by two importance-indicator maps, is learned automatically in an end-to-end self-supervised way. Finally, in addition to the multiple description reconstruction loss, a multiple description structural dissimilarity distance loss is imposed on the multiple description decoded images in the pixel domain, rather than on feature tensors in the feature domain, to obtain diversified multiple descriptions. Testing on two commonly used datasets verifies that our method outperforms several state-of-the-art multiple description coding approaches in terms of coding efficiency. Comment: 8 pages, 4 figures. (DCC 2019: Data Compression Conference). Testing datasets for "Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning" can be found at https://github.com/mdcnn/Deep-Multiple-Description-Codin
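    The loss structure described above can be illustrated schematically. The sketch below assumes plain NumPy arrays standing in for decoded images and only shows how side, central, and diversity terms might be combined; the learned encoder/decoder networks, the entropy estimation networks, and the paper's actual structural dissimilarity measure are not reproduced, and the weighting constant is illustrative.

```python
import numpy as np

def md_losses(x, x_side1, x_side2, lam=0.1):
    """Toy combination of multiple-description losses for one decoded image.

    x:       original image (any float array)
    x_side1: image decoded from description 1 alone
    x_side2: image decoded from description 2 alone
    The central reconstruction is taken here as the average of the two.
    """
    x_central = 0.5 * (x_side1 + x_side2)
    side_loss = np.mean((x - x_side1) ** 2) + np.mean((x - x_side2) ** 2)
    central_loss = np.mean((x - x_central) ** 2)
    # Crude stand-in for the structural-dissimilarity term: reward the two
    # side reconstructions for differing, which encourages diverse descriptions.
    diversity_penalty = -np.mean((x_side1 - x_side2) ** 2)
    return central_loss + side_loss + lam * diversity_penalty

# Toy usage with random stand-ins for decoded images.
rng = np.random.default_rng(0)
x = rng.random((64, 64))
loss = md_losses(x, x + 0.05 * rng.standard_normal((64, 64)),
                 x + 0.05 * rng.standard_normal((64, 64)))
```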

    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piece-wise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting. Comment: Revised, restructured, significantly shortened; minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
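    A toy, first-order scalar version of the ingredients named above (oversampling, subtractive dither, a noise-shaping error-feedback loop) is sketched below; the paper's scheme uses high-dimensional lattice quantizers and an optimized higher-order shaping filter, neither of which is modeled here, and the step size and oversampling factor are arbitrary.

```python
import numpy as np

def dithered_delta_sigma(x, oversample=2, step=0.25, seed=0):
    """First-order Delta-Sigma quantization of a scalar sequence with
    subtractive dither.  Returns the quantized, oversampled sequence."""
    rng = np.random.default_rng(seed)
    u = np.repeat(x, oversample)                      # crude sample-and-hold oversampling
    dither = rng.uniform(-step / 2, step / 2, size=u.shape)
    y = np.empty_like(u)
    fb = 0.0                                          # fed-back quantization error
    for n, un in enumerate(u):
        w = un + fb                                   # error-feedback (noise-shaping) input
        q = step * np.round((w + dither[n]) / step)   # dithered uniform quantizer
        y[n] = q - dither[n]                          # subtract the dither at the decoder
        fb = w - y[n]                                 # first-order shaping: feed back -e[n]
    return y

# Two descriptions are obtained by splitting the oversampled output into
# even- and odd-indexed samples (the staggering exploited in the paper).
signal = np.sin(np.linspace(0, 4 * np.pi, 64))
out = dithered_delta_sigma(signal)
desc0, desc1 = out[0::2], out[1::2]
```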

    Multiple Description Quantization via Gram-Schmidt Orthogonalization

    The multiple description (MD) problem has received considerable attention as a model of information transmission over unreliable channels. A general framework for designing efficient multiple description quantization schemes is proposed in this paper. We provide a systematic treatment of the El Gamal-Cover (EGC) achievable MD rate-distortion region, and show that any point in the EGC region can be achieved via a successive quantization scheme along with quantization splitting. For the quadratic Gaussian case, the proposed scheme has an intrinsic connection with Gram-Schmidt orthogonalization, which implies that the whole Gaussian MD rate-distortion region is achievable with a sequential dithered lattice-based quantization scheme as the dimension of the (optimal) lattice quantizers becomes large. Moreover, this scheme is shown to be universal for all i.i.d. smooth sources, with performance no worse than that for an i.i.d. Gaussian source of the same variance, and asymptotically optimal at high resolution. A class of low-complexity MD scalar quantizers within the proposed general framework is also constructed and illustrated geometrically; its performance is analyzed in the high-resolution regime and exhibits a noticeable improvement over existing MD scalar quantization schemes. Comment: 48 pages; submitted to the IEEE Transactions on Information Theory.
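    For orientation, the sketch below shows the textbook two-description scalar quantizer built from a pair of staggered uniform quantizers, the kind of baseline that low-complexity MD scalar quantizers are measured against; it is not the paper's Gram-Schmidt construction, and the step size is arbitrary.

```python
import numpy as np

def md_staggered_sq(x, step=1.0):
    """Two-description scalar quantization with staggered uniform quantizers.

    Each description alone localizes a sample to a cell of width `step`;
    the two indices together pin it down to a cell of width step/2.
    """
    i1 = np.floor(x / step)               # description 1: plain uniform quantizer
    i2 = np.floor(x / step - 0.5)         # description 2: offset by half a cell
    side1 = (i1 + 0.5) * step             # side decoder 1: cell midpoint
    side2 = (i2 + 1.0) * step             # side decoder 2: midpoint of the offset cell
    central = 0.5 * (side1 + side2)       # central decoder: midpoint of the intersection
    return side1, side2, central

x = np.random.default_rng(0).normal(size=10000)
s1, s2, c = md_staggered_sq(x)
# Central MSE is roughly a quarter of the side MSE for this construction.
print(np.mean((x - s1) ** 2), np.mean((x - c) ** 2))
```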

    Zero-Delay Multiple Descriptions of Stationary Scalar Gauss-Markov Sources

    In this paper, we introduce the zero-delay multiple-description problem, where an encoder constructs two descriptions and the decoders receive a subset of these descriptions. The encoder and decoders are causal and operate under the restriction of zero delay, which implies that at each time instance the encoder must generate codewords that the decoders can decode using only the current and past codewords. For the case of discrete-time stationary scalar Gauss-Markov sources and quadratic distortion constraints, we present information-theoretic lower bounds on the average sum-rate in terms of the directed and mutual information rate between the source and the decoder reproductions. Furthermore, we show that the optimal test channel is in this case Gaussian, and that it can be realized by a feedback coding scheme that utilizes prediction and correlated Gaussian noises. Operational achievability results are considered in the high-rate regime using a simple differential pulse code modulation (DPCM) scheme with staggered quantizers. Using this scheme, we achieve operational rates within 0.415 bits/sample/description of the theoretical lower bounds for varying description rates.
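    A toy version of the DPCM-with-staggered-quantizers idea mentioned above is sketched below; it glosses over how each side decoder tracks the prediction loop, which the paper handles carefully, and the AR(1) coefficient and step size are illustrative.

```python
import numpy as np

def zero_delay_dpcm_md(x, a=0.9, step=0.5):
    """Toy zero-delay DPCM with two staggered quantizers on the prediction error.

    x is a realization of an AR(1) (Gauss-Markov) source with coefficient a.
    Returns the central reconstructions; each description consists of one
    quantizer's index stream.
    """
    central = np.zeros_like(x)
    pred = 0.0
    for n, xn in enumerate(x):
        e = xn - pred                                  # prediction error (innovation)
        q1 = step * (np.floor(e / step) + 0.5)         # description 1: uniform quantizer
        q2 = step * (np.floor(e / step - 0.5) + 1.0)   # description 2: staggered by step/2
        e_hat = 0.5 * (q1 + q2)                        # central decoder combines both
        central[n] = pred + e_hat
        pred = a * central[n]                          # zero delay: predict from current sample
    return central

# Toy Gauss-Markov source and its central reconstruction.
rng = np.random.default_rng(0)
src = np.zeros(2000)
for n in range(1, len(src)):
    src[n] = 0.9 * src[n - 1] + rng.standard_normal()
rec = zero_delay_dpcm_md(src)
```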

    Functional quantization

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008, by Vinith Misra. Includes bibliographical references (p. 119-121). Data is rarely obtained for its own sake; oftentimes, it is a function of the data that we care about. Traditional data compression and quantization techniques, designed to recreate or approximate the data itself, gloss over this point. Are performance gains possible if source coding accounts for the user's function? How about when the encoders cannot themselves compute the function? We introduce the notion of functional quantization and use the tools of high-resolution analysis to get to the bottom of this question. Specifically, we consider real-valued raw data X_1^n and scalar quantization of each component X_i of this data. First, under the constraints of fixed-rate quantization and variable-rate quantization, we obtain asymptotically optimal quantizer point densities and bit allocations. Introducing the notions of functional typicality and functional entropy, we then obtain asymptotically optimal block quantization schemes for each component. Next, we address the issue of non-monotonic functions by developing a model for high-resolution non-regular quantization. When these results are applied to several examples, we observe striking improvements in performance. Finally, we answer three questions by means of the functional quantization framework: (1) Is there any benefit to allowing encoders to communicate with one another? (2) If transform coding is to be performed, how does a functional distortion measure influence the optimal transform? (3) What is the rate loss associated with a suboptimal quantizer design? In the process, we demonstrate how functional quantization can be a useful and intuitive alternative to more general information-theoretic techniques.
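    The optimal point densities mentioned above can be illustrated with a companding quantizer. The sketch below assumes a fixed-rate functional point density proportional to (p(x) |f'(x)|^2)^(1/3), stated here from memory rather than taken from the thesis, and uses f(x) = x^2 as an illustrative function of the data.

```python
import numpy as np

def functional_point_density(pdf, sensitivity, grid):
    """Candidate fixed-rate point density lambda(x) ~ (pdf(x) * sensitivity(x)**2)**(1/3)."""
    lam = (pdf(grid) * sensitivity(grid) ** 2) ** (1.0 / 3.0) + 1e-12  # keep strictly positive
    return lam / (lam.sum() * (grid[1] - grid[0]))                     # normalize to integrate to 1

def compand_quantize(x, lam, grid, rate_bits=6):
    """Companding scalar quantizer that realizes the point density lam on `grid`."""
    levels = 2 ** rate_bits
    compressor = np.cumsum(lam) * (grid[1] - grid[0])  # integral of the point density
    compressor /= compressor[-1]
    u = np.interp(x, grid, compressor)                 # map each sample into [0, 1]
    q = (np.floor(u * levels) + 0.5) / levels          # uniform quantization in that domain
    return np.interp(q, compressor, grid)              # expand back to the source domain

# Example: X ~ N(0, 1) but the user only cares about f(X) = X**2, so the
# sensitivity |f'(x)| = 2|x| pushes codewords away from the origin.
grid = np.linspace(-4.0, 4.0, 4001)
pdf = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)
lam = functional_point_density(pdf, lambda t: 2 * np.abs(t), grid)
x = np.random.default_rng(0).normal(size=1000)
xq = compand_quantize(x, lam, grid)
```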

    Transmission of vector quantization over a frequency-selective Rayleigh fading CDMA channel

    Recently, the transmission of vector quantization (VQ) over a code-division multiple access (CDMA) channel has received considerable attention in the research community. The complexity of optimal decoding for VQ in CDMA communications is prohibitive for implementation, especially for systems with a medium or large number of users. A suboptimal approach to VQ decoding over a CDMA channel disturbed by additive white Gaussian noise (AWGN) was recently developed. Such a suboptimal decoder is built from a soft-output multiuser detector (MUD), a soft bit estimator and the optimal soft VQ decoders of individual users. Due to its lower complexity and good performance, such a decoding scheme is an attractive alternative to the complicated optimal decoder. It is necessary to extend this decoding scheme to a frequency-selective Rayleigh fading CDMA channel, a channel model typically seen in mobile wireless communications. This is precisely the objective of this thesis. Furthermore, the suboptimal decoders are obtained not only for binary phase shift keying (BPSK) but also for M-ary pulse amplitude modulation (M-PAM). This extension offers a flexible trade-off between the spectral efficiency and performance of the systems. In addition, two algorithms based on distance measures and reliability processing are introduced as further alternatives to the suboptimal decoder. Simulation results indicate that the suboptimal decoders studied in this thesis also perform very well over a frequency-selective Rayleigh fading CDMA channel.
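    The optimal soft VQ decoder mentioned above reduces to a posterior-weighted average of codewords. The sketch below assumes per-bit posteriors are already available from a soft-output detector and soft bit estimator; the multiuser detection and fading-channel processing developed in the thesis are not modeled, and the codebook is a toy example.

```python
import numpy as np

def soft_vq_decode(bit_probs, codebook, index_bits):
    """Optimal soft VQ decoding from per-bit posteriors (MMSE reconstruction).

    bit_probs:  probability that each transmitted index bit equals 1, e.g. as
                delivered by a soft-output detector and soft bit estimator
    codebook:   (2**B, d) VQ codebook
    index_bits: (2**B, B) bit pattern of each codeword index
    """
    # Index posteriors under (assumed) independent per-bit posteriors.
    p = np.prod(np.where(index_bits == 1, bit_probs, 1.0 - bit_probs), axis=1)
    p /= p.sum()
    # MMSE estimate: posterior-weighted average of the codewords.
    return p @ codebook

# Toy usage: a 2-bit, one-dimensional codebook.
codebook = np.array([[-1.5], [-0.5], [0.5], [1.5]])
index_bits = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
x_hat = soft_vq_decode(np.array([0.9, 0.2]), codebook, index_bits)
```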