
    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piece-wise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting.
    Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
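    The mechanics of the abstract above can be illustrated with a toy scalar version: oversample the source, quantize inside a first-order error-feedback (noise-shaping) loop with a subtractively dithered uniform quantizer, and split the output stream into two descriptions. This is only a minimal sketch, not the paper's scheme: the lattice vector quantizer is replaced by a scalar quantizer, and the step size `delta` and feedback coefficient `c` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_ds_md(x, delta=0.25, c=0.5):
    """Toy two-description coder: 2x oversampling, subtractively dithered
    uniform quantizer inside a first-order noise-shaping loop, outputs
    de-interleaved into two descriptions (scalar stand-in for the paper's
    dithered lattice quantizer; delta and c are illustrative)."""
    u = np.repeat(x, 2)                       # 2x oversampling (sample-and-hold)
    dither = rng.uniform(-delta / 2, delta / 2, size=u.size)
    y = np.empty_like(u)
    e = 0.0                                   # fed-back quantization error
    for n in range(u.size):
        v = u[n] - c * e                      # noise shaping: subtract shaped error
        q = delta * np.round((v + dither[n]) / delta) - dither[n]
        e = q - v                             # error to feed back
        y[n] = q
    return y[0::2], y[1::2]                   # description 1, description 2

x = rng.standard_normal(1000)
d1, d2 = dithered_ds_md(x)
side_mse = np.mean((d1 - x) ** 2)                 # only one description received
central_mse = np.mean(((d1 + d2) / 2 - x) ** 2)   # both received: average them
```

Because the dither makes the quantization errors of the two descriptions (approximately) independent, averaging them at the central decoder reduces the distortion, which is the central/side trade-off the noise-shaping filter controls.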

    n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice Vector Quantization

    This paper is about the design and analysis of an index-assignment (IA) based multiple-description coding scheme for the n-channel asymmetric case. We use entropy-constrained lattice vector quantization and restrict attention to simple reconstruction functions, which are given by the inverse IA function when all descriptions are received, or otherwise by a weighted average of the received descriptions. We consider smooth sources with finite differential entropy rate and MSE fidelity criterion. As in previous designs, our construction is based on nested lattices which are combined through a single IA function. The results are exact under high-resolution conditions and asymptotically as the nesting ratios of the lattices approach infinity. For any n, the design is asymptotically optimal within the class of IA-based schemes. Moreover, in the case of two descriptions and finite lattice vector dimensions greater than one, the performance is strictly better than that of existing designs. In the case of three descriptions, we show that in the limit of large lattice vector dimensions, points on the inner bound of Pradhan et al. can be achieved. Furthermore, for three descriptions and finite lattice vector dimensions, we show that the IA-based approach yields, in the symmetric case, a smaller rate loss than the recently proposed source-splitting approach.
    Comment: 49 pages, 4 figures. Accepted for publication in the IEEE Transactions on Information Theory, 201
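    The "weighted average of the received descriptions" reconstruction rule mentioned above is easy to sketch numerically. The noise model and the equal weights below are illustrative assumptions (the paper derives the optimal weights); the point is only that averaging whichever descriptions arrive reduces the side distortion as more descriptions are received.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model: three descriptions of the same source, each corrupted
# by independent noise (a crude stand-in for side-quantizer distortion).
n = 5000
x = rng.standard_normal(n)
descs = [x + 0.3 * rng.standard_normal(n) for _ in range(3)]

def reconstruct(received):
    # Equal-weight average; with i.i.d. noise this is the natural choice.
    return np.mean(received, axis=0)

mse_one = np.mean((reconstruct(descs[:1]) - x) ** 2)  # one description
mse_all = np.mean((reconstruct(descs) - x) ** 2)      # all three received
```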

    n-Channel Asymmetric Multiple-Description Lattice Vector Quantization

    We present analytical expressions for optimal entropy-constrained multiple-description lattice vector quantizers which, under high-resolution assumptions, minimize the expected distortion for given packet-loss probabilities. We consider the asymmetric case, where packet-loss probabilities and side entropies are allowed to be unequal, and find optimal quantizers for any number of descriptions in any dimension. We show that the normalized second moments of the side quantizers are given by that of an L-dimensional sphere, independent of the choice of lattices. Furthermore, we show that the optimal bit distribution among the descriptions is not unique. In fact, within certain limits, bits can be arbitrarily distributed.
    Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005.

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates to smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals.
    Comment: Submitted to the IEEE Transactions on Information Theory.
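    The quantize-then-modulo binning step described above can be sketched in one dimension with nested scaled-integer lattices. The source model, the step sizes, and the single integer combination (1, -1) below are illustrative assumptions, not the paper's construction: the point is that a combination with small power (here the difference of two highly correlated quantized sources) survives the modulo reduction and can be recovered at the decoder even though the coarse lattice cell is much smaller than the sources' range.

```python
import numpy as np

rng = np.random.default_rng(1)

def qfine(x, step=0.1):
    """Fine-lattice (scaled-integer) quantizer in 1-D."""
    return step * np.round(x / step)

def mod_coarse(q, coarse):
    """Reduce modulo the coarse lattice (nearest-point modulo)."""
    return q - coarse * np.round(q / coarse)

# Two highly correlated Gaussian sources (illustrative model).
n = 10_000
s = 3.0 * rng.standard_normal(n)
x1 = s + 0.03 * rng.standard_normal(n)
x2 = s + 0.03 * rng.standard_normal(n)

step, coarse = 0.1, 0.8                   # coarse cell << source range
m1 = mod_coarse(qfine(x1, step), coarse)  # what encoder 1 transmits
m2 = mod_coarse(qfine(x2, step), coarse)  # what encoder 2 transmits

# Decoder: the integer combination q1 - q2 has small power, so it is
# unaffected by the modulo and can be read off the transmitted residues.
diff_hat = mod_coarse(m1 - m2, coarse)
diff_true = qfine(x1, step) - qfine(x2, step)
ok = np.mean(np.isclose(diff_hat, diff_true))  # fraction recovered exactly
```

Transmitting residues in [-coarse/2, coarse/2) instead of the full-range quantized values is what yields the rate saving; recovering the individual signals then requires inverting a full-rank set of such combinations, as the abstract describes.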

    Multiple Description Quantization via Gram-Schmidt Orthogonalization

    The multiple description (MD) problem has received considerable attention as a model of information transmission over unreliable channels. A general framework for designing efficient multiple description quantization schemes is proposed in this paper. We provide a systematic treatment of the El Gamal-Cover (EGC) achievable MD rate-distortion region, and show that any point in the EGC region can be achieved via a successive quantization scheme along with quantization splitting. For the quadratic Gaussian case, the proposed scheme has an intrinsic connection with Gram-Schmidt orthogonalization, which implies that the whole Gaussian MD rate-distortion region is achievable with a sequential dithered lattice-based quantization scheme as the dimension of the (optimal) lattice quantizers becomes large. Moreover, this scheme is shown to be universal for all i.i.d. smooth sources, with performance no worse than that for an i.i.d. Gaussian source with the same variance, and asymptotically optimal at high resolution. A class of low-complexity MD scalar quantizers in the proposed general framework is also constructed and illustrated geometrically; its performance is analyzed in the high-resolution regime, where it exhibits a noticeable improvement over existing MD scalar quantization schemes.
    Comment: 48 pages; submitted to the IEEE Transactions on Information Theory.
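    The Gram-Schmidt connection invoked above amounts to replacing one observation by its innovation, i.e. the part uncorrelated with what came before, so that estimation can proceed sequentially. A minimal empirical sketch (the jointly Gaussian model and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# A Gaussian source X and two noisy observations of it.
n = 50_000
x = rng.standard_normal(n)
y1 = x + 0.5 * rng.standard_normal(n)
y2 = x + 0.5 * rng.standard_normal(n)

# Gram-Schmidt step: replace y2 by its innovation relative to y1
# (empirical version of the orthogonalization in the successive scheme).
a = np.mean(y1 * y2) / np.mean(y1 * y1)   # LMMSE coefficient of y2 on y1
e2 = y2 - a * y1                          # innovation: uncorrelated with y1
corr = np.mean(e2 * y1)                   # ~0 by construction

# Because y1 and e2 are orthogonal, the joint estimate of X decomposes
# into a sum of two independent one-dimensional estimates.
b1 = np.mean(x * y1) / np.mean(y1 * y1)
b2 = np.mean(x * e2) / np.mean(e2 * e2)
mse_seq = np.mean((b1 * y1 + b2 * e2 - x) ** 2)
```

With these parameters the sequential estimate attains the joint LMMSE error (about 1/9 here), which is the sense in which successive quantization loses nothing.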

    Colored-Gaussian Multiple Descriptions: Spectral and Time-Domain Forms

    It is well known that Shannon's rate-distortion function (RDF) in the colored quadratic Gaussian (QG) case can be parametrized via a single Lagrangian variable (the "water level" in the reverse water-filling solution). In this work, we show that the symmetric colored QG multiple-description (MD) RDF in the case of two descriptions can be parametrized in the spectral domain via two Lagrangian variables, which control the trade-off between the side distortion, the central distortion, and the coding rate. This spectral-domain analysis is complemented by a time-domain scheme-design approach: we show that the symmetric colored QG MD RDF can be achieved by combining ideas of delta-sigma modulation and differential pulse-code modulation. Specifically, two source prediction loops, one for each description, are embedded within a common noise shaping loop, whose parameters are explicitly found from the spectral-domain characterization.
    Comment: Accepted for publication in the IEEE Transactions on Information Theory. The title has been shortened, the abstract clarified, and the paper significantly restructured.
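    The single-variable reverse water-filling baseline mentioned in the first sentence is simple to compute numerically: in each frequency band, the distortion is the minimum of the water level and the source spectrum, and rate is spent only where the spectrum exceeds the water level. The AR(1)-style spectrum below is an illustrative choice; the spectral integrals are approximated by averages over a frequency grid.

```python
import numpy as np

def reverse_waterfill(S, theta):
    """Reverse water filling for a colored Gaussian spectrum S(w):
    per-band distortion min(theta, S(w)); per-band rate
    0.5*log2(S(w)/theta) wherever S(w) > theta, else 0."""
    D = np.minimum(theta, S)
    R = 0.5 * np.log2(np.maximum(S / theta, 1.0))
    return float(np.mean(D)), float(np.mean(R))  # grid averages ~ integrals

w = np.linspace(0, np.pi, 1024)
S = 1.0 / (1.25 - np.cos(w))       # AR(1) spectrum with coefficient 0.5
D1, R1 = reverse_waterfill(S, theta=0.5)
D2, R2 = reverse_waterfill(S, theta=0.1)  # lower water level: more rate
```

Sweeping the water level `theta` traces out the RDF; the paper's contribution is that the two-description MD RDF needs two such Lagrangian variables instead of one.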

    Optimal Design of Multiple Description Lattice Vector Quantizers

    In the design of multiple description lattice vector quantizers (MDLVQ), index assignment plays a critical role. In addition, one also needs to choose the Voronoi cell size of the central lattice v, the sublattice index N, and the number of side descriptions K to minimize the expected MDLVQ distortion, given the total entropy rate of all side descriptions Rt and the description loss probability p. In this paper we propose a linear-time MDLVQ index assignment algorithm for any K >= 2 balanced descriptions in any dimension, based on a new construction of a so-called K-fraction lattice. The algorithm is greedy in nature but is proven to be asymptotically (N -> infinity) optimal for any K >= 2 balanced descriptions in any dimension, given Rt and p. The result is stronger when K = 2: the optimality holds for finite N as well, under some mild conditions. For K > 2, a local adjustment algorithm is developed to augment the greedy index assignment, and conjectured to be optimal for finite N. Our algorithmic study also leads to better understanding of v, N and K in optimal MDLVQ design. For K = 2 we derive, for the first time, a non-asymptotic closed-form expression for the expected distortion of optimal MDLVQ in terms of p, Rt, and N. For K > 2, we tighten the current asymptotic formula of the expected distortion, relating the optimal values of N and K to p and Rt more precisely.
    Comment: Submitted to IEEE Trans. on Information Theory, Sep 2006 (30 pages, 7 figures).
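    To make the role of an index assignment concrete, here is a toy two-description scalar example (not the paper's K-fraction construction): central index k is mapped to a pair (i, j) of side indices taken from a "staircase" along the diagonal with |i - j| <= 1, so the map is invertible when both descriptions arrive, while each side index alone pins the central value down to within roughly one side cell. The sublattice index 3 and the pairing order are illustrative choices.

```python
# Toy scalar index assignment for two balanced descriptions, index ratio 3.
def ia(k):
    """Map central index k to a pair of side indices on the diagonal staircase."""
    q, r = divmod(k, 3)
    return [(q, q), (q, q + 1), (q + 1, q)][r]

def ia_inv(i, j):
    """Inverse map: recover the central index from both side indices."""
    if i == j:
        return 3 * i
    if j == i + 1:
        return 3 * i + 1
    return 3 * j + 2          # case i == j + 1

# Invertibility, and how far apart the two side indices can be.
ok = all(ia_inv(*ia(k)) == k for k in range(100))
spread = max(abs(i - j) for i, j in map(ia, range(100)))
```

Keeping `spread` small is what keeps the side distortion small; choosing which pairs to use, and in what order, is exactly the optimization the paper's greedy algorithm performs for general K and dimension.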

    Deep Multiple Description Coding by Learning Scalar Quantization

    In this paper, we propose a deep multiple description coding framework whose quantizers are adaptively learned via the minimization of a multiple description compressive loss. Firstly, our framework is built upon auto-encoder networks, comprising a multiple description multi-scale dilated encoder network and multiple description decoder networks. Secondly, two entropy estimation networks are learned to estimate the information content of the quantized tensors, which can further supervise the learning of the multiple description encoder network to represent the input image delicately. Thirdly, a pair of scalar quantizers accompanied by two importance-indicator maps is automatically learned in an end-to-end self-supervised way. Finally, a multiple description structural dissimilarity distance loss is imposed on the multiple description decoded images in the pixel domain, rather than on feature tensors in the feature domain, to encourage diversity among the generated descriptions, in addition to the multiple description reconstruction loss. Through testing on two commonly used datasets, it is verified that our method outperforms several state-of-the-art multiple description coding approaches in terms of coding efficiency.
    Comment: 8 pages, 4 figures (DCC 2019: Data Compression Conference). Testing datasets for "Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning" can be found at https://github.com/mdcnn/Deep-Multiple-Description-Codin

    Computation Alignment: Capacity Approximation without Noise Accumulation

    Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by Avestimehr et al. has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity, as it incurs a penalty for approximating the wireless channel with complex-valued coefficients by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well-suited to compute-and-forward. Unlike prior constant-gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh.
    Comment: 36 pages; to appear in the IEEE Transactions on Information Theory.
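    The integer-approximation penalty mentioned above can be sketched directly: given a real channel vector h, compute-and-forward effectively replaces h by a scaled integer vector alpha*a, and the residual h - alpha*a acts as self-interference. The brute-force search and the channel vector below are illustrative assumptions (real instead of complex coefficients, tiny dimension), not the paper's alignment construction.

```python
import numpy as np

def best_integer_approx(h, max_coef=15):
    """Find the integer vector a (entries in [-max_coef, max_coef]) and
    scale alpha minimizing ||h - alpha*a||^2, by brute-force search.
    The minimum is the 'non-integer penalty' for this channel."""
    best_err, best_a, best_alpha = np.inf, None, None
    coefs = range(-max_coef, max_coef + 1)
    for a0 in coefs:
        for a1 in coefs:
            a = np.array([a0, a1], dtype=float)
            if not a.any():
                continue                       # skip the zero vector
            alpha = float(h @ a) / float(a @ a)  # least-squares scale
            err = float(np.sum((h - alpha * a) ** 2))
            if err < best_err:
                best_err, best_a, best_alpha = err, a, alpha
    return best_err, best_a, best_alpha

h = np.array([1.0, 1.618])       # a "badly non-integer" coefficient ratio
err, a, alpha = best_integer_approx(h)
```

For a channel ratio near the golden ratio the residual stays bounded away from zero for small coefficients, which is why the paper instead matches realizations across time slots to manufacture effective channels that are integer-valued by design.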