
    Data Processing Bounds for Scalar Lossy Source Codes with Side Information at the Decoder

    In this paper, we introduce new lower bounds on the distortion of scalar fixed-rate codes for lossy compression with side information available at the receiver. These bounds are derived by presenting the relevant random variables as a Markov chain and applying generalized data processing inequalities à la Ziv and Zakai. We show that by replacing the logarithmic function in the data processing theorem we formulate with other functions, we obtain new lower bounds on the distortion of scalar coding with side information at the decoder. The usefulness of these results is demonstrated for uniform sources and the convex function $Q(t)=t^{1-\alpha}$, $\alpha>1$. The bounds in this case are shown to be tighter than those obtainable from the Wyner-Ziv rate-distortion function. Comment: 35 pages, 9 figures
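
    The substituted function must be convex for the generalized data processing inequality to apply, and the example choice qualifies by a one-line check:

        Q(t) = t^{1-\alpha}, \qquad Q''(t) = \alpha(\alpha - 1)\, t^{-\alpha - 1} > 0 \quad \text{for } t > 0,\ \alpha > 1.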

    Information Nonanticipative Rate Distortion Function and Its Applications

    This paper investigates applications of the nonanticipative Rate Distortion Function (RDF) in (a) zero-delay Joint Source-Channel Coding (JSCC) design based on average and excess distortion probability, (b) bounding the Optimal Performance Theoretically Attainable (OPTA) by noncausal and causal codes, and (c) computing the Rate Loss (RL) of zero-delay and causal codes with respect to noncausal codes. These applications are described using two running examples: the Binary Symmetric Markov Source with parameter p, BSMS(p), and the multidimensional partially observed Gaussian-Markov source. For the multidimensional Gaussian-Markov source with square-error distortion, the solution of the nonanticipative RDF is derived, its operational meaning is shown via JSCC design and a noisy coding theorem by providing the optimal encoding-decoding scheme over a vector Gaussian channel, and the RL of causal and zero-delay codes with respect to noncausal codes is computed. For the BSMS(p) with Hamming distortion, the solution of the nonanticipative RDF is derived, the RL of causal codes with respect to noncausal codes is computed, and an uncoded noisy coding theorem based on excess distortion probability is shown. The information nonanticipative RDF is shown to be equivalent to the nonanticipatory epsilon-entropy, which corresponds to the classical RDF with an additional causality (nonanticipation) condition imposed on the optimal reproduction conditional distribution. Comment: 34 pages, 12 figures; part of this paper was accepted for publication in the IEEE International Symposium on Information Theory (ISIT), 2014, and in the book Coordination Control of Distributed Systems, Lecture Notes in Control and Information Sciences, 201
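
    As a sketch of the causality condition just mentioned (generic notation assumed here, not taken from the paper), the nonanticipative RDF restricts the classical RDF minimization to reproduction kernels that cannot look at future source symbols:

        R^{na}_{0,n}(D) = \inf \Big\{ I(X^n; \hat{X}^n) \;:\; P_{\hat{X}_t \mid \hat{X}^{t-1}, X^n} = P_{\hat{X}_t \mid \hat{X}^{t-1}, X^t} \ \forall t, \quad \mathbf{E}\big[ d_{0,n}(X^n, \hat{X}^n) \big] \le D \Big\}.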

    Neural Distributed Compressor Discovers Binning

    We consider lossy compression of an information source when the decoder has lossless access to a correlated one. This setup, also known as the Wyner-Ziv problem, is a special case of distributed source coding. To this day, practical approaches to the Wyner-Ziv problem have neither been fully developed nor heavily investigated. We propose a data-driven method based on machine learning that leverages the universal function approximation capability of artificial neural networks. We find that our neural compression scheme, built on variational vector quantization, recovers some principles of the optimum theoretical solution of the Wyner-Ziv setup, such as binning in the source space and optimal combination of the quantization index with the side information, for exemplary sources. These behaviors emerge although no structure exploiting knowledge of the source distributions was imposed. Binning is a widely used tool in information-theoretic proofs and methods, and to our knowledge this is the first time it has been explicitly observed to emerge from data-driven learning. Comment: draft of a journal version of our previous ISIT 2023 paper (available at arXiv:2305.04380). arXiv admin note: substantial text overlap with arXiv:2305.04380
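
    The binning the network rediscovers can be illustrated without any learning. Below is a minimal sketch (plain coset binning with a scalar quantizer, not the authors' neural scheme; all constants are arbitrary): the encoder sends only the coset index of a fine quantization cell, and the decoder resolves the remaining ambiguity with its side information.

        import numpy as np

        rng = np.random.default_rng(0)

        # Source X and decoder-only side information Y = X + N.
        n = 10_000
        x = rng.normal(0.0, 1.0, n)
        y = x + rng.normal(0.0, 0.1, n)

        # Fine scalar quantizer with L cells, grouped into B cosets ("bins").
        L, B = 64, 8
        edges = np.linspace(-4.0, 4.0, L + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        idx = np.clip(np.digitize(x, edges) - 1, 0, L - 1)

        # Encoder transmits only the bin index: log2(8) = 3 bits instead of log2(64) = 6.
        bins = idx % B

        # Decoder: among the fine cells in the received coset, pick the one closest to y.
        mask = (np.arange(L)[None, :] % B) == bins[:, None]
        dist = np.where(mask, np.abs(centers[None, :] - y[:, None]), np.inf)
        x_hat = centers[np.argmin(dist, axis=1)]

        print("MSE with binning + side information:", np.mean((x - x_hat) ** 2))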

    Network vector quantization

    We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, here we present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
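
    Locally optimal design of this kind generalizes the classical Lloyd iteration, which alternates the two local optimality conditions (nearest-neighbor encoding, centroid decoding). A minimal point-to-point sketch of that building block follows (scalar case, arbitrary parameters; the paper's network version alternates analogous conditions across nodes):

        import numpy as np

        def lloyd_vq(samples, k, iters=50, seed=0):
            """Generalized Lloyd design of a locally optimal k-level quantizer."""
            rng = np.random.default_rng(seed)
            codebook = rng.choice(samples, size=k, replace=False)
            for _ in range(iters):
                # Optimal encoder for a fixed codebook: nearest-neighbor partition.
                assign = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
                # Optimal decoder for a fixed partition: centroid of each cell.
                for j in range(k):
                    cell = samples[assign == j]
                    if cell.size:
                        codebook[j] = cell.mean()
            return np.sort(codebook)

        data = np.random.default_rng(1).normal(size=50_000)
        print(lloyd_vq(data, k=4))  # approaches the optimal 4-level Gaussian quantizer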

    Nested turbo codes for the Costa problem

    Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimization of this tradeoff leads to our code design, which outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits gaps of 0.94, 1.42, and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
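
    For context, the reference point for the reported gaps is Costa's dirty-paper result: with the interference known (noncausally) at the transmitter, the Gaussian channel capacity equals the interference-free value,

        C = \frac{1}{2} \log_2\!\left(1 + \frac{P}{N}\right) \ \text{bits/channel use},

    independent of the interference power, so the quoted 0.94-2.65 dB figures measure how far the practical nested construction operates from this limit.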

    Lossy joint source-channel coding in the finite blocklength regime

    This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the non-asymptotic regime. A joint source-channel code maps a block of $k$ source symbols onto a length-$n$ channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability $\epsilon$ that the distortion exceeds a given threshold $d$. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy $nC - kR(d) \approx \sqrt{nV + k\mathcal{V}(d)}\, Q^{-1}(\epsilon)$, where $C$ and $V$ are the channel capacity and channel dispersion, respectively; $R(d)$ and $\mathcal{V}(d)$ are the source rate-distortion and rate-dispersion functions; and $Q$ is the standard Gaussian complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the non-asymptotic regime.
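
    Read as a design rule, the displayed approximation pins down the largest $k$ transmissible over $n$ channel uses at excess-distortion probability $\epsilon$. A small numerical sketch (all constants below are illustrative placeholders, not values from the paper):

        import numpy as np
        from scipy.stats import norm

        # Solve  n*C - k*R(d) = sqrt(n*V + k*Vd) * Qinv(eps)  for k by fixed point.
        C, V = 0.5, 0.25    # channel capacity (bits/use) and dispersion -- assumed
        R, Vd = 0.3, 0.10   # source rate-distortion and rate-dispersion -- assumed
        n, eps = 1000, 1e-3
        q = norm.isf(eps)   # Q^{-1}(eps), inverse Gaussian complementary cdf

        k = n * C / R       # start from the asymptotic (Shannon-limit) value
        for _ in range(100):
            k = (n * C - np.sqrt(n * V + k * Vd) * q) / R

        print(f"best k ~ {k:.0f} source symbols (vs {n * C / R:.0f} asymptotically)")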

    Improved Modeling of the Correlation Between Continuous-Valued Sources in LDPC-Based DSC

    Accurate modeling of the correlation between the sources plays a crucial role in the efficiency of distributed source coding (DSC) systems. This correlation is commonly modeled in the binary domain by a single binary symmetric channel (BSC), both for binary and for continuous-valued sources. We show that "one" BSC cannot accurately capture the correlation between continuous-valued sources; a more accurate model requires "multiple" BSCs, as many as the number of bits used to represent each sample. We incorporate this new model into a DSC system that uses low-density parity-check (LDPC) codes for compression. The standard Slepian-Wolf LDPC decoder requires only a slight modification so that the parameters of all BSCs are integrated into the log-likelihood ratios (LLRs). Further, an interleaver shuffles the data belonging to different bit-planes to introduce randomness in the binary domain. The new system has the same complexity and delay as the standard one. Simulation results demonstrate the effectiveness of the proposed model and system. Comment: 5 pages, 4 figures; presented at the Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 201
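
    A sketch of how the per-bit-plane BSC parameters can enter the decoder (generic formulation assumed here; the paper's modified Slepian-Wolf LDPC decoder may organize this differently): under the multiple-BSC model, each bit-plane j gets its own crossover probability p_j, and the side-information bits are converted to channel LLRs plane by plane.

        import numpy as np

        def bitplane_llrs(side_info_bits, p):
            """LLRs under the multiple-BSC correlation model.

            side_info_bits : (n_samples, n_planes) array of side-information bits
            p              : (n_planes,) crossover probability of the BSC modeling
                             each bit-plane (one BSC per bit of a sample)

            Returns LLR = log P(x=0|y) / P(x=1|y) = (1 - 2y) * log((1 - p) / p).
            """
            y = np.asarray(side_info_bits, dtype=float)
            p = np.asarray(p, dtype=float)
            return (1.0 - 2.0 * y) * np.log((1.0 - p) / p)  # broadcasts over planes

        # Example: 8-bit samples; MSB planes more strongly correlated (assumed values).
        p_planes = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.30, 0.40, 0.45])
        y = np.random.default_rng(0).integers(0, 2, size=(4, 8))
        print(bitplane_llrs(y, p_planes))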