    Syndrome source coding and its universal generalization

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
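    The following is a minimal sketch of the basic idea, assuming the standard (7,4) Hamming parity-check matrix as an illustrative choice (the specific codes and the universal generalization of the paper are not reproduced here): a sparse binary source block is compressed to its syndrome, and the decoder returns the minimum-weight pattern (coset leader) with that syndrome.

```python
# Minimal sketch of syndrome source coding with the (7,4) Hamming code.
# The parity-check matrix H and block length are illustrative assumptions,
# not taken from the paper. A source block x is "compressed" to its 3-bit
# syndrome s = H x (mod 2); the decoder returns the coset leader (the
# minimum-weight pattern) with that syndrome.
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # (7,4) Hamming parity-check matrix

def compress(x):
    """Map a length-7 source block to its 3-bit syndrome."""
    return H.dot(x) % 2

def coset_leader_table():
    """For each syndrome, precompute the minimum-weight block producing it."""
    table = {}
    for bits in product([0, 1], repeat=7):
        x = np.array(bits)
        s = tuple(compress(x))
        if s not in table or x.sum() < table[s].sum():
            table[s] = x
    return table

LEADERS = coset_leader_table()

def decompress(s):
    """Return the most likely (minimum-weight) source block for syndrome s."""
    return LEADERS[tuple(s)]

# Usage: a sparse block with a single 1 is compressed 7 -> 3 bits and
# recovered exactly.
x = np.array([0, 0, 0, 0, 1, 0, 0])
s = compress(x)
assert np.array_equal(decompress(s), x)
```

    With this particular code, any block containing at most one 1 is recovered exactly, and seven source digits are represented by three compressed digits.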

    Vector quantization

    During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched, and some comments are made on the state of the art and current research efforts.
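    As a concrete illustration of the basic design idea, here is a minimal sketch of codebook training via the Lloyd/k-means iteration and nearest-neighbour encoding; the dimensions, codebook size and training data are illustrative assumptions, not the specific designs surveyed.

```python
# Minimal sketch of vector quantizer design via the Lloyd / k-means
# iteration, assuming 2-D training vectors and a 16-word codebook.
import numpy as np

def train_codebook(train, k=16, iters=50, seed=0):
    """Alternate between assigning vectors to their nearest codeword and
    moving each codeword to the centroid of its assigned cell."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), k, replace=False)]
    for _ in range(iters):
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        cells = d.argmin(axis=1)
        for j in range(k):
            members = train[cells == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def encode(x, codebook):
    """Return the index of the nearest codeword (the transmitted symbol)."""
    return ((codebook - x) ** 2).sum(-1).argmin()

# Usage: 16 codewords for 2-D vectors give a rate of 4/2 = 2 bits per sample.
train = np.random.default_rng(1).normal(size=(2000, 2))
cb = train_codebook(train)
idx = encode(np.array([0.3, -0.7]), cb)
reconstruction = cb[idx]
```

    Each input vector is represented by the index of its nearest codeword, so the rate is (log2 k) / dimension bits per sample, and the distortion is governed by how well the trained codebook matches the source statistics.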

    One-shot lossy quantum data compression

    We provide a framework for one-shot quantum rate distortion coding, in which the goal is to determine the minimum number of qubits required to compress quantum information as a function of the probability that the distortion incurred upon decompression exceeds some specified level. We obtain a one-shot characterization of the minimum qubit compression size for an entanglement-assisted quantum rate-distortion code in terms of the smooth max-information, a quantity previously employed in the one-shot quantum reverse Shannon theorem. Next, we show how this characterization converges to the known expression for the entanglement-assisted quantum rate distortion function for asymptotically many copies of a memoryless quantum information source. Finally, we give a tight, finite blocklength characterization for the entanglement-assisted minimum qubit compression size of a memoryless isotropic qubit source subject to an average symbol-wise distortion constraint.
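    For orientation, the max-information mentioned above is built from the max-relative entropy; in the notation standard in the one-shot literature (the paper's exact smoothing convention and system labels may differ):

    \[
      D_{\max}(\rho\,\|\,\sigma) = \log \min\{\lambda \ge 0 : \rho \le \lambda\sigma\},
      \qquad
      I_{\max}(R;B)_\rho = \min_{\sigma_B} D_{\max}\!\big(\rho_{RB}\,\big\|\,\rho_R\otimes\sigma_B\big),
    \]

    with the smooth version $I_{\max}^{\varepsilon}(R;B)_\rho = \min_{\tilde\rho \approx_\varepsilon \rho} I_{\max}(R;B)_{\tilde\rho}$ obtained by minimizing over states $\tilde\rho$ that are $\varepsilon$-close to $\rho_{RB}$.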

    State-of-the-art report on nonlinear representation of sources and channels

    This report consists of two complementary parts, related to the modeling of two important sources of nonlinearities in a communications system. In the first part, an overview of important past work related to the estimation, compression and processing of sparse data through the use of nonlinear models is provided. In the second part, the current state of the art on the representation of wireless channels in the presence of nonlinearities is summarized. In addition to the characteristics of the nonlinear wireless fading channel, some information is also provided on recent approaches to the sparse representation of such channels.
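    One representative nonlinear method for the sparse-recovery problems covered in the first part is greedy pursuit; the sketch below shows orthogonal matching pursuit with an illustrative random dictionary (the dictionary, dimensions and sparsity level are assumptions for the example, not taken from the report).

```python
# Minimal sketch of orthogonal matching pursuit (OMP), a greedy nonlinear
# sparse-recovery method: pick the dictionary column most correlated with
# the current residual, then re-fit all selected columns by least squares.
import numpy as np

def omp(A, y, sparsity):
    """Recover a sparse coefficient vector x such that A @ x ~ y."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Usage: recover a 3-sparse vector from 40 noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, sparsity=3)
```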

    The Reliability Function of Lossy Source-Channel Coding of Variable-Length Codes with Feedback

    We consider transmission of discrete memoryless sources (DMSes) across discrete memoryless channels (DMCs) using variable-length lossy source-channel codes with feedback. The reliability function (optimum error exponent) is shown to be equal to $\max\{0, B(1 - R(D)/C)\}$, where $R(D)$ is the rate-distortion function of the source, $B$ is the maximum relative entropy between output distributions of the DMC, and $C$ is the Shannon capacity of the channel. We show that, in this setting and in this asymptotic regime, separate source-channel coding is, in fact, optimal.
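    Written out, and writing $W(\cdot\,|\,x)$ for the channel's output distribution under input $x$ (notation assumed here, not taken from the abstract), the exponent and the constant $B$ are

    \[
      E(D) = \max\Big\{0,\; B\Big(1 - \frac{R(D)}{C}\Big)\Big\},
      \qquad
      B = \max_{x,\,x'} D\big(W(\cdot\,|\,x)\,\big\|\,W(\cdot\,|\,x')\big),
    \]

    i.e. $B$ is the maximal divergence between pairs of output distributions, the same constant that appears in Burnashev's error exponent for variable-length channel coding with feedback; the exponent is zero whenever $R(D) \ge C$.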