
    The price of ignorance: The impact of side-information on delay for lossless source-coding

    Inspired by the context of compressing encrypted sources, this paper considers the general tradeoff between rate, end-to-end delay, and probability of error for lossless source coding with side-information. The notion of end-to-end delay is made precise by considering a sequential setting in which source symbols are revealed in real time and need to be reconstructed at the decoder within a certain fixed latency requirement. Upper bounds are derived on the reliability functions with delay when side-information is known only to the decoder as well as when it is also known at the encoder. When the encoder is not ignorant of the side-information (including the trivial case when there is no side-information), it is possible to have substantially better tradeoffs between delay and probability of error at all rates. This shows that there is a fundamental price of ignorance in terms of end-to-end delay when the encoder is not aware of the side information. This effect is not visible if only fixed-block-length codes are considered. In this way, side-information in source-coding plays a role analogous to that of feedback in channel coding. While the theorems in this paper are asymptotic in terms of long delays and low probabilities of error, an example is used to show that the qualitative effects described here are significant even at short and moderate delays.
    Comment: 25 pages, 17 figures. Submitted to the IEEE Transactions on Information Theory.
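    The rate side of this story is classical and easy to check numerically: for fixed-block-length codes the achievable rate is H(X|Y) whether or not the encoder observes Y, which is why the price of ignorance is invisible at the level of rates alone. A minimal sketch (our own illustration, not from the paper), using a doubly symmetric binary source as the running example:

    ```python
    import math

    def h2(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Doubly symmetric binary source: X ~ Bern(1/2), Y = X xor Z with Z ~ Bern(q).
    q = 0.1  # illustrative crossover probability (our assumption)
    rate_encoder_sees_y = h2(q)  # encoder knows Y: it can describe Z = X xor Y directly
    rate_decoder_only = h2(q)    # Slepian-Wolf: H(X|Y) = h2(q), the very same rate
    print(rate_encoder_sees_y, rate_decoder_only)  # equal; the gap appears only in delay exponents
    ```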

    Improved Source Coding Exponents via Witsenhausen's Rate

    We provide a novel upper bound on Witsenhausen's rate, the rate required in the zero-error analogue of the Slepian-Wolf problem; our bound is given in terms of a new information-theoretic functional defined on a certain graph. We then use the functional to give a single-letter lower bound on the error exponent for the Slepian-Wolf problem under the vanishing error probability criterion, where the decoder has full (i.e. unencoded) side information. Our exponent stems from our new encoding scheme which makes use of the source distribution only through the positions of the zeros in the 'channel' matrix connecting the source with the side information, and in this sense is 'semi-universal'. We demonstrate that our error exponent can beat the 'expurgated' source-coding exponent of Csiszár and Körner, achievability of which requires the use of a non-universal maximum-likelihood decoder. An extension of our scheme to the lossy case (i.e. Wyner-Ziv) is given. For the case when the side information is a deterministic function of the source, the exponent of our improved scheme agrees with the sphere-packing bound exactly (thus determining the reliability function). An application of our functional to zero-error channel capacity is also given.
    Comment: 24 pages, 4 figures. Submitted to IEEE Trans. Info. Theory (Jan 2010).
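    To make the graph-theoretic objects concrete, here is a toy construction (our own illustration; the paper's new functional is not reproduced here): the characteristic graph connects two source symbols whenever some side-information value is consistent with both, a proper coloring of it is a single-shot zero-error code, and Witsenhausen's rate is the normalized chromatic number of the graph's AND-powers. The joint support below is an assumed example.

    ```python
    import itertools
    import math

    # Support of a toy joint pmf P(x, y) (hypothetical numbers, for illustration only).
    support = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)}
    X = sorted({x for x, _ in support})
    Y = {y for _, y in support}

    def confusable(x1, x2):
        # x1 and x2 need different descriptions iff some y is consistent with both.
        return x1 != x2 and any((x1, y) in support and (x2, y) in support for y in Y)

    edges = [(a, b) for a, b in itertools.combinations(X, 2) if confusable(a, b)]

    def chromatic_number(vertices, edges):
        # Brute-force proper coloring; fine for toy alphabet sizes.
        for k in range(1, len(vertices) + 1):
            for colors in itertools.product(range(k), repeat=len(vertices)):
                c = dict(zip(vertices, colors))
                if all(c[a] != c[b] for a, b in edges):
                    return k
        return len(vertices)

    k = chromatic_number(X, edges)
    print(k, math.log2(k))  # single-shot zero-error rate is log2(chromatic number) bits
    ```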

    Capacity and Random-Coding Exponents for Channel Coding with Side Information

    Capacity formulas and random-coding exponents are derived for a generalized family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic upper bounds on the achievable log probability of error. In our model, information is to be reliably transmitted through a noisy channel with finite input and output alphabets and random state sequence, and the channel is selected by a hypothetical adversary. Partial information about the state sequence is available to the encoder, adversary, and decoder. The design of the transmitter is subject to a cost constraint. Two families of channels are considered: 1) compound discrete memoryless channels (CDMC), and 2) channels with arbitrary memory, subject to an additive cost constraint, or more generally to a hard constraint on the conditional type of the channel output given the input. Both problems are closely connected. The random-coding exponent is achieved using a stacked binning scheme and a maximum penalized mutual information decoder, which may be thought of as an empirical generalized Maximum a Posteriori decoder. For channels with arbitrary memory, the random-coding exponents are larger than their CDMC counterparts. Applications of this study include watermarking, data hiding, communication in presence of partially known interferers, and problems such as broadcast channels, all of which involve the fundamental idea of binning.
    Comment: to appear in IEEE Transactions on Information Theory, without Appendices G and
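    For reference, the classical Gel'fand-Pinsker formula that this family generalizes (state known noncausally at the encoder alone, no adversary, no cost constraint) is

    ```latex
    C \;=\; \max_{P_{U|S},\; f:\,\mathcal{U}\times\mathcal{S}\to\mathcal{X}}
            \bigl[\, I(U;Y) \;-\; I(U;S) \,\bigr],
    ```

    where U is an auxiliary random variable and X = f(U, S); the maximum is achieved by exactly the random binning idea the abstract points to.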

    Interactive Schemes for the AWGN Channel with Noisy Feedback

    We study the problem of communication over an additive white Gaussian noise (AWGN) channel with an AWGN feedback channel. When the feedback channel is noiseless, the classic Schalkwijk-Kailath (S-K) scheme is known to achieve capacity in a simple sequential fashion, while attaining reliability superior to non-feedback schemes. In this work, we show how simplicity and reliability can be attained even when the feedback is noisy, provided that the feedback channel is sufficiently better than the feedforward channel. Specifically, we introduce a low-complexity low-delay interactive scheme that operates close to capacity for a fixed bit error probability (e.g. 10^{-6}). We then build on this scheme to provide two asymptotic constructions, one based on high dimensional lattices, and the other based on concatenated coding, that admit an error exponent significantly exceeding the best possible non-feedback exponent. Our approach is based on the interpretation of feedback transmission as a side-information problem, and employs an interactive modulo-lattice solution.
    Comment: Accepted for publication in the IEEE Transactions on Information Theory.
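    The noiseless-feedback S-K recursion that this scheme builds on is short enough to simulate directly; the following sketch (parameters are illustrative assumptions, and the paper's noisy-feedback scheme itself is more involved) shows the geometric decay of the estimation error that underlies the scheme's reliability:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    P, N, n_uses, M = 1.0, 0.1, 20, 64           # power, noise variance, channel uses, PAM points
    theta = (rng.integers(M) + 0.5) / M - 0.5    # message point in (-1/2, 1/2)

    # First use: send the message point uncoded at (approximately) full power.
    y = np.sqrt(12 * P) * theta + rng.normal(0, np.sqrt(N))
    theta_hat = y / np.sqrt(12 * P)
    var_err = N / (12 * P)                       # receiver-side estimation error variance

    for _ in range(n_uses - 1):
        err = theta_hat - theta                  # known to the transmitter via noiseless feedback
        x = np.sqrt(P / var_err) * err           # send the scaled estimation error at full power
        y = x + rng.normal(0, np.sqrt(N))
        theta_hat -= np.sqrt(P * var_err) / (P + N) * y  # LMMSE correction at the receiver
        var_err *= N / (P + N)                   # geometric shrinkage: rate ~ (1/2) log(1 + P/N)

    decoded = int(np.clip(np.round((theta_hat + 0.5) * M - 0.5), 0, M - 1))
    print("decoded message index:", decoded)
    ```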

    Random Access and Source-Channel Coding Error Exponents for Multiple Access Channels

    A new universal coding/decoding scheme for random access with collision detection is given in the case of two senders. The result is used to give an achievable joint source-channel coding error exponent for multiple access channels in the case of independent sources. This exponent is improved in a modified model that admits error-free zero-rate communication between the senders.
    Comment: This paper is submitted to the IEEE Transactions on Information Theory. It was presented in part at ISIT 2013 (IEEE International Symposium on Information Theory, Istanbul).

    Channel Detection in Coded Communication

    We consider the problem of block-coded communication, where in each block, the channel law belongs to one of two disjoint sets. The decoder is required to decode only messages that have passed through a channel from one of the sets, and thus has to detect which set contains the prevailing channel. We begin with the simplified case where each of the sets is a singleton. For any given code, we derive the optimum detection/decoding rule in the sense of the best trade-off among the probabilities of decoding error, false alarm, and misdetection, and also introduce sub-optimal detection/decoding rules which are simpler to implement. Then, various achievable bounds on the error exponents are derived, including the exact single-letter characterization of the random coding exponents for the optimal detector/decoder. We then extend the random coding analysis to general sets of channels, and show that there exists a universal detector/decoder which performs asymptotically as well as the optimal detector/decoder, when tuned to detect a channel from a specific pair of channels. The case of a pair of binary symmetric channels is discussed in detail.
    Comment: Submitted to IEEE Transactions on Information Theory.
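    As a feel for the singleton case, here is a deliberately simplified detector/decoder (a GLRT-style sketch of our own, not the paper's optimal rule): detect the channel by comparing the best codeword likelihood under each hypothesis against a threshold that trades false alarm against misdetection, then decode with maximum likelihood under the detected channel. Codebook, blocklength, and threshold are assumed values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, M = 64, 16
    code = rng.integers(0, 2, size=(M, n))   # random binary codebook (illustrative)
    p0, p1 = 0.05, 0.25                      # the two candidate BSC crossover probabilities
    thresh = 0.0                             # detection threshold: trades false alarm vs misdetection

    def loglik(y, x, p):
        d = np.count_nonzero(y != x)         # Hamming distance determines the BSC likelihood
        return d * np.log(p) + (n - d) * np.log(1 - p)

    def detect_and_decode(y):
        s0 = np.array([loglik(y, c, p0) for c in code])
        s1 = np.array([loglik(y, c, p1) for c in code])
        if s0.max() >= s1.max() + thresh:    # GLRT-style channel detection
            return 0, int(s0.argmax())       # detected channel 0, ML codeword under it
        return 1, int(s1.argmax())

    m = 3
    y = code[m] ^ (rng.random(n) < p0)       # transmit codeword m over the channel with p0
    print(detect_and_decode(y))              # expect (0, 3) with high probability
    ```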

    List decoding - random coding exponents and expurgated exponents

    Some new results are derived concerning random coding error exponents and expurgated exponents for list decoding with a deterministic list size L. Two asymptotic regimes are considered: the fixed list-size regime, where L is fixed independently of the block length n, and the exponential list-size regime, where L grows exponentially with n. We first derive a general upper bound on the list-decoding average error probability, which is suitable for both regimes. This bound leads to more specific bounds in the two regimes. In the fixed list-size regime, the bound is related to known bounds and we establish its exponential tightness. In the exponential list-size regime, we establish the achievability of the well known sphere packing lower bound. Relations to guessing exponents are also provided. An immediate byproduct of our analysis in both regimes is the universality of the maximum mutual information (MMI) list decoder in the error exponent sense. Finally, we consider expurgated bounds at low rates, both using Gallager's approach and the Csiszár-Körner-Marton approach, which is, in general, better (at least for L=1). The latter expurgated bound, which involves the notion of multi-information, is also modified to apply to continuous alphabet channels, and in particular, to the Gaussian memoryless channel, where the expression of the expurgated bound becomes quite explicit.
    Comment: 28 pages; submitted to the IEEE Trans. on Information Theory.
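    The MMI list decoder itself is simple to state and to prototype: rank the codewords by the empirical mutual information of their joint type with the channel output and return the L highest-scoring ones. A self-contained toy version (codebook and channel parameters are our assumptions):

    ```python
    import numpy as np

    def empirical_mi(x, y, A=2):
        """Mutual information (in nats) of the joint type of two length-n sequences."""
        n = len(x)
        joint = np.zeros((A, A))
        for a, b in zip(x, y):
            joint[a, b] += 1.0 / n               # empirical joint distribution
        px, py = joint.sum(axis=1), joint.sum(axis=0)
        mask = joint > 0
        return float(np.sum(joint[mask] * np.log(joint[mask] / np.outer(px, py)[mask])))

    def mmi_list_decode(code, y, L):
        scores = [empirical_mi(c, y) for c in code]
        return list(np.argsort(scores)[::-1][:L])  # the L codewords with the highest empirical MI

    rng = np.random.default_rng(2)
    code = rng.integers(0, 2, size=(32, 100))
    y = code[7] ^ (rng.random(100) < 0.1)        # send codeword 7 over a BSC(0.1)
    print(mmi_list_decode(code, y, L=4))         # the list should contain 7 w.h.p.
    ```

    Note the decoder never uses the channel law, only joint types, which is exactly the sense in which it is universal.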

    On Binary Distributed Hypothesis Testing

    We consider the problem of distributed binary hypothesis testing of two sequences that are generated by an i.i.d. doubly-binary symmetric source. Each sequence is observed by a different terminal. The two hypotheses correspond to different levels of correlation between the two source components, i.e., the crossover probability between the two. The terminals communicate with a decision function via rate-limited noiseless links. We analyze the tradeoff between the exponential decay of the two error probabilities associated with the hypothesis test and the communication rates. We first consider the side-information setting where one encoder is allowed to send the full sequence. For this setting, previous work exploits the fact that a decoding error of the source does not necessarily lead to an erroneous decision upon the hypothesis. We provide improved achievability results by carrying out a tighter analysis of the effect of binning error; the results are also more complete as they cover the full exponent tradeoff and all possible correlations. We then turn to the setting of symmetric rates for which we utilize Körner-Marton coding to generalize the results, with little degradation with respect to the performance with a one-sided constraint (side-information setting).
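    A useful benchmark to keep in mind (classical background, not the paper's contribution): if both sequences were available uncoded, the test would reduce to the i.i.d. XOR sequence Z = X xor Y ~ Bern(q), and by Stein's lemma the best type-II error exponent at any fixed type-I level is the divergence D(q0 || q1). The coded exponents are measured against this rate-unconstrained baseline.

    ```python
    import math

    def dkl_binary(p, q):
        """KL divergence D(Bern(p) || Bern(q)) in nats, for p, q in (0, 1)."""
        return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

    q0, q1 = 0.1, 0.3            # hypothesized crossover probabilities (illustrative values)
    print(dkl_binary(q0, q1))    # Stein exponent of the rate-unconstrained test
    ```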

    On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection

    The distributed hypothesis testing problem with full side-information is studied. The trade-off (reliability function) between the two types of error exponents under limited rate is examined in the following way. First, the problem is reduced to the problem of determining the reliability function of channel codes designed for detection (in analogy to a similar result which connects the reliability function of distributed lossless compression and ordinary channel codes). Second, a single-letter random-coding bound based on a hierarchical ensemble, as well as a single-letter expurgated bound, are derived for the reliability of channel-detection codes. Both bounds are derived for a system which employs the optimal detection rule. We conjecture that the resulting random-coding bound is ensemble-tight, and consequently optimal within the class of quantization-and-binning schemes.

    Exact random coding error exponents of optimal bin index decoding

    We consider ensembles of channel codes that are partitioned into bins, and focus on analysis of exact random coding error exponents associated with optimum decoding of the index of the bin to which the transmitted codeword belongs. Two main conclusions arise from this analysis: (i) for independent random selection of codewords within a given type class, the random coding exponent of optimal bin index decoding is given by the ordinary random coding exponent function, computed at the rate of the entire code, independently of the exponential rate of the size of the bin; (ii) for this ensemble of codes, sub-optimal bin index decoding that is based on ordinary maximum likelihood (ML) decoding is as good as optimal bin index decoding in terms of the random coding error exponent achieved. Finally, for the sake of completeness, we also outline how our analysis of exact random coding exponents extends to the hierarchical ensemble that corresponds to superposition coding and optimal decoding, where for each bin, first, a cloud center is drawn at random, and then the codewords of this bin are drawn conditionally independently given the cloud center. For this ensemble, conclusions (i) and (ii) above no longer necessarily hold in general.
    Comment: 19 pages; submitted to IEEE Trans. on Information Theory.
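    Conclusion (i) refers to the ordinary (Gallager) random coding exponent function, which is easy to evaluate for a concrete channel. A small sketch for a BSC with the uniform input distribution (channel and rates are illustrative assumptions; per the abstract, this function at the full code rate, not the bin rate, is what governs optimal bin index decoding for the independent-selection ensemble):

    ```python
    import numpy as np

    def E0(rho, p):
        """Gallager's E0(rho) for a BSC(p) with the uniform input distribution, in nats."""
        s = 1.0 / (1.0 + rho)
        return rho * np.log(2) - (1 + rho) * np.log(p**s + (1 - p)**s)

    def Er(R, p, grid=np.linspace(0.0, 1.0, 1001)):
        # Random coding exponent: maximize E0(rho) - rho * R over 0 <= rho <= 1.
        return float(max(E0(rho, p) - rho * R for rho in grid))

    p = 0.1
    for R in (0.05, 0.15, 0.25):       # rates in nats per channel use, below capacity
        print(R, Er(R, p))             # evaluated at the rate of the entire code
    ```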