
    The Capacity of Channels with Feedback

    We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey's concept of directed information and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining appropriate sufficient statistics at the encoder and decoder. Third, a dynamic programming framework for computing the capacity of Markov channels is presented. Fourth, it is shown that the average cost optimality equation (ACOE) can be viewed as an implicit single-letter characterization of the capacity. Fifth, scenarios with simple sufficient statistics are described.
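    For reference, Massey's directed information from the input sequence to the output sequence, which the abstract invokes but does not spell out, is the standard quantity

```latex
I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1}),
```

    and the feedback capacity characterizations of this kind take the form of a normalized maximization, $C_{\mathrm{FB}} = \lim_{n\to\infty} \frac{1}{n} \max I(X^n \to Y^n)$, with the maximum over causally conditioned input distributions (the exact regularity conditions are in the paper, not reproduced here).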

    Capacity of a POST Channel with and without Feedback

    We consider finite state channels where the state of the channel is its previous output. We refer to these as POST (Previous Output is the STate) channels. We first focus on POST(α) channels. These channels have binary inputs and outputs, where the state determines whether the channel behaves as a Z or an S channel, both with parameter α. We show that the non-feedback capacity of the POST(α) channel equals its feedback capacity, despite the memory of the channel. The proof of this surprising result is based on showing that the induced output distribution, when maximizing the directed information in the presence of feedback, can also be achieved by an input distribution that does not utilize the feedback. We show that this is a sufficient condition for the feedback capacity to equal the non-feedback capacity for any finite state channel. We show that the result carries over from the POST(α) channel to a binary POST channel where the previous output determines whether the current channel will be binary with parameters (a, b) or (b, a). Finally, we show that, in general, feedback may increase the capacity of a POST channel.
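    The POST(α) channel model described above can be sketched in a few lines. This is an illustrative simulation, not code from the paper; the function names are our own, and the assignment of state 0 to the Z channel and state 1 to the S channel is an assumption (the paper fixes a convention we do not reproduce here).

```python
import random

def post_channel_step(x, state, alpha):
    """One use of a POST(alpha) channel (illustrative sketch).

    state is the previous channel output. Assumed convention:
    state == 0 -> Z channel (input 0 passes clean, input 1 flips
    to 0 with probability alpha); state == 1 -> S channel (the
    mirror image: input 1 passes clean, input 0 flips to 1).
    """
    if state == 0:  # Z channel with parameter alpha
        return x if (x == 0 or random.random() >= alpha) else 0
    else:           # S channel with parameter alpha
        return x if (x == 1 or random.random() >= alpha) else 1

def simulate(inputs, alpha, initial_state=0):
    """Run a sequence of inputs through the channel; the state
    after each use is the output just produced (POST property)."""
    state, outputs = initial_state, []
    for x in inputs:
        y = post_channel_step(x, state, alpha)
        outputs.append(y)
        state = y  # Previous Output is the STate
    return outputs
```

    With α = 0 both the Z and S channels are noiseless, so the output sequence equals the input sequence; with α = 1 the channel output is pinned to the current state.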

    On the Capacity of Symmetric Gaussian Interference Channels with Feedback

    In this paper, we propose a new coding scheme for symmetric Gaussian interference channels with feedback based on the ideas of time-varying coding schemes. The proposed scheme improves the Suh-Tse and Kramer inner bounds on the channel capacity for the cases of weak and not very strong interference. This improvement is more significant when the signal-to-noise ratio (SNR) is not very high. It is shown theoretically and numerically that our coding scheme can outperform the Kramer code. In addition, the generalized degrees-of-freedom of our proposed coding scheme is equal to that of the Suh-Tse scheme in the strong interference case. The numerical results show that our coding scheme can attain better performance than the Suh-Tse coding scheme for all channel parameters. Furthermore, the simplicity of the encoding/decoding algorithms is another strong point of our proposed coding scheme compared with the Suh-Tse coding scheme. More importantly, our results show that an optimal coding scheme for the symmetric Gaussian interference channels with feedback can be achieved by using only marginal posterior distributions under a better cooperation strategy between transmitters.
    Comment: To appear in Proc. of IEEE International Symposium on Information Theory (ISIT), Hong Kong, June 14-19, 2015.

    Identification via Quantum Channels in the Presence of Prior Correlation and Feedback

    Continuing our earlier work (quant-ph/0401060), we give two alternative proofs of the result that a noiseless qubit channel has identification capacity 2: the first is direct by a "maximal code with random extension" argument, the second is by showing that 1 bit of entanglement (which can be generated by transmitting 1 qubit) and negligible (quantum) communication has identification capacity 2. This generalises a random hashing construction of Ahlswede and Dueck: that 1 shared random bit together with negligible communication has identification capacity 1. We then apply these results to prove capacity formulas for various quantum feedback channels: passive classical feedback for quantum-classical channels, a feedback model for classical-quantum channels, and "coherent feedback" for general channels.
    Comment: 19 pages. Requires Rinton-P9x6.cls. v2 has some minor errors/typos corrected and the claims of remark 22 toned down (proofs are not so easy after all). v3 has references to simultaneous ID coding removed: there were necessary changes in quant-ph/0401060. v4 (final form) has minor corrections.

    Secret message capacity of a line network

    We investigate the problem of information theoretically secure communication in a line network with erasure channels and state feedback. We consider a spectrum of cases for the private randomness that intermediate nodes can generate, ranging from having intermediate nodes generate unlimited private randomness, to having intermediate nodes generate no private randomness, and all cases in between. We characterize the secret message capacity when either only one of the channels is eavesdropped or all of the channels are eavesdropped, and we develop polynomial time algorithms that achieve these capacities. We also give an outer bound for the case where an arbitrary number of channels is eavesdropped. Our work is the first to characterize the secrecy capacity of a network of arbitrary size, with imperfect channels and feedback. As a side result, we derive the secret key and secret message capacity of a one-hop network, when the source has limited randomness.
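    The one-hop setting mentioned at the end admits a classic toy construction worth keeping in mind when reading the paper: over erasure channels with public feedback, bits that reach the legitimate receiver but are erased at the eavesdropper can serve as shared secret bits. The sketch below is our own illustration under simplifying assumptions (independent erasures, and, for illustration only, Eve's erasure pattern is revealed at the end so the key can be extracted directly); it is not the paper's algorithm.

```python
import random

def one_hop_secret_bits(n, delta_bob, delta_eve, seed=0):
    """Toy one-hop secret-key sketch (illustrative, not the
    paper's scheme): Alice sends n uniform bits; Bob sees each
    bit unless it is erased (prob. delta_bob), and a passive
    eavesdropper Eve likewise (prob. delta_eve), independently.
    Bob's feedback reveals WHICH indices he received, not their
    values, so the bits Bob got but Eve lost stay secret."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    bob_got = [rng.random() >= delta_bob for _ in range(n)]
    eve_got = [rng.random() >= delta_eve for _ in range(n)]
    # Keep exactly the positions Bob received and Eve missed.
    return [bits[i] for i in range(n) if bob_got[i] and not eve_got[i]]
```

    The expected key rate of this naive scheme is (1 - delta_bob) * delta_eve bits per channel use; in particular, if Eve erases nothing (delta_eve = 0) or Bob receives nothing (delta_bob = 1), no secret bits are produced.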