
    Capacity Definitions for General Channels with Receiver Side Information

    We consider three capacity definitions for general channels with channel side information at the receiver, where the channel is modeled as a sequence of finite-dimensional conditional distributions that need not be stationary, ergodic, or information stable. The {\em Shannon capacity} is the highest rate asymptotically achievable with arbitrarily small error probability. The {\em capacity versus outage} is the highest rate asymptotically achievable with a given probability of decoder-recognized outage. The {\em expected capacity} is the highest average rate asymptotically achievable with a single encoder and multiple decoders, where the channel side information determines the decoder in use. As a special case of channel codes for expected rate, the code for capacity versus outage has two decoders: one operates in the non-outage states and decodes all transmitted information, and the other operates in the outage states and decodes nothing. Expected capacity equals Shannon capacity for channels governed by a stationary ergodic random process but is typically greater for general channels. These alternative capacity definitions essentially relax the constraint that all transmitted information must be decoded at the receiver. We derive capacity theorems for these definitions through the information density. Numerical examples are provided to demonstrate their connections and differences. We also discuss the implications of these alternative capacity definitions for end-to-end distortion, source-channel coding, and separation.
    Comment: Submitted to IEEE Trans. Inform. Theory, April 200
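
    As background for the information-density approach, here is the Verdu-Han general-channel machinery in minimal form, with the receiver side information suppressed for brevity; the notation is standard background, assumed rather than quoted from the paper:

        i(x^n; y^n) = \log \frac{P_{Y^n \mid X^n}(y^n \mid x^n)}{P_{Y^n}(y^n)},
        \qquad
        C = \sup_{\mathbf{X}} \underline{I}(\mathbf{X}; \mathbf{Y}),

    where \underline{I}(\mathbf{X}; \mathbf{Y}) is the liminf in probability of (1/n) i(X^n; Y^n). Roughly, the outage and expected variants replace this worst-case liminf with a quantile or an average of the same information spectrum.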

    Capacity definitions and coding strategies for general channels with receiver side information

    We consider three capacity definitions for a channel with channel side information at the receiver. The Shannon capacity is the highest rate asymptotically achievable with arbitrarily small error probability. The outage capacity is the highest rate asymptotically achievable with a given probability of decoder-recognized outage. The expected capacity is the highest expected rate asymptotically achievable using a single encoder and multiple decoders, where side information at the decoder determines which code to use. We motivate the latter two definitions using the concept of maximizing the reliably received rate. A coding theorem is given for each capacity definition.
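
    A schematic statement of the expected-rate objective, assuming a finite set of receiver states s with probabilities p(s) and one decoder per state; the notation is illustrative rather than the paper's:

        C_{\mathrm{exp}} = \sup \sum_{s} p(s)\, R_s ,

    where the supremum runs over single-encoder codes whose state-s decoder recovers a rate-R_s portion of the transmitted stream with vanishing error probability. Capacity versus outage is the two-level special case R_s \in \{0, R\}.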

    Generalizing Capacity: New Definitions and Capacity Theorems for Composite Channels

    We consider three capacity definitions for composite channels with channel side information at the receiver. A composite channel consists of a collection of different channels with a distribution characterizing the probability that each channel is in operation. The Shannon capacity of a channel is the highest rate asymptotically achievable with arbitrarily small error probability. Under this definition, the transmission strategy used to achieve the capacity must achieve arbitrarily small error probability for all channels in the collection comprising the composite channel. The resulting capacity is dominated by the worst channel in its collection, no matter how unlikely that channel is. We therefore broaden the definition of capacity to allow for some outage. The capacity versus outage is the highest rate asymptotically achievable with a given probability of decoder-recognized outage. The expected capacity is the highest average rate asymptotically achievable with a single encoder and multiple decoders, where channel side information determines the channel in use. The expected capacity is a generalization of capacity versus outage, since codes designed for capacity versus outage decode at one of two rates (rate zero when the channel is in outage and the target rate otherwise), while codes designed for expected capacity can decode at many rates. Expected capacity equals Shannon capacity for channels governed by a stationary ergodic random process but is typically greater for general channels. The capacity versus outage and expected capacity definitions relax the constraint that all transmitted information must be decoded at the receiver. We derive channel coding theorems for these capacity definitions through information density and provide numerical examples to highlight their connections and differences. We also discuss the implications of these alternative capacity definitions for end-to-end distortion, source-channel coding, and separation.
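
    To make the comparison concrete, here is a minimal numerical sketch for a toy composite channel, assuming a finite collection with known per-component capacities; the function names and numbers are invented for illustration, and the expected-rate figure is only an upper bound:

        # Toy composite channel: component capacities C_i (bits/use) with
        # probabilities p_i that component i is the channel in operation.
        # Illustrative numbers only; not taken from the paper.

        def shannon_rate(caps, probs):
            # Arbitrarily small error is required on every component, so the
            # worst channel in the collection dominates, however unlikely.
            return min(caps)

        def outage_rate(caps, probs, q):
            # Largest rate R with P(component capacity < R) <= q: declare the
            # weakest components, up to probability mass q, in outage.
            best, dropped = 0.0, 0.0
            for c, p in sorted(zip(caps, probs)):
                if dropped <= q:
                    best = c  # mass strictly below c stays within the budget
                dropped += p
            return best

        def expected_rate_bound(caps, probs):
            # With one decoder per component, the expected decoded rate is at
            # most sum_i p_i * C_i; a single encoder generally achieves less,
            # so this is an upper bound on the expected capacity.
            return sum(p * c for c, p in zip(caps, probs))

        caps, probs = [0.5, 2.0], [0.5, 0.5]
        print(shannon_rate(caps, probs))         # 0.5: worst case dominates
        print(outage_rate(caps, probs, q=0.5))   # 2.0: weak component in outage
        print(expected_rate_bound(caps, probs))  # 1.25: expected-rate ceiling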

    Distortion Metrics of Composite Channels with Receiver Side Information

    We consider transmission of stationary ergodic sources over non-ergodic composite channels with channel state information at the receiver (CSIR). Previously we introduced alternative capacity definitions to Shannon capacity, including outage and expected capacity. These generalized definitions relax the constraint of Shannon capacity that all transmitted information must be decoded at the receiver. In this work, alternative end-to-end distortion metrics such as outage and expected distortion are introduced to relax the constraint that a single distortion level must be maintained for all channel states. Through the example of transmission of a Gaussian source over a slow-fading Gaussian channel, we illustrate that the end-to-end distortion metric dictates whether source and channel coding can be separated in a communication system. We also show that the source and channel need to exchange information through an appropriate interface to facilitate separate encoding and decoding.
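
    As a concrete instance of the Gaussian example, here is a minimal simulation sketch, assuming a unit-variance Gaussian source, a bandwidth-matched slow-fading AWGN channel with exponentially distributed power gains and CSIR, and separate source and channel coding at one fixed rate; all parameter values are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)
        snr = 10.0                          # average SNR, hypothetical
        h = rng.exponential(size=100_000)   # slow-fading power gains
        cap = 0.5 * np.log2(1.0 + h * snr)  # realized capacity in each state

        def expected_distortion(rate):
            # Separate coding at a fixed rate: the decoder attains the Gaussian
            # distortion-rate function 2^(-2R) whenever the realized channel
            # supports R, and falls back to the source mean (distortion 1)
            # in outage.
            return np.mean(np.where(cap >= rate, 2.0 ** (-2.0 * rate), 1.0))

        rates = np.linspace(0.1, 5.0, 50)
        best = min(rates, key=expected_distortion)
        print(best, expected_distortion(best))

    Under the expected-distortion metric the best fixed rate deliberately tolerates some outage, which is the tension behind the separation discussion above.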

    Joint Network and Gelfand-Pinsker Coding for 3-Receiver Gaussian Broadcast Channels with Receiver Message Side Information

    The problem of characterizing the capacity region of Gaussian broadcast channels with receiver message side information appears difficult and remains open for N >= 3 receivers. This paper proposes a joint network and Gelfand-Pinsker coding method for the 3-receiver case. Using this method, we establish a unified inner bound on the capacity region of 3-receiver Gaussian broadcast channels under a general message side information configuration. The achievability proof of the inner bound uses an idea of joint interference cancellation, where interference is canceled both by dirty-paper coding at the encoder and by successive decoding at some of the decoders. We show that the inner bound is larger than that achieved by state-of-the-art coding schemes. An outer bound is also established and shown to be tight in 46 out of the 64 possible cases.
    Comment: Author's final version (presented at the 2014 IEEE International Symposium on Information Theory [ISIT 2014])
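
    The dirty-paper ingredient invoked above is Costa's classical result, stated here as standard background rather than as the paper's bound: for Y = X + S + Z with input power constraint P, interference S known noncausally at the encoder, and noise Z ~ N(0, N),

        R_{\mathrm{DPC}} = \tfrac{1}{2} \log\!\left(1 + \tfrac{P}{N}\right),

    so the known interference costs no rate. This is what allows the encoder to pre-cancel signals intended for other receivers while successive decoding removes the rest.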

    Compound Multiple Access Channels with Partial Cooperation

    A two-user discrete memoryless compound multiple access channel with a common message and conferencing decoders is considered. The capacity region is characterized in the special cases of physically degraded channels and unidirectional cooperation, and achievable rate regions are provided for the general case. The results are then extended to the corresponding Gaussian model. In the Gaussian setup, the provided achievable rates are shown to lie within a constant number of bits from the boundary of the capacity region in several special cases. An alternative model, in which the encoders are connected by conferencing links rather than sharing a common message, is studied as well, and the capacity region of this model is also determined for the cases of physically degraded channels and unidirectional cooperation. Numerical results are provided to give insight into the potential gains of conferencing at the decoders and encoders.
    Comment: Submitted to IEEE Transactions on Information Theory
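
    For orientation, the single-channel ancestor of encoder conferencing is Willems' two-user multiple access channel with conferencing links of capacities C_{12} and C_{21}; its capacity region (classical background, not the compound-channel region characterized in the paper) is the union over p(u) p(x_1|u) p(x_2|u) of rate pairs satisfying

        R_1 \le I(X_1; Y \mid X_2, U) + C_{12},
        R_2 \le I(X_2; Y \mid X_1, U) + C_{21},
        R_1 + R_2 \le \min\{ I(X_1, X_2; Y \mid U) + C_{12} + C_{21},\; I(X_1, X_2; Y) \}.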

    The Approximate Capacity of the Many-to-One and One-to-Many Gaussian Interference Channels

    Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within one bit/s/Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes the dimension of signal scale explicit. A central theme emerges: the use of lattice codes to align interfering signals on the signal scale.
    Comment: 45 pages, 16 figures. Submitted to IEEE Transactions on Information Theory
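
    To see what "the dimension of signal scale" means, here is a minimal sketch of a linear deterministic (bit-shift) model in the spirit of the abstract; the gains and bit patterns are invented for illustration:

        import numpy as np

        q = 5  # number of signal levels (bits per symbol), hypothetical

        def link(x, n):
            # Bit-shift link with integer gain n: only the top n of q levels
            # survive, landing at the bottom of the received vector, so signal
            # scale becomes an explicit coordinate (the vector position).
            y = np.zeros(q, dtype=int)
            y[q - n:] = x[:n]
            return y

        def rx0(signals, gains):
            # Receiver 0 of the many-to-one channel hears every transmitter;
            # bits arriving at the same level add modulo 2, so interferers
            # received at equal strength align on the signal scale -- the
            # effect that lattice codes reproduce in the Gaussian channel.
            y = np.zeros(q, dtype=int)
            for x, n in zip(signals, gains):
                y = (y + link(x, n)) % 2
            return y

        x0 = np.array([1, 0, 1, 1, 0])  # desired transmitter, gain 5
        x1 = np.array([0, 1, 1, 0, 0])  # interferer, gain 2
        x2 = np.array([1, 1, 0, 0, 1])  # interferer, gain 2
        print(rx0([x0, x1, x2], [5, 2, 2]))  # interference occupies 2 levels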