Generalizing Capacity: New Definitions and Capacity Theorems for Composite Channels
We consider three capacity definitions for composite
channels with channel side information at the receiver. A composite
channel consists of a collection of different channels with a
distribution characterizing the probability that each channel is in
operation. The Shannon capacity of a channel is the highest rate
asymptotically achievable with arbitrarily small error probability.
Under this definition, the transmission strategy used to achieve
the capacity must achieve arbitrarily small error probability for
all channels in the collection comprising the composite channel.
The resulting capacity is dominated by the worst channel in its
collection, no matter how unlikely that channel is. We, therefore,
broaden the definition of capacity to allow for some outage.
The capacity versus outage is the highest rate asymptotically
achievable with a given probability of decoder-recognized outage.
The expected capacity is the highest average rate asymptotically
achievable with a single encoder and multiple decoders, where
channel side information determines the channel in use. The
expected capacity is a generalization of capacity versus outage
since codes designed for capacity versus outage decode at one of
two rates (rate zero when the channel is in outage and the target
rate otherwise) while codes designed for expected capacity can
decode at many rates. Expected capacity equals Shannon capacity
for channels governed by a stationary ergodic random process
but is typically greater for general channels. The capacity versus
outage and expected capacity definitions relax the constraint that
all transmitted information must be decoded at the receiver. We
derive channel coding theorems for these capacity definitions
through information density and provide numerical examples to
highlight their connections and differences. We also discuss the
implications of these alternative capacity definitions for end-to-end
distortion, source-channel coding, and separation.
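The relationship among the three definitions can be made concrete with a small numerical sketch. The component channels below (probability, capacity pairs) and the outage-rule implementation are illustrative assumptions, not the paper's construction; they only show how the worst-case, outage, and averaged notions order themselves.

```python
# Toy composite channel: (probability, capacity in bits/use) per component channel.
# These particular numbers are illustrative assumptions.
CHANNELS = [(0.1, 0.5), (0.6, 2.0), (0.3, 4.0)]

def shannon_capacity(channels):
    # The code must work on every channel in the collection, so the
    # worst channel dominates regardless of how unlikely it is.
    return min(c for _, c in channels)

def capacity_vs_outage(channels, q):
    # Highest rate R such that the channels unable to support R
    # (decoder-recognized outage) have total probability at most q.
    best = 0.0
    for _, c in sorted(channels, key=lambda pc: pc[1]):
        outage_prob = sum(p for p, cc in channels if cc < c)
        if outage_prob <= q + 1e-12:
            best = c
    return best

def expected_capacity(channels):
    # One encoder, one decoder per channel realization:
    # probability-weighted average of the per-channel rates.
    return sum(p * c for p, c in channels)

print(shannon_capacity(CHANNELS))         # dominated by the 0.5-bit channel
print(capacity_vs_outage(CHANNELS, 0.1))  # tolerating 10% outage lifts the rate
print(expected_capacity(CHANNELS))
```

On this toy collection the Shannon capacity is 0.5 bits, capacity versus outage at q = 0.1 is 2.0 bits (the weak channel is declared in outage), and the expected capacity is higher still, matching the ordering described in the abstract.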
Sparse Regression Codes for Multi-terminal Source and Channel Coding
We study a new class of codes for Gaussian multi-terminal source and channel
coding. These codes are designed using the statistical framework of
high-dimensional linear regression and are called Sparse Superposition or
Sparse Regression codes. Codewords are linear combinations of subsets of
columns of a design matrix. These codes were recently introduced by Barron and
Joseph and shown to achieve the channel capacity of AWGN channels with
computationally feasible decoding. They have also recently been shown to
achieve the optimal rate-distortion function for Gaussian sources. In this
paper, we demonstrate how to implement random binning and superposition coding
using sparse regression codes. In particular, with minimum-distance
encoding/decoding it is shown that sparse regression codes attain the optimal
information-theoretic limits for a variety of multi-terminal source and channel
coding problems.
Comment: 9 pages, appeared in the Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, 2012
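The codebook structure described above can be sketched in a few lines: a codeword is A·β, where β has exactly one nonzero entry per section of the design matrix. This is a minimal illustration with toy sizes (L, M, n below are assumptions) and brute-force minimum-distance decoding; the computationally feasible decoders are the ones cited in the abstract, not this exhaustive search.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
L, M, n = 3, 4, 32  # toy sizes: L sections of M columns each, block length n
A = rng.standard_normal((n, L * M)) / np.sqrt(n)  # Gaussian design matrix

def encode(msg):
    """Codeword = A @ beta, with one selected column per section."""
    beta = np.zeros(L * M)
    for section, col in enumerate(msg):
        beta[section * M + col] = 1.0
    return A @ beta

def decode_min_distance(y):
    """Exhaustive minimum-distance decoding over all M**L codewords."""
    return min(product(range(M), repeat=L),
               key=lambda msg: np.linalg.norm(y - encode(msg)))

msg = (2, 0, 3)                                   # one column index per section
y = encode(msg) + 0.05 * rng.standard_normal(n)   # noisy (AWGN-like) observation
print(decode_min_distance(y))
```

The message rate is (L log M)/n; growing M polynomially in n is what keeps the codebook both large and searchable in the feasible-decoding constructions.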
Bounds on entanglement assisted source-channel coding via the Lovász theta number and its variants
We study zero-error entanglement assisted source-channel coding
(communication in the presence of side information). Adapting a technique of
Beigi, we show that such coding requires existence of a set of vectors
satisfying orthogonality conditions related to suitably defined graphs $G$ and
$H$. Such vectors exist if and only if $\vartheta(\overline{G}) \le \vartheta(\overline{H})$,
where $\vartheta$ represents the Lovász number. We also obtain similar
inequalities for the related Schrijver and Szegedy numbers.
These inequalities reproduce several known bounds and also lead to new
results. We provide a lower bound on the entanglement assisted cost rate. We
show that the entanglement assisted independence number is bounded by the
Schrijver number. Therefore, we are able to
disprove the conjecture that the one-shot entanglement-assisted zero-error
capacity is equal to the integer part of the Lovász number. Beigi introduced
a quantity as an upper bound on the entanglement assisted independence number
and posed the question of whether it equals the integer part of the Lovász
number. We answer this in the affirmative and show that a related quantity is
equal to the ceiling of the Lovász number. We show that a quantity recently
introduced in the context of Tsirelson's conjecture is equal to the ceiling of
the Szegedy number.
In an appendix we investigate multiplicativity properties of Schrijver's and
Szegedy's numbers, as well as projective rank.
Comment: Fixed proof of multiplicativity; more connections to prior work in conclusion; many changes in exposition
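For reference, the Lovász number around which all of these bounds revolve admits a standard semidefinite-programming formulation. This is the textbook definition, not a formula taken from the paper:

```latex
% Lovász theta number of a graph G = (V, E), as an SDP over symmetric
% positive semidefinite matrices B indexed by the vertices:
\vartheta(G) \;=\; \max_{B \succeq 0}
  \Bigl\{\, \textstyle\sum_{i,j \in V} B_{ij}
  \;:\; \operatorname{tr}(B) = 1,\;\;
  B_{ij} = 0 \ \text{for all}\ \{i,j\} \in E \,\Bigr\}
% Sandwich theorem: \alpha(G) \le \vartheta(G) \le \chi(\overline{G}).
```

Because $\vartheta$ is efficiently computable and sandwiched between the independence number and the clique cover number, inequalities of the kind stated in the abstract translate directly into computable zero-error capacity bounds.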
Capacity of wireless erasure networks
In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring each node to send the same signal on all outgoing channels. However, we assume there is no interference in reception. Such models are therefore appropriate for wireless networks in which all information transmission is packetized and some mechanism for interference avoidance is already built in. This paper looks at multicast problems over these networks. We obtain the capacity under the assumption that the erasure locations on all links of the network are provided to the destinations. It turns out that the capacity region has a natural max-flow min-cut interpretation, where the definition of cut capacity incorporates the broadcast property of the wireless medium. It is further shown that linear coding at the nodes of the network suffices to achieve the capacity region. Finally, the performance of different coding schemes in these networks when no side information is available to the destinations is analyzed.
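The broadcast-aware cut capacity mentioned above can be sketched on a toy network, under one plausible reading of the definition: since a node sends the same signal on all outgoing channels, its contribution across a cut is 1 minus the product of the erasure probabilities of its crossing edges (the broadcast is lost only if every crossing copy is erased). The four-node diamond network and its erasure probabilities below are illustrative assumptions, not an example from the paper.

```python
from itertools import chain, combinations

# Toy diamond network: source 's' broadcasts to relays 'a' and 'b',
# which both forward to destination 'd'. Erasure probabilities are assumptions.
ERASURE = {('s', 'a'): 0.2, ('s', 'b'): 0.2, ('a', 'd'): 0.1, ('b', 'd'): 0.1}

def cut_value(src_side):
    """Broadcast-aware cut capacity: a node's crossing edges carry the SAME
    signal, so the node contributes 1 - prod(erasure probs of crossing edges)."""
    total = 0.0
    for node in src_side:
        crossing = [eps for (u, v), eps in ERASURE.items()
                    if u == node and v not in src_side]
        if crossing:
            prod = 1.0
            for eps in crossing:
                prod *= eps
            total += 1.0 - prod
    return total

def min_cut(source='s', sink='d'):
    """Enumerate all source-side vertex sets containing the source
    but not the sink, and take the minimum cut value."""
    nodes = {u for edge in ERASURE for u in edge}
    relays = [v for v in nodes if v not in (source, sink)]
    subsets = chain.from_iterable(
        combinations(relays, r) for r in range(len(relays) + 1))
    return min(cut_value({source, *sub}) for sub in subsets)

print(min_cut())  # bottleneck: the source's broadcast, 1 - 0.2 * 0.2 = 0.96
```

Here the minimizing cut isolates the source, whose single broadcast fails only when both outgoing copies are erased; this is how the cut capacity counts a broadcast node once rather than per edge.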