Noisy Network Coding with Partial DF
In this paper, we propose a noisy network coding scheme integrated with partial
decode-and-forward relaying for single-source multicast discrete memoryless
networks (DMNs). Our coding scheme generalizes the
partial-decode-compress-and-forward scheme (Theorem 7) by Cover and El Gamal.
This is the first generalization of the theorem to DMNs in which each relay
performs both partial decode-and-forward and compress-and-forward
simultaneously. Our coding scheme simultaneously generalizes both noisy network
coding by Lim, Kim, El Gamal, and Chung and distributed decode-and-forward by
Lim, Kim, and Kim. Combining the two schemes is nontrivial because of an
inherent incompatibility between their encoding and decoding strategies. We
resolve this by sending the same long message over multiple blocks at the
source and, at the same time, letting the source find auxiliary covering
indices that carry information about the message simultaneously over all
blocks.

Comment: 5 pages, 1 figure, to appear in Proc. IEEE ISIT 201
Reliable Physical Layer Network Coding
When two or more users in a wireless network transmit simultaneously, their
electromagnetic signals are linearly superimposed on the channel. As a result,
a receiver that is interested in one of these signals sees the others as
unwanted interference. This property of the wireless medium is typically viewed
as a hindrance to reliable communication over a network. However, using a
recently developed coding strategy, interference can in fact be harnessed for
network coding. In a wired network, (linear) network coding refers to each
intermediate node taking its received packets, computing a linear combination
over a finite field, and forwarding the outcome towards the destinations. Then,
given an appropriate set of linear combinations, a destination can solve for
its desired packets. For certain topologies, this strategy can attain
significantly higher throughput than routing-based strategies. Reliable
physical layer network coding takes this idea one step further: using
judiciously chosen linear error-correcting codes, intermediate nodes in a
wireless network can directly recover linear combinations of the packets from
the observed noisy superpositions of transmitted signals. Starting with some
simple examples, this survey explores the core ideas behind this new technique
and the possibilities it offers for communication over interference-limited
wireless networks.

Comment: 19 pages, 14 figures, survey paper to appear in Proceedings of the
IEE
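As a toy illustration of the wired case described above (my own sketch, not an example from the survey), take GF(2), the simplest finite field, where a linear combination of packets is just a bitwise XOR. This is the operation performed by the relay in the classic butterfly network: a single forwarded packet carries information about both inputs, and a destination that already holds one packet can solve for the other.

```python
def relay(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """Intermediate node: forward the GF(2) linear combination (bitwise XOR)."""
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

def solve(mixed: bytes, known: bytes) -> bytes:
    """Destination: XOR out the packet it already holds to recover the other."""
    return bytes(m ^ k for m, k in zip(mixed, known))

pkt_a = b"hello"
pkt_b = b"world"
mix = relay(pkt_a, pkt_b)            # one packet serves both destinations
assert solve(mix, pkt_a) == pkt_b    # destination 1 already has pkt_a
assert solve(mix, pkt_b) == pkt_a    # destination 2 already has pkt_b
```

In the physical-layer version surveyed here, the relay would obtain this same XOR (or, more generally, an integer linear combination of lattice codewords) directly from the noisy superposition of the two transmitted signals, without first decoding each packet separately.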
Wyner-Ziv Type Versus Noisy Network Coding For a State-Dependent MAC
We consider a two-user state-dependent multiaccess channel in which the
states of the channel are known non-causally to one of the encoders and only
strictly causally to the other encoder. Both encoders transmit a common message
and, in addition, the encoder that knows the states non-causally transmits an
individual message. We find explicit characterizations of the capacity region
of this communication model in both discrete memoryless and memoryless Gaussian
cases. The analysis also reveals optimal ways of exploiting the knowledge of
the state only strictly causally at the encoder that sends only the common
message when such knowledge is beneficial. The encoders collaborate to convey
to the decoder a lossy version of the state, in addition to transmitting the
information messages through a generalized Gel'fand-Pinsker binning.
Particularly important in this problem are the questions of 1) optimal ways of
performing the state compression and 2) whether or not the compression indices
should be decoded uniquely. We show that both compression à la noisy network
coding, i.e., with no binning, and compression using Wyner-Ziv binning are
optimal. The scheme that uses Wyner-Ziv binning shares elements with Cover and
El Gamal's original compress-and-forward, but differs from it mainly in that
backward decoding is employed instead of forward decoding and the compression
indices are not decoded uniquely. Finally, by exploring the properties of our
outer bound, we show that, although not required in general, the compression
indices can in fact be decoded uniquely, essentially without altering the
capacity region, but at the expense of larger alphabet sizes for the auxiliary
random variables.

Comment: Submitted for publication to the 2012 IEEE International Symposium on
Information Theory, 5 pages, 1 figur
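The binning idea underlying the Wyner-Ziv scheme can be illustrated with a hypothetical scalar toy (a sketch of binning with decoder side information in general, not the paper's state-compression construction): the encoder quantizes a sample but transmits only the bin index of the quantization cell, and the decoder uses correlated side information to pick the right cell within the bin, so fewer bits are sent than plain quantization would require.

```python
def encode(x: int, levels: int = 16, bins: int = 4) -> int:
    """Quantize x (0..255) to one of `levels` cells; send only the bin index."""
    q = x * levels // 256      # quantizer index, 0..levels-1 (4 bits here)
    return q % bins            # binning: transmit log2(bins) = 2 bits instead

def decode(bin_idx: int, y: int, levels: int = 16, bins: int = 4) -> int:
    """Use side information y to resolve which cell in the bin was meant."""
    recon = lambda q: (q * 256 + 128) // levels      # cell midpoint
    candidates = range(bin_idx, levels, bins)        # cells sharing this bin
    best = min(candidates, key=lambda q: abs(recon(q) - y))
    return recon(best)

x, y = 100, 110                       # y: decoder's noisy side information
assert decode(encode(x), y) == 104    # midpoint of x's cell, 16*(100//16)+8
```

The decoder recovers the 4-bit quantization cell from only 2 transmitted bits, provided the side information lies within half the bin spacing of x. When the side information is too noisy to single out one cell, the decoder can no longer decode the compression index uniquely, which is the uniqueness question the abstract above discusses.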