Reliable Physical Layer Network Coding
When two or more users in a wireless network transmit simultaneously, their
electromagnetic signals are linearly superimposed on the channel. As a result,
a receiver that is interested in one of these signals sees the others as
unwanted interference. This property of the wireless medium is typically viewed
as a hindrance to reliable communication over a network. However, using a
recently developed coding strategy, interference can in fact be harnessed for
network coding. In a wired network, (linear) network coding refers to each
intermediate node taking its received packets, computing a linear combination
over a finite field, and forwarding the outcome towards the destinations. Then,
given an appropriate set of linear combinations, a destination can solve for
its desired packets. For certain topologies, this strategy can attain
significantly higher throughputs than routing-based strategies. Reliable
physical layer network coding takes this idea one step further: using
judiciously chosen linear error-correcting codes, intermediate nodes in a
wireless network can directly recover linear combinations of the packets from
the observed noisy superpositions of transmitted signals. Starting with some
simple examples, this survey explores the core ideas behind this new technique
and the possibilities it offers for communication over interference-limited
wireless networks.
Comment: 19 pages, 14 figures; survey paper to appear in Proceedings of the IEEE.
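The wired network-coding operation described in the abstract can be made concrete with the classic butterfly network over GF(2): the bottleneck node forwards the XOR of its two incoming packets, and each destination recovers its missing packet by XORing once more. A minimal sketch, with node names and packet values chosen purely for illustration:

```python
# Linear network coding over GF(2) on the butterfly network.
# Sources s1, s2 hold packets a and b; destinations d1, d2 each
# want both packets, but the middle edge can carry only one packet.

def xor_packets(p, q):
    """Linear combination over GF(2) = bytewise XOR."""
    return bytes(x ^ y for x, y in zip(p, q))

a = b"\x0f\xaa"   # packet originating at source s1
b_ = b"\xf0\x55"  # packet originating at source s2

# The bottleneck node computes and forwards a single linear combination.
mix = xor_packets(a, b_)

# d1 receives a directly over a side link, plus the mix; it solves for b.
b_at_d1 = xor_packets(mix, a)
# d2 receives b directly, plus the mix; it solves for a.
a_at_d2 = xor_packets(mix, b_)

print(b_at_d1 == b_, a_at_d2 == a)  # both destinations decode correctly
```

One linear combination thus serves both destinations simultaneously, which is exactly the throughput gain over pure routing that the abstract refers to.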
Computation in Multicast Networks: Function Alignment and Converse Theorems
The classical problem in network coding theory considers communication over
multicast networks. Multiple transmitters send independent messages to multiple
receivers which decode the same set of messages. In this work, computation over
multicast networks is considered: each receiver decodes an identical function
of the original messages. For a countably infinite class of two-transmitter
two-receiver single-hop linear deterministic networks, the computing capacity
is characterized for a linear function (modulo-2 sum) of Bernoulli sources.
Inspired by the geometric concept of interference alignment in networks, a new
achievable coding scheme called function alignment is introduced. A new
converse theorem is established that is tighter than cut-set based and
genie-aided bounds. Computation (vs. communication) over multicast networks
requires additional analysis to account for multiple receivers sharing a
network's computational resources. We also develop a network decomposition
theorem which identifies elementary parallel subnetworks that can constitute an
original network without loss of optimality. The decomposition theorem provides
a conceptually simpler algebraic proof of achievability that generalizes to networks with more than two transmitters and receivers.
Comment: to appear in the IEEE Transactions on Information Theory.
Capacity Theorems for Quantum Multiple Access Channels: Classical-Quantum and Quantum-Quantum Capacity Regions
We consider quantum channels with two senders and one receiver. For an
arbitrary such channel, we give multi-letter characterizations of two different
two-dimensional capacity regions. The first region consists of the rates at
which it is possible for one sender to send classical information, while the
other sends quantum information. The second region consists of the rates at
which each sender can send quantum information. For each region, we give an
example of a channel for which the corresponding region has a single-letter
description. One of our examples relies on a new result proved here, perhaps of
independent interest, stating that the coherent information over any degradable
channel is concave in the input density operator. We conclude with connections
to other work and a discussion on generalizations where each user
simultaneously sends classical and quantum information.
Comment: 38 pages, 1 figure. Fixed typos, added new example. Submitted to IEEE Transactions on Information Theory.
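The concavity result mentioned in the abstract can be stated using the standard definition of coherent information; the following is a sketch of the statement in generic notation, not taken verbatim from the paper:

```latex
% Coherent information of channel \mathcal{N} at input state \rho, where
% \mathcal{N}^{c} is the complementary channel and S is von Neumann entropy:
I_c(\rho,\mathcal{N}) \;=\; S\!\bigl(\mathcal{N}(\rho)\bigr) \;-\; S\!\bigl(\mathcal{N}^{c}(\rho)\bigr).
% \mathcal{N} is degradable if \mathcal{N}^{c} = \mathcal{D}\circ\mathcal{N}
% for some channel \mathcal{D}. For degradable channels, concavity reads:
I_c\bigl(\lambda\rho_1 + (1-\lambda)\rho_2,\,\mathcal{N}\bigr)
\;\ge\; \lambda\, I_c(\rho_1,\mathcal{N}) \;+\; (1-\lambda)\, I_c(\rho_2,\mathcal{N}),
\qquad \lambda \in [0,1].
```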
Capacity Bounds For Multi-User Channels With Feedback, Relaying and Cooperation
Recent developments in communications are driven by the goal of
achieving high data rates for wireless communication devices. To
achieve this goal, several new phenomena need to be investigated
from an information theoretic perspective. In this dissertation,
we focus on three of these phenomena: feedback, relaying and
cooperation. We study these phenomena for various multi-user
channels from an information theoretic point of view.
One of the aims of this dissertation is to study the performance
limits of simple wireless networks, for various forms of feedback
and cooperation. Consider an uplink communication system, where
several users wish to transmit independent data to a base-station.
If the base-station can send feedback to the users, one can expect
to achieve higher data-rates since feedback can enable cooperation
among the users. Another way to improve data-rates is to make use
of the broadcast nature of the wireless medium, where the users
can overhear each other's transmitted signals. This particular
phenomenon has garnered much attention lately, where users can
help in increasing each other's data-rates by utilizing the
overheard information. This overheard information can be
interpreted as a generalized form of feedback.
To take these several models of feedback and cooperation into
account, we study the two-user multiple access channel and the
two-user interference channel with generalized feedback. For all
these models, we derive new outer bounds on their capacity
regions. We specialize these results for noiseless feedback,
additive noisy feedback and user-cooperation models and show
strict improvements over the previously known bounds.
Next, we study state-dependent channels with rate-limited state
information to the receiver or to the transmitter. This
state-dependent channel models a practical situation of fading,
where the fade information is partially available to the receiver
or to the transmitter. We derive new bounds on the capacity of
such channels and obtain capacity results for a special sub-class
of such channels.
We study the effect of relaying by considering the parallel relay
network, also known as the diamond channel. The parallel relay
network considered in this dissertation comprises a cascade of
a general broadcast channel to the relays and an orthogonal
multiple access channel from the relays to the receiver. We
characterize the capacity of the diamond channel, when the
broadcast channel is deterministic. We also study the diamond
channel with partially separated relays, and obtain capacity
results when the broadcast channel is either semi-deterministic or
physically degraded. Our results also demonstrate that feedback to
the relays can strictly increase the capacity of the diamond
channel.
In several sensor network applications, distributed lossless
compression of sources is of considerable interest. The presence
of adversarial nodes makes it important to design compression
schemes which serve the dual purpose of reliable source
transmission to legitimate nodes while minimizing the information
leakage to the adversarial nodes. Taking this constraint into
account, we consider information theoretic secrecy, where our aim
is to limit the information leakage to the eavesdropper. For this
purpose, we study a secure source coding problem with coded side
information from a helper to the legitimate user. We derive the
rate-equivocation region for this problem. We show that the helper
node serves the dual purpose of reducing the source transmission
rate and increasing the uncertainty at the adversarial node. Next,
we consider two different secure source coding models and
provide the corresponding rate-equivocation regions.
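The secrecy criterion in rate-equivocation formulations of this kind is the normalized conditional entropy of the source given the eavesdropper's observation; a generic statement follows, with symbols used as standard placeholders rather than the dissertation's exact notation:

```latex
% Equivocation rate at the eavesdropper, which observes W (e.g. the
% transmitted description) about the source sequence X^{n}:
\Delta \;=\; \liminf_{n\to\infty} \frac{1}{n}\, H\!\left(X^{n} \,\middle|\, W\right).
% A rate-equivocation region is then the closure of all tuples (R_x, R_y, \Delta)
% such that the source is reconstructed reliably at rates (R_x, R_y) while the
% equivocation at the eavesdropper is at least \Delta.
```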
Capacity and coding in digital communications
Near-capacity fixed-rate and rateless channel code constructions
Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that lend themselves to practical implementation whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which have a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, referred to as reconfigurable rateless codes, which are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, without any explicit channel knowledge at the transmitter.
Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage, as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
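The quasi-cyclic protograph construction described above can be illustrated by "lifting" a small base matrix: each nonzero protograph entry is replaced by a cyclically shifted identity block, so the full PCM is determined by the base matrix and a table of shift values alone, which is the source of the reduced memory requirement. A minimal sketch, where the base matrix and shift values are illustrative rather than the thesis's actual construction:

```python
import numpy as np

def circulant(shift, z):
    """z-by-z identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def lift(base, shifts, z):
    """Expand a protograph base matrix into a quasi-cyclic PCM.
    base[i][j] == 1 places a shifted identity block; 0 places a zero block."""
    rows = []
    for i, row in enumerate(base):
        blocks = [circulant(shifts[i][j], z) if v else np.zeros((z, z), dtype=int)
                  for j, v in enumerate(row)]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Illustrative 2x4 protograph lifted with circulant size z = 4.
base = [[1, 1, 1, 0],
        [0, 1, 1, 1]]
shifts = [[0, 1, 2, 0],
          [0, 3, 1, 2]]
H = lift(base, shifts, z=4)
print(H.shape)  # (8, 16): every protograph edge became a 4x4 circulant
```

Only the 2x4 base pattern and eight shift values need to be stored to reproduce the full 8x16 matrix, in contrast to a fully random PCM of the same size.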
Information theoretic bounds for distributed computation
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 101-103). By Ola Ayaso.
In this thesis, I explore via two formulations the impact of communication constraints on distributed computation. In both formulations, nodes make partial observations of an underlying source. They communicate in order to compute a given function of all the measurements in the network, to within a desired level of error. Such computation in networks arises in various contexts, like wireless and sensor networks, consensus and belief propagation with bit constraints, and estimation of a slowly evolving process. By utilizing Information Theoretic formulations and tools, I obtain code- or algorithm-independent lower bounds that capture fundamental limits imposed by the communication network.
In the first formulation, each node samples a component of a source whose values belong to a field of order q. The nodes utilize their knowledge of the joint probability mass function of the components, together with the function to be computed, to efficiently compress their messages, which are then broadcast. The question is: how many bits per sample are necessary and sufficient for each node to broadcast in order for the probability of decoding error to approach zero as the number of samples grows. I find that when there are two nodes in the network seeking to compute the sample-wise modulo-q sum of their measurements, a node compressing so that the other can compute the modulo-q sum is no more efficient than its compressing so that the actual data sequence is decoded. However, when there are more than two nodes, we demonstrate that there exists a joint probability mass function for which nodes can more efficiently compress so that the modulo-q sum is decoded with probability of error asymptotically approaching zero. It is both necessary and sufficient for nodes to send a smaller number of bits per sample than they would have to in order for all nodes to acquire all the data sequences in the network.
In the second formulation, each node has an initial real-valued measurement. Nodes communicate their values via a network with fixed topology and noisy channels between nodes that are linked. The goal is for each node to estimate a given function of all the initial values in the network, so that the mean square error in the estimate is within a prescribed interval. Here, the nodes do not know the distribution of the source, but have unlimited computation power to run whatever algorithm is needed to ensure the mean square error criterion. The question is: how does the communication network impact the time until the performance criterion is guaranteed. Using Information Theoretic inequalities, I derive an algorithm-independent lower bound on the computation time. The bound is a function of the uncertainty in the function to be estimated, via its differential entropy, and the desired accuracy level, as specified by the mean square error criterion. Next, I demonstrate the use of this bound in a scenario where nodes communicate through erasure channels to learn a linear function of all the nodes' initial values. For this scenario, I describe an algorithm whose running time, until with high probability all nodes' estimates lie within a prescribed interval of the true value, is reciprocally related to the "conductance." Conductance quantifies the information flow "bottleneck" in the network and hence captures the effect of the topology and capacities. Using the lower bound, I show that the running time of any algorithm that guarantees the aforementioned probability criterion must scale reciprocally with conductance. Thus, the lower bound is tight in capturing the effect of network topology via conductance; conversely, the running time of our algorithm is optimal with respect to its dependence on conductance.
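The conductance quantity invoked above can be made concrete on a toy graph: one common normalization takes the minimum, over nontrivial cuts, of the capacity crossing the cut divided by the size of the smaller side (the thesis's exact definition may differ). A brute-force sketch over all cuts of a small network:

```python
from itertools import combinations

def conductance(nodes, cap):
    """Brute-force conductance: min over nontrivial cuts S of
    (capacity crossing the cut) / min(|S|, |V \\ S|).
    `cap[(u, v)]` is the capacity of undirected edge {u, v}."""
    nodes = list(nodes)
    n = len(nodes)
    best = float("inf")
    for k in range(1, n // 2 + 1):          # complements cover the rest
        for S in combinations(nodes, k):
            Sset = set(S)
            cross = sum(c for (u, v), c in cap.items()
                        if (u in Sset) != (v in Sset))
            best = min(best, cross / min(len(Sset), n - len(Sset)))
    return best

# Path graph 0-1-2-3 with unit capacities: the middle edge is the bottleneck.
cap = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
phi = conductance(range(4), cap)
print(phi)  # 0.5: cut {0,1} vs {2,3} crosses one unit edge, smaller side has 2 nodes
```

A low conductance signals an information-flow bottleneck, which is exactly the quantity the running-time lower bound scales against.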
Two-Layer Coded Channel Access With Collision Resolution: Design and Analysis
We propose a two-layer coding architecture for communication of multiple users over a shared slotted medium, enabling joint collision resolution and decoding. Each user first encodes its information bits with an outer code for reliability, and then transmits these coded bits, with possible repetitions, over transmission time slots of the access channel. The transmission patterns are dictated by the inner collision-resolution code, and collisions with other users' transmissions may occur. We analyze two types of codes for the outer layer: long-blocklength LDPC codes and short-blocklength algebraic codes. With LDPC codes, a density evolution analysis enables joint optimization of both outer and inner code parameters for maximum throughput. With algebraic codes, we invoke a similar analysis by approximating their average erasure-correcting capability while assuming a large number of active transmitters. The proposed low-complexity schemes operate at a significantly smaller gap to capacity than the state of the art. Our schemes apply both to a multiple access scenario where the number of users within a frame is known a priori, and to a random access scenario where that number is known only to the decoder. In the latter case, we optimize the outage probability arising from the variability in user activity.
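The inner collision-resolution layer behaves like a peeling decoder over an erasure graph: slots containing a single transmission are decoded, the decoded user's replicas are cancelled from its other slots, and new singletons may appear. A toy sketch of this iterative cancellation, with a hand-picked slot schedule for illustration (real schemes draw repetition patterns from an optimized degree distribution):

```python
# Iterative collision resolution ("peeling") for repetition-based
# slotted access: decode singleton slots, cancel the decoded user's
# replicas elsewhere, and repeat until no singleton remains.

def resolve(slots):
    """slots: list of sets of user ids transmitting in each slot.
    Returns the set of users recovered by successive cancellation."""
    slots = [set(s) for s in slots]
    recovered = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:                 # singleton slot: decode this user
                user = next(iter(s))
                recovered.add(user)
                for t in slots:             # cancel all of its replicas
                    t.discard(user)
                progress = True
    return recovered

# Three users over four slots; slot 2 starts as a singleton for user C,
# whose cancellation unlocks A, whose cancellation in turn unlocks B.
schedule = [{"A", "B"}, {"A", "C"}, {"C"}, {"B", "C"}]
print(resolve(schedule))  # all three users are recovered
```

Density evolution, as used in the paper's analysis, tracks exactly how the fraction of unresolved slots shrinks under this kind of iterative cancellation as the repetition and outer-code parameters vary.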