Reliable Physical Layer Network Coding
When two or more users in a wireless network transmit simultaneously, their
electromagnetic signals are linearly superimposed on the channel. As a result,
a receiver that is interested in one of these signals sees the others as
unwanted interference. This property of the wireless medium is typically viewed
as a hindrance to reliable communication over a network. However, using a
recently developed coding strategy, interference can in fact be harnessed for
network coding. In a wired network, (linear) network coding refers to each
intermediate node taking its received packets, computing a linear combination
over a finite field, and forwarding the outcome towards the destinations. Then,
given an appropriate set of linear combinations, a destination can solve for
its desired packets. For certain topologies, this strategy can attain
significantly higher throughputs over routing-based strategies. Reliable
physical layer network coding takes this idea one step further: using
judiciously chosen linear error-correcting codes, intermediate nodes in a
wireless network can directly recover linear combinations of the packets from
the observed noisy superpositions of transmitted signals. Starting with some
simple examples, this survey explores the core ideas behind this new technique
and the possibilities it offers for communication over interference-limited
wireless networks.

Comment: 19 pages, 14 figures, survey paper to appear in Proceedings of the IEEE
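The core operation described above — an intermediate node forwarding a linear combination of packets over a finite field — can be sketched minimally over GF(2), where a linear combination reduces to a bitwise XOR. In the classic two-flow example, a relay broadcasts p1 XOR p2 once; a destination that already holds one packet solves for the other. The packet contents and names below are purely illustrative.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Linear combination over GF(2): bitwise XOR of equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"HELLO"  # packet from source 1
p2 = b"WORLD"  # packet from source 2

# The relay computes one linear combination and broadcasts it once,
# instead of forwarding p1 and p2 in two separate transmissions.
coded = xor_packets(p1, p2)

# Destination 1 already knows p1 (overheard on a side link) and solves for p2;
# destination 2 knows p2 and solves for p1.
recovered_p2 = xor_packets(coded, p1)
recovered_p1 = xor_packets(coded, p2)

assert recovered_p2 == p2 and recovered_p1 == p1
```

Physical-layer network coding pushes this one step earlier: the relay decodes `coded` directly from the noisy superposition of the two transmitted signals, without ever recovering p1 or p2 individually.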
Quantum Reverse Shannon Theorem
Dual to the usual noisy channel coding problem, where a noisy (classical or
quantum) channel is used to simulate a noiseless one, reverse Shannon theorems
concern the use of noiseless channels to simulate noisy ones, and more
generally the use of one noisy channel to simulate another. For channels of
nonzero capacity, this simulation is always possible, but for it to be
efficient, auxiliary resources of the proper kind and amount are generally
required. In the classical case, shared randomness between sender and receiver
is a sufficient auxiliary resource, regardless of the nature of the source, but
in the quantum case the requisite auxiliary resources for efficient simulation
depend on both the channel being simulated, and the source from which the
channel inputs are coming. For tensor power sources (the quantum generalization
of classical IID sources), entanglement in the form of standard ebits
(maximally entangled pairs of qubits) is sufficient, but for general sources,
which may be arbitrarily correlated or entangled across channel inputs,
additional resources, such as entanglement-embezzling states or backward
communication, are generally needed. Combining existing and new results, we
establish the amounts of communication and auxiliary resources needed in both
the classical and quantum cases, the tradeoffs among them, and the loss of
simulation efficiency when auxiliary resources are absent or insufficient. In
particular we find a new single-letter expression for the excess forward
communication cost of coherent feedback simulations of quantum channels (i.e.
simulations in which the sender retains what would escape into the environment
in an ordinary simulation), on non-tensor-power sources in the presence of
unlimited ebits but no other auxiliary resource. Our results on tensor power
sources establish a strong converse to the entanglement-assisted capacity
theorem.

Comment: 35 pages, to appear in IEEE Transactions on Information Theory. v2 has a fixed proof of the Clueless Eve result, a new single-letter formula for the "spread deficit", better error scaling, and an improved strong converse. v3 and v4 each make small improvements to the presentation and add references. v5 fixes a broken reference.
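For context (this standard formula is supplied here, not taken from the abstract): the entanglement-assisted classical capacity to which the strong converse applies has the single-letter expression of Bennett, Shor, Smolin, and Thapliyal,

```latex
C_E(\mathcal{N}) \;=\; \max_{\rho}\,\Big[\, S(\rho) \;+\; S\big(\mathcal{N}(\rho)\big) \;-\; S\big((\mathcal{N}\otimes \mathrm{id})(|\phi_\rho\rangle\langle\phi_\rho|)\big) \,\Big],
```

where $S$ is the von Neumann entropy, the maximization is over input states $\rho$, and $|\phi_\rho\rangle$ is a purification of $\rho$; the bracketed quantity is the quantum mutual information of the channel evaluated at $\rho$.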
Optimal Linear Network Coding When 3 Nodes Communicate Over Broadcast Erasure Channels with ACK
This work considers the following scenario: three nodes {1, 2, 3} would like to communicate with each other by sending packets through an unreliable wireless medium. We consider the most general unicast traffic demands; namely, there are six co-existing unicast flows with rates (R1→2, R1→3, R2→1, R2→3, R3→1, R3→2). When a node broadcasts a packet, a random subset of the other two nodes will receive the packet. After each transmission, a causal acknowledgment (ACK) is sent so that all nodes know whether the other nodes have received the packet or not. Such a setting has many unique features. For example, each node, say node 1, can assume many different roles: being the transmitter of the information R1→2 and R1→3; being the receiver of the information R2→1 and R3→1; and being the relay for the information R2→3 and R3→2. This fully captures the fundamental behaviors of 3-node network communications. Allowing network coding (NC) to capitalize on the diversity gain (i.e., on packets overheard from other nodes), this work characterizes the 6-dimensional linear network coding (LNC) capacity region of the above erasure network. The results show that for any channel parameters, the LNC capacity can be achieved by a simple strategy that involves only a few LNC choices.
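The diversity gain from overhearing can be sketched with a toy example (names and packet values illustrative, not the paper's actual scheme): suppose node 3 has overheard packet A (sent by node 1 for node 2) and packet B (sent by node 2 for node 1). One coded broadcast then serves both flows at once.

```python
def gf2_combine(a: int, b: int) -> int:
    """Linear network-coding operation over GF(2): bitwise XOR."""
    return a ^ b

A = 0b10110100  # one-byte "packet" from node 1, destined to node 2
B = 0b01101001  # one-byte "packet" from node 2, destined to node 1

# Node 3, having overheard both (confirmed via ACKs), broadcasts a single
# coded packet instead of relaying A and B separately.
coded = gf2_combine(A, B)

assert gf2_combine(coded, A) == B  # node 1 recovers B using its own A
assert gf2_combine(coded, B) == A  # node 2 recovers A using its own B
```

The ACKs matter because node 3 must know which packets each destination already holds before it can pick a decodable combination; the paper's contribution is showing that, over erasure channels, a small set of such LNC choices suffices for the full six-flow capacity region.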
When all information is not created equal
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 191-196).

Following Shannon's landmark paper, the classical theoretical framework for communication is based on a simplifying assumption that all information is equally important, thus aiming to provide a uniform protection to all information. However, this homogeneous view of information is not suitable for a variety of modern-day communication scenarios such as wireless and sensor networks, video transmission, interactive systems, and control applications. For example, an emergency alarm from a sensor network needs more protection than other transmitted information. Similarly, the coarse resolution of an image needs better protection than its finer details. For such heterogeneous information, if providing a uniformly high protection level to all parts of the information is infeasible, it is desirable to provide different protection levels based on the importance of those parts. The main objective of this thesis is to extend classical information theory to address this heterogeneous nature of information. Many theoretical tools needed for this are fundamentally different from the conventional homogeneous setting. One key issue is that bits are no longer a sufficient measure of information. We develop a general framework for understanding the fundamental limits of transmitting such information, calculate such fundamental limits, and provide optimal architectures for achieving these limits. Our analysis shows that even without sacrificing the data-rate from channel capacity, some crucial parts of information can be protected with exponential reliability. This research challenges the notion that a set of homogeneous bits should necessarily be viewed as a universal interface to the physical layer; this potentially impacts the design of network architectures.
This thesis also develops two novel approaches for simplifying such difficult problems in information theory. Our formulations are based on ideas from graphical models and Euclidean geometry and provide canonical examples for network information theory. They provide fresh insights into previously intractable problems as well as generalize previous related results.

by Shashibhushan Prataprao Borade. Ph.D.
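Graded protection can be illustrated with a much simpler scheme than the capacity-preserving architectures the thesis develops: give high-priority bits a longer repetition code over a binary symmetric channel, so their error probability decays exponentially in the repetition length. This sketch computes the exact majority-decoding error probability; it trades rate for reliability, unlike the thesis's results, and is included only to make "different protection levels" concrete.

```python
from math import comb

def majority_error(n: int, p: float) -> float:
    """Error probability of a length-n repetition code under majority
    decoding on a binary symmetric channel with crossover probability p
    (n odd): decoding fails iff more than n/2 bits flip."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1
print(majority_error(1, p))  # ordinary bit, no protection: 0.1
print(majority_error(5, p))  # high-priority bit, 5x repetition: ~0.00856
```

Even this crude code gives the priority bit an error probability more than an order of magnitude smaller; the thesis shows such preferential treatment is possible for crucial bits without backing off from channel capacity.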
Spectrum Sensing and Security Challenges and Solutions: Contemporary Affirmation of the Recent Literature
Cognitive radio (CR) has been recently proposed as a promising technology to improve spectrum utilization by enabling secondary access to unused licensed bands. A prerequisite to this secondary access is causing no interference to the primary system. This requirement makes spectrum sensing a key function in cognitive radio systems. Among common spectrum sensing techniques, energy detection is an appealing method due to its simplicity and efficiency. However, the major disadvantage of energy detection is the hidden node problem, in which the sensing node cannot distinguish between an idle and a deeply faded or shadowed band. Cooperative spectrum sensing (CSS), which uses a distributed detection model, has been considered to overcome that problem. On the other hand, cooperative spectrum sensing is vulnerable to sensing-data falsification attacks precisely because of its distributed nature. As the goal of a sensing-data falsification attack is to cause an incorrect decision on the presence/absence of a primary user (PU) signal, malicious or compromised secondary users (SUs) may intentionally distort the measured received signal strengths (RSSs) and share them with other SUs. The effect of the erroneous sensing results then propagates to the entire cognitive radio network (CRN). This type of attack can be launched easily, since the openness of programmable software-defined radio (SDR) devices makes it easy for malicious or compromised SUs to access low-layer protocol stacks such as PHY and MAC. However, detecting such attacks is challenging due to the lack of coordination between PUs and SUs and the unpredictability of wireless signal propagation, thus calling for efficient mechanisms to protect CRNs. In this paper we present a contemporary affirmation of the recent literature on benchmarking strategies that enable trusted and secure cooperative spectrum sensing among cognitive radios.
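The energy-detection and fusion steps described above can be sketched as follows (a minimal illustration with invented parameters, not a scheme from the surveyed literature): each SU averages the squared magnitude of its samples, compares the result to a threshold, and a fusion center combines the local decisions with an OR rule so that a single shadowed SU does not cause a missed detection.

```python
import random

def energy_detect(samples, threshold):
    """Local energy detector: average power over a sensing window,
    declared 'PU present' (True) if it exceeds the threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

def or_fusion(local_decisions):
    """Fusion center with an OR rule: declare the band busy if any
    cooperating SU reports a detection."""
    return any(local_decisions)

random.seed(0)
noise_std, signal_amp, thresh = 1.0, 2.0, 2.0

# SU 1 has a clear view of the PU signal; SU 2 is deeply shadowed
# and observes noise only (the hidden node problem).
su1 = [signal_amp + random.gauss(0, noise_std) for _ in range(64)]
su2 = [random.gauss(0, noise_std) for _ in range(64)]

decisions = [energy_detect(su1, thresh), energy_detect(su2, thresh)]
busy = or_fusion(decisions)  # cooperative decision: PU detected
```

The same fusion step is also the attack surface: a falsified local report feeds straight into `or_fusion`, which is why the surveyed defenses focus on vetting or weighting the SUs' shared measurements before fusion.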