On Coding for Reliable Communication over Packet Networks
We present a capacity-achieving coding scheme for unicast or multicast over
lossy packet networks. In the scheme, intermediate nodes perform additional
coding yet do not decode nor even wait for a block of packets before sending
out coded packets. Rather, whenever they have a transmission opportunity, they
send out coded packets formed from random linear combinations of previously
received packets. All coding and decoding operations have polynomial
complexity.
We show that the scheme is capacity-achieving as long as packets received on
a link arrive according to a process that has an average rate. Thus, packet
losses on a link may exhibit correlation in time or with losses on other links.
In the special case of Poisson traffic with i.i.d. losses, we give error
exponents that quantify the rate of decay of the probability of error with
coding delay. Our analysis of the scheme shows that it is not only
capacity-achieving, but that the propagation of packets carrying "innovative"
information follows the propagation of jobs through a queueing network, and
therefore fluid flow models yield good approximations. We consider networks
with both lossy point-to-point and broadcast links, allowing us to model both
wireline and wireless packet networks.
Comment: 33 pages, 6 figures; revised appendix
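The recoding rule described above can be sketched concretely. The following is a minimal illustration over GF(2), where a linear combination is just a bitwise XOR (the actual scheme uses random coefficients from a larger finite field); the function names and packet representation are illustrative assumptions, not the paper's implementation. Each packet carries a coefficient vector over the original source packets together with its payload, so an intermediate node can recode without ever decoding or waiting for a full block:

```python
import random

def recode(received, n_out):
    """At an intermediate node: emit n_out packets, each a random linear
    combination (over GF(2), i.e. XOR) of previously received packets.
    Each packet is (coeffs, payload): coeffs expresses the packet over
    the original source packets, so recoded packets stay self-describing.
    The node never decodes and never waits for a full block."""
    out = []
    for _ in range(n_out):
        picks = []
        while not picks:  # avoid the useless all-zero combination
            picks = [p for p in received if random.random() < 0.5]
        coeffs = [0] * len(received[0][0])
        payload = [0] * len(received[0][1])
        for c, d in picks:
            coeffs = [x ^ y for x, y in zip(coeffs, c)]
            payload = [x ^ y for x, y in zip(payload, d)]
        out.append((coeffs, payload))
    return out

def decode(coded, k):
    """At a receiver: Gaussian elimination over GF(2). Returns the k
    source payloads, or None if the coded packets carry fewer than k
    innovative (linearly independent) combinations."""
    basis = {}  # pivot column -> reduced (coeffs, payload)
    for c, d in coded:
        c, d = list(c), list(d)
        for col, (bc, bd) in basis.items():
            if c[col]:
                c = [x ^ y for x, y in zip(c, bc)]
                d = [x ^ y for x, y in zip(d, bd)]
        piv = next((i for i, x in enumerate(c) if x), None)
        if piv is not None:
            basis[piv] = (c, d)
    if len(basis) < k:
        return None
    for col in sorted(basis, reverse=True):  # back-substitution
        c, d = basis[col]
        for col2 in range(col + 1, k):
            if c[col2]:
                bc, bd = basis[col2]
                c = [x ^ y for x, y in zip(c, bc)]
                d = [x ^ y for x, y in zip(d, bd)]
        basis[col] = (c, d)
    return [basis[i][1] for i in range(k)]
```

A packet is "innovative" exactly when its coefficient vector falls outside the span of the basis built so far, which is what the elimination step in `decode` detects.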
Further Results on Coding for Reliable Communication over Packet Networks
In "On Coding for Reliable Communication over Packet Networks" (Lun, Medard,
and Effros, Proc. 42nd Annu. Allerton Conf. Communication, Control, and
Computing, 2004), a capacity-achieving coding scheme for unicast or multicast
over lossy wireline or wireless packet networks is presented. We extend that
paper's results in two ways: First, we extend the network model to allow
packets received on a link to arrive according to any process with an average
rate, as opposed to the assumption of Poisson traffic with i.i.d. losses that
was previously made. Second, in the case of Poisson traffic with i.i.d. losses,
we derive error exponents that quantify the rate at which the probability of
error decays with coding delay.
Comment: 5 pages; to appear in Proc. 2005 IEEE International Symposium on Information Theory (ISIT 2005)
Heuristics for Network Coding in Wireless Networks
Multicast is a central challenge for emerging multi-hop wireless
architectures such as wireless mesh networks, because of its substantial cost
in terms of bandwidth. In this report, we study one specific case of multicast:
broadcasting, sending data from one source to all nodes, in a multi-hop
wireless network. The broadcast we focus on is based on network coding, a
promising avenue for reducing cost; previous work of ours showed that the
performance of network coding with simple heuristics is asymptotically optimal:
each transmission is beneficial to nearly every receiver. This holds for
large, homogeneous networks in the plane. But for small, sparse, or
inhomogeneous networks, additional heuristics are required. This report
proposes such additional new heuristics (for selecting rates) for broadcasting
with network coding. Our heuristics are intended to use only simple local
topology information. We detail the logic of the heuristics and, through
experimental results, illustrate their behavior and demonstrate their
excellent performance.
Collision Helps - Algebraic Collision Recovery for Wireless Erasure Networks
Current medium access control mechanisms are based on collision avoidance and
collided packets are discarded. The recent work on ZigZag decoding departs from
this approach by recovering the original packets from multiple collisions. In
this paper, we present an algebraic representation of collisions which allows
us to view each collision as a linear combination of the original packets. The
transmitted, colliding packets may themselves be a coded version of the
original packets.
We propose a new acknowledgment (ACK) mechanism for collisions based on the
idea that if a set of packets collide, the receiver can afford to ACK exactly
one of them and still decode all the packets eventually. We analytically
compare delay and throughput performance of such collision recovery schemes
with other collision avoidance approaches in the context of a single hop
wireless erasure network. In the multiple receiver case, the broadcast
constraint calls for combining collision recovery methods with network coding
across packets at the sender. From the delay perspective, our scheme, without
any coordination, outperforms not only ALOHA-type random access mechanisms,
but also centralized scheduling. For the case of streaming arrivals, we propose
a priority-based ACK mechanism and show that its stability region coincides
with the cut-set bound of the packet erasure network.
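The abstract's algebraic view of a collision can be shown with a toy example. The sketch below uses GF(2), where a linear combination of two packets is their bitwise XOR; this is a deliberate simplification of the paper's more general model, and the variable names are illustrative. It shows why ACKing exactly one of the collided packets is enough: once the other packet is later received cleanly, the earlier collision becomes a solvable linear equation.

```python
def xor(a, b):
    """Bitwise XOR of two equal-length bit vectors (GF(2) addition)."""
    return [x ^ y for x, y in zip(a, b)]

# Two senders transmit simultaneously. Instead of discarding the slot,
# the receiver keeps the observed superposition, modeled here as the
# XOR (a GF(2) linear combination) of the two original packets.
p1 = [1, 0, 1, 1, 0]
p2 = [0, 1, 1, 0, 1]
collision = xor(p1, p2)

# The receiver ACKs exactly one packet (say p1); that sender stops,
# while the other retransmits. Once p2 arrives cleanly, the stored
# collision is solved for p1 by XORing p2 back out.
recovered_p1 = xor(collision, p2)
assert recovered_p1 == p1
```

With more senders, each collision contributes one linear equation in the unknown packets, and the receiver decodes as soon as the accumulated system has full rank.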
Wireless Broadcast with Network Coding in Mobile Ad-Hoc Networks: DRAGONCAST
Network coding is a recently proposed method for transmitting data, which has
been shown to have potential to improve wireless network performance. We study
network coding for one specific case of multicast, broadcasting, from one
source to all nodes of the network. We use network coding as a loss tolerant,
energy-efficient, method for broadcast. Our emphasis is on mobile networks. Our
contribution is the proposal of DRAGONCAST, a protocol to perform network
coding in such a dynamically evolving environment. It is based on three
building blocks: a method to permit real-time decoding of network coding, a
method to adjust the network coding transmission rates, and a method for
ensuring the termination of the broadcast. The performance and behavior of the
method are explored experimentally by simulations; they illustrate the
excellent performance of the protocol.
Random Linear Network Coding For Time Division Duplexing: Energy Analysis
We study the energy performance of random linear network coding for time
division duplexing channels. We assume a packet erasure channel with nodes that
cannot transmit and receive information simultaneously. The sender transmits
coded data packets back-to-back before stopping to wait for the receiver to
acknowledge the number of degrees of freedom, if any, that are required to
decode correctly the information. Our analysis shows that, in terms of mean
energy consumed, there is an optimal number of coded data packets to send
before stopping to listen. This number depends on the energy needed to transmit
each coded packet and the acknowledgment (ACK), probabilities of packet and ACK
erasure, and the number of degrees of freedom that the receiver requires to
decode the data. We show that its energy performance is superior to that of a
full-duplex system. We also study the performance of our scheme when the number
of coded packets is chosen to minimize the mean time to complete transmission
as in [1]. Energy performance under this optimization criterion is found to be
close to optimal, thus providing a good trade-off between energy and time
required to complete transmissions.
Comment: 5 pages, 6 figures; accepted to ICC 200
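The trade-off described above, where the burst size balances wasted transmissions against extra stop-and-listen rounds, can be explored with a simple simulation. The model below is an illustrative assumption, not the paper's exact expressions: unit energy per coded packet, a fixed ACK energy, and independent packet and ACK erasures; the function name and parameter values are hypothetical.

```python
import random

def mean_energy(N, M, p_pkt, p_ack, E_pkt=1.0, E_ack=0.5,
                trials=2000, seed=1):
    """Monte-Carlo estimate of the mean energy to deliver M degrees of
    freedom when the sender transmits bursts of N coded packets
    back-to-back, then stops to listen for an ACK. p_pkt and p_ack are
    the packet and ACK erasure probabilities. Energy model and
    parameters are illustrative, not the paper's exact analysis."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        need, energy, done = M, 0.0, False
        while not done:
            energy += N * E_pkt                      # one back-to-back burst
            got = sum(rng.random() > p_pkt for _ in range(N))
            need = max(0, need - got)                # degrees of freedom left
            energy += E_ack                          # receiver's ACK attempt
            ack_ok = rng.random() > p_ack
            done = (need == 0) and ack_ok            # stop only on a heard ACK
        total += energy / trials
    return total

# Scan burst sizes: mean energy is minimized at some finite N, as the
# abstract's analysis predicts for the exact model.
best_N = min(range(1, 31),
             key=lambda n: mean_energy(n, M=10, p_pkt=0.2, p_ack=0.1))
```

Small `N` pays the ACK/listening overhead too often, while large `N` sends redundant packets past the point where the receiver could already decode; the minimum sits in between.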
Optimality of Network Coding in Packet Networks
We resolve the question of optimality for a well-studied packetized
implementation of random linear network coding, called PNC. In PNC, in contrast
to the classical memoryless setting, nodes store received information in memory
to later produce coded packets that reflect this information. PNC is known to
achieve order optimal stopping times for the many-to-all multicast problem in
many settings.
We give a reduction that captures exactly how PNC and other network coding
protocols use the memory of the nodes. More precisely, we show that any such
protocol implementation induces a transformation which maps an execution of the
protocol to an instance of the classical memoryless setting. This allows us to
prove that, for any (non-adaptive dynamic) network, PNC converges with high
probability in optimal time. In other words, it stops at exactly the first time
in which in hindsight it was possible to route information from the sources to
each receiver individually.
Our technique also applies to variants of PNC, in which each node uses only a
finite buffer. We show that, even in this setting, PNC stops exactly within the
time in which in hindsight it was possible to route packets given the memory
constraint, i.e., that the memory used at each node never exceeds its buffer
size. This shows that PNC, even without any feedback or explicit memory
management, keeps buffer sizes minimal while maintaining its
capacity-achieving performance.
Lightweight Security for Network Coding
Under the emerging network coding paradigm, intermediate nodes in the network
are allowed not only to store and forward packets but also to process and mix
different data flows. We propose a low-complexity cryptographic scheme that
exploits the inherent security provided by random linear network coding and
offers the advantage of reduced overhead in comparison to traditional
end-to-end encryption of the entire data. Confidentiality is achieved by
protecting (or "locking") the source coefficients required to decode the
encoded data, without preventing intermediate nodes from running their standard
network coding operations. Our scheme can be easily combined with existing
techniques that counter active attacks.
Comment: Proc. of the IEEE International Conference on Communications (ICC 2008), Beijing, China, May 2008