Understanding CHOKe: throughput and spatial characteristics
A recently proposed active queue management scheme, CHOKe, is stateless and simple to implement, yet surprisingly effective in protecting TCP from UDP flows. We present an equilibrium model of TCP/CHOKe. We prove that, provided the number of TCP flows is large, the UDP bandwidth share peaks at (e+1)^{-1} ≈ 0.269 when the UDP input rate is slightly larger than the link capacity, and drops to zero as the UDP input rate tends to infinity. We clarify the spatial characteristics of the leaky buffer under CHOKe that produce this throughput behavior. Specifically, we prove that, as the UDP input rate increases, even though the total number of UDP packets in the queue increases, their spatial distribution becomes more and more concentrated near the tail of the queue and drops rapidly to zero toward the head. In stark contrast to a nonleaky FIFO buffer, where the UDP bandwidth share would approach 1 as the input rate increases without bound, under CHOKe UDP simultaneously maintains a large number of packets in the queue and receives a vanishingly small bandwidth share; this is the mechanism through which CHOKe protects TCP flows.
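As a rough illustration of the mechanism this abstract describes (not the paper's equilibrium model), the CHOKe admission test can be sketched as follows; the helper name and flow labels are ours:

```python
import random

def choke_arrival(queue, flow, red_drop_prob=0.0):
    """Toy sketch of CHOKe's admission test: draw one packet uniformly
    at random from the FIFO queue; if it belongs to the same flow as
    the arrival, drop both, otherwise admit the arrival subject to a
    RED-style drop probability (set to 0 here for simplicity)."""
    if queue:
        victim = random.randrange(len(queue))
        if queue[victim] == flow:
            del queue[victim]   # drop the matched queued packet ...
            return False        # ... and the arriving packet
    if random.random() < red_drop_prob:
        return False
    queue.append(flow)
    return True

# A flow that already dominates the queue is matched, and hence dropped,
# with probability proportional to its share of the buffer: here the
# "udp" arrival is dropped for certain, since every queued packet matches.
queue = ["udp"] * 10
choke_arrival(queue, "udp")   # returns False; queue shrinks to 9 packets
```

This random matching is exactly what makes the buffer "leaky" for a high-rate unresponsive flow: the more packets it holds in the queue, the more likely each new arrival is to trigger a double drop.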
First-Passage Time and Large-Deviation Analysis for Erasure Channels with Memory
This article considers the performance of digital communication systems
transmitting messages over finite-state erasure channels with memory.
Information bits are protected from channel erasures using error-correcting
codes; successful receptions of codewords are acknowledged at the source
through instantaneous feedback. The primary focus of this research is on
delay-sensitive applications, codes with finite block lengths and, necessarily,
non-vanishing probabilities of decoding failure. The contribution of this
article is twofold. A methodology to compute the distribution of the time
required to empty a buffer is introduced. Based on this distribution, the mean
hitting time to an empty queue and delay-violation probabilities for specific
thresholds can be computed explicitly. The proposed techniques apply to
situations where the transmit buffer contains a predetermined number of
information bits at the onset of the data transfer. Furthermore, as additional
performance criteria, large deviation principles are obtained for the empirical
mean service time and the average packet-transmission time associated with the
communication process. This rigorous framework yields a pragmatic methodology
to select code rate and block length for the communication unit as functions of
the service requirements. Examples motivated by practical systems are provided
to further illustrate the applicability of these techniques.

Comment: To appear in IEEE Transactions on Information Theory
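Under a memoryless simplification of the erasure channel (an assumption on our part; the article treats finite-state channels with memory), the buffer-emptying time already has a simple closed form, a negative binomial distribution, from which the mean hitting time and delay-violation probabilities mentioned above follow directly:

```python
from math import comb

def empty_time_pmf(B, p, t):
    """P(a transmit buffer holding B codewords empties exactly at slot t)
    when each codeword transmission independently succeeds with
    probability p -- a memoryless simplification of the paper's
    finite-state channel. The B-th success must land in slot t, so the
    emptying time T is negative binomial."""
    if t < B:
        return 0.0
    return comb(t - 1, B - 1) * p**B * (1 - p)**(t - B)

def delay_violation_prob(B, p, threshold, horizon=10_000):
    """P(T > threshold), by summing the truncated pmf."""
    return sum(empty_time_pmf(B, p, t) for t in range(threshold + 1, horizon))

# In this simplified setting the mean hitting time of the empty queue
# is B / p, e.g. 3 codewords at success probability 0.5 take 6 slots
# on average.
```

The memory in the actual channel correlates successive erasures, which is precisely why the article needs first-passage machinery rather than this closed form.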
Asymptotic analysis by the saddle point method of the Anick-Mitra-Sondhi model
We consider a fluid queue where the input process consists of N identical
sources that turn on and off at exponential waiting times. The server works at
the constant rate c and an on source generates fluid at unit rate. This model
was first formulated and analyzed by Anick, Mitra and Sondhi. We obtain an
alternate representation of the joint steady state distribution of the buffer
content and the number of on sources. This is given as a contour integral that
we then analyze for large N. We give detailed asymptotic results for the joint
distribution, as well as the associated marginal and conditional distributions.
In particular, simple conditional limit laws are obtained. These show how the
buffer content behaves conditioned on the number of active sources and vice
versa. Numerical comparisons show that our asymptotic results are very accurate
even for N=20.
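One ingredient of this model is elementary and makes a handy sanity check against the asymptotics: since the N on/off sources are independent two-state Markov chains, the stationary number of on sources is binomial. A minimal sketch, assuming off-to-on rate lam and on-to-off rate mu (parameter names are ours):

```python
from math import comb

def on_sources_pmf(N, lam, mu, k):
    """Stationary probability that k of N independent on/off sources
    are on, with exponential holding times as in the
    Anick-Mitra-Sondhi setup: each source is on with probability
    lam / (lam + mu), independently, giving a binomial marginal."""
    p_on = lam / (lam + mu)
    return comb(N, k) * p_on**k * (1 - p_on)**(N - k)

def overload_prob(N, lam, mu, c):
    """P(instantaneous input rate exceeds the service rate c): more
    than c sources on, since each on source generates fluid at unit
    rate."""
    return sum(on_sources_pmf(N, lam, mu, k) for k in range(int(c) + 1, N + 1))
```

The hard part of the model, the joint distribution of buffer content and active sources, is what the contour-integral representation and saddle point analysis address; this marginal is only the boundary case of an empty buffer's environment.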
A Systematic Approach to Incremental Redundancy over Erasure Channels
As sensing and instrumentation play an increasingly important role in systems
controlled over wired and wireless networks, the need to better understand
delay-sensitive communication becomes a prime issue. Along these lines, this
article studies the operation of data links that employ incremental redundancy
as a practical means to protect information from the effects of unreliable
channels. Specifically, this work extends a powerful methodology termed
sequential differential optimization to choose near-optimal block sizes for
hybrid ARQ over erasure channels. In doing so, an interesting connection
between random coding and well-known constants in number theory is established.
Furthermore, results show that the impact of the coding strategy adopted and
the propensity of the channel to erase symbols naturally decouple when
analyzing throughput. Overall, block size selection is motivated by normal
approximations on the probability of decoding success at every stage of the
incremental transmission process. This novel perspective, which rigorously
bridges hybrid ARQ and coding, offers a pragmatic means to select code rates
and blocklengths for incremental redundancy.

Comment: 7 pages, 2 figures; A shorter version of this article will appear in
the proceedings of ISIT 201
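The normal approximation mentioned above can be sketched for the simplest case of i.i.d. erasures and an MDS-style decoding rule (both are our simplifying assumptions; the article works with random coding and hybrid ARQ over stages):

```python
from math import erfc, sqrt

def decode_success_prob(n, k, eps):
    """Normal approximation to P(at least k of n transmitted symbols
    survive i.i.d. erasures with probability eps). Decoding succeeds
    when k symbols arrive -- an MDS-style condition we assume here.
    The surviving count is Binomial(n, 1 - eps), approximated by a
    Gaussian with a continuity correction."""
    mu = n * (1 - eps)
    sigma = sqrt(n * eps * (1 - eps))
    z = (k - 0.5 - mu) / sigma
    return 0.5 * erfc(z / sqrt(2))   # Gaussian upper-tail probability
```

Block size selection then amounts to choosing each incremental block so that the success probability at that stage crosses a target value, which is the kind of stage-by-stage criterion sequential differential optimization exploits.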