Bits Through Bufferless Queues
This paper investigates the capacity of a channel in which information is
conveyed by the timing of consecutive packets passing through a queue with
independent and identically distributed service times. Such timing channels are
commonly studied under the assumption of a work-conserving queue. In contrast,
this paper studies the case of a bufferless queue that drops arriving packets
while a packet is in service. Under this bufferless model, the paper provides
upper bounds on the capacity of timing channels and establishes achievable
rates for the case of bufferless M/M/1 and M/G/1 queues. In particular, it is
shown that a bufferless M/M/1 queue at worst suffers less than 10% reduction in
capacity when compared to an M/M/1 work-conserving queue.

Comment: 8 pages, 3 figures, accepted in the 51st Annual Allerton Conference on Communication, Control, and Computing, University of Illinois, Monticello, Illinois, Oct 2-4, 2013
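The bufferless dropping discipline described above can be sketched in a short event-driven simulation. This is a minimal illustration, not the paper's model: the rates, the packet count, and the function name are assumptions chosen for the demo.

```python
import random

def simulate_bufferless_mm1(lam, mu, n_arrivals, seed=0):
    """Simulate a bufferless M/M/1 queue: an arriving packet is
    dropped if the single server is still busy with a prior packet.
    Returns departure times of admitted packets and the drop count."""
    rng = random.Random(seed)
    t = 0.0            # current arrival time
    busy_until = 0.0   # time at which the server frees up
    departures, dropped = [], 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)      # next Poisson arrival
        if t < busy_until:
            dropped += 1               # server busy -> packet dropped
        else:
            busy_until = t + rng.expovariate(mu)  # service starts immediately
            departures.append(busy_until)
    return departures, dropped

deps, dropped = simulate_bufferless_mm1(lam=1.0, mu=1.0, n_arrivals=10000)
print(len(deps), dropped)
```

The departure times of the admitted packets are exactly the observable the timing channel modulates; the dropped packets never generate a departure and so carry no timing information to the receiver.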
Estimating customer impatience in a service system with unobserved balking
This paper studies a service system in which arriving customers are provided
with information about the delay they will experience. Based on this
information they decide to wait for service or to leave the system. The main
objective is to estimate the customers' patience-level distribution and the
corresponding potential arrival rate, using knowledge of the actual
queue-length process only. The main complication, and distinguishing feature of
our setup, lies in the fact that customers who decide not to join are not
observed, but, remarkably, we manage to devise a procedure to estimate the load
they would generate. We express our system in terms of a multi-server queue
with a Poisson stream of customers, which allows us to evaluate the
corresponding likelihood function. Estimating the unknown parameters relying on
a maximum likelihood procedure, we prove strong consistency and derive the
asymptotic distribution of the estimation error. Several applications and
extensions of the method are discussed. The performance of our approach is
further assessed through a series of numerical experiments. By fitting
parameters of hyperexponential and generalized-hyperexponential distributions,
our method provides a robust estimation framework for any continuous
patience-level distribution.
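The key difficulty above, recovering the load of customers who were never observed, can be illustrated with a deliberately stripped-down sketch. It is an assumption-laden toy, not the paper's estimator: patience is exponential rather than hyperexponential, every customer is quoted the same wait, and the patience rate is taken as known so that the thinning can simply be inverted.

```python
import math
import random

def join_prob(theta, wait):
    """Exponential(theta) patience: a customer joins iff patience > quoted wait."""
    return math.exp(-theta * wait)

def simulate_and_estimate(lam, theta, wait, horizon, seed=1):
    """Potential customers arrive at Poisson rate lam; each joins with
    probability exp(-theta * wait), and balkers leave no trace. Only the
    number of joins is observed; the potential arrival rate is then
    recovered by undoing the thinning."""
    rng = random.Random(seed)
    t, joins = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            break
        if rng.random() < join_prob(theta, wait):
            joins += 1                      # observed customer
        # else: unobserved balker
    observed_rate = joins / horizon
    return observed_rate / join_prob(theta, wait)  # invert the thinning

lam_hat = simulate_and_estimate(lam=5.0, theta=0.5, wait=1.0, horizon=2000.0)
print(lam_hat)  # close to the true potential rate 5.0
```

In the paper's setting the quoted wait varies with the queue length and the patience distribution itself is unknown, which is why a full maximum likelihood procedure over the queue-length process is needed rather than this one-line inversion.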
Approximations and Bounds for (n, k) Fork-Join Queues: A Linear Transformation Approach
Compared to basic fork-join queues, a job in an (n, k) fork-join queue only
needs k of its n sub-tasks to be finished. Since (n, k) fork-join
queues are prevalent in popular distributed systems, erasure coding based cloud
storages, and modern network protocols like multipath routing, estimating the
sojourn time of such queues is thus critical for performance measurement and
resource planning in computer clusters. However, this estimation problem has
remained a well-known open challenge for years, and only rough bounds for a
limited range of load factors have been given. In this paper, we develop a
closed-form linear transformation technique for jointly-identical random
variables: an order statistic can be represented by a linear combination of
maxima. This new technique is then used to transform the sojourn time of
non-purging (n, k) fork-join queues into a linear combination of the sojourn
times of basic (k, k), (k+1, k+1), ..., (n, n) fork-join queues. Consequently,
existing approximations for basic fork-join queues can be bridged to
approximations for non-purging (n, k) fork-join queues. The resulting
approximations are then used to improve the upper bounds for purging (n, k)
fork-join queues. Simulation experiments show that this linear transformation
approach performs well for moderate n and relatively large k.

Comment: 10 pages
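The order-statistic-to-maxima transformation rests on a classical inclusion-exclusion identity: for exchangeable variables, the distribution of the k-th smallest of n can be written as a signed combination of the distributions of maxima of j = k, ..., n of them. Whether the paper organizes its coefficients exactly this way is our assumption; the sketch below checks the standard form of the identity for i.i.d. Exp(1) variables, where both sides have closed forms in harmonic numbers (E[max of j] = H_j and E[X_(k:n)] = H_n - H_(n-k)).

```python
from math import comb

def harmonic(j):
    """H_j, which equals E[max of j i.i.d. Exp(1) variables]."""
    return sum(1.0 / i for i in range(1, j + 1))

def kth_smallest_via_maxima(n, k):
    """E[k-th smallest of n i.i.d. Exp(1)] expressed as a linear
    combination of expected maxima of j = k..n variables
    (inclusion-exclusion over subsets)."""
    return sum((-1) ** (j - k) * comb(j - 1, k - 1) * comb(n, j) * harmonic(j)
               for j in range(k, n + 1))

def kth_smallest_direct(n, k):
    """Closed form for Exp(1): E[X_(k:n)] = H_n - H_(n-k)."""
    return harmonic(n) - harmonic(n - k)

print(kth_smallest_via_maxima(10, 4), kth_smallest_direct(10, 4))
```

In fork-join terms, the maximum of j sub-task times is the sojourn time of a basic (j, j) fork-join job, so the k-th order statistic, i.e. the (n, k) sojourn time, becomes a signed combination of basic (k, k) through (n, n) quantities, which is exactly the bridge to existing basic fork-join approximations. Note that the alternating binomial coefficients grow quickly with n, so naive evaluation loses precision for large n.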
Modeling, Analysis and Impact of a Long Transitory Phase in Random Access Protocols
In random access protocols, the service rate depends on the number of
stations with a packet buffered for transmission. We demonstrate via numerical
analysis that this state-dependent rate, together with Poisson traffic and an
infinite (or effectively infinite) buffer size, may cause a high-throughput and
extremely long (on the order of hours) transitory phase when traffic arrivals
are just above the stability limit. We
also perform an experimental evaluation to provide further insight into the
characterisation of this transitory phase of the network by analysing
statistical properties of its duration. The identification of the presence as
well as the characterisation of this behaviour is crucial to avoid
misprediction, which has a significant potential impact on network performance
and optimisation. Furthermore, we discuss practical implications of this
finding and propose a distributed and low-complexity mechanism to keep the
network operating in the high-throughput phase.

Comment: 13 pages, 10 figures, Submitted to IEEE/ACM Transactions on Networking
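The state-dependence of the service rate can be seen in a toy slotted-ALOHA model (an illustrative stand-in on our part, not necessarily the protocol studied in the paper): with m backlogged stations each transmitting with probability p, a slot succeeds only when exactly one station transmits, with probability m * p * (1 - p)^(m - 1), which decays as the backlog grows for any fixed p.

```python
def success_prob(m, p):
    """P(exactly one of m backlogged stations transmits) in a slot."""
    return m * p * (1 - p) ** (m - 1)

# For a fixed transmission probability p, the per-slot service rate
# falls as the backlog m grows. This is the state-dependent drift
# that lets the network linger in a high-throughput phase for a long
# time before eventually collapsing into the congested regime.
for m in (2, 5, 10, 50):
    print(m, success_prob(m, p=0.1))
```

The success probability is maximized at p = 1/m, so any fixed p that works well for small backlogs becomes increasingly suboptimal as the backlog builds, which is one intuition for the two phases the paper characterizes.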
An acceleration simulation method for power law priority traffic
A method for accelerated simulation of self-similar processes is proposed. This
technique simplifies the simulation model and improves efficiency by using
excess packets instead of packet-by-packet source traffic for FIFO and non-FIFO
buffer schedulers. This research focuses on developing an equivalent model of
the conventional packet buffer that can produce output analyses (in this case,
the steady-state probabilities) much faster. The acceleration method is a
further development of the Traffic Aggregation technique, which had previously
been applied to FIFO buffers only, and applies the Generalized Ballot Theorem
to calculate the waiting time for low-priority traffic (combined with prior
work on traffic aggregation). This hybrid method is shown to provide a
significant reduction in processing time, while maintaining queueing behaviour
in the buffer that is highly accurate when compared to results from a
conventional simulation.
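The aggregation idea, replacing many per-packet events with one per-interval batch of excess work, can be illustrated with a discrete-time FIFO buffer. This is a simplification of the paper's method (no priorities, no Ballot Theorem step); the per-slot capacity and arrival batches are invented for the demo.

```python
def queue_packet_by_packet(slots, capacity):
    """Process every packet arrival as a separate event, then drain up
    to `capacity` units of work at the end of each slot (fluid service)."""
    q, trace = 0.0, []
    for packets in slots:              # slots: list of per-slot packet-size lists
        for size in packets:
            q += size                  # one event per packet
        q = max(q - capacity, 0.0)     # service at the slot boundary
        trace.append(q)
    return trace

def queue_aggregated(slots, capacity):
    """Traffic aggregation: fold each slot's packets into a single batch
    of excess work, so only one event per slot is processed."""
    q, trace = 0.0, []
    for packets in slots:
        q = max(q + sum(packets) - capacity, 0.0)  # one event per slot
        trace.append(q)
    return trace

slots = [[0.4, 0.7], [], [1.2, 0.3, 0.5], [0.1]]
print(queue_packet_by_packet(slots, capacity=1.0))
print(queue_aggregated(slots, capacity=1.0))  # identical backlog trace
```

Both routines produce the same backlog trace at slot boundaries, but the aggregated version handles one event per slot instead of one per packet; the paper's contribution is extending this kind of event reduction to non-FIFO priority scheduling while keeping the low-priority waiting-time analysis accurate.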