Transform-domain analysis of packet delay in network nodes with QoS-aware scheduling
In order to differentiate the perceived QoS between traffic classes in heterogeneous packet networks, equipment discriminates incoming packets based on their class, particularly in the way queued packets are scheduled for further transmission. We review a common stochastic modelling framework in which scheduling mechanisms can be evaluated, especially with regard to the resulting per-class delay distribution. For this, a discrete-time single-server queue is considered with two classes of packet arrivals, either delay-sensitive (class 1) or delay-tolerant (class 2). The steady-state analysis relies on the use of well-chosen supplementary variables and is mainly done in the transform domain. In addition, we propose and analyse a new type of scheduling mechanism that allows precise control over the amount of delay differentiation between the classes. The idea is to introduce N reserved places in the queue, intended for future arrivals of class 1.
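The reserved-places idea can be sketched as a simple list manipulation. This is a minimal illustration, not the paper's model: it assumes the claim-and-renew rule of the related single-reservation mechanism (a class-1 arrival takes the front-most reservation and a fresh reservation is appended at the tail), and the names `arrive`, `make_queue`, and the `"R"` marker are hypothetical.

```python
from collections import deque

R = "R"  # marker for a reserved (not yet claimed) place

def make_queue(n_reserved):
    # the queue starts with N reserved places for future class-1 arrivals
    return deque([R] * n_reserved)

def arrive(queue, pkt_class):
    if pkt_class == 1:
        # a class-1 packet claims the front-most reservation...
        for i, slot in enumerate(queue):
            if slot == R:
                queue[i] = 1
                break
        else:
            queue.append(1)  # no reservation left: join the tail
        queue.append(R)      # ...and a new reservation is created at the tail
    else:
        queue.append(2)      # class-2 packets always join the tail

q = make_queue(2)
for c in (2, 2, 1, 2):
    arrive(q, c)
print(list(q))  # class-1 packet sits ahead of earlier class-2 arrivals
```

Note how the class-1 packet overtakes the class-2 packets that arrived before it, while unclaimed reservations keep later class-1 arrivals from being pushed to the very end of the queue.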
Delay analysis of a place reservation queue with heterogeneous service requirements
We study the delay performance of a queue with a reservation-based priority scheduling mechanism. The objective
is to provide a better quality of service to delay-sensitive packets at the cost of allowing higher delays for the best-effort
packets. In our model, we consider a discrete-time single-server queue with general independent arrivals of
class 1 (delay-sensitive) and class 2 (best-effort). The scheduling mechanism makes use of an in-queue reservation
for a future arriving class-1 packet. A class-1 arrival takes the place of the reservation in the queue, after which
a new reservation is created at the tail of the queue. Class-2 arrivals always take place at the end of the queue.
Past work on place reservation queues assumed independent and identically distributed transmission times for both
packet classes, either deterministically equal to one slot, geometrically distributed or with a general distribution.
In contrast, we consider heterogeneous service requirements with class-dependent transmission-time distributions
in our analysis. The key element in the analysis method for class-dependent transmission times is the use of a
new Markovian system state vector consisting of the total amount of work in the queue in front of the reservation
and the number of class-2 packets in the queue behind the reservation, at the beginning of a slot. Expressions are
obtained for the probability generating functions, the mean values and the tail probabilities of the packet delays
of both the delay-sensitive and the best-effort class. Numerical results illustrate that reservation-based scheduling
mitigates the problem of packet starvation as compared to absolute priority scheduling.
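Since the analysis delivers probability generating functions, mean values follow by differentiating at z = 1. A minimal numeric sketch: the geometric transmission-time PGF below is standard, while `mean_from_pgf` is an illustrative helper (not from the paper) that approximates the derivative by a central difference.

```python
def pgf_geometric(z, p):
    # PGF of a geometric transmission time on {1, 2, ...}:
    # P(z) = p z / (1 - (1 - p) z), with success probability p per slot
    return p * z / (1 - (1 - p) * z)

def mean_from_pgf(pgf, h=1e-6):
    # E[X] = P'(1), approximated by a central difference at z = 1
    return (pgf(1 + h) - pgf(1 - h)) / (2 * h)

p = 0.4
print(mean_from_pgf(lambda z: pgf_geometric(z, p)))  # ≈ 1/p = 2.5
```

The same recipe extends to the delay PGFs obtained in such analyses: higher factorial moments come from higher derivatives at z = 1, and tail probabilities from the dominant singularity of the PGF.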
Analysis of priority queues with session-based arrival streams
In this paper, we analyze a discrete-time priority queue with session-based arrivals. We consider a user population, where each user can start and end sessions. Sessions belong to one of two classes and generate a variable number of fixed-length packets which arrive to the queue at the rate of one packet per slot. The lengths of the sessions are generally distributed. Packets of the first class have transmission priority over the packets of the other class. The model is motivated by a web server handling delay-sensitive and delay-insensitive content. By using probability generating functions, some performance measures of the queue, such as the moments of the packet delays of both classes, are calculated. The impact of the priority scheduling discipline and of the session-based nature of the arrival process is shown by some numerical examples.
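The session-based arrival process (sessions start at random, then emit one packet per slot for a random duration) can be sketched as below. The geometric session-length choice and the parameter names `start_prob` and `mean_len` are illustrative assumptions, not the paper's general distribution.

```python
import random

def session_arrivals(n_slots, start_prob, mean_len, rng=random):
    """Per-slot packet counts generated by sessions: in each slot a new
    session may start (prob. start_prob); every active session emits one
    packet per slot for a geometrically distributed number of slots."""
    active = []   # remaining lifetimes of ongoing sessions
    counts = []
    for _ in range(n_slots):
        if rng.random() < start_prob:
            length = 1                      # geometric length on {1, 2, ...}
            while rng.random() >= 1.0 / mean_len:
                length += 1
            active.append(length)
        counts.append(len(active))          # one packet per active session
        active = [l - 1 for l in active if l > 1]
    return counts

random.seed(42)
print(session_arrivals(20, start_prob=0.3, mean_len=4))
```

Because a session keeps emitting packets for several consecutive slots, the resulting per-slot arrival counts are strongly correlated in time, which is exactly what distinguishes this model from slot-by-slot independent arrivals.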
The Impact of Queue Length Information on Buffer Overflow in Parallel Queues
We consider a system consisting of N parallel queues, served by one server. Time is slotted, and the server serves one of the queues in each time slot, according to some scheduling policy. We first characterize the exponent of the buffer overflow probability and the most likely overflow trajectories under the Longest Queue First (LQF) scheduling policy. Under statistically identical arrivals to each queue, we show that the buffer overflow exponents can be simply expressed in terms of the total system occupancy exponent of m parallel queues, for some m ≤ N. We next turn our attention to the rate of queue length information needed to operate a scheduling policy, and its relationship to the buffer overflow exponents. It is known that queue-length-blind policies such as processor sharing and random scheduling perform worse than the queue-aware LQF policy when it comes to buffer overflow probability. However, we show that the overflow exponent of the LQF policy can be preserved with arbitrarily infrequent queue length updates.

National Science Foundation (U.S.) (Grant CNS-0626781); National Science Foundation (U.S.) (Grant CNS-0915988); United States Army Research Office, Multidisciplinary University Research Initiative
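The LQF rule itself is easy to state in code. A minimal sketch follows, with queue lengths as packet counts; the lowest-index tie-break is an assumption for illustration, since the paper analyzes overflow exponents rather than prescribing a tie-breaking rule.

```python
def lqf_serve(queues):
    """Longest Queue First: serve one packet from the longest queue.
    Ties are broken toward the lowest index (max returns the first maximum)."""
    i = max(range(len(queues)), key=lambda j: queues[j])
    if queues[i] > 0:
        queues[i] -= 1   # serve one packet from the chosen queue
    return i

qs = [3, 5, 5, 1]
order = [lqf_serve(qs) for _ in range(4)]
print(order, qs)  # the two longest queues alternate: [1, 2, 1, 2] [3, 3, 3, 1]
```

This also makes the information requirement concrete: to pick `i`, the scheduler needs the current lengths of all N queues every slot, which is the update rate the paper shows can be reduced dramatically without losing the overflow exponent.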
Achieving Optimal Throughput and Near-Optimal Asymptotic Delay Performance in Multi-Channel Wireless Networks with Low Complexity: A Practical Greedy Scheduling Policy
In this paper, we focus on the scheduling problem in multi-channel wireless
networks, e.g., the downlink of a single cell in fourth generation (4G)
OFDM-based cellular networks. Our goal is to design practical scheduling
policies that can achieve provably good performance in terms of both throughput
and delay, at a low complexity. While a class of $O(n^{2.5} \log n)$-complexity
hybrid scheduling policies has recently been developed to guarantee both
rate-function delay optimality (in the many-channel many-user asymptotic
regime) and throughput optimality (in the general non-asymptotic setting),
their practical complexity is typically high. To address this issue, we develop
a simple greedy policy called Delay-based Server-Side-Greedy (D-SSG) with a
\emph{lower} complexity $2n^2 + 2n$, and rigorously prove that D-SSG not only achieves
throughput optimality, but also guarantees near-optimal asymptotic delay
performance. Specifically, we show that the rate-function attained by D-SSG for
any delay-violation threshold $b$ is no smaller than the maximum achievable
rate-function by any scheduling policy for threshold $b-1$. Thus, we are able
to achieve a reduction in complexity (from $O(n^{2.5} \log n)$ for the hybrid
policies to $2n^2 + 2n$) with a minimal drop in the delay performance. More
importantly, in practice, D-SSG generally has a substantially lower complexity
than the hybrid policies, which typically have a large constant factor hidden in
the $O(\cdot)$ notation. Finally, we conduct numerical simulations to validate
our theoretical results in various scenarios. The simulation results show that
D-SSG not only guarantees a near-optimal rate-function, but also empirically is
virtually indistinguishable from delay-optimal policies.

Comment: Accepted for publication by the IEEE/ACM Transactions on Networking,
February 2014. A preliminary version of this work was presented at IEEE
INFOCOM 2013, Turin, Italy, April 2013.
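The server-side-greedy idea can be sketched as follows: each channel, in turn, picks the connected user whose head-of-line (HOL) delay is currently largest. This is a hedged illustration, not the paper's algorithm verbatim; the names `d_ssg`, `hol_delay`, and `connectivity`, and the one-channel-per-user restriction, are assumptions made for the sketch.

```python
def d_ssg(hol_delay, connectivity):
    """Delay-based server-side-greedy sketch.
    hol_delay[u]    -- head-of-line delay of user u's queue
    connectivity[c] -- set of users channel c can serve in this slot
    Each channel greedily serves the connected, still-unscheduled user
    with the largest HOL delay; at most one channel per user per slot."""
    remaining = dict(hol_delay)      # users not yet scheduled this slot
    schedule = {}
    for c, users in enumerate(connectivity):
        candidates = [u for u in users if u in remaining]
        if candidates:
            u = max(candidates, key=remaining.get)
            schedule[c] = u
            del remaining[u]
    return schedule

hol = {0: 5, 1: 2, 2: 7}
conn = [{0, 1}, {1, 2}, {0, 2}]
print(d_ssg(hol, conn))  # {0: 0, 1: 2}; channel 2 finds no unserved user
```

Each channel's decision is a single maximum over its connected users, which is what keeps the per-slot complexity quadratic in the system size rather than requiring a full matching computation.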
When Backpressure Meets Predictive Scheduling
Motivated by the increasing popularity of learning and predicting human user
behavior in communication and computing systems, in this paper, we investigate
the fundamental benefit of predictive scheduling, i.e., predicting and
pre-serving arrivals, in controlled queueing systems. Based on a lookahead
window prediction model, we first establish a novel equivalence between the
predictive queueing system with a \emph{fully-efficient} scheduling scheme and
an equivalent queueing system without prediction. This connection allows us to
analytically demonstrate that predictive scheduling necessarily improves system
delay performance and can drive it to zero with increasing prediction power. We
then propose the \textsf{Predictive Backpressure (PBP)} algorithm for achieving
optimal utility performance in such predictive systems. \textsf{PBP}
efficiently incorporates prediction into stochastic system control and avoids
the great complication due to the exponential state space growth in the
prediction window size. We show that \textsf{PBP} can achieve a utility
performance that is within $O(\epsilon)$ of the optimal, for any $\epsilon > 0$,
while guaranteeing that the system delay distribution is a
\emph{shifted-to-the-left} version of that under the original Backpressure
algorithm. Hence, the average packet delay under \textsf{PBP} is strictly
better than that under Backpressure, and vanishes with increasing prediction
window size. This implies that the resulting utility-delay tradeoff with
predictive scheduling beats the known optimal tradeoff for systems without prediction.
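The "pre-serving arrivals" idea can be made concrete with a toy per-slot update: the server first drains the current backlog, then spends any leftover capacity on packets predicted to arrive within the lookahead window, so those packets experience zero queueing delay on arrival. The function and variable names below are illustrative, not from the paper.

```python
from collections import deque

def pre_serve(backlog, predicted, capacity):
    """One slot of lookahead pre-service.
    backlog   -- packets already queued
    predicted -- per-slot predicted arrival counts in the lookahead window
    capacity  -- packets the server can process this slot
    Returns (new_backlog, packets_pre_served, remaining_predictions)."""
    served_now = min(backlog, capacity)   # current backlog has priority
    backlog -= served_now
    spare = capacity - served_now
    window = deque(predicted)
    pre_served = 0
    while spare > 0 and window:           # pre-serve the nearest predictions
        take = min(spare, window[0])
        window[0] -= take
        pre_served += take
        spare -= take
        if window[0] == 0:
            window.popleft()
    return backlog, pre_served, list(window)

print(pre_serve(backlog=2, predicted=[3, 1], capacity=4))  # (0, 2, [1, 1])
```

This mirrors the abstract's equivalence argument: with perfect predictions, pre-served packets behave as if they had arrived earlier, which is what shifts the delay distribution to the left as the window grows.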