Stochastic decomposition in discrete-time queues with generalized vacations and applications
For several specific queueing models with a vacation policy, the stationary system occupancy at the beginning of a random slot is distributed as the sum of two independent random variables. One of these variables is the stationary number of customers in an equivalent queueing system with no vacations. For models in continuous time with Poissonian arrivals, this result is well known and referred to as stochastic decomposition, with a proof provided by Fuhrmann and Cooper. For models in discrete time, this result has received less attention, with no proof available to date. In this paper, we first establish a proof of the decomposition result in discrete time. Compared to the proof in continuous time, the conditions for the proof in discrete time are somewhat more general. Second, we explore four different examples: non-preemptive priority systems, slot-bound priority systems, polling systems, and fiber delay line (FDL) buffer systems. The first two examples are known results from the literature that are given here as an illustration. The third is a new example, and the last one (FDL buffer systems) shows new results. It is shown that in some cases the queueing analysis can be considerably simplified using this property.
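In symbols (our notation, not necessarily the paper's), the decomposition stated above says that the stationary occupancy splits into two independent terms, which in terms of probability generating functions reads:

```latex
N \;\stackrel{d}{=}\; N_0 + N_V,
\qquad
\mathbb{E}\!\left[z^{N}\right]
   \;=\; \mathbb{E}\!\left[z^{N_0}\right]\,\mathbb{E}\!\left[z^{N_V}\right],
```

where $N$ is the occupancy at the beginning of a random slot in the vacation system, $N_0$ is the occupancy in the equivalent system without vacations, and $N_V$ is an independent term accounting for the vacations. The factorization of the generating function is what typically simplifies the queueing analysis in the examples mentioned.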
Density profiles of the exclusive queueing process
The exclusive queueing process (EQP) incorporates the exclusion principle
into classic queueing models. It can be interpreted as an exclusion process of
variable system length. Here we extend previous studies of its phase diagram by
identifying subphases which can be distinguished by the number of plateaus in
the density profiles. Furthermore the influence of different update procedures
(parallel, backward-ordered, continuous time) is determined.
Loss systems in a random environment
We consider a single server system with infinite waiting room in a random
environment. The service system and the environment interact in both
directions. Whenever the environment enters a prespecified subset of its state
space the service process is completely blocked: Service is interrupted and
newly arriving customers are lost. We prove an if-and-only-if condition for a
product form steady state distribution of the joint queueing-environment
process. A consequence is a strong insensitivity property for such systems.
We discuss several applications, e.g. from inventory theory and reliability
theory, and show that our result extends and generalizes several theorems found
in the literature, e.g. of queueing-inventory processes.
We further investigate classical loss systems, where loss of customers occurs
due to finite waiting room. New phenomena arise from the interplay of this loss
with blocking by the environment and service interruptions.
We further investigate the embedded Markov chains at departure epochs and
show that the behaviour of the embedded Markov chain is often considerably
different from that of the continuous time Markov process. This is different
from the behaviour of the standard M/G/1 queue, where the steady-state
distributions of the embedded Markov chain and the continuous-time process coincide.
For exponential queueing systems we show that there is a product form
equilibrium of the embedded Markov chain under rather general conditions. For
systems with non-exponential service times more restrictive constraints are
needed, which we prove by a counterexample where the environment represents an
inventory attached to an M/D/1 queue. Such integrated queueing-inventory
systems have been dealt with in the literature previously and are revisited here in
detail.
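The product-form result announced above has, generically, the following shape (our notation; the exact factors depend on the model's arrival, service, and environment rates):

```latex
\pi(n, e) \;=\; C \,\xi(n)\,\theta(e),
\qquad n \in \mathbb{N}_0,\; e \in E,
```

where $n$ is the queue length, $e$ the environment state, $\xi$ and $\theta$ are the queueing and environment factors, and $C$ a normalizing constant. The if-and-only-if condition of the paper characterizes exactly when the joint steady state factorizes in this way.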
The Network Effects of Prefetching
Prefetching has been shown to be an effective technique for reducing user-perceived latency in distributed systems. In this paper we show that even when prefetching adds no extra traffic to the network, it can have serious negative performance effects. Straightforward approaches to prefetching increase the burstiness of individual sources, leading to increased average queue sizes in network switches. However, we also show that applications can avoid the undesirable queueing effects of prefetching. In fact, we show that applications employing prefetching can significantly improve network performance, to a level much better than that obtained without any prefetching at all. This is because prefetching offers increased opportunities for traffic shaping that are not available in the absence of prefetching. Using a simple transport rate control mechanism, a prefetching application can modify its behavior from a distinctly ON/OFF entity to one whose data transfer rate changes less abruptly, while still delivering all data in advance of the user's actual requests.
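The queueing effect described in this abstract can be illustrated with a toy discrete-time simulation (all parameters here are hypothetical, not taken from the paper): two prefetching sources deliver the same total traffic through a bottleneck that serves one packet per slot, but one sends in ON/OFF bursts while the other is rate-smoothed.

```python
def queue_trace(arrivals, service=1):
    """Per-slot backlog at a bottleneck serving `service` packets per slot."""
    q, trace = 0, []
    for a in arrivals:
        q = max(q + a - service, 0)
        trace.append(q)
    return trace

SLOTS, TOTAL = 100, 50

# Bursty ON/OFF prefetcher: all 50 packets in the first 10 slots, then silent.
bursty = [5 if t < 10 else 0 for t in range(SLOTS)]

# Rate-smoothed prefetcher: the same 50 packets, one every second slot,
# still staying ahead of a user consuming roughly 0.5 packets per slot.
smooth = [1 if t % 2 == 0 else 0 for t in range(SLOTS)]

assert sum(bursty) == sum(smooth) == TOTAL  # identical total traffic

print(max(queue_trace(bursty)))  # peak backlog of the bursty source: 40
print(max(queue_trace(smooth)))  # peak backlog of the smoothed source: 0
```

Both sources inject exactly the same number of packets, yet the bursty one builds a backlog of 40 packets at the switch while the smoothed one builds none, mirroring the paper's point that prefetching hurts or helps depending on how its traffic is shaped.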