
    EUROPEAN CONFERENCE ON QUEUEING THEORY 2016

    This booklet contains the proceedings of the second European Conference on Queueing Theory (ECQT), which was held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT, Toulouse, France. ECQT is a biennial event where scientists and technicians in queueing theory and related areas get together to promote research, encourage interaction and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects and applications. Another exciting feature of ECQT 2016 was the institution of the Takács Award for an outstanding PhD thesis on "Queueing Theory and its Applications".

    Discrete-time queues with zero-regenerative arrivals: moments and examples

    In this paper we investigate a single-server discrete-time queueing system with single-slot service times. The stationary ergodic arrival process to which this queueing system is subject satisfies a regeneration property whenever there are no arrivals during a slot. Expressions for the mean and the variance of the queue content in steady state are obtained for this broad class, which includes, among others, autoregressive arrival processes and M/G/∞-input or train arrival processes. To illustrate our results, we then consider a number of numerical examples.
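    As a minimal illustration (not taken from the paper), the sketch below simulates the system-content recursion of such a single-slot-service queue with i.i.d. Poisson arrivals per slot, a trivial member of the zero-regenerative class, and estimates the mean and variance of the queue content by simulation rather than analytically.

```python
import numpy as np

def simulate_queue_content(arrivals, warmup=10_000):
    """Discrete-time single-server queue with single-slot service times.

    The system content evolves as u[k+1] = max(u[k] - 1, 0) + a[k+1]:
    at most one customer leaves per slot, then the new arrivals join.
    """
    u, samples = 0, []
    for k, a in enumerate(arrivals):
        u = max(u - 1, 0) + a
        if k >= warmup:                      # discard the transient phase
            samples.append(u)
    samples = np.asarray(samples)
    return samples.mean(), samples.var()

# i.i.d. Poisson(0.7) arrivals per slot: an assumption made purely for
# this sketch; the paper covers far more general arrival processes.
rng = np.random.default_rng(1)
mean_u, var_u = simulate_queue_content(rng.poisson(0.7, size=500_000))
print(f"mean queue content ~ {mean_u:.3f}, variance ~ {var_u:.3f}")
```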

    Performance analysis of priority queueing systems in discrete time

    The integration of different types of traffic in packet-based networks spawns the need for traffic differentiation. In this tutorial paper, we present some analytical techniques to tackle discrete-time queueing systems with priority scheduling. We investigate both preemptive (resume and repeat) and non-preemptive priority scheduling disciplines. Two classes of traffic are considered, high-priority and low-priority traffic, both of which generate variable-length packets. A probability generating function approach leads to performance measures such as the moments of the system contents and packet delays of both classes.
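    As a rough companion to the analytical approach (not the pgf analysis of the paper), the sketch below simulates a two-class, non-preemptive head-of-line priority queue in discrete time, assuming Bernoulli per-slot arrivals and geometrically distributed packet lengths, both assumptions made only for this example, and estimates the mean system contents of the two classes.

```python
import numpy as np
from collections import deque

def priority_sim(n_slots, arr_hi, arr_lo, mean_len, seed=0):
    """Discrete-time, non-preemptive HoL priority with two traffic classes.

    Per slot: Bernoulli arrivals for each class, geometrically distributed
    packet lengths (mean `mean_len` slots); high priority is served first,
    but an ongoing low-priority transmission is never interrupted.
    """
    rng = np.random.default_rng(seed)
    q = {"hi": deque(), "lo": deque()}
    residual, in_service = 0, None        # remaining slots of packet in service
    contents = {"hi": [], "lo": []}
    for _ in range(n_slots):
        # Arrivals (at most one packet per class per slot in this sketch).
        for cls, p in (("hi", arr_hi), ("lo", arr_lo)):
            if rng.random() < p:
                q[cls].append(rng.geometric(1.0 / mean_len))
        # Service: continue the current packet, else pick the highest
        # non-empty priority class (non-preemptive discipline).
        if residual == 0:
            in_service = "hi" if q["hi"] else ("lo" if q["lo"] else None)
            if in_service:
                residual = q[in_service].popleft()
        if in_service:
            residual -= 1
            if residual == 0:
                in_service = None         # packet departs at the slot end
        contents["hi"].append(len(q["hi"]) + (in_service == "hi"))
        contents["lo"].append(len(q["lo"]) + (in_service == "lo"))
    return {c: np.mean(v) for c, v in contents.items()}

print(priority_sim(200_000, arr_hi=0.2, arr_lo=0.3, mean_len=2))
```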

    Numerical methods for queues with shared service

    A queueing system is a mathematical abstraction of a situation where elements, called customers, arrive in a system and wait until they receive some kind of service. Queueing systems are omnipresent in real life. Prime examples include people waiting at a counter to be served, airplanes waiting to take off, and traffic jams during rush hour. Queueing theory is the mathematical study of queueing phenomena. As often neither the arrival instants of the customers nor their service times are known in advance, queueing theory most often assumes that these processes are random. The queueing process itself is then a stochastic process and most often also a Markov process, provided a proper description of the state of the queueing process is introduced.

    This dissertation investigates numerical methods for a particular type of Markovian queueing system, namely queueing systems with shared service. These queueing systems differ from traditional queueing systems in that there is simultaneous service of the head-of-line customers of all queues and in that there is no service if there are no customers in one of the queues. The absence of service whenever one of the queues is empty yields particular dynamics which are not found in traditional queueing systems. These queueing systems with shared service are not only beautiful mathematical objects in their own right, but are also motivated by an extensive range of applications.

    The original motivation for studying queueing systems with shared service came from a particular process in inventory management called kitting. A kitting process collects the necessary parts for an end product in a box prior to sending it to the assembly area. With the parts and their inventories playing the roles of customers and queues, we get "shared service", as kitting cannot proceed if some parts are absent. Still in the area of inventory management, the decoupling inventory of a hybrid make-to-stock/make-to-order system exhibits shared service. The production process prior to the decoupling inventory is make-to-stock and driven by demand forecasts. In contrast, the production process after the decoupling inventory is make-to-order and driven by actual demand, as items from the decoupling inventory are customised according to customer specifications. At the decoupling point, the decoupling inventory is complemented with a queue of outstanding orders. As customisation only starts when the decoupling inventory is nonempty and there is at least one order, there is again shared service. Moving to applications in telecommunications, shared service applies to energy-harvesting sensor nodes. Such a sensor node scavenges energy from its environment to meet its energy expenditure or to prolong its lifetime. A rechargeable battery operates very much like a queue, customers being discretised as chunks of energy. As a sensor node requires both sensed data and energy for transmission, shared service can again be identified.

    In the Markovian framework, "solving" a queueing system corresponds to finding the steady-state solution of the Markov process that describes the queueing system at hand. Indeed, most performance measures of interest of the queueing system can be expressed in terms of the steady-state solution of the underlying Markov process. For a finite ergodic Markov process, the steady-state solution is the unique solution of N-1 balance equations complemented with the normalisation condition, N being the size of the state space.
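    As a concrete illustration of the last paragraph, the following sketch computes the steady-state solution of a small, purely hypothetical three-state Markov process by replacing one of the (linearly dependent) balance equations with the normalisation condition.

```python
import numpy as np

# Generator of a small, arbitrary 3-state continuous-time Markov process
# (illustrative numbers only; any irreducible finite generator works).
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.9,  0.5],
              [ 0.1,  0.6, -0.7]])

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
    (the N equations are linearly dependent) with the normalisation."""
    N = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0               # overwrite one balance equation ...
    b = np.zeros(N)
    b[-1] = 1.0                  # ... with the condition sum(pi) = 1
    return np.linalg.solve(A, b)

pi = steady_state(Q)
print(pi, pi @ Q)                # pi @ Q should be numerically zero
```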
    For the queueing systems with shared service, the size of the state space of the Markov processes grows exponentially with the number of queues involved. Hence, even if only a moderate number of queues is considered, the size of the state space is huge. This is the state-space explosion problem. As direct solution methods for such Markov processes are computationally infeasible, this dissertation aims at exploiting structural properties of the Markov processes so as to speed up computation of the steady-state solution.

    The first property that can be exploited is the sparsity of the generator matrix of the Markov process. Indeed, the number of events that can occur in any state (or, equivalently, the number of transitions to other states) is far smaller than the size of the state space. This means that the generator matrix of the Markov process is mainly filled with zeroes. Iterative methods for sparse linear systems, in particular the Krylov subspace solver GMRES, were found to be computationally efficient for studying kitting processes only if the number of queues is limited. For more queues (or a larger state space), these methods cannot calculate the steady-state performance measures sufficiently fast. The applications related to the decoupling inventory and the energy-harvesting sensor node involve only two queues. In this case, the generator matrix exhibits a homogeneous block-tridiagonal structure. Such Markov processes can be solved efficiently by means of matrix-geometric methods, both when the process has a finite size and, even more efficiently, when it has an infinite size and a finite block size.

    Neither of the former exact solution methods allows for investigating systems with many queues. Therefore, we developed an approximate numerical solution method based on Maclaurin series expansions. Rather than focussing on structural properties of the Markov process for any parameter setting, the series expansion technique exploits structural properties of the Markov process when some parameter is sent to zero. For the queues with shared exponential service and the service rate sent to zero, the resulting process has a single absorbing state and the states can be ordered such that the generator matrix is upper-triangular. In this case, the solution at zero is trivial and the calculation of the higher-order terms in the series expansion around zero has a computational complexity proportional to the size of the state space. This is a case of regular perturbation of the parameter, and contrasts with singular perturbation, which applies when the service times of the kitting process are phase-type distributed. For singular perturbation, the Markov process has no unique steady-state solution when the parameter is sent to zero. However, similar techniques still apply, albeit at a higher computational cost.

    Finally, we note that the numerical series expansion technique is not limited to evaluating queues with shared service. Resembling shared queueing systems in that a Markov process with a multidimensional state space is considered, an epidemic model for opinion propagation in a social network is shown to be amenable to the regular series expansion technique as well. Interestingly, we find that the series expansion technique complements the usual fluid approach of the epidemic literature.
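    The regular-perturbation series expansion can be illustrated on a toy example. The sketch below assumes a generator of the form Q(eps) = Q0 + eps*Q1 in which Q0 has a single absorbing state (taken here to be state 0); the matrices are invented for illustration and are not the kitting model of the dissertation. It computes the Maclaurin terms recursively and compares the truncated series with a direct solve.

```python
import numpy as np

def expansion_terms(Q0, Q1, order):
    """Terms x_k of the Maclaurin series pi(eps) = sum_k eps^k x_k for the
    steady state of Q(eps) = Q0 + eps*Q1, assuming a regular perturbation:
    Q0 has a single absorbing state, taken to be state 0 here."""
    N = Q0.shape[0]
    x0 = np.zeros(N)
    x0[0] = 1.0                                  # pi(0): mass on the absorbing state
    terms, ones = [x0], np.ones(N)
    for k in range(1, order + 1):
        b = -terms[-1] @ Q1                      # recursion: x_k Q0 = -x_{k-1} Q1
        x, *_ = np.linalg.lstsq(Q0.T, b, rcond=None)
        x -= (x @ ones) * x0                     # impose the constraint x_k . 1 = 0
        terms.append(x)
    return terms

# Toy birth-death chain on 3 states (state 0 = "full"), service rate eps -> 0.
# These matrices are purely illustrative, not taken from the dissertation.
Q0 = np.array([[0., 0., 0.], [1., -1., 0.], [0., 1., -1.]])   # "arrival" part
Q1 = np.array([[-1., 1., 0.], [0., -1., 1.], [0., 0., 0.]])   # "service" part

eps = 0.1
series = sum(eps**k * x for k, x in enumerate(expansion_terms(Q0, Q1, 4)))
A = (Q0 + eps * Q1).T.copy(); A[-1] = 1.0        # direct solve for comparison
exact = np.linalg.solve(A, np.r_[np.zeros(2), 1.0])
print("series:", series)
print("exact: ", exact)
```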

    Analysis of a discrete-time single-server queue with an occasional extra server

    We consider a discrete-time queueing system having two distinct servers: one server, the "regular" server, is permanently available, while the second server, referred to as the "extra" server, is only allocated to the system intermittently. Apart from their availability, the two servers are identical, in the sense that the customers have deterministic service times equal to 1 fixed-length time slot each, regardless of the server that processes them. In this paper, we assume that the extra server is available during random "up-periods", whereas it is unavailable during random "down-periods". Up-periods and down-periods occur alternately on the time axis. The up-periods have geometrically distributed lengths (expressed in time slots), whereas the distribution of the lengths of the down-periods is general, at least in the first instance. Customers enter the system according to a general independent arrival process, i.e., the numbers of arrivals during consecutive time slots are i.i.d. random variables with arbitrary distribution. For this queueing model, we are able to derive closed-form expressions for the steady-state probability generating functions (pgfs) and the expected values of the numbers of customers in the system at various observation epochs, such as the start of an up-period, the start of a down-period and the beginning of an arbitrary time slot. At first sight, however, these formulas appear to contain an infinite number of unknown constants. One major issue of the mathematical analysis turns out to be the determination of these constants. In the paper, we show that restricting the pgf of the down-periods to be a rational function of its argument brings about the crucial simplification that the original infinite number of unknown constants appearing in the formulas can be expressed in terms of a finite number of independent unknowns. The latter can then be adequately determined based on the bounded nature of pgfs inside the complex unit disk and an extensive use of properties of polynomials. Various special cases, both from the perspective of the arrival distribution and the down-period distribution, are discussed. The results are also illustrated by means of relevant numerical examples. Possible applications of this type of queueing model are numerous: the extra server could be the regular server of another similar queue, helping whenever an idle period occurs in its own queue; a geometric distribution for these idle times is then a very natural modeling assumption. A typical example would be the situation at the check-in counter at a gate in an airport: the regular server serves customers with a low-fare ticket, while the extra server gives priority to the business-class and first-class customers, but helps check in regular customers whenever the priority line is empty.
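    The sketch below is a simulation counterpart of this model (not the pgf analysis of the paper): it assumes Poisson arrivals per slot and fixed-length down-periods, a simple special case of the general down-period distribution, and estimates the mean system content.

```python
import numpy as np

def extra_server_sim(n_slots, arr_mean, p_up_end, down_len, seed=0):
    """Discrete-time queue with 1-slot services, one permanent server, and an
    extra server that alternates between geometric up-periods (`p_up_end` is
    the per-slot probability that the up-period ends) and fixed-length
    down-periods of `down_len` slots (an assumption made for this sketch)."""
    rng = np.random.default_rng(seed)
    queue, up, remaining_down = 0, True, 0
    contents = []
    for _ in range(n_slots):
        servers = 2 if up else 1
        queue = max(queue - servers, 0)          # departures during the slot
        queue += rng.poisson(arr_mean)           # i.i.d. arrivals in the slot
        contents.append(queue)                   # content at the slot boundary
        if up:                                   # geometric up-period
            if rng.random() < p_up_end:
                up, remaining_down = False, down_len
        else:                                    # deterministic down-period
            remaining_down -= 1
            if remaining_down == 0:
                up = True
    return np.mean(contents)

# Load below the time-average service capacity, so the system is stable.
print(extra_server_sim(500_000, arr_mean=1.1, p_up_end=0.1, down_len=5))
```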

    Performance analysis of networks on chips

    Modules on a chip (such as processors and memories) are traditionally connected through a single link, called a bus. As chips become more complex and the number of modules on a chip increases, this connection method becomes inefficient because the bus can only be used by one module at a time. Networks on chips are an emerging technology for the connection of on-chip modules. In networks on chips, switches are used to transmit data from one module to another, which entails that multiple links can be used simultaneously so that communication is more efficient. Switches consist of a number of input ports at which data arrives and output ports from which data leaves. If data at multiple input ports has to be transmitted to the same output port, only one input port may actually transmit its data, which may lead to congestion.

    Queueing theory deals with the analysis of congestion phenomena caused by competition for service facilities with scarce resources. Such phenomena occur, for example, in traffic intersections, manufacturing systems, and communication networks like networks on chips. These congestion phenomena are typically analysed using stochastic models, which capture the uncertain and unpredictable nature of processes leading to congestion (such as irregular car arrivals at a traffic intersection). Stochastic models are useful tools for the analysis of networks on chips as well, due to the complexity of data traffic on these networks. In this thesis, we therefore study queueing models aimed at networks on chips.

    The thesis is centred around two key models: a model of a switch in isolation, the so-called single-switch model, and a model of a network of switches where all traffic has the same destination, the so-called network of polling stations. For both models we are interested in the throughput (the amount of data transmitted per time unit) and the mean delay (the time it takes data to travel across the network).

    Single-switch models are often studied under the assumption that the number of ports tends to infinity and that traffic is uniform (i.e., on average equally many packets arrive at all buffers, and all possible destinations are equally likely). In networks on chips, however, the number of buffers is typically small. We introduce a new approximation specifically aimed at small switches with (memoryless) Bernoulli arrivals. We show that, for such switches, this approximation is more accurate than currently known approximations. As traffic in networks on chips is usually non-uniform, we also extend our approximation to non-uniform switches. The key difference between uniform and non-uniform switches is that in non-uniform switches, all queues have a different maximum throughput. We obtain a very accurate approximation of this throughput, which allows us to extend the mean delay approximation. The extended approximation is derived for Bernoulli arrivals and for correlated arrival processes. Its accuracy is verified through a comparison with simulation results.

    The second key model is that of concentrating tree networks of polling stations (polling stations are essentially switches where all traffic has the same output port as destination). Single polling stations have been studied extensively in the literature, but only a few attempts have been made to analyse networks of polling stations. We establish a reduction theorem that states that networks of polling stations can be reduced to single polling stations while preserving some information on mean waiting times.
    This reduction theorem holds under the assumption that the last node of the network uses a so-called HoL-based service discipline, which means that the choice to transmit data from a certain buffer may only depend on which buffers are empty, but not on the amount of data in the buffers. The reduction theorem is a key tool for the analysis of networks of polling stations. In addition to this, mean waiting times in single polling stations have to be calculated, either exactly or approximately. To this end, known results can be used, but we also devise a new single-station approximation that can be used for a large subclass of HoL-based service disciplines.

    Finally, networks on chips typically implement flow control, which is a mechanism that limits the amount of data in the network from one source. We analyse the division of throughput over several sources in a network of polling stations with flow control. Our results indicate that the throughput in such a network is determined by an interaction between buffer sizes, flow control limits, and service disciplines. This interaction is studied in more detail by means of a numerical analysis.
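    As an illustrative companion (not the approximation developed in the thesis), the sketch below simulates a small input-queued switch with uniform Bernoulli arrivals and random head-of-line conflict resolution, and estimates the per-port throughput and the mean packet delay.

```python
import numpy as np
from collections import deque

def switch_sim(n_ports, arrival_prob, n_slots, seed=0):
    """Small input-queued switch with uniform Bernoulli arrivals and random
    head-of-line (HoL) conflict resolution: per slot, each output accepts one
    of the HoL packets destined to it, chosen uniformly at random."""
    rng = np.random.default_rng(seed)
    queues = [deque() for _ in range(n_ports)]        # one FIFO per input port
    delays, served = [], 0
    for t in range(n_slots):
        # Arrivals: Bernoulli per input, uniformly random output destination.
        for q in queues:
            if rng.random() < arrival_prob:
                q.append((t, rng.integers(n_ports)))  # (arrival slot, output)
        # HoL contention: each output picks one competing input at random;
        # packets behind a blocked HoL packet must wait (HoL blocking).
        for out in range(n_ports):
            competing = [q for q in queues if q and q[0][1] == out]
            if competing:
                winner = competing[rng.integers(len(competing))]
                arrival_slot, _ = winner.popleft()
                delays.append(t - arrival_slot)
                served += 1
    return served / (n_slots * n_ports), np.mean(delays)

throughput, mean_delay = switch_sim(n_ports=4, arrival_prob=0.5, n_slots=200_000)
print(f"throughput per port: {throughput:.3f}, mean delay: {mean_delay:.2f} slots")
```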

    Teletraffic analysis of ATM systems: symposium held at the Technische Universiteit Eindhoven on 15 February 1993


    Analysis of discrete-time queueing systems with vacations
