
    Analysis and Computation of the Joint Queue Length Distribution in a FIFO Single-Server Queue with Multiple Batch Markovian Arrival Streams

    This paper considers a work-conserving FIFO single-server queue with multiple batch Markovian arrival streams governed by a continuous-time finite-state Markov chain. A particular feature of this queue is that the service time distributions of customers may differ across arrival streams. After briefly discussing the actual waiting time distributions of customers from the respective arrival streams, we derive a formula for the vector generating function of the time-average joint queue length distribution in terms of the virtual waiting time distribution. Further assuming discrete phase-type batch size distributions, we develop a numerically feasible procedure for computing the joint queue length distribution. Some numerical examples are also provided.
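    The model described above can be made concrete with a small simulation. The sketch below is illustrative only and is not the paper's analytical method: it replaces the batch Markovian arrival streams with independent Poisson batch streams (an assumption), serves customers FIFO with stream-dependent exponential service times, and tallies the time-average joint queue length by stream.

```python
import heapq, random
from collections import Counter

random.seed(1)
RATE    = {1: 0.3, 2: 0.2}                        # batch arrival rates (assumed)
BATCH   = {1: (1, 3), 2: (1, 2)}                  # uniform batch-size ranges (assumed)
SERVICE = {1: lambda: random.expovariate(2.0),    # stream-1 service times (assumed)
           2: lambda: random.expovariate(1.0)}    # stream-2 service times (assumed)
T_END   = 50_000.0

events = [(random.expovariate(RATE[k]), "arr", k) for k in RATE]
heapq.heapify(events)
fifo, in_service = [], None        # waiting customers (stream labels), job in service
joint, t_prev = Counter(), 0.0     # time-weighted counts of joint states (n1, n2)

while events:
    t, kind, k = heapq.heappop(events)
    if t > T_END:
        break
    in_sys = fifo + ([in_service] if in_service is not None else [])
    joint[(in_sys.count(1), in_sys.count(2))] += t - t_prev
    t_prev = t
    if kind == "arr":
        fifo.extend([k] * random.randint(*BATCH[k]))
        heapq.heappush(events, (t + random.expovariate(RATE[k]), "arr", k))
        if in_service is None:                     # server idle: begin service
            in_service = fifo.pop(0)
            heapq.heappush(events, (t + SERVICE[in_service](), "dep", in_service))
    else:                                          # departure: start next job in FIFO order
        in_service = fifo.pop(0) if fifo else None
        if in_service is not None:
            heapq.heappush(events, (t + SERVICE[in_service](), "dep", in_service))

total = sum(joint.values())
for state, w in sorted(joint.items())[:5]:
    print(state, round(w / total, 4))              # head of the estimated joint distribution
```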

    Computationally Efficient Simulation of Queues: The R Package queuecomputer

    Large networks of queueing systems model important real-world systems such as MapReduce clusters, web servers, hospitals, call centers and airport passenger terminals. To model such systems accurately, we must infer queueing parameters from data. Unfortunately, for many queueing networks there is no clear way to proceed with parameter inference from data. Approximate Bayesian computation could offer a straightforward way to infer parameters for such networks if we could simulate data quickly enough. We present a computationally efficient method for simulating from a very general set of queueing networks with the R package queuecomputer. Remarkable speedups of more than two orders of magnitude are observed relative to the popular discrete-event simulation packages simmer and simpy. We replicate output from these packages to validate the package. The package is modular and integrates well with the popular R package dplyr. Complex queueing networks with tandem, parallel and fork/join topologies can easily be built with these two packages together. We show how to use this package with two examples: a call center and an airport terminal. (Comment: Updated for queuecomputer_0.8.)
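    As a rough illustration of the kind of computation queuecomputer vectorises (this is not its implementation or its R API), the departure times of a FIFO multi-server queue can be obtained from sorted arrival and service times with a simple recursion, and chaining calls gives tandem networks. The sketch below uses made-up numbers.

```python
import heapq

def departure_times(arrivals, services, servers=1):
    """Departure times for a FIFO G/G/k queue; arrivals must be in increasing order."""
    free = [0.0] * servers                      # next-free times of the k servers
    heapq.heapify(free)
    deps = []
    for a, s in zip(arrivals, services):
        start = max(a, heapq.heappop(free))     # wait until the earliest server frees up
        heapq.heappush(free, start + s)
        deps.append(start + s)
    return deps

# Toy tandem network: departures from station 1 become arrivals at station 2.
arr  = [0.0, 0.5, 0.6, 2.0]
dep1 = departure_times(arr, [1.0, 0.4, 0.7, 0.2], servers=2)
dep2 = departure_times(sorted(dep1), [0.3, 0.3, 0.3, 0.3], servers=1)
print(dep1, dep2)
```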

    Bayesian inference for queueing networks and modeling of internet services

    Modern Internet services, such as those at Google, Yahoo!, and Amazon, handle billions of requests per day on clusters of thousands of computers. Because these services operate under strict performance requirements, a statistical understanding of their performance is of great practical interest. Such services are modeled by networks of queues, where each queue models one of the computers in the system. A key challenge is that the data are incomplete, because recording detailed information about every request to a heavily used system can require unacceptable overhead. In this paper we develop a Bayesian perspective on queueing models in which the arrival and departure times that are not observed are treated as latent variables. Underlying this viewpoint is the observation that a queueing model defines a deterministic transformation between the data and a set of independent variables called the service times. With this viewpoint in hand, we sample from the posterior distribution over missing data and model parameters using Markov chain Monte Carlo. We evaluate our framework on data from a benchmark Web application. We also present a simple technique for selection among nested queueing models. We are unaware of any previous work that considers inference in networks of queues in the presence of missing data. (Comment: Published at http://dx.doi.org/10.1214/10-AOAS392 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).)
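    The "deterministic transformation" mentioned above is easy to see for a single FIFO queue: departures satisfy d[i] = max(a[i], d[i-1]) + s[i], so observed arrival and departure times pin down the service times exactly, and unobserved times become latent variables for MCMC. A minimal sketch of that inversion (not the paper's sampler):

```python
def service_times(arrivals, departures):
    """Invert d[i] = max(a[i], d[i-1]) + s[i] for a single-server FIFO queue."""
    s, prev_dep = [], 0.0
    for a, d in zip(arrivals, departures):
        s.append(d - max(a, prev_dep))    # time the server actually spent on this job
        prev_dep = d
    return s

a = [0.0, 0.2, 1.5]
d = [0.7, 1.0, 2.1]
print(service_times(a, d))                # -> [0.7, 0.3, 0.6]
```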

    Many-server queues with customer abandonment: numerical analysis of their diffusion models

    We use multidimensional diffusion processes to approximate the dynamics of a queue served by many parallel servers. The queue is served in first-in-first-out (FIFO) order, and customers waiting in queue may abandon the system without service. Two diffusion models are proposed in this paper. They differ in how the patience time distribution is built into them: the first diffusion model uses the patience time density at zero, and the second uses the entire patience time distribution. To analyze these diffusion models, we develop a numerical algorithm for computing the stationary distribution of such a diffusion process. A crucial part of the algorithm is to choose an appropriate reference density. Using a conjecture on the tail behavior of a limit queue length process, we propose a systematic approach to constructing a reference density. With the proposed reference density, the algorithm is shown to converge quickly in numerical experiments. These experiments also show that the diffusion models are good approximations for many-server queues, sometimes for queues with as few as twenty servers.
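    For intuition only (this is not the paper's algorithm, which solves for the stationary density directly using a reference density), the sketch below estimates the stationary distribution of a one-dimensional piecewise-linear-drift diffusion by long-run Euler-Maruyama simulation; the particular drift and variance follow the common Halfin-Whitt-style approximation of an M/M/n+M queue and are assumptions here, as are all parameter values.

```python
import numpy as np

mu, theta, beta = 1.0, 0.5, 1.0         # service rate, abandonment rate, slack (assumed)

def drift(x):
    # piecewise-linear drift: service pulls back below zero, abandonment above (assumed form)
    return -mu * (x + beta) if x < 0 else -theta * x - mu * beta

rng = np.random.default_rng(0)
dt, n_steps, burn_in = 1e-3, 1_000_000, 100_000
sigma_dt = np.sqrt(2.0 * mu * dt)       # infinitesimal variance 2*mu (assumed)
x, samples = 0.0, []
for i in range(n_steps):
    x += drift(x) * dt + sigma_dt * rng.standard_normal()
    if i >= burn_in and i % 50 == 0:    # thin to reduce autocorrelation
        samples.append(x)

samples = np.array(samples)
print("stationary mean ~", samples.mean(), " P(X > 0) ~", (samples > 0).mean())
```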

    Metascheduling of HPC Jobs in Day-Ahead Electricity Markets

    High performance grid computing is a key enabler of large-scale collaborative computational science. With the promise of exascale computing, high performance grid systems are expected to incur electricity bills that grow super-linearly over time. To achieve cost effectiveness in these systems, it is essential for the scheduling algorithms to exploit the electricity price variations, both in space and time, that are prevalent in dynamic electricity markets. In this paper, we present a metascheduling algorithm to optimize the placement of jobs in a compute grid which consumes electricity from the day-ahead wholesale market. We formulate the scheduling problem as a Minimum Cost Maximum Flow problem and leverage queue waiting time and electricity price predictions to accurately estimate the cost of job execution at a system. Using trace-based simulation with real and synthetic workload traces and real electricity price data sets, we demonstrate our approach on two currently operational grids, XSEDE and NorduGrid. Our experimental setup comprises more than 433K processors spread across 58 compute systems in 17 geographically distributed locations. Experiments show that our approach simultaneously optimizes the total electricity cost and the average response time of the grid, without being unfair to users of the local batch systems. (Comment: Appears in IEEE Transactions on Parallel and Distributed Systems.)
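    The Minimum Cost Maximum Flow formulation can be sketched with networkx. The graph, capacities and costs below are illustrative assumptions, not the paper's exact model: each job pushes one unit of flow from a source to a sink through a compute site, with integer edge costs standing in for predicted electricity cost plus a queue-wait penalty.

```python
import networkx as nx

jobs  = ["j1", "j2", "j3"]
sites = {"siteA": {"slots": 2, "cost": 40},     # cost ~ price x predicted runtime + wait penalty
         "siteB": {"slots": 2, "cost": 55}}     # (rounded to integers; all values assumed)

G = nx.DiGraph()
G.add_node("src",  demand=-len(jobs))           # every job originates at the source
G.add_node("sink", demand=len(jobs))            # and must be placed at some site
for j in jobs:
    G.add_edge("src", j, capacity=1, weight=0)
    for s, info in sites.items():
        G.add_edge(j, s, capacity=1, weight=info["cost"])
for s, info in sites.items():
    G.add_edge(s, "sink", capacity=info["slots"], weight=0)

flow = nx.min_cost_flow(G)                      # nested dict: flow[u][v] = units sent
placement = {j: s for j in jobs for s in sites if flow[j].get(s, 0) == 1}
print(placement, "total cost:", nx.cost_of_flow(G, flow))
```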

    Scheduling with Predictions and the Price of Misprediction

    In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known but is instead predicted, for example by a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predicted service times to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information.
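    A toy experiment in the spirit of this abstract (the queue parameters and the noise model are assumptions, and this is not the paper's analysis): simulate a non-preemptive M/G/1 queue and compare mean waiting time under first-come-first-served and under shortest-predicted-job-first, where predictions are the true service times corrupted by multiplicative lognormal noise.

```python
import heapq, random

def mean_wait(use_predictions, n=100_000, lam=0.7, sigma=0.6, seed=7):
    rng = random.Random(seed)
    t, jobs = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)                 # Poisson arrivals, rate lam
        s = rng.expovariate(1.0)                  # true service time, mean 1
        p = s * rng.lognormvariate(0.0, sigma)    # noisy prediction of s
        jobs.append((t, s, p))
    clock, i, ready, total_wait = 0.0, 0, [], 0.0
    while i < n or ready:
        if not ready:                             # server idle: jump to next arrival
            clock = max(clock, jobs[i][0])
        while i < n and jobs[i][0] <= clock:      # admit everything that has arrived
            a, s, p = jobs[i]
            key = p if use_predictions else a     # SPJF priority vs plain FCFS
            heapq.heappush(ready, (key, a, s))
            i += 1
        _, a, s = heapq.heappop(ready)            # serve the highest-priority waiting job
        total_wait += clock - a
        clock += s
    return total_wait / n

print("FCFS mean wait:", round(mean_wait(False), 2))
print("SPJF mean wait:", round(mean_wait(True), 2))
```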

    Inference for double Pareto lognormal queues with applications

    In this article we describe a method for carrying out Bayesian inference for the double Pareto lognormal (dPlN) distribution, which has recently been proposed as a model for heavy-tailed phenomena. We apply our approach to inference for the dPlN/M/1 and M/dPlN/1 queueing systems. These systems cannot be analyzed using standard techniques because the dPlN distribution does not possess a closed-form Laplace transform. This difficulty is overcome using some recent approximations of the Laplace transform for the Pareto/M/1 system. Our procedure is illustrated with applications in internet traffic analysis and risk theory.
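    For a flavour of what an M/dPlN/1 queue looks like (this is not the paper's Bayesian procedure), the sketch below draws dPlN service times via the standard construction exp(Normal + asymmetric Laplace) and runs the Lindley recursion for waiting times; all parameter values are illustrative assumptions.

```python
import math, random

def dpln(alpha, beta, nu, tau, rng):
    """dPlN variate: exp of a Normal(nu, tau^2) plus an asymmetric Laplace term."""
    return math.exp(rng.gauss(nu, tau)
                    + rng.expovariate(1.0) / alpha    # heavy right tail, index alpha
                    - rng.expovariate(1.0) / beta)    # left tail, index beta

rng = random.Random(3)
alpha, beta, nu, tau = 3.0, 2.0, -1.0, 0.5   # illustrative shape/scale parameters
lam = 1.5                                    # Poisson arrival rate (assumed)

# Lindley recursion for FIFO waiting times: W_{n+1} = max(0, W_n + S_n - A_{n+1})
w, total = 0.0, 0.0
n = 200_000
for _ in range(n):
    s = dpln(alpha, beta, nu, tau, rng)      # dPlN service time
    a = rng.expovariate(lam)                 # interarrival time
    w = max(0.0, w + s - a)
    total += w
print("estimated mean waiting time:", total / n)
```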