The effect of service time variability on maximum queue lengths in M^X/G/1 queues
We study the impact of service-time distributions on the distribution of the
maximum queue length during a busy period for the M^X/G/1 queue. The maximum
queue length is an important random variable to understand when designing the
buffer size for finite buffer (M/G/1/n) systems. We show the somewhat
surprising result that for three variations of the preemptive LCFS discipline,
the maximum queue length during a busy period is smaller when service times are
more variable (in the convex sense).
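For intuition about the comparison being made, here is a small simulation sketch (an illustration, not from the paper) of the single-arrival special case: an M/G/1 queue under preemptive-resume LCFS, estimating the mean maximum number in system during a busy period for deterministic versus exponential service times with the same mean, the exponential being more variable in the convex sense.

```python
import random

def busy_period_max(lam, service_sampler, rng):
    """Simulate one M/G/1 busy period under preemptive-resume LCFS and
    return the maximum number in system observed during that busy period."""
    t = 0.0
    stack = [service_sampler(rng)]            # remaining service times; top of stack is in service
    next_arrival = rng.expovariate(lam)
    max_q = 1
    while stack:
        completion = t + stack[-1]
        if next_arrival < completion:
            stack[-1] -= next_arrival - t     # arriving customer preempts the one in service
            stack.append(service_sampler(rng))
            t = next_arrival
            next_arrival = t + rng.expovariate(lam)
            max_q = max(max_q, len(stack))
        else:
            t = completion                    # service completes; the preempted customer resumes
            stack.pop()
    return max_q

rng = random.Random(1)
lam = 0.8                                     # arrival rate; service mean is 1 in both cases
deterministic = lambda r: 1.0
exponential = lambda r: r.expovariate(1.0)    # more variable in the convex sense
for name, sampler in [("deterministic", deterministic), ("exponential", exponential)]:
    samples = [busy_period_max(lam, sampler, rng) for _ in range(20000)]
    print(name, "mean max queue length:", sum(samples) / len(samples))
```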
Resource allocation in grid computing
Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of idle desktop machines, such as Kazaa, SETI@home, Climateprediction.net, and Einstein@home. With many alternative computers available, each with varying extra capacity, and each of which may connect or disconnect from the grid at any time, it may make sense to send the same task to more than one computer. The application can then use the output of whichever computer finishes the task first. Thus, the important issue of dynamically assigning tasks to individual computers is complicated in grid computing by the option of assigning multiple copies of the same task to different computers. We show that under fairly mild and often reasonable conditions, maximizing task replication stochastically maximizes the number of task completions by any time. That is, it is better to do the same task on as many computers as possible, rather than assigning different tasks to individual computers. We show that maximal task replication is optimal when tasks have identical size and processing times have an NWU (New Worse than Used; defined later) distribution. Computers may be heterogeneous and their speeds may vary randomly, as is the case in grid computing environments. We also show that maximal task replication, along with a cμ rule, stochastically maximizes the successful task completion process when task processing times are exponential and depend on both the task and computer, and tasks have different probabilities of completing successfully.
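As a rough, self-contained illustration (not the paper's analysis), the sketch below compares, by Monte Carlo, the expected number of tasks completed by a fixed time on two identical computers when every task is replicated on both machines versus when the machines work on separate tasks; processing times are drawn from a hyperexponential distribution, which has decreasing failure rate and is therefore NWU.

```python
import random

def hyperexp(rng):
    """Hyperexponential processing time with mean 1 (DFR, hence NWU)."""
    return rng.expovariate(2.0) if rng.random() < 0.5 else rng.expovariate(1.0 / 1.5)

def completions_replicated(T, rng):
    """Both machines always work on copies of the same task; the task completes
    when the first copy finishes, then both start fresh copies of the next task."""
    t, done = 0.0, 0
    while True:
        t += min(hyperexp(rng), hyperexp(rng))
        if t > T:
            return done
        done += 1

def completions_separate(T, rng):
    """Each machine works through its own sequence of distinct tasks."""
    done = 0
    for _ in range(2):                       # two independent machines
        t = 0.0
        while True:
            t += hyperexp(rng)
            if t > T:
                break
            done += 1
    return done

rng = random.Random(0)
T, n = 10.0, 20_000
print("replicated:", sum(completions_replicated(T, rng) for _ in range(n)) / n)
print("separate:  ", sum(completions_separate(T, rng) for _ in range(n)) / n)
```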
FCFS Parallel Service Systems and Matching Models
We consider three parallel service models in which customers of several types
are served by several types of servers subject to a bipartite compatibility
graph, and the service policy is first come first served. Two of the models
have a fixed set of servers. The first is a queueing model in which arriving
customers are assigned to the longest idling compatible server if available, or
else queue up in a single queue, and servers that become available pick the
longest waiting compatible customer, as studied by Adan and Weiss, 2014. The
second is a redundancy service model where arriving customers split into copies
that queue up at all the compatible servers, and are served in each queue on
an FCFS basis, and leave the system when the first copy completes service, as
studied by Gardner et al., 2016. The third model is a matching queueing model
with a random stream of arriving servers. Arriving customers queue in a single
queue and arriving servers match with the first compatible customer and leave
immediately with the customer, or they leave without a customer. The last model
is relevant to organ transplants, to housing assignments, to adoptions and many
other situations.
We study the relations between these models, and show that they are closely
related to the FCFS infinite bipartite matching model, in which two infinite
sequences of customers and servers of several types are matched FCFS according
to a bipartite compatibility graph, as studied by Adan et al., 2017. We also
introduce a directed bipartite matching model in which we embed the queueing
systems. This leads to a generalization of Burke's theorem to parallel service
systems.
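As a toy illustration of the matching rule in the third model, the sketch below (with an invented compatibility graph) keeps waiting customers in arrival order and lets each arriving server take the longest-waiting compatible customer, or leave unmatched.

```python
# Toy FCFS matching under a bipartite compatibility graph.
# The types and the graph below are invented for this example.
COMPATIBLE = {                                # server type -> customer types it can take
    "s1": {"c1", "c2"},
    "s2": {"c2", "c3"},
    "s3": {"c1", "c3"},
}

def match_arriving_servers(waiting_customers, arriving_servers):
    """Each arriving server takes the longest-waiting compatible customer
    (FCFS); if none is compatible, the server leaves unmatched."""
    queue = list(waiting_customers)           # customers in order of arrival
    matches = []
    for server in arriving_servers:
        for i, customer in enumerate(queue):
            if customer in COMPATIBLE[server]:
                matches.append((server, customer))
                del queue[i]                  # matched customer leaves with the server
                break
        else:
            matches.append((server, None))    # server leaves without a customer
    return matches

print(match_arriving_servers(["c1", "c3", "c2", "c2", "c1"], ["s2", "s2", "s1", "s3"]))
```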
Increasing Gambler's Ruin duration and Brownian Motion exit times
In Gambler's Ruin when both players start with the same amount of money, we
show the playing time stochastically increases when the games are made more
fair. We give two different arguments for this fact that extend results from
\cite{Pek2021}. We then use this to show that the exit time from a symmetric
interval for Brownian motion with drift stochastically increases as the drift
moves closer to zero; this result is not easily obtainable from available
explicit formulas for the density.
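A quick Monte Carlo check of the Brownian-motion statement (an illustrative sketch, not the paper's argument): exit times from a symmetric interval are approximated by Euler steps for several drift values, and the sample mean exit time should grow as the drift moves toward zero.

```python
import random

def exit_time(mu, rng, a=1.0, sigma=1.0, dt=1e-3):
    """Euler-discretized first exit time of Brownian motion with drift mu
    from the symmetric interval (-a, a), started at 0."""
    x, t = 0.0, 0.0
    while -a < x < a:
        x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(42)
for mu in (1.0, 0.5, 0.0):
    times = [exit_time(mu, rng) for _ in range(2000)]
    print(f"drift {mu}: mean exit time ~ {sum(times) / len(times):.3f}")
```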
The Impact of Cell Dropping Policies in ATM Networks
We consider policies for deciding which cells will be lost, or dropped, when losses occur at a finite buffer ATM node. The performance criteria of interest, particularly for voice traffic, are the delay of transmitted (non-lost) cells, the jitter (or variability in the delay of transmitted cells), and the burstiness of lost cells. We analyze the performance tradeoffs for various cell dropping policies. We show that the usual "rear dropping" policy, in which cells that arrive to a full buffer are lost, stochastically maximizes delay, while "front dropping," in which cells at the front of the buffer are lost, stochastically minimizes delay. On the other hand, rear dropping stochastically minimizes the jitter. We also propose policies that have both stochastically smaller delay and less lost-cell burstiness, in a stochastic majorization sense, than the rear dropping policy.
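The delay comparison can be illustrated with a small discrete-time simulation (an illustrative sketch, not from the paper): a single-server queue with Bernoulli arrivals, unit services, and buffer capacity K operates under either rear dropping (the arriving cell is lost when the buffer is full) or front dropping (the head-of-line cell is dropped to admit the arrival); the mean delay of transmitted cells should come out smaller under front dropping.

```python
import random
from collections import deque

def mean_delay(policy, p=0.95, K=10, slots=200_000, seed=1):
    """Discrete-time single-server queue with capacity K: at most one arrival
    (probability p) and one service completion per slot; returns the mean
    delay, in slots, of transmitted (non-lost) cells."""
    rng = random.Random(seed)
    buf = deque()                    # arrival slots of cells currently in the buffer
    delays = []
    for t in range(slots):
        if rng.random() < p:         # a cell arrives at the start of slot t
            if len(buf) < K:
                buf.append(t)
            elif policy == "front":  # drop the head-of-line cell, admit the arrival
                buf.popleft()
                buf.append(t)
            # under "rear" dropping the arriving cell is simply lost
        if buf:                      # transmit one cell at the end of slot t
            delays.append(t - buf.popleft())
    return sum(delays) / len(delays)

for policy in ("rear", "front"):
    print(policy, "dropping: mean delay", round(mean_delay(policy), 2))
```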
Optimal Load Balancing on Distributed Homogeneous Unreliable Processors
We consider optimal load balancing in a distributed computing environment with several homogeneous unreliable processors that have limited state information. Each processor receives its own arrival process of tasks from outside users, some of which can be redirected to the other processors. Processing times are iid and arbitrarily distributed. The arrival process of outside tasks to each processor may be arbitrary as long as it is independent of the state of the system. Processors may fail, with arbitrary failure and repair processes that are also independent of the state of the system. The only information available to a processor is the history of its decisions for routing work to other processors, and the arrival times for its own arrival process. We show that the round-robin policy, in which each processor sends the tasks that can be redirected to each of the processors in turn, stochastically minimizes task completion times, and minimizes response times and queue lengths in a separable increasing convex sense, among all policies that balance workload. We also show that if there is a single centralized controller, round-robin is the optimal policy, and a single controller using round-robin routing is better than the optimal distributed system, where "optimal" and "better" are in the sense of stochastically minimizing task completion times and minimizing response times and queue lengths in the separable increasing convex sense.
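For intuition about why cyclic splitting helps, here is a simplified sketch (a single central dispatcher and exponential servers, which is more restrictive than the paper's model): tasks from a Poisson stream are routed to identical FCFS servers either round-robin or uniformly at random, and a Lindley recursion estimates the mean response time under each rule.

```python
import random

def mean_response(policy, total_rate=3.6, m=4, n=200_000, seed=2):
    """Estimate the mean FCFS response time when tasks from a Poisson stream
    are routed to m identical exponential(1) servers, either round-robin
    ("rr") or uniformly at random ("random")."""
    rng = random.Random(seed)
    work = [0.0] * m                  # unfinished work in each queue (Lindley recursion)
    last = [0.0] * m                  # time of the last arrival routed to each queue
    t, total, rr_next = 0.0, 0.0, 0
    for _ in range(n):
        t += rng.expovariate(total_rate)
        q = rr_next if policy == "rr" else rng.randrange(m)
        rr_next = (rr_next + 1) % m
        work[q] = max(0.0, work[q] - (t - last[q]))   # work drained since the last arrival
        last[q] = t
        service = rng.expovariate(1.0)
        total += work[q] + service                    # response time of this task (FCFS)
        work[q] += service
    return total / n

for policy in ("rr", "random"):
    print(policy, round(mean_response(policy), 2))
```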
EUROPEAN CONFERENCE ON QUEUEING THEORY 2016
This booklet contains the proceedings of the second European Conference on Queueing Theory (ECQT), held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT, Toulouse, France. ECQT is a biannual event where scientists and technicians in queueing theory and related areas get together to promote research, encourage interaction, and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects, and applications. Another exciting feature of ECQT 2016 was the institution of the Takács Award for an outstanding PhD thesis on "Queueing Theory and its Applications".
Staffing decisions for heterogeneous workers with turnover
In this paper we consider a firm that employs heterogeneous workers to meet demand for its product or service. Workers differ in their skills, speed, and/or quality, and they randomly leave, or turn over. Each period the firm must decide how many workers of each type to hire or fire in order to meet randomly changing demand forecasts at minimal expense. When the number of workers of each type can be continuously varied, the operational cost is jointly convex in the number of workers of each type, hiring and firing costs are linear, and a random fraction of workers of each type leaves in each period, the optimal policy has a simple hire-up-to/fire-down-to structure. However, under the more realistic assumption that the number of workers of each type is discrete, the optimal policy is much more difficult to characterize, and depends on the particular notion of discrete convexity used for the cost function. We explore several different notions of discrete convexity and their impact on structural results for the optimal policy.
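In the continuous-workforce case, the hire-up-to/fire-down-to band structure can be sketched for a single worker type as follows; the thresholds, turnover distribution, and demand forecast below are invented for illustration and are not the paper's model.

```python
import random

def simulate(periods=12, seed=3):
    """Toy single-worker-type hire-up-to/fire-down-to policy; the thresholds
    here are an invented function of a random demand forecast."""
    rng = random.Random(seed)
    workers, history = 90.0, []
    for _ in range(periods):
        workers *= 1.0 - rng.uniform(0.0, 0.2)          # a random fraction turns over
        forecast = rng.uniform(70.0, 110.0)             # this period's demand forecast
        hire_up_to, fire_down_to = forecast - 5.0, forecast + 5.0
        if workers < hire_up_to:
            workers = hire_up_to                        # hire up to the lower threshold
        elif workers > fire_down_to:
            workers = fire_down_to                      # fire down to the upper threshold
        history.append(round(workers, 1))               # otherwise do nothing this period
    return history

print(simulate())
```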