
    A tight bound on the throughput of queueing networks with blocking

    In this paper, we present a bounding methodology that allows us to compute a tight lower bound on the cycle time of fork-join queueing networks with blocking and with general service time distributions. The methodology relies on two ideas. First, probability masses fitting (PMF) discretizes the service time distributions so that the evolution of the modified network can be modelled by a Markov chain. The PMF discretization is simple: the probability masses on regular intervals are computed and aggregated on a single value in the corresponding interval. Second, we take advantage of the concept of critical path, i.e. the sequence of jobs that covers a sample run. We show that the critical path can be computed with the discretized distributions and that the same sequence of jobs offers a lower bound on the original cycle time. The tightness of the bound is shown on computational experiments. Finally, we discuss the extension to split-and-merge networks and approximate estimations of the cycle time.
    Keywords: queueing networks, blocking, throughput, bound, probability masses fitting, critical path.
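    The PMF step described in the abstract lends itself to a compact illustration. Below is a minimal sketch, assuming SciPy is available and choosing the interval midpoint as the single aggregation value per interval (the abstract does not fix that choice); it is not the authors' implementation.

    ```python
    import numpy as np
    from scipy import stats

    def pmf_discretize(dist, step, n_intervals):
        """Illustrative probability-masses-fitting (PMF) discretization:
        compute the probability mass of each regular interval
        [k*step, (k+1)*step) and aggregate it on one support point
        in that interval (here, the midpoint)."""
        edges = np.arange(n_intervals + 1) * step
        masses = dist.cdf(edges[1:]) - dist.cdf(edges[:-1])
        masses[-1] += dist.sf(edges[-1])          # fold the right tail into the last interval
        points = (edges[:-1] + edges[1:]) / 2.0   # one representative value per interval
        return points, masses

    # Example: discretize an exponential service-time distribution with mean 1.
    points, masses = pmf_discretize(stats.expon(scale=1.0), step=0.25, n_intervals=20)
    print(points[:4], masses[:4], masses.sum())   # masses sum to 1 by construction
    ```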

    Correction. Brownian models of open processing networks: canonical representation of workload

    Due to a printing error the above mentioned article [Annals of Applied Probability 10 (2000) 75-103, doi:10.1214/aoap/1019737665] had numerous equations appearing incorrectly in the print version of this paper. The entire article follows as it should have appeared. IMS apologizes to the author and the readers for this error. A recent paper by Harrison and Van Mieghem explained in general mathematical terms how one forms an "equivalent workload formulation" of a Brownian network model. Denoting by Z(t) the state vector of the original Brownian network, one has a lower dimensional state descriptor W(t) = MZ(t) in the equivalent workload formulation, where M can be chosen as any basis matrix for a particular linear space. This paper considers Brownian models for a very general class of open processing networks, and in that context develops a more extensive interpretation of the equivalent workload formulation, thus extending earlier work by Laws on alternate routing problems. A linear program called the static planning problem is introduced to articulate the notion of "heavy traffic" for a general open network, and the dual of that linear program is used to define a canonical choice of the basis matrix M. To be specific, rows of the canonical M are alternative basic optimal solutions of the dual linear program. If the network data satisfy a natural monotonicity condition, the canonical matrix M is shown to be nonnegative, and another natural condition is identified which ensures that M admits a factorization related to the notion of resource pooling.
    Comment: Published at http://dx.doi.org/10.1214/105051606000000583 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
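    The equivalent workload formulation W(t) = MZ(t) is simply a linear projection of the network state; a toy illustration with a hypothetical 2x3 basis matrix M (not taken from the paper, where the rows would be basic optimal solutions of the dual linear program):

    ```python
    import numpy as np

    # Hypothetical basis matrix M: 2 workload dimensions, 3 buffers (illustrative only).
    M = np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.5, 1.0]])

    Z = np.array([4.0, 2.0, 3.0])   # illustrative state vector of the Brownian network
    W = M @ Z                       # lower-dimensional workload descriptor W(t) = M Z(t)
    print(W)                        # [6. 4.]
    ```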

    Asymptotic optimality of maximum pressure policies in stochastic processing networks

    We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89-148] and Williams [Queueing Systems Theory Appl. 30 (1998b) 5-25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks.
    Comment: Published at http://dx.doi.org/10.1214/08-AAP522 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
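    A maximum pressure (back-pressure-style) decision can be sketched as picking, at each decision epoch, the activity with the largest rate-weighted queue-length differential. The sketch below is a generic back-pressure rule with assumed data structures, not the paper's formulation.

    ```python
    import numpy as np

    def max_pressure_choice(queues, activities, mu):
        """Pick the activity with the largest 'pressure'.
        queues:     current queue lengths, indexed by buffer
        activities: list of (source_buffer, dest_buffer_or_None) pairs a free server could serve
        mu:         service rate of each activity
        Illustrative back-pressure rule; assumed for this sketch only."""
        best, best_pressure = None, -np.inf
        for a, (src, dst) in enumerate(activities):
            downstream = queues[dst] if dst is not None else 0.0
            pressure = mu[a] * (queues[src] - downstream)   # rate-weighted queue differential
            if pressure > best_pressure:
                best, best_pressure = a, pressure
        return best

    # Tandem example: activity 0 moves jobs from buffer 0 to buffer 1, activity 1 drains buffer 1.
    queues = np.array([5.0, 2.0])
    activities = [(0, 1), (1, None)]
    mu = np.array([1.0, 1.5])
    print(max_pressure_choice(queues, activities, mu))   # pressures 3.0 vs 3.0 -> activity 0 (first maximum)
    ```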

    Capacity Planning and Lead Time Management

    In this paper we discuss a framework for capacity planning and lead time management in manufacturing companies, with an emphasis on the machine shop. First we show how queueing models can be used to find approximations of the mean and the variance of manufacturing shop lead times. These quantities often serve as a basis to set a fixed planned lead time in an MRP-controlled environment. A major drawback of a fixed planned lead time is that it ignores the correlation between actual work loads and the lead times that can be realized under limited capacity flexibility. To overcome this problem, we develop a method that determines the earliest possible completion time of any arriving job, without sacrificing the delivery performance of any other job in the shop. This earliest completion time is then taken to be the delivery date and thereby determines a workload-dependent planned lead time. We compare this capacity planning procedure with a fixed planned lead time approach (as in MRP), with a procedure in which lead times are estimated based on the amount of work in the shop, and with a workload-oriented release procedure. Numerical experiments so far show excellent performance of the capacity planning procedure.
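    As a concrete illustration of the first step, here is a minimal sketch of a textbook M/M/1-style approximation for the mean and variance of lead time; the paper's shop-level approximations are more general, so this stand-in is an assumption for illustration only.

    ```python
    def mm1_leadtime(arrival_rate, service_rate):
        """Mean and variance of the sojourn (lead) time in an M/M/1 queue.
        For M/M/1 the sojourn time is exponential with rate (mu - lambda),
        so mean = 1/(mu - lambda) and variance = 1/(mu - lambda)**2.
        Used purely as an assumed stand-in for a shop-level lead time model."""
        if arrival_rate >= service_rate:
            raise ValueError("queue is unstable: arrival rate must be below service rate")
        gap = service_rate - arrival_rate
        return 1.0 / gap, 1.0 / gap**2

    mean, var = mm1_leadtime(arrival_rate=4.0, service_rate=5.0)   # jobs per hour
    print(mean, var)   # mean lead time 1.0 hour, variance 1.0
    ```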

    Performance Bounds for Scheduling Queueing Networks

    The goal of this paper is to assess the improvement in performance that might be achieved by optimally scheduling a multiclass open queueing network. A stochastic process is defined whose steady-state mean value is less than or equal to the mean number of customers in a queueing network under any arbitrary scheduling policy. Thus, this process offers a lower bound on performance when the objective of the queueing network scheduling problem is to minimize the mean number of customers in the network. Since this bound is easily obtained from a computer simulation model of a queueing network, its main use is to aid job-shop schedulers in determining how much further improvement (relative to their proposed policies) might be achievable from scheduling. Through computational examples, we identify some factors that affect the tightness of the bound.
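    The bound is obtained as the steady-state mean of a process observed in simulation. The abstract does not specify that process, so the following is only a generic sketch of how a steady-state mean and its standard error might be estimated from a simulated sample path by batch means, with a placeholder input series.

    ```python
    import numpy as np

    def batch_means(samples, n_batches=10, warmup=0.1):
        """Estimate a steady-state mean and its standard error from one simulated
        sample path using the method of batch means. 'samples' is assumed to be
        the bounding process observed at regular epochs."""
        x = np.asarray(samples, dtype=float)
        x = x[int(len(x) * warmup):]                 # discard the warm-up transient
        x = x[: (len(x) // n_batches) * n_batches]   # truncate so batches are equal-sized
        means = x.reshape(n_batches, -1).mean(axis=1)
        return means.mean(), means.std(ddof=1) / np.sqrt(n_batches)

    rng = np.random.default_rng(0)
    path = rng.exponential(3.0, size=10_000)         # placeholder sample path for illustration
    print(batch_means(path))                          # estimate close to 3.0
    ```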

    Load Balancing in the Non-Degenerate Slowdown Regime

    We analyse Join-the-Shortest-Queue in a contemporary scaling regime known as the Non-Degenerate Slowdown regime. Join-the-Shortest-Queue (JSQ) is a classical load balancing policy for queueing systems with multiple parallel servers. Parallel server queueing systems are regularly analysed and dimensioned by diffusion approximations achieved in the Halfin-Whitt scaling regime. However, when jobs must be dispatched to a server upon arrival, we advocate the Non-Degenerate Slowdown regime (NDS) to compare different load-balancing rules. In this paper we identify a novel diffusion approximation and timescale separation that provide insights into the performance of JSQ. We calculate the price of irrevocably dispatching jobs to servers and prove it to be within 15% (in the NDS regime) of rules that may manoeuvre jobs between servers. We also compare our results for the JSQ policy with the NDS approximations of many modern load balancing policies, such as Idle-Queue-First and Power-of-d-choices, which act as low information proxies for the JSQ policy. Our analysis leads us to construct new rules that have identical performance to JSQ but require less communication overhead than power-of-2-choices.
    Comment: Revised journal submission version
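    A minimal sketch of the two dispatching rules being compared, Join-the-Shortest-Queue and power-of-d choices; the function names and parameters are illustrative, not the paper's.

    ```python
    import random

    def jsq(queue_lengths):
        """Join-the-Shortest-Queue: dispatch to a server with the fewest jobs."""
        return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

    def power_of_d(queue_lengths, d=2, rng=random):
        """Power-of-d choices: sample d servers uniformly at random and dispatch
        to the shortest of the sampled ones, a low-information proxy for JSQ."""
        candidates = rng.sample(range(len(queue_lengths)), d)
        return min(candidates, key=lambda i: queue_lengths[i])

    queues = [3, 0, 5, 2]
    print(jsq(queues))              # 1: the globally shortest queue
    print(power_of_d(queues, d=2))  # shortest among two randomly sampled queues
    ```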

    Configuration of Distributed Message Converter Systems using Performance Modeling

    Finding a configuration of a distributed system that satisfies performance goals is a complex search problem involving many design parameters, such as hardware selection, job distribution, and process configuration. Performance models are powerful tools for analysing potential system configurations; however, their evaluation is expensive, so only a limited number of possible configurations can be evaluated. In this paper we present a systematic method to find a satisfactory configuration with feasible effort, based on a two-step approach. First, a hardware configuration is determined using performance estimates; then the software configuration is incrementally optimized by evaluating Layered Queueing Network models. We applied this method to the design of performant EDI converter systems in the financial domain, where increasing message volumes need to be handled due to the growing importance of B2B interaction.
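    The second step, incremental optimization of the software configuration, can be sketched as a simple greedy search over configuration parameters with the performance-model evaluation stubbed out. The evaluation function and parameters below are assumptions for illustration; in the paper's setting the evaluation would come from a Layered Queueing Network model.

    ```python
    def greedy_configure(initial_config, neighbours, evaluate, max_steps=50):
        """Greedy incremental search: at each step, move to the best neighbouring
        configuration if it improves the (expensive) performance estimate."""
        config, cost = initial_config, evaluate(initial_config)
        for _ in range(max_steps):
            candidates = [(evaluate(c), c) for c in neighbours(config)]
            best_cost, best = min(candidates, key=lambda t: t[0])
            if best_cost >= cost:
                break                      # no neighbour improves: stop
            config, cost = best, best_cost
        return config, cost

    # Toy usage: tune the number of converter processes against a made-up response-time model.
    evaluate = lambda n: 1.0 / n + 0.05 * n                  # queueing delay vs. overhead trade-off
    neighbours = lambda n: [m for m in (n - 1, n + 1) if m >= 1]
    print(greedy_configure(4, neighbours, evaluate))         # (4, 0.45): no neighbour improves on this toy optimum
    ```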