    Asymptotic optimality of maximum pressure policies in stochastic processing networks

    Full text link
    We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89--148] and Williams [Queueing Systems Theory Appl. 30 (1998b) 5--25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks. Comment: Published at http://dx.doi.org/10.1214/08-AAP522 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
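
    The max-pressure rule itself is simple to state: at each decision epoch, a resource runs the activity whose service rate times the queue-length differential it would create is largest. The sketch below illustrates that rule for a single resource choosing among candidate activities; the Activity class, the function names, and the two-buffer tandem example are illustrative, not taken from the paper.

```python
# Minimal sketch of a max-pressure dispatch rule for one resource choosing
# among candidate activities. Names (Activity, choose_activity) and the
# two-buffer tandem example are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    rate: float           # service rate if this activity is run
    source: int           # buffer the activity drains
    dest: int | None      # downstream buffer it feeds (None = job leaves network)

def pressure(activity: Activity, queues: list[float]) -> float:
    """Pressure = service rate times the queue-length differential it creates."""
    downstream = queues[activity.dest] if activity.dest is not None else 0.0
    return activity.rate * (queues[activity.source] - downstream)

def choose_activity(activities: list[Activity], queues: list[float]) -> Activity:
    """Max-pressure rule: run the activity with the largest pressure.
    (A fuller implementation would idle when every pressure is non-positive.)"""
    return max(activities, key=lambda a: pressure(a, queues))

# Example: buffer 0 feeds buffer 1, buffer 1 exits the network.
queues = [7.0, 3.0]
activities = [
    Activity("serve buffer 0", rate=1.0, source=0, dest=1),
    Activity("serve buffer 1", rate=0.8, source=1, dest=None),
]
print(choose_activity(activities, queues).name)   # -> "serve buffer 0"
```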

    A Simple, Practical Prioritization Scheme for a Job Shop Processing Multiple Job Types

    Get PDF
    The maintenance, repair, and overhaul (MRO) process is used to recondition equipment in the railroad, off-shore drilling, aircraft, and shipping industries. In the typical MRO process, the equipment is disassembled into component parts and these parts are routed to back-shops for repair. Repaired parts are returned for reassembling the equipment. Scheduling the back-shop for smooth flow often requires prioritizing the repair of component parts from different original assemblies at different machines. To enable such prioritization, we model the back-shop as a multi-class queueing network with a ConWIP execution system and introduce a new priority scheme to maximize the system performance. In this scheme, we identify the bottleneck machine based on overall workload and classify machines into two categories: the bottleneck machine and the non-bottleneck machine(s). Assemblies with the lowest cycle time receive the highest priority on the bottleneck machine and the lowest priority on non-bottleneck machine(s). Our experimental results show that this priority scheme increases the system performance by lowering the average cycle times without adversely impacting the total throughput. The contribution of this thesis consists primarily of three parts. First, we develop a simple priority scheme for multi-class, multi-server, ConWIP queueing systems with the disassembly/reassembly feature so that schedulers for a job-shop environment would be able to know which part should be given priority, in what order and where. Next, we provide an exact analytical solution to a two-class, two-server closed queueing model with mixed non-preemptive priority scheme. The queueing network model we study has not been analyzed in the literature, and there are no existing models that address the underlying problem of deciding prioritization by job types to maximize the system performance. Finally, we explore conditions under which the non-preemptive priority discipline can be approximated by a preemptive priority discipline
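
    As a rough illustration of the prioritization idea (not the thesis's exact algorithm), the sketch below identifies the bottleneck machine by total workload and then orders job types by cycle time, shortest first at the bottleneck and reversed at the non-bottleneck machines; the demand table and cycle times are hypothetical.

```python
# Hedged sketch of the bottleneck-based priority scheme described above.
# The data structures and the numbers are hypothetical, not from the thesis.

def total_workload(machine, demand):
    """Workload of a machine = sum over job types of visits x processing time."""
    return sum(visits * proc_time for visits, proc_time in demand[machine].values())

def build_priorities(demand, cycle_times):
    machines = list(demand)
    bottleneck = max(machines, key=lambda m: total_workload(m, demand))
    # Rank job types by observed cycle time, shortest first.
    ranked = sorted(cycle_times, key=cycle_times.get)
    priorities = {}
    for m in machines:
        # Shortest-cycle-time jobs go first at the bottleneck, last elsewhere.
        priorities[m] = ranked if m == bottleneck else list(reversed(ranked))
    return bottleneck, priorities

# demand[machine][job_type] = (expected visits, mean processing time)
demand = {
    "M1": {"A": (2, 1.0), "B": (1, 0.5)},
    "M2": {"A": (1, 0.8), "B": (3, 1.2)},
}
cycle_times = {"A": 4.0, "B": 9.0}   # job type A currently cycles fastest
bottleneck, priorities = build_priorities(demand, cycle_times)
print(bottleneck, priorities)        # M2 is the bottleneck; A first at M2, last at M1
```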

    Separation of timescales in a two-layered network

    Full text link
    We investigate a computer network consisting of two layers occurring in, for example, application servers. The first layer incorporates the arrival of jobs at a network of multi-server nodes, which we model as a many-server Jackson network. At the second layer, active servers at these nodes now act as customers who are served by a common CPU. Our main result shows a separation of time scales in heavy traffic: the main source of randomness occurs at the (aggregate) CPU layer; the interactions between different types of nodes at the other layer are shown to converge to a fixed point at a faster time scale; this also yields a state-space collapse property. Apart from these fundamental insights, we also obtain an explicit approximation for the joint law of the number of jobs in the system, which is provably accurate for heavily loaded systems and performs numerically well for moderately loaded systems. The obtained results for the model under consideration can be applied to thread-pool dimensioning in application servers, while the technique seems applicable to other layered systems too. Comment: 8 pages, 2 figures, 1 table, ITC 24 (2012).
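
    The sketch below is a toy, time-stepped simulation of the layered structure described above: jobs arrive at a multi-server node (layer 1), and each busy server competes for a common CPU under processor sharing (layer 2), so its effective service rate is the CPU capacity divided by the number of busy servers. It is an illustrative simplification, not the authors' model or analysis, and all parameter values are hypothetical.

```python
# Toy two-layer queue: multi-server node on top of a processor-sharing CPU.
# Everything here (parameters, dynamics) is an illustrative simplification.
import random

def simulate(lam=4.0, servers=10, cpu_capacity=5.0, mean_work=1.0,
             dt=0.001, horizon=500.0, seed=1):
    random.seed(seed)
    in_service = []          # remaining CPU work of each job currently in service
    queue = 0                # jobs waiting for a free server
    samples = []
    t = 0.0
    while t < horizon:
        # Layer 1: Poisson arrivals, thinned to the time step.
        if random.random() < lam * dt:
            queue += 1
        # Admit waiting jobs while servers are free.
        while queue > 0 and len(in_service) < servers:
            in_service.append(random.expovariate(1.0 / mean_work))
            queue -= 1
        # Layer 2: busy servers share the CPU equally (processor sharing).
        if in_service:
            per_server = cpu_capacity * dt / len(in_service)
            in_service = [w - per_server for w in in_service if w > per_server]
        samples.append(len(in_service) + queue)
        t += dt
    return sum(samples) / len(samples)

print("average number of jobs in system:", round(simulate(), 2))
```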

    A cross-layer approach for WLAN voice capacity planning

    Full text link

    Proportional switching in FIFO networks

    Get PDF
    We consider a family of discrete time multihop switched queueing networks where each packet moves along a fixed route. In this setting, BackPressure is the canonical choice of scheduling policy; this policy has the virtues of possessing a maximal stability region and not requiring explicit knowledge of traffic arrival rates. BackPressure has certain structural weaknesses because implementation requires information about each route, and queueing delays can grow super-linearly with route length. For large networks, where packets over many routes are processed by a queue, or where packets over a route are processed by many queues, these limitations can be prohibitive. In this article, we introduce a scheduling policy for FIFO networks, the Proportional Scheduler, which is based on the proportional fairness criterion. We show that, like BackPressure, the Proportional Scheduler has a maximal stability region and does not require explicit knowledge of traffic arrival rates. The Proportional Scheduler has the advantage that information about the network's route structure is not required for scheduling, which substantially improves the policy's performance for large networks. For instance, packets can be routed with only next-hop information and new nodes can be added to the network with only knowledge of the scheduling constraints. The research of the first author was partially supported by NSF grants DMS-1105668 and DMS-1203201. The research of the second author was partially supported by the Spanish Ministry of Economy and Competitiveness Grants MTM2013-42104-P via FEDER funds; he thanks the ICMAT (Madrid, Spain) Research Institute that kindly hosted him while developing this project.

    Proportional Switching in First-in, First-out Networks

    Get PDF
    We consider a family of discrete time multihop switched queueing networks where each packet moves along a fixed route. In this setting, BackPressure is the canonical choice of scheduling policy; this policy has the virtues of possessing a maximal stability region and not requiring explicit knowledge of traffic arrival rates. BackPressure has certain structural weaknesses because implementation requires information about each route, and queueing delays can grow super-linearly with route length. For large networks, where packets over many routes are processed by a queue, or where packets over a route are processed by many queues, these limitations can be prohibitive. In this article, we introduce a scheduling policy for first-in, first-out networks, the Proportional Scheduler, which is based on the proportional fairness criterion. We show that, like BackPressure, the Proportional Scheduler has a maximal stability region and does not require explicit knowledge of traffic arrival rates. The Proportional Scheduler has the advantage that information about the network's route structure is not required for scheduling, which substantially improves the policy's performance for large networks. For instance, packets can be routed with only next-hop information and new nodes can be added to the network with only knowledge of the scheduling constraints.
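
    To make the proportional fairness criterion concrete: for the simplest constraint set, a single shared resource of capacity C, maximizing sum_j Q_j log x_j subject to sum_j x_j <= C has the closed-form solution x_j = C * Q_j / sum_k Q_k; general scheduling polytopes require a convex solver instead. The sketch below covers only that special case, and the function name and example are illustrative. Note that only per-queue lengths are used, with no route information, which is the property highlighted above.

```python
# Hedged sketch of the proportional-fairness allocation for queues sharing a
# single resource; the general policy solves the same log-utility problem over
# the network's scheduling constraints with a convex solver.

def proportional_rates(queue_lengths, capacity=1.0):
    """Proportionally fair service rates for queues sharing one resource."""
    total = sum(queue_lengths)
    if total == 0:
        return [0.0] * len(queue_lengths)
    return [capacity * q / total for q in queue_lengths]

# Example: three FIFO queues at a node. Longer queues get more service, but
# every non-empty queue gets some, unlike a strict-priority rule.
print(proportional_rates([6, 3, 1]))   # -> [0.6, 0.3, 0.1]
```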

    Learning Queuing Networks by Recurrent Neural Networks

    Full text link
    It is well known that building analytical performance models in practice is difficult because it requires a considerable degree of proficiency in the underlying mathematics. In this paper, we propose a machine-learning approach to derive performance models from data. We focus on queuing networks, and crucially exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations. We encode these equations into a recurrent neural network whose weights can be directly related to model parameters. This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model that can be used for prediction purposes such as what-if analyses and capacity planning. Using synthetic models as well as a real case study of a load-balancing system, we show the effectiveness of our technique in yielding models with high predictive power
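
    A minimal sketch of the encoding idea, under simplifying assumptions: a single-station fluid ODE dx/dt = lam - mu * min(x, s) is discretized with an Euler step inside a recurrent cell whose trainable weight is the service rate mu, and the cell is fitted to an observed queue-length trajectory. The architecture, names, and training loop are illustrative, not the authors' exact model.

```python
# Illustrative recurrent cell that encodes an Euler step of a single-station
# fluid ODE, dx/dt = lam - mu * min(x, s), with the service rate mu as its
# trainable weight. Not the authors' exact architecture.
import torch

class FluidQueueCell(torch.nn.Module):
    def __init__(self, servers: int, dt: float = 0.1):
        super().__init__()
        self.log_mu = torch.nn.Parameter(torch.zeros(()))  # mu = exp(log_mu) > 0
        self.servers = servers
        self.dt = dt

    def forward(self, x, lam):
        mu = self.log_mu.exp()
        cap = torch.tensor(float(self.servers))
        # Euler step of the fluid ODE; clamp keeps the queue length nonnegative.
        return torch.clamp(x + self.dt * (lam - mu * torch.minimum(x, cap)), min=0.0)

def rollout(cell, x0, lam, steps):
    xs, x = [], x0
    for _ in range(steps):
        x = cell(x, lam)
        xs.append(x)
    return torch.stack(xs)

# Generate a synthetic trajectory with a "true" service rate of 1.5 ...
true_cell = FluidQueueCell(servers=2)
true_cell.log_mu.data = torch.tensor(1.5).log()
target = rollout(true_cell, torch.tensor(5.0), lam=2.0, steps=100).detach()

# ... then fit mu from that trajectory by gradient descent on the rollout error.
model = FluidQueueCell(servers=2)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(rollout(model, torch.tensor(5.0), 2.0, 100), target)
    loss.backward()
    opt.step()
print("estimated service rate:", model.log_mu.exp().item())  # should be near 1.5
```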