    On the Improvement From Scheduling a Two-Station Queueing Network in Heavy Traffic

    For a two-station multiclass queueing network in heavy traffic, we assess the improvement that scheduling (job release and priority sequencing) can achieve relative to Poisson input and first-come first-served (FCFS) sequencing. In particular, simple upper bounds are derived on the optimal objective function value (found in Wein 1989a) of a Brownian control problem that approximates (via Harrison's 1988 model) a two-station queueing network scheduling problem in heavy traffic. When the system is perfectly balanced, the Brownian analysis predicts that optimal scheduling will reduce the long run expected average number of customers in the network by at least a factor of four relative to the Poisson input, FCFS sequencing policy that achieves the same throughput rate. When the system is not perfectly balanced, the corresponding factor is slightly smaller than two.
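
    As a rough numerical illustration of the factor-of-four claim (not taken from the paper): if the balanced benchmark is treated as a product-form network with exponential service, its expected number in network is 2*rho/(1-rho), and the heavy-traffic prediction caps the optimally scheduled value at roughly a quarter of that. The utilization values and the exponential-service assumption in the sketch below are mine.

```python
# Illustrative only: a balanced two-station benchmark with Poisson input,
# exponential service, and FCFS behaves like a product-form (Jackson) network,
# so the expected number in the network is 2 * rho / (1 - rho). The
# factor-of-four reduction is the heavy-traffic prediction quoted in the
# abstract for the perfectly balanced case; the utilizations are assumptions.

def fcfs_benchmark(rho: float, stations: int = 2) -> float:
    """Expected number in a balanced product-form network under FCFS."""
    return stations * rho / (1.0 - rho)

for rho in (0.90, 0.95, 0.99):
    benchmark = fcfs_benchmark(rho)
    predicted_ceiling = benchmark / 4.0   # "at least a factor of four" claim
    print(f"rho={rho:.2f}  FCFS benchmark={benchmark:6.1f}  "
          f"implied ceiling under optimal scheduling <= {predicted_ceiling:6.1f}")
```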

    Performance Bounds for Scheduling Queueing Networks

    The goal of this paper is to assess the improvement in performance that might be achieved by optimally scheduling a multiclass open queueing network. A stochastic process is defined whose steady-state mean value is less than or equal to the mean number of customers in a queueing network under any arbitrary scheduling policy. Thus, this process offers a lower bound on performance when the objective of the queueing network scheduling problem is to minimize the mean number of customers in the network. Since this bound is easily obtained from a computer simulation model of a queueing network, its main use is to aid job-shop schedulers in determining how much further improvement (relative to their proposed policies) might be achievable from scheduling. Through computational examples, we identify some factors that affect the tightness of the bound.
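
    The bound process itself is the paper's construction and is not reproduced here; the sketch below only shows the quantity it is compared against, namely a simulation estimate of the long-run average number of customers in a small network under a given policy (here a two-station tandem with Poisson arrivals, exponential service, and FCFS, all parameters assumed).

```python
import random

# Minimal discrete-event sketch (assumed parameters, FCFS tandem network): it
# estimates the time-average number of customers in the network, i.e. the
# performance figure that the paper's simulation-based lower bound is compared
# against for a proposed scheduling policy.

def simulate_tandem(lam=0.9, mu1=1.0, mu2=1.0, horizon=100_000, seed=1):
    rng = random.Random(seed)
    t = 0.0
    q1 = q2 = 0                        # customers at each station (incl. in service)
    next_arrival = rng.expovariate(lam)
    next_dep1 = next_dep2 = float("inf")
    area = 0.0                         # integral of (q1 + q2) over time

    while t < horizon:
        t_next = min(next_arrival, next_dep1, next_dep2)
        area += (q1 + q2) * (t_next - t)
        t = t_next
        if t == next_arrival:          # external arrival to station 1
            q1 += 1
            next_arrival = t + rng.expovariate(lam)
            if q1 == 1:
                next_dep1 = t + rng.expovariate(mu1)
        elif t == next_dep1:           # station 1 completion, move to station 2
            q1 -= 1
            q2 += 1
            next_dep1 = t + rng.expovariate(mu1) if q1 > 0 else float("inf")
            if q2 == 1:
                next_dep2 = t + rng.expovariate(mu2)
        else:                          # station 2 completion, customer leaves
            q2 -= 1
            next_dep2 = t + rng.expovariate(mu2) if q2 > 0 else float("inf")
    return area / t

print("estimated long-run average number in network:", simulate_tandem())
```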

    The Impact of Processing Time Knowledge on Dynamic Job-Shop Scheduling

    The goal of this paper is to determine if the results for dynamic job-shop scheduling problems are affected by the assumptions made with regard to the processing time distributions and the scheduler's knowledge of the processing times. Three dynamic job-shop scheduling problems (including a two-station version of Conway et al.'s [2] nine-station symmetric shop) are tested under seven different scenarios, one deterministic and six stochastic, using computer simulation. The deterministic scenario, where the processing times are exponential and observed by the scheduler, has been considered in many simulation studies, including Conway et al.'s. The six stochastic scenarios include the case where the processing times are exponential and only the mean is known to the scheduler, and five different cases where the machines are subject to unpredictable failures. Two policies were tested: the shortest expected processing time (SEPT) rule and a rule derived from a Brownian analysis of the corresponding queueing network scheduling problem. Although the SEPT rule performed well in the deterministic scenario, it was easily outperformed by the Brownian policies in the six stochastic scenarios for all three problems. Thus, the results from simulation studies of dynamic, deterministic job-shop scheduling problems do not necessarily carry over to the more realistic setting where unpredictable variability is present.
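
    A minimal sketch of the SEPT dispatching logic referred to above: whenever a machine frees up, it serves the waiting job with the smallest expected processing time at that machine. The job representation below is an assumption made only for illustration.

```python
from dataclasses import dataclass

# Sketch of the shortest-expected-processing-time (SEPT) rule: the scheduler
# sees only the mean processing time of each waiting job and dispatches the
# job whose mean is smallest. The Job fields are illustrative assumptions.

@dataclass
class Job:
    job_id: int
    expected_proc_time: float   # mean of the (possibly unobserved) processing time

def sept_pick(queue: list[Job]) -> Job:
    """Return the job SEPT would dispatch next (queue must be non-empty)."""
    return min(queue, key=lambda job: job.expected_proc_time)

queue = [Job(1, 4.0), Job(2, 1.5), Job(3, 2.7)]
print("SEPT dispatches job", sept_pick(queue).job_id)   # -> job 2
```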

    Dynamic Scheduling of a Production/Inventory System with By-Products and Random Yield

    Motivated by semiconductor wafer fabrication, we consider a scheduling problem for a single-server multiclass queue. A single workstation fabricates semiconductor wafers according to a variety of different processes, where each process consists of multiple stages of service with a different general service time distribution at each stage. A batch (or lot) of wafers produced according to a particular process randomly yields chips of many different product types, and completed chips of each type enter a finished goods inventory that services exogenous customer demand for that type. The scheduling problem is to dynamically decide whether the server should be idle or working, and in the latter case, to decide which stage of which process type to serve next. The objective is to minimize the long run expected average cost, which includes costs for holding work-in-process inventory (which may differ by process type and service stage) and for backordering and holding finished goods inventory (which may differ by product type). We assume the workstation must be busy the great majority of the time in order to satisfy customer demand, and approximate the scheduling problem by a control problem involving Brownian motion. A scheduling policy is derived by interpreting the exact solution to the Brownian control problem in terms of the production/inventory system. The proposed dynamic scheduling policy takes a relatively simple form and appears to be effective in numerical studies.

    Monotone Control of Queueing and Production/Inventory Systems

    Weber and Stidham (1987) used submodularity to establish transition monotonicity (a service completion at one station cannot reduce the service rate at another station) for Markovian queueing networks that meet certain regularity conditions and are controlled to minimize service and queueing costs. We give an extension of monotonicity to other directions in the state space, such as arrival transitions, and to arrival routing problems. The conditions used to establish monotonicity, which deal with the boundary of the state space, are easily verified for many queueing systems. We also show that, without service costs, transition-monotone controls can be described by simple control regions and switching functions, extending earlier results. The theory is applied to production/inventory systems with holding costs at each stage and finished goods backorder costs.
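
    A small sketch of what "described by simple control regions and switching functions" can mean computationally: given a 0/1 action table over a two-dimensional state grid (for example, the output of a dynamic program), recover the switching function and check that it is monotone. The grid, the actions, and the direction of monotonicity below are invented for illustration.

```python
# Sketch: a transition-monotone control on a two-dimensional state space can be
# summarised by a switching function s(x1): act (e.g. serve or produce) exactly
# when x2 < s(x1). Below, `policy[x1][x2]` is a hypothetical 0/1 action table,
# and we recover s and verify that it is nondecreasing in x1 (an assumed
# direction of monotonicity, used only for illustration).

def switching_function(policy):
    """For each x1, the smallest x2 at which the action switches from 1 to 0."""
    curve = []
    for row in policy:
        try:
            curve.append(row.index(0))
        except ValueError:            # action is 1 on the whole row
            curve.append(len(row))
    return curve

def is_monotone(curve):
    return all(a <= b for a, b in zip(curve, curve[1:]))

policy = [
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
]
s = switching_function(policy)
print("switching function:", s, "monotone:", is_monotone(s))
```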

    Scheduling Networks of Queues: Heavy Traffic Analysis of a Multistation Closed Network

    We consider the problem of finding an optimal dynamic priority sequencing policy to maximize the mean throughput rate in a multistation, multiclass closed queueing network with general service time distributions and a general routing structure. Under balanced heavy loading conditions, this scheduling problem can be approximated by a control problem involving Brownian motion. Although a unique, closed-form solution to the Brownian control problem is not derived, an analysis of the problem leads to an effective static sequencing policy, and to an approximate means of comparing the relative performance of arbitrary static policies. Three examples are given that illustrate the effectiveness of our procedure.

    Scheduling a Make-To-Stock Queue: Index Policies and Hedging Points

    A single machine produces several different classes of items in a make-to-stock mode. We consider the problem of scheduling the machine to regulate finished goods inventory, minimizing either holding and backorder costs or holding and lost sales costs. Demands are Poisson, service times are exponentially distributed, and there are no delays or costs associated with switching products. A scheduling policy dictates whether the machine is idle or busy, and specifies the job class to serve in the latter case. Since the optimal solution can only be numerically computed for problems with several products, our goal is to develop effective policies that are computationally tractable for a large number of products. We develop index policies to decide which class to serve, including Whittle's "restless bandit" index, which possesses a certain asymptotic optimality. Several idleness policies, which are characterized by hedging points, are derived, and the best policy is obtained from a heavy traffic diffusion approximation. Nine sample problems are considered in a numerical study, and the average suboptimality of the best policy is less than 3%.
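
    The sketch below illustrates the structure described above, an index to choose which class to serve plus per-class hedging points that trigger idleness, using a simple backorder-cost-weighted index as a stand-in; it is not Whittle's restless-bandit index from the paper, and all parameter values are assumed.

```python
# Sketch of an index-plus-hedging-point policy for a multiclass make-to-stock
# machine. The hedging point z_k acts as a target safety stock for class k: the
# machine idles when every class is at or above its hedging point; otherwise it
# serves the class with the largest index. The index below (backorder cost rate
# times service rate, scaled by the shortfall from the hedging point) is an
# illustrative stand-in, NOT Whittle's restless-bandit index.

def next_action(inventory, hedging, b, mu):
    """Return 'idle' or the class index to serve next."""
    shortfall = [max(z - x, 0) for x, z in zip(inventory, hedging)]
    if all(s == 0 for s in shortfall):
        return "idle"
    indices = [b_k * mu_k * s_k for b_k, mu_k, s_k in zip(b, mu, shortfall)]
    return max(range(len(indices)), key=indices.__getitem__)

# Hypothetical 3-class instance: current inventories, hedging points,
# backorder cost rates, and service rates.
print(next_action(inventory=[2, -1, 4], hedging=[3, 2, 4],
                  b=[1.0, 5.0, 2.0], mu=[1.2, 0.8, 1.0]))   # -> class index 1
```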

    Optimal Control of a Two-Station Tandem Production/Inventory System

    A manufacturing facility consisting of two stations in tandem operates in a make-to-stock mode: after production, items are placed in a finished goods inventory that services an exogenous demand. Demand that cannot be met from inventory is backordered. Each station is modelled as a queue with controllable production rate, and the problem is to control these rates to minimize inventory holding and backordering costs. Optimal controls are computed using dynamic programming and compared with kanban and buffer control mechanisms, popular in manufacturing, and with the base stock mechanism popular in inventory/distribution systems. Conditions are found under which certain simple controls are optimal using stochastic coupling arguments. Insights are gained into when to hold work-in-process and finished goods inventory, comparable to previous studies of production lines in make-to-order and unlimited demand ("push") environments.
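
    A compact value-iteration sketch of the dynamic-programming computation mentioned above, under assumptions of my own (Poisson demand, exponential on/off production at each station, discounted cost, uniformization, and a truncated state grid); it is not the paper's exact formulation.

```python
import numpy as np

# Value-iteration sketch (all numbers assumed): two stations in tandem, Poisson
# demand at rate lam, exponential production with on/off control at rates mu1
# and mu2. x1 is the intermediate buffer, x2 is finished-goods net inventory
# (negative values are backorders); costs are h1 on x1, h2 on positive x2, and
# p on backorders. Continuous time is handled by uniformization and the state
# space is truncated, so this only illustrates the computation.

lam, mu1, mu2 = 1.0, 1.3, 1.3          # demand and production rates
h1, h2, p = 1.0, 2.0, 10.0             # WIP holding, FG holding, backorder costs
alpha = 0.1                            # discount rate
X1_MAX, X2_MIN, X2_MAX = 20, -20, 20   # truncation of the state grid

Lam = lam + mu1 + mu2                  # uniformization constant
x1 = np.arange(0, X1_MAX + 1)
x2 = np.arange(X2_MIN, X2_MAX + 1)
stage_cost = h1 * x1[:, None] + h2 * np.maximum(x2, 0) + p * np.maximum(-x2, 0)

V = np.zeros((x1.size, x2.size))
for _ in range(2000):
    run1 = np.vstack([V[1:, :], V[-1:, :]])        # station 1 produces: x1 + 1
    move = np.hstack([V[:-1, 1:], V[:-1, -1:]])    # (x1 - 1, x2 + 1) for x1 >= 1
    run2 = np.vstack([V[:1, :], move])             # x1 = 0: station 2 cannot run
    demand = np.hstack([V[:, :1], V[:, :-1]])      # x2 - 1 (reflected at bound)
    V = (stage_cost
         + mu1 * np.minimum(run1, V)               # optimal on/off at station 1
         + mu2 * np.minimum(run2, V)               # optimal on/off at station 2
         + lam * demand) / (alpha + Lam)

# Recover the (truncated-grid) controls: run a station where doing so strictly
# lowers the value function.
run1 = np.vstack([V[1:, :], V[-1:, :]])
run2 = np.vstack([V[:1, :], np.hstack([V[:-1, 1:], V[:-1, -1:]])])
print("station 1 runs in", int((run1 < V).sum()), "of", V.size, "grid states;",
      "station 2 runs in", int((run2 < V).sum()), "states")
```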

    Analysis of a Decentralized Production-Inventory System

    We model an isolated portion of a competitive supply chain as an M/M/1 make-to-stock queue. The retailer carries finished goods inventory to service a Poisson demand process, and specifies a policy for replenishing his inventory from an upstream supplier. The supplier chooses the service rate, i.e., the capacity of his manufacturing facility, which behaves as a single-server queue with exponential service times. Demand is backlogged and both agents share the backorder cost. In addition, a linear inventory holding cost is charged to the retailer, and a linear cost for building production capacity is incurred by the supplier. The inventory level, demand rate, and cost parameters are common knowledge to both agents. Under the continuous-state approximation where the M/M/1 queue has an exponential rather than geometric steady-state distribution, we characterize the optimal centralized and Nash solutions, and show that a contract with linear transfer payments replicates a cost-sharing agreement and coordinates the system. We also compare the total system costs, the agents' decision variables, and the customer service levels of the centralized versus Nash versus Stackelberg solutions. (Make-to-Stock Queue; Game Theory)
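
    A worked sketch of the centralized base-stock calculation under the continuous-state (exponential) approximation mentioned above: treating outstanding orders as exponential, the newsvendor-style first-order condition gives the base stock in closed form. The exponential rate (chosen here to match the M/M/1 mean) and the cost numbers are assumptions, and the paper's game-theoretic comparisons are not reproduced.

```python
import math

# Sketch, under assumptions: in the M/M/1 make-to-stock queue, outstanding
# orders N are geometric with mean rho/(1 - rho). Replacing N by an exponential
# X with the same mean (the continuous-state approximation named in the
# abstract), the cost of base stock s is
#   C(s) = h * E[(s - X)^+] + b * E[(X - s)^+],
# and the newsvendor-style optimum is s* = ln((h + b) / h) / nu, where nu is
# the exponential rate. Parameter values below are invented.

lam, mu = 0.9, 1.0          # demand and service rates
h, b = 1.0, 9.0             # holding and backorder cost rates
rho = lam / mu
nu = (1.0 - rho) / rho      # exponential rate matching mean rho/(1 - rho)

s_star = math.log((h + b) / h) / nu

def cost(s: float) -> float:
    """Expected holding + backorder cost with X ~ Exp(nu)."""
    expected_backorders = math.exp(-nu * s) / nu
    expected_inventory = s - (1.0 - math.exp(-nu * s)) / nu
    return h * expected_inventory + b * expected_backorders

print(f"optimal base stock ~ {s_star:.2f}, cost ~ {cost(s_star):.2f}")
```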

    Detecting Bioterror Attacks by Screening Blood Donors: A Best-Case Analysis

    To assess whether screening blood donors could provide early warning of a bioterror attack, we combined stochastic models of blood donation and the workings of blood tests with an epidemic model to derive the probability distribution of the time to detect an attack under assumptions favorable to blood donor screening. Comparing the attack detection delay to the incubation times of the most feared bioterror agents shows that even under such optimistic conditions, victims of a bioterror attack would likely exhibit symptoms before the attack was detected through blood donor screening. For example, an attack infecting 100 persons with a noncontagious agent such as Bacillus anthracis would only have a 26% chance of being detected within 25 days; yet, at an assumed additional charge of $10 per test, donor screening would cost $139 million per year. Furthermore, even if screening tests were 99.99% specific, 1,390 false-positive results would occur each year. Therefore, screening blood donors for bioterror agents should not be used to detect a bioterror attack.
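
    A quick arithmetic check of the figures quoted above: $139 million per year at $10 per test implies roughly 13.9 million screened donations annually, and at 99.99% specificity that volume yields about 1,390 false positives per year.

```python
# Arithmetic check of the abstract's figures. The annual donation volume is
# inferred from the quoted numbers ($139M / $10 per test), not stated directly.

cost_per_test = 10.0                  # dollars per test, as assumed in the abstract
annual_cost = 139e6                   # dollars per year, from the abstract
specificity = 0.9999

annual_tests = annual_cost / cost_per_test
false_positives = annual_tests * (1.0 - specificity)

print(f"implied annual donations screened: {annual_tests:,.0f}")     # ~13,900,000
print(f"expected false positives per year: {false_positives:,.0f}")  # ~1,390
```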