
    Optimal and Heuristic Resource Allocation Policies in Serial Production Systems

    We study optimal server allocation policies for a tandem queueing system under different system settings. Motivated by an industry project, we consider a two-stage tandem queueing system with arrivals and two flexible servers, each capable of working at either station. We study the system under two circumstances: maximizing throughput without cost considerations, and incorporating switching and holding costs along with revenue for finished goods. In the throughput-maximization scenario, we consider two types of server allocation: collaborative and non-collaborative. For the collaborative case, we identify the optimal server allocation policies and prove the structure of the optimal policy using mathematical iteration techniques; moreover, we find that it is optimal to allocate both servers together at all times to maximize throughput. In the non-collaborative case, we identify the optimal server allocation policies and find that it is not always optimal to allocate both servers together. With the inclusion of costs, we study the system under two scenarios: a system with switching costs only, and a system with both switching and holding costs. In both cases, we characterize the optimal server allocation policies. Because the optimal policy has a complicated structure, we study three heuristics that approximate it, and we find that one of the heuristics performs very close to the optimal policy.
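    As a rough illustration of the kind of model behind such results, the sketch below sets up a two-station tandem queue with Poisson arrivals, finite buffers, and two flexible servers whose rates are assumed additive when they collaborate, and computes a throughput-maximizing allocation numerically by relative value iteration. All rates, buffer sizes, and the additive-collaboration assumption are illustrative choices, not taken from the abstract, and the numerical iteration here is only loosely related to the proof technique the authors mention.

```python
# A hedged, self-contained sketch (not the paper's model or proof): relative value
# iteration for a two-station tandem queue with Poisson arrivals, finite buffers,
# and two flexible servers whose rates are assumed additive when they collaborate.
# All numerical rates and buffer sizes below are illustrative assumptions.
import itertools

LAM = 1.0                      # arrival rate (assumed)
MU = {1: {1: 2.0, 2: 1.0},     # MU[server][station]: rate of each server at each station (assumed)
      2: {1: 1.0, 2: 2.0}}
B1, B2 = 5, 5                  # buffer sizes at stations 1 and 2 (assumed)

# Each action fixes the split of the two servers; value = (rate at station 1, rate at station 2).
ACTIONS = {
    "both at 1":      (MU[1][1] + MU[2][1], 0.0),
    "both at 2":      (0.0, MU[1][2] + MU[2][2]),
    "1 at 1, 2 at 2": (MU[1][1], MU[2][2]),
    "2 at 1, 1 at 2": (MU[2][1], MU[1][2]),
}

GAMMA = LAM + sum(MU[s][j] for s in MU for j in MU[s])   # uniformization constant
STATES = list(itertools.product(range(B1 + 1), range(B2 + 1)))

def bellman_backup(v):
    """One uniformized Bellman backup; reward 1 for each departure from station 2."""
    new_v = {}
    for n1, n2 in STATES:
        best = float("-inf")
        for r1, r2 in ACTIONS.values():
            rate1 = r1 if (n1 > 0 and n2 < B2) else 0.0   # station 1 blocked when buffer 2 is full
            rate2 = r2 if n2 > 0 else 0.0
            val = (LAM / GAMMA) * v[min(n1 + 1, B1), n2]  # arrival (lost when buffer 1 is full)
            if rate1:
                val += (rate1 / GAMMA) * v[n1 - 1, n2 + 1]
            if rate2:
                val += (rate2 / GAMMA) * (1.0 + v[n1, n2 - 1])
            val += ((GAMMA - LAM - rate1 - rate2) / GAMMA) * v[n1, n2]  # fictitious self-loop
            best = max(best, val)
        new_v[n1, n2] = best
    return new_v

# Relative value iteration: estimate the maximal long-run average throughput.
v = {s: 0.0 for s in STATES}
ref = (0, 0)
for _ in range(5000):
    tv = bellman_backup(v)
    gain = (tv[ref] - v[ref]) * GAMMA        # average departures per unit time (estimate)
    v = {s: tv[s] - tv[ref] for s in STATES}

print("estimated optimal throughput:", round(gain, 4))
```

    Recording the maximizing action in each state would reveal the structure of the numerically optimal policy, which is how one would check, for the collaborative case, whether pooling both servers is always chosen.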

    Optimal Control of Parallel Queues for Managing Volunteer Convergence

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/163497/2/poms13224.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/163497/1/poms13224_am.pd

    Resource Pooling and Cost Allocation Among Independent Service Providers


    Enabling flexibility through strategic management of complex engineering systems

    “Flexibility is a highly desired attribute of many systems operating in changing or uncertain conditions. A common theme in complex systems research is identifying where flexibility is generated within a system and how to model the processes needed to maintain and sustain it. The key research question addressed is: how do we create a new definition of workforce flexibility within a human-technology-artificial intelligence environment? Workforce flexibility is the management of organizational labor capacities and capabilities in operational environments, using a broad and diffuse set of tools and approaches to mitigate system imbalances caused by uncertainties or changes. We establish a baseline reference for managers to use in choosing flexibility methods for specific applications, and we determine the scope and effectiveness of these traditional flexibility methods. The unique contributions of this research are: a) a new definition of workforce flexibility for a human-technology work environment versus traditional definitions; b) using a system of systems (SoS) approach to create and sustain that flexibility; and c) applying a coordinating strategy for optimal workforce flexibility within the human-technology framework. This dissertation research fills the gap of how we can model flexibility using SoS engineering to show where flexibility emerges and what strategies a manager can use to manage flexibility within this technology construct”--Abstract, page iii

    Patient Streaming as a Mechanism for Improving Responsiveness in Emergency Departments

    Crisis-level overcrowding conditions in Emergency Departments (EDs) have led hospitals to seek out new patient flow designs to improve both responsiveness and safety. One approach that has attracted attention and experimentation in the emergency medicine community is a system in which ED beds and care teams are segregated and patients are "streamed" based on predictions of whether they will be discharged or admitted to the hospital. In this paper, we use a combination of analytic and simulation models to determine whether such a streaming policy can improve ED performance, where it is most likely to be effective, and how it should be implemented for maximum performance. Our results suggest that the concept of streaming can indeed improve patient flow, but only in some situations. First, ED resources must be shared across streams rather than physically separated. This leads us to propose a new "virtual-streaming" patient flow design for EDs. Second, this type of streaming is most effective in EDs with (1) a high percentage of admitted patients, (2) longer care times for admitted patients than discharged patients, (3) high day-to-day variation in the percentage of admitted patients, (4) long patient boarding times (e.g., caused by hospital "bed-block"), and (5) high average physician utilization. Finally, to take full advantage of streaming, physicians assigned to admit patients should prioritize upstream (new) patients, while physicians assigned to discharge patients should prioritize downstream (old) patients.
    http://deepblue.lib.umich.edu/bitstream/2027.42/85792/1/1162_Hopp.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/85792/4/2012Jan18WHopp#1162.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/85792/6/1162_Hopp_mar12.pd
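    The prioritization rule in the last sentence of the abstract can be made concrete with a small sketch. The Patient fields, the predicted_admit triage flag, and the fallback to the other stream's patients are assumptions made for illustration; this is not the authors' simulation model.

```python
# A hedged illustration of the virtual-streaming prioritization rule described above:
# beds and physicians are shared, but a physician's stream determines whether new
# (upstream) or old (downstream) patients come first. Fields and flags are assumed.
from dataclasses import dataclass

@dataclass
class Patient:
    id: int
    predicted_admit: bool   # triage prediction: admit stream vs. discharge stream
    tasks_done: int         # how far along the care process the patient is

def next_patient(waiting, physician_stream):
    """Pick the next patient for a physician assigned to 'admit' or 'discharge'."""
    pool = [p for p in waiting if p.predicted_admit == (physician_stream == "admit")]
    if not pool:
        pool = waiting[:]            # shared resources: help the other stream when idle
    if physician_stream == "admit":
        # prioritize upstream work: patients with the fewest completed tasks (new patients)
        return min(pool, key=lambda p: p.tasks_done)
    # discharge physicians prioritize downstream work: patients closest to completion
    return max(pool, key=lambda p: p.tasks_done)

waiting = [Patient(1, True, 0), Patient(2, False, 3), Patient(3, False, 1)]
print(next_patient(waiting, "admit").id)      # -> 1 (new admit-stream patient)
print(next_patient(waiting, "discharge").id)  # -> 2 (old discharge-stream patient)
```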

    Scheduling in Queueing Systems with Specialized or Error-prone Servers

    Consider a multi-server queueing system with tandem stations, finite intermediate buffers, and an infinite supply of jobs in front of the first station. Our goal is to maximize the long-run average throughput of the system by dynamically assigning the servers to the stations. In the first part of this thesis, we analyze a form of server coordination named task assignment, in which each job is decomposed into subtasks assigned to one or more servers, and the job is finished when all its subtasks are completed. We identify the optimal task assignment policy of a queueing station when the servers are static, flexible, or collaborative. Next, we compare task assignment with other forms of server assignment, namely teamwork and non-collaboration, and obtain conditions for when and how to choose a server coordination approach under different service rates. In particular, task assignment is best when the servers are highly specialized; otherwise, teamwork or non-collaboration is preferable, depending on whether the synergy level among the servers is high. We then provide numerical results that quantify this comparison. Finally, we analyze server coordination for longer lines, where there are precedence relationships between some of the tasks. We show that for static task assignment, internal buffers at the stations are preferable to intermediate buffers between the stations, and we present numerical results suggesting that our comparisons for one-station systems generalize to longer lines.

    The second part of this thesis studies server allocation when the servers can work in teams and the team service rates can be arbitrary. Our objective is to improve the performance of the system by dynamically assigning servers to teams and teams to stations. We first establish sufficient criteria for eliminating inferior teams, and then identify the optimal policy among the remaining teams for the two-station case. Next, we investigate special cases with structured team service rates and with teams of specialists. Finally, we provide heuristic policies for longer lines with teams of specialists, along with numerical results suggesting that our heuristic policies are near-optimal.

    The final part of this dissertation considers the scenario where a job might be broken and wasted while being processed by a server. Servers are flexible but non-collaborative, so a job can be processed by at most one server at any time. We identify the dynamic server assignment policy that maximizes the long-run average throughput of the system with two stations and two servers. We find that the optimal policy is either a single- or a double-threshold policy on the number of jobs in the buffer, where the thresholds depend on the service rates and defect probabilities of the two servers. For larger systems, we provide a partial characterization of the optimal policy. In particular, we show that the optimal policy may involve server idling, and if there exists a distinct dominant server at each station, then it is optimal to always assign the servers to the stations where they are dominant. Finally, we propose heuristic server assignment policies motivated by experimentation with three-station lines and analysis of systems with infinite buffers. Numerical results suggest that our heuristics yield near-optimal performance for systems with more than two stations.

    Ph.D.
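    A double-threshold assignment rule of the kind described for the error-prone two-server case can be encoded as below. Only the threshold structure comes from the abstract; the specific assignments chosen in each region and the example thresholds are illustrative placeholders, not the thesis's characterization.

```python
# A hedged sketch of a double-threshold server-assignment rule for a two-station,
# two-server line with an intermediate buffer. The assignments in each region and the
# example thresholds are illustrative; only the threshold structure is from the text.
def assign_servers(buffer_level, lower, upper):
    """Return {server: station} as a function of the intermediate buffer content."""
    if buffer_level <= lower:
        return {1: 1, 2: 1}   # buffer nearly empty: push work into the buffer
    if buffer_level >= upper:
        return {1: 2, 2: 2}   # buffer nearly full: drain it
    return {1: 1, 2: 2}       # in between: keep each server at its own station

# Example: thresholds (1, 4) on a buffer of size 5 (values assumed for illustration).
for b in range(6):
    print(b, assign_servers(b, lower=1, upper=4))
```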

    Optimal Dynamic Control of Queueing Networks: Emergency Departments, the W Service Network, and Supply Chains under Disruptions.

    Many systems in both the service and manufacturing sectors can be modeled and analyzed as queueing networks. In such systems, control and design are often important issues that may significantly affect performance. This dissertation focuses on the development of innovative techniques for the design and control of such systems. Special attention is given to real-world applications in (a) the design and control of patient flow in hospital emergency departments, (b) the design and control of service/call centers, and (c) the design and control of supply chains under disruption risks.

    With respect to application (a), using hospital data, analytical models, and simulation analyses, we show how (1) better patient prioritization, (2) enhanced triage systems, and (3) improved patient flow designs allow emergency departments to significantly improve their performance with respect to both operational efficiency and patient safety.

    Regarding application (b), we give specific attention to a two-server, three-demand-class network in the shape of a "W" with random server disruption and repair times. Studying this network, we show how effective control and design strategies that efficiently make use of (partial) server flexibility can be implemented to achieve high performance and resilience to server disruptions. In addition to establishing stability properties of different known control mechanisms, a new heuristic policy, termed Largest Expected Workload Cost (LEWC), is proposed and its performance is extensively benchmarked against other widely used policies.

    Regarding application (c), we demonstrate how supply chains can boost their performance using better control and design strategies that efficiently take supply disruption risks into account. Motivated by several real-world examples of disruptions, production flexibility, and supply contracts within supply chains, we model informational and operational flexibility approaches to designing a resilient supply chain. By analyzing optimal ordering policies, sourcing strategies, and the optimal levels of back-up capacity reservation contracts, various disruption risk mitigation strategies are considered and compared, and new insights into the design of resilient supply chains are provided.

    Ph.D., Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/94002/1/soroush_1.pd
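    A dispatch rule in the spirit of LEWC can be sketched for the W network (two servers, three demand classes, each server able to serve two of the classes). The index used here, queue length times mean service time times holding cost rate, is only an assumed reading of the policy's name; the dissertation's exact expression may differ, and the rates and costs below are made up for illustration.

```python
# A hedged sketch of a "largest expected workload cost" style dispatch rule for a
# W-shaped network. The index (queue length x mean service time x holding cost) is an
# assumption for illustration, not necessarily the LEWC index from the dissertation.
MEAN_SERVICE = {1: 1.0, 2: 1.5, 3: 0.8}    # mean service times per class (assumed)
HOLD_COST    = {1: 2.0, 2: 1.0, 3: 3.0}    # holding cost rates per class (assumed)
ELIGIBLE     = {"server1": (1, 2), "server2": (2, 3)}   # W-shaped (partial) flexibility

def lewc_dispatch(server, queue_lengths):
    """When `server` frees up, serve the eligible class with the largest expected workload cost."""
    candidates = [c for c in ELIGIBLE[server] if queue_lengths[c] > 0]
    if not candidates:
        return None    # idle until work arrives
    return max(candidates,
               key=lambda c: queue_lengths[c] * MEAN_SERVICE[c] * HOLD_COST[c])

print(lewc_dispatch("server1", {1: 4, 2: 6, 3: 2}))   # -> 2 (6*1.5*1.0 = 9 > 4*1.0*2.0 = 8)
print(lewc_dispatch("server2", {1: 4, 2: 6, 3: 2}))   # -> 2 (9 > 2*0.8*3.0 = 4.8)
```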
