
    Optimal and Heuristic Resource Allocation Policies in Serial Production Systems

    We study optimal server allocation policies for a tandem queueing system under different system settings. Motivated by an industry project, we consider a two-stage tandem queueing system with arrivals to the system and two flexible servers, each capable of working at either station. We study the system under two settings: maximizing throughput without cost considerations, and a model that includes switching and holding costs along with revenue for finished goods. In the throughput-maximization setting, we consider two types of server allocation: collaborative and non-collaborative. For the collaborative case, we identify the optimal server allocation policy and prove its structure using mathematical iteration techniques; in particular, it is optimal to allocate both servers together at all times to maximize throughput. In the non-collaborative case, we identify the optimal server allocation policies and find that it is not always optimal to allocate both servers to the same station. With the inclusion of costs, we study two scenarios: a system with switching costs only, and a system with both switching and holding costs, and we characterize the optimal server allocation policies in each. Because the optimal policy has a complicated structure, we evaluate three heuristics that approximate it and find that one of them performs very close to the optimal policy.
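
    The abstract above does not spell out the model primitives, but the collaborative case can be illustrated with a small numerical sketch. The Python snippet below runs discounted value iteration on a uniformized two-station model under assumptions of our own (Poisson arrivals, exponential services, finite buffers, blocking when the downstream buffer is full, and the two servers pooled into a single collaborative team whose only decision is which station to work at); all rates and buffer sizes are hypothetical.

```python
import numpy as np

# Hypothetical parameters -- not taken from the thesis.
lam, mu1, mu2 = 1.0, 2.0, 1.5       # arrival rate; pooled-team rates at stations 1, 2
B1, B2        = 5, 5                # buffer sizes
beta          = 0.999               # discount factor
Lam           = lam + max(mu1, mu2) # uniformization constant

def q_value(V, x1, x2, a):
    """Expected one-step value of allocating the team to station a in state (x1, x2)."""
    if a == 1 and x1 > 0 and x2 < B2:
        rate, rew, nxt = mu1, 0.0, V[x1 - 1, x2 + 1]
    elif a == 2 and x2 > 0:
        rate, rew, nxt = mu2, 1.0, V[x1, x2 - 1]     # reward: one finished job
    else:
        rate, rew, nxt = 0.0, 0.0, V[x1, x2]         # forced idling
    arr = V[min(x1 + 1, B1), x2]                     # arrival (blocked if buffer 1 full)
    nxt_val = (lam * arr + rate * nxt + (Lam - lam - rate) * V[x1, x2]) / Lam
    return rate * rew / Lam + beta * nxt_val

V = np.zeros((B1 + 1, B2 + 1))
for _ in range(5000):                                # value iteration
    V = np.array([[max(q_value(V, x1, x2, 1), q_value(V, x1, x2, 2))
                   for x2 in range(B2 + 1)] for x1 in range(B1 + 1)])

# Greedy action w.r.t. the converged values approximates the optimal allocation.
policy = np.array([[1 if q_value(V, x1, x2, 1) >= q_value(V, x1, x2, 2) else 2
                    for x2 in range(B2 + 1)] for x1 in range(B1 + 1)])
print(policy)
```

    Because the servers are pooled by construction here, the sketch only answers where the team should work in each state; whether pooling itself is optimal is what the thesis actually proves.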

    MAXIMIZING THROUGHPUT USING DYNAMIC RESOURCE ALLOCATION AND DISCRETE EVENT SIMULATION

    This research studies a serial two-stage production system with two flexible servers that can be dynamically assigned to either station. The system is modeled using discrete-event simulation, specifically the Arena software package by Rockwell. The goal is to determine dynamic allocation policies, based on the inventory level at each station, that maximize the throughput of finished goods out of the system. This model extends previous work by including actual switching time. The effect of the pre-emptive resume assumption is gauged, and the effectiveness of the OptQuest optimization package is also tested. Studies are conducted to determine the throughput of the system under easily implementable heuristics, including policies in which the workers stay together and in which they work separately. Additionally, the effects of buffer allocation and buffer sizing are studied; it is shown that performance is not sensitive to the buffer ratio as long as buffer space is available at each station, while adding buffer space has a diminishing rate of return.
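
    As a rough illustration of the kind of inventory-based switching policy studied here, the sketch below simulates a two-station line with one pooled team of servers in plain Python (the thesis itself uses Arena and OptQuest). The parameters, the threshold rule, and the choice of an exponential switching delay are all our own assumptions.

```python
import random

lam, mu1, mu2 = 1.0, 2.0, 1.5     # arrival rate; team service rates (hypothetical)
switch_rate   = 10.0              # 1 / mean switching time when the team moves
B1, B2        = 5, 5              # buffer capacities
threshold     = 3                 # move the team to station 2 at this inventory level
T_end         = 200_000.0

random.seed(1)
t, x1, x2, finished = 0.0, 0, 0, 0
team_at, switching_to = 1, None

def desired_station(x1, x2, team_at):
    """Inventory-based heuristic: drain station 2 once its buffer is high."""
    if x2 >= threshold or (x1 == 0 and x2 > 0):
        return 2
    return 1 if x1 > 0 else team_at

while t < T_end:
    target = desired_station(x1, x2, team_at)
    if switching_to is None and target != team_at:
        switching_to = target                     # start moving the team

    events = []                                   # (name, rate) of active events
    if x1 < B1:
        events.append(("arrival", lam))
    if switching_to is not None:
        events.append(("switched", switch_rate))
    elif team_at == 1 and x1 > 0 and x2 < B2:
        events.append(("done1", mu1))
    elif team_at == 2 and x2 > 0:
        events.append(("done2", mu2))
    if not events:
        break

    total = sum(rate for _, rate in events)
    t += random.expovariate(total)
    u, acc, name = random.random() * total, 0.0, events[-1][0]
    for ev, rate in events:
        acc += rate
        if u <= acc:
            name = ev
            break

    if name == "arrival":
        x1 += 1
    elif name == "switched":
        team_at, switching_to = switching_to, None
    elif name == "done1":
        x1, x2 = x1 - 1, x2 + 1
    else:                                         # "done2": finished job leaves
        x2, finished = x2 - 1, finished + 1

print("estimated throughput:", finished / t)
```

    Sweeping B1 and B2 in this simulator is one way to reproduce, qualitatively, the reported insensitivity to the buffer ratio and the diminishing return from extra buffer space.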

    Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers

    For tandem queues with no buffer spaces and both dedicated and flexible servers, we study how the flexible servers should be assigned to maximize the throughput. When there is one flexible server and two stations, each with a dedicated server, we completely characterize the optimal policy. We use the insights gained from applying the Policy Iteration algorithm to systems with three, four, and five stations to devise heuristics for systems of arbitrary size, and we verify these heuristics numerically. We also discuss the throughput improvement when, for a given server assignment, dedicated servers are changed to flexible servers.
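
    For readers unfamiliar with the Policy Iteration algorithm mentioned above, here is a generic discounted version in Python. It is only a schematic stand-in: the paper works with the average-reward criterion on the uniformized chain of a zero-buffer tandem line, and the line's states, actions, transition matrices P[a], and rewards r[a] would have to be encoded first.

```python
import numpy as np

def policy_iteration(P, r, gamma=0.99):
    """Generic discounted policy iteration.

    P[a] is an SxS transition matrix and r[a] an S-vector (numpy arrays) of
    one-step rewards under action a."""
    nA, nS = len(P), P[0].shape[0]
    policy = np.zeros(nS, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) v = r_pi
        P_pi = np.array([P[policy[s]][s] for s in range(nS)])
        r_pi = np.array([r[policy[s]][s] for s in range(nS)])
        v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
        # policy improvement: act greedily with respect to v
        q = np.array([r[a] + gamma * P[a] @ v for a in range(nA)])
        new_policy = q.argmax(axis=0)
        keep = q[policy, np.arange(nS)] >= q.max(axis=0) - 1e-12  # keep old action on ties
        new_policy[keep] = policy[keep]
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

    Given such matrices, policy_iteration(P, r) alternates exact policy evaluation with greedy improvement until the server assignment no longer changes.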

    Flexibly serving a finite number of heterogeneous jobs in a tandem system

    Ministry of Education, Singapore under its Academic Research Funding Tier

    Optimal Design and Control of Finite-Population Queueing Systems

    We consider a service system with a finite population of customers (or jobs) and a service resource with finite capacity. We model this finite-population queueing system as a closed queueing network with two stages. The first stage, which represents the arrival of customers for service, consists of an automated station with ample capacity. The second stage, which represents the service of customers, consists of multiple service stations that share the finite service resource. We consider both discrete and continuous service resources. We are interested in static or dynamic allocation of the service resource to the service stations in the second stage in order to optimize a given system measure: a static allocation corresponds to a design problem, while a dynamic allocation corresponds to a control problem. In this thesis, we study both.

    For control problems, we consider service stations with a parallel-series structure. We first consider dynamically allocating a single flexible server under both preemptive and non-preemptive policies, and we characterize the optimal policies for scheduling this server so as to maximize the long-run average throughput of the system. In the special case of a series system, we show that the optimal policy is a sequential policy in which each customer is served by the single server sequentially from the first station to the last. For a parallel system, we show that there exists an optimal policy that gives the highest priority to the station with the largest service rate. We also propose an index policy heuristic for the general parallel-series system and compare its performance with that of the optimal policy in a numerical study. Finally, we study dynamically allocating a finite amount of continuous service resource in the parallel system.

    For design problems, we consider allocating a finite amount of service resource that is continuously divisible and can be used at any of the service stations. Suppose that service times at a service station are exponentially distributed and that their mean is a strictly increasing and concave function of the allocated service resource. We characterize the optimal allocation of the continuous resource in order to maximize the long-run average throughput of the system. We first show that the system throughput is non-decreasing in the number of customers. We then study the optimization problem in three cases, depending on the customer population size. When there is a single customer, we show that the optimal allocation is given by a set of optimality equations. When the number of customers approaches infinity, we show that the optimal allocation approaches a limit. Finally, for any finite number of customers, we show that the system throughput is bounded above by a limit and, under a certain condition, that the throughput function is Schur-concave.
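
    The design problem in the last paragraph can be illustrated with a small computation under assumptions that go beyond the abstract: customers cycle through one ample-capacity (delay) station and a series of single-server exponential stations, and the service rate of station i is an assumed concave function mu_i(c_i) = sqrt(c_i) of its resource share. Throughput is computed by exact Mean Value Analysis for closed product-form networks, and the split of one unit of resource is then optimized by a simple grid search.

```python
import numpy as np

def mva_throughput(service_demands, think_time, N):
    """Exact MVA for a single-class closed network: one delay station plus
    single-server FCFS exponential stations with the given service demands."""
    Q = np.zeros(len(service_demands))   # mean queue lengths
    X = 0.0
    for n in range(1, N + 1):
        R = service_demands * (1.0 + Q)  # residence times at queueing stations
        X = n / (think_time + R.sum())   # system throughput with n customers
        Q = X * R
    return X

def throughput_of_split(c1, total=1.0, N=10, think_time=1.0):
    """Throughput when resource c1 goes to station 1 and total - c1 to station 2."""
    mu = np.array([np.sqrt(c1), np.sqrt(total - c1)])   # assumed concave rate function
    return mva_throughput(1.0 / mu, think_time, N)

grid = np.linspace(0.01, 0.99, 99)
best = max(grid, key=throughput_of_split)
print("best split:", round(best, 2), "throughput:", throughput_of_split(best))
```

    This only sketches the throughput evaluation and a brute-force allocation; the thesis's structural results (monotonicity, limiting behavior, Schur-concavity) are proved analytically, not numerically.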

    Scheduling in Queueing Systems with Specialized or Error-prone Servers

    Consider a multi-server queueing system with tandem stations, finite intermediate buffers, and an infinite supply of jobs in front of the first station. Our goal is to maximize the long-run average throughput of the system by dynamically assigning the servers to the stations. In the first part of this thesis, we analyze a form of server coordination named task assignment, in which each job is decomposed into subtasks assigned to one or more servers and the job is finished when all its subtasks are completed. We identify the optimal task assignment policy of a queueing station when the servers are static, flexible, or collaborative. Next, we compare task assignment with other forms of server assignment, namely teamwork and non-collaboration, and obtain conditions for when and how to choose a server coordination approach under different service rates. In particular, task assignment is best when the servers are highly specialized; otherwise, teamwork or non-collaboration is preferable, depending on whether the synergy level among the servers is high. We then provide numerical results that quantify this comparison. Finally, we analyze server coordination for longer lines in which there are precedence relationships between some of the tasks. We show that for static task assignment, internal buffers at the stations are preferable to intermediate buffers between the stations, and we present numerical results suggesting that our comparisons for one-station systems generalize to longer lines.

    The second part of this thesis studies server allocation when the servers can work in teams and the team service rates can be arbitrary. Our objective is to improve the performance of the system by dynamically assigning servers to teams and teams to stations. We first establish sufficient criteria for eliminating inferior teams, and then identify the optimal policy among the remaining teams for the two-station case. Next, we investigate the special cases with structured team service rates and with teams of specialists. Finally, we provide heuristic policies for longer lines with teams of specialists, along with numerical results suggesting that these heuristics are near-optimal.

    In the final part of this dissertation, we consider the scenario where a job might be broken and wasted while being processed by a server. Servers are flexible but non-collaborative, so a job can be processed by at most one server at any time. We identify the dynamic server assignment policy that maximizes the long-run average throughput of the system with two stations and two servers. The optimal policy is either a single- or a double-threshold policy on the number of jobs in the buffer, where the thresholds depend on the service rates and defect probabilities of the two servers. For larger systems, we provide a partial characterization of the optimal policy. In particular, we show that the optimal policy may involve server idling, and that if there exists a distinct dominant server at each station, it is optimal to always assign the servers to the stations where they are dominant. Finally, we propose heuristic server assignment policies motivated by experimentation with three-station lines and analysis of systems with infinite buffers. Numerical results suggest that these heuristics yield near-optimal performance for systems with more than two stations.
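
    The "dominant server" result in the final part can be made concrete under one plausible, but unverified, reading: judge a server with rate mu and defect probability p by its effective good-job rate mu(1 - p). The helper below applies that criterion; the dissertation's precise dominance condition may differ.

```python
def dominant_servers(rates, defects):
    """rates[i][j], defects[i][j]: service rate / defect probability of server i
    at station j.  Returns, per station, the index of the server with the
    strictly largest effective rate mu * (1 - p), or None on a tie.
    (The effective-rate criterion is an assumption, not the thesis's definition.)"""
    n_servers, n_stations = len(rates), len(rates[0])
    result = []
    for j in range(n_stations):
        eff = [rates[i][j] * (1.0 - defects[i][j]) for i in range(n_servers)]
        best = max(range(n_servers), key=eff.__getitem__)
        result.append(best if eff.count(eff[best]) == 1 else None)
    return result

# Server 0 is dominant at station 0 and server 1 at station 1, so the fixed
# assignment (server 0 -> station 0, server 1 -> station 1) would be optimal
# under the result quoted above.
print(dominant_servers([[2.0, 1.0], [1.2, 1.8]],
                       [[0.05, 0.10], [0.10, 0.02]]))
```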

    An End-to-End Performance Analysis for Service Chaining in a Virtualized Network

    Future mobile networks supporting the Internet of Things are expected to provide both high throughput and low latency for user-specific services. One way to meet this challenge is to adopt Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC). Besides latency constraints, these services may have strict function chaining requirements. The distribution of network functions over different hosts, and the more flexible routing caused by service function chaining, raise new challenges for end-to-end performance analysis. In this paper, as a first step, we analyze an end-to-end communications system that consists of MEC servers and a server at the core network hosting different types of virtual network functions. We develop a queueing model for the performance analysis of the system, covering both processing and transmission flows, and propose a method to derive analytical expressions for the performance metrics of interest. We then show how to apply a similar method to an extended, larger system and derive a stochastic model for such systems. The simulation and analytical results coincide. By evaluating the system under different scenarios, we provide insights for decision making on traffic flow control and its impact on critical performance metrics.
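
    The paper's own queueing model is not reproduced here, but the flavor of an end-to-end analysis can be conveyed with a deliberately crude approximation: treat every hop of the service chain (for example a MEC VNF, a transmission link, and a core-network VNF) as an independent M/M/1 stage, so the mean end-to-end sojourn time is the sum of 1/(mu_i - lambda). All rates below are hypothetical.

```python
def chain_delay(arrival_rate, service_rates):
    """Mean end-to-end delay of a chain of independent M/M/1 stages."""
    delays = []
    for mu in service_rates:
        if mu <= arrival_rate:
            raise ValueError("stage is unstable: mu <= lambda")
        delays.append(1.0 / (mu - arrival_rate))   # M/M/1 mean sojourn time
    return sum(delays)

# hypothetical chain: MEC VNF -> transmission -> core-network VNF
print(chain_delay(arrival_rate=50.0, service_rates=[80.0, 200.0, 120.0]))
```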

    Cost sharing of cooperating queues in a Jackson network

    We consider networks of queues in which the independent operators of individual queues may cooperate to reduce the amount of waiting. More specifically, we focus on Jackson networks in which the total capacity of the servers can be redistributed over all queues in any desired way. If we associate with waiting a cost that is linear in the queue lengths, it is known how the operators should share the available service capacity to minimize the long-run total cost. We answer the question of whether (the operators of) the individual queues will indeed cooperate in this way and, if so, how they will share the cost in the new situation. One of the results is an explicit cost allocation that is beneficial for all operators. The approach also works for other cost functions, such as the server utilization.
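
    The cost-sharing game itself is not sketched here, but the underlying capacity-assignment step is classical: each node of a Jackson network behaves like an M/M/1 queue with effective arrival rate lambda_i, so the total expected queue length is sum_i lambda_i/(mu_i - lambda_i), and minimizing it subject to sum_i mu_i = C gives the square-root assignment. The routing matrix, arrival rates, and total capacity below are hypothetical (C must exceed the total effective arrival rate).

```python
import numpy as np

gamma = np.array([1.0, 0.5, 0.0])            # external arrival rates (hypothetical)
P = np.array([[0.0, 0.6, 0.2],               # routing probabilities
              [0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0]])
C = 8.0                                      # total service capacity to be shared

# traffic equations: lambda = gamma + P^T lambda
lam = np.linalg.solve(np.eye(3) - P.T, gamma)

# square-root capacity assignment minimizing sum_i lambda_i / (mu_i - lambda_i)
mu = lam + (C - lam.sum()) * np.sqrt(lam) / np.sqrt(lam).sum()

print("effective arrival rates:", lam)
print("optimal capacities     :", mu)
print("minimal total queue    :", (lam / (mu - lam)).sum())
```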

    EUROPEAN CONFERENCE ON QUEUEING THEORY 2016

    This booklet contains the proceedings of the second European Conference on Queueing Theory (ECQT), held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT in Toulouse, France. ECQT is a biennial event where scientists and practitioners in queueing theory and related areas get together to promote research, encourage interaction, and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects, and applications. Another exciting feature of ECQT 2016 was the institution of the Takács Award for outstanding PhD theses on "Queueing Theory and its Applications".
