1,919 research outputs found

    Arrival first queueing networks with applications in kanban production systems

    In this paper we introduce a new class of queueing networks called arrival first networks. We characterise the transition rates of these networks and derive the relationship between arrival rules, linear partial balance equations, and product form stationary distributions. This model is motivated by production systems operating under a kanban protocol. In the conventional departure first networks, a transition is initiated by service completion of items at the originating nodes, which are subsequently routed to the destination nodes (push system). In contrast, in an arrival first network a transition is initiated by the destination nodes, and the items are subsequently processed at and removed from the originating nodes (pull system). These correspond to the push and pull systems found in manufacturing.
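
    The push/pull distinction can be illustrated with a toy simulation. Below is a minimal sketch, assuming a two-node tandem line with exponential clocks and a hypothetical kanban cap; the node count, rates, and cap are illustrative choices, not the paper's network class or its product-form analysis.

```python
import random

def simulate(pull=True, T=10_000, lam=0.8, mu1=1.0, mu2=0.9, kanban=3, seed=1):
    """Two-node tandem line with exponential clocks (illustrative values).

    pull=True : arrival-first / kanban rule -- the downstream node triggers
                the transfer from node 1 only while it holds fewer than
                `kanban` items (pull).
    pull=False: departure-first rule -- node 1 pushes downstream whenever it
                completes service, regardless of the downstream backlog (push).
    """
    rng = random.Random(seed)
    q1 = q2 = 0
    t = area2 = 0.0
    while t < T:
        rates = {"ext": lam}                      # external arrivals to node 1
        if q1 > 0 and (not pull or q2 < kanban):
            rates["move"] = mu1                   # transfer node 1 -> node 2
        if q2 > 0:
            rates["done"] = mu2                   # departure from node 2
        total = sum(rates.values())
        dt = rng.expovariate(total)
        area2 += q2 * dt
        t += dt
        u = rng.random() * total
        for event, r in rates.items():
            u -= r
            if u <= 0:
                break
        if event == "ext":
            q1 += 1
        elif event == "move":
            q1, q2 = q1 - 1, q2 + 1
        else:
            q2 -= 1
    return area2 / t                              # time-average downstream queue

print("pull (arrival first):", round(simulate(pull=True), 2))
print("push (departure first):", round(simulate(pull=False), 2))
```

    Under the pull rule the downstream buffer never exceeds the kanban cap, which is the effect the arrival-first transition is meant to capture.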

    Performance evaluation of a decoupling inventory for hybrid push-pull systems

    Nowadays, companies that offer product variety while maintaining short lead times and competitive quality and cost gain an edge over their competitors. Hybrid push-pull systems allow for efficiently balancing lead times and production costs: raw materials are 'pushed' into the semi-finished goods warehouse and customers 'pull' products by placing orders. As the performance of the decoupling stock is critical to the overall performance of the manufacturing system, we define and analyse a Markovian queueing model with two buffers, thereby accounting for both the decoupling stock and possible backlog of orders. In particular, our study assesses the effect of variability in the production process and the ordering process on the performance of the decoupling stock.
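
    As a rough illustration of a decoupling stock with order backlog, the following sketch computes the stationary distribution of a simple birth-death chain. The exponential production and order processes, the base-stock level, and the truncation are assumptions for illustration; they collapse the two buffers into one signed state variable and are not the two-buffer model analysed in the paper.

```python
import numpy as np

def decoupling_stock_stats(prod_rate=1.0, order_rate=0.8, base_stock=5, max_backlog=200):
    """Birth-death sketch of a decoupling stock with backlogged orders.

    State x = on-hand inventory minus order backlog, truncated to
    [-max_backlog, base_stock]: production completions (rate prod_rate)
    raise x up to the base-stock level, customer orders (rate order_rate)
    lower it. An order arriving while x > 0 is filled from stock,
    otherwise it is backlogged."""
    states = np.arange(-max_backlog, base_stock + 1)
    n = len(states)
    Q = np.zeros((n, n))
    for i, x in enumerate(states):
        if x < base_stock:
            Q[i, i + 1] += prod_rate        # production completion
        if i > 0:
            Q[i, i - 1] += order_rate       # customer order arrives
        Q[i, i] = -Q[i].sum()
    A = np.vstack([Q.T, np.ones(n)])        # solve pi Q = 0 with sum(pi) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    fill_rate = pi[states > 0].sum()                           # order met from stock
    mean_backlog = -(pi[states < 0] * states[states < 0]).sum()
    return fill_rate, mean_backlog

fr, bl = decoupling_stock_stats()
print(f"fill rate ~ {fr:.3f}, mean backlog ~ {bl:.3f}")
```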

    Non-Existence of Stabilizing Policies for the Critical Push-Pull Network and Generalizations

    The push-pull queueing network is a simple example in which servers either serve jobs or generate new arrivals. It was previously conjectured that there is no policy that makes the network positive recurrent (stable) in the critical case. We settle this conjecture and devise a general sufficient condition for non-stabilizability of queueing networks; the condition is based on a linear martingale and further applies to generalizations of the push-pull network.

    Design and operational control of an AGV system

    In this paper we first deal with the design and operational control of Automated Guided Vehicle (AGV) systems, starting from the literature on these topics. Three main issues emerge: track layout, the number of AGVs required, and operational transportation control. A hierarchical queueing network approach to determine the number of AGVs is described. Basic concepts are also presented for the transportation control of both a job-shop and a flow-shop. Next we report on the results of a case study, in which track layout and transportation control are the main issues. Finally, we suggest some topics for further research.

    An analytical comparison of the patient-to-doctor policy and the doctor-to-patient policy in the outpatient clinic

    Outpatient clinics traditionally organize processes such that the doctor remains in a consultation room while patients visit for consultation; we call this the Patient-to-Doctor policy. A different approach is the Doctor-to-Patient policy, whereby the doctor travels between multiple consultation rooms, in which patients prepare for their consultation. In the latter approach, the doctor saves time by consulting fully prepared patients. We compare the two policies via a queueing theoretic and a discrete-event simulation approach. We analytically show that the Doctor-to-Patient policy is superior to the Patient-to-Doctor policy under the condition that the doctor’s travel time between rooms is lower than the patient’s preparation time. Simulation results indicate that the same applies when the average travel time is lower than the average preparation time. In addition, to calculate the required number of consultation rooms in the Doctor-to-Patient policy, we provide an expression for the fraction of consultations that are in immediate succession; or, in other words, the fraction of time the next patient is prepared and ready immediately after the doctor finishes a consultation. We apply our methods for a range of distributions and parameters and to a case study in a medium-sized general hospital that inspired this research.
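
    The travel-time-versus-preparation-time trade-off can be mimicked with a short simulation sketch. The toy model below, with exponential times and illustrative parameters rather than the paper's analysis, compares the doctor's total session length under the two policies as informally described above.

```python
import random

def session_length(policy, n_patients=200, prep_mean=3.0, consult_mean=8.0,
                   travel_mean=1.0, n_rooms=2, seed=7):
    """Doctor's total session length for one clinic session (illustrative).

    'PtD': the doctor stays in one room and waits for each patient's in-room
           preparation before every consultation.
    'DtP': the doctor walks to the next room, where the patient has been
           preparing since that room was last vacated.
    """
    rng = random.Random(seed)
    t = 0.0
    if policy == "PtD":
        for _ in range(n_patients):
            t += rng.expovariate(1 / prep_mean) + rng.expovariate(1 / consult_mean)
        return t
    room_free_at = [0.0] * n_rooms                       # rooms used cyclically
    for k in range(n_patients):
        room = k % n_rooms
        t += rng.expovariate(1 / travel_mean)            # walk to the next room
        prep_done = room_free_at[room] + rng.expovariate(1 / prep_mean)
        t = max(t, prep_done)                            # wait only if not ready
        t += rng.expovariate(1 / consult_mean)
        room_free_at[room] = t                           # room freed for next patient
    return t

for pol in ("PtD", "DtP"):
    print(pol, round(session_length(pol), 1))
```

    With mean travel time below mean preparation time, the DtP session finishes earlier, in line with the analytical comparison.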

    Asymptotically optimal load balancing in large-scale heterogeneous systems with multiple dispatchers

    We consider the load balancing problem in large-scale heterogeneous systems with multiple dispatchers. We introduce a general framework called Local-Estimation-Driven (LED). Under this framework, each dispatcher keeps local (possibly outdated) estimates of the queue lengths of all the servers, and the dispatching decision is made purely based on these local estimates. The local estimates are updated via infrequent communications between dispatchers and servers. We derive sufficient conditions for LED policies to achieve throughput optimality and delay optimality in heavy traffic, respectively. These conditions directly imply heavy-traffic delay optimality for many previous local-memory based policies. Moreover, the results enable us to design new delay-optimal policies for heterogeneous systems with multiple dispatchers. Finally, the heavy-traffic delay optimality of the LED framework also sheds light on a recent open question of how to design optimal load balancing schemes using delayed information.
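
    A minimal sketch of the LED idea in a slotted toy system follows. The join-shortest-estimate rule and the periodic exact refresh are one possible instance of the framework chosen for illustration, and the arrival and service probabilities are assumptions, not the policies analysed in the paper.

```python
import random

class LEDDispatcher:
    """One dispatcher in a Local-Estimation-Driven (LED) toy system.

    It keeps its own, possibly outdated, estimate of every server's queue
    length, sends each job to the server with the smallest local estimate,
    and fetches the true queue lengths only every `sync_every` dispatched
    jobs (the infrequent communication)."""
    def __init__(self, n_servers, sync_every):
        self.est = [0] * n_servers
        self.sync_every = sync_every
        self.sent = 0

    def pick(self, rng):
        i = min(range(len(self.est)), key=lambda k: (self.est[k], rng.random()))
        self.est[i] += 1                    # optimistic local update only
        self.sent += 1
        return i

    def maybe_sync(self, queues):
        if self.sent % self.sync_every == 0:
            self.est = list(queues)         # infrequent exact refresh

def simulate(n_servers=10, n_dispatchers=4, sync_every=25,
             arrival_p=0.95, service_p=0.12, n_slots=50_000, seed=3):
    rng = random.Random(seed)
    queues = [0] * n_servers
    dispatchers = [LEDDispatcher(n_servers, sync_every) for _ in range(n_dispatchers)]
    for _ in range(n_slots):
        if rng.random() < arrival_p:                    # one job may arrive per slot
            d = rng.choice(dispatchers)
            queues[d.pick(rng)] += 1
            d.maybe_sync(queues)
        for i in range(n_servers):                      # service completions
            if queues[i] and rng.random() < service_p:
                queues[i] -= 1
    return sum(queues) / n_servers

print("mean queue length per server:", round(simulate(), 2))
```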

    Analytical models to determine room requirements in outpatient clinics

    Outpatient clinics traditionally organize processes such that the doctor remains in a consultation room while patients visit for consultation; we call this the Patient-to-Doctor policy (PtD-policy). A different approach is the Doctor-to-Patient policy (DtP-policy), whereby the doctor travels between multiple consultation rooms, in which patients prepare for their consultation. In the latter approach, the doctor saves time by consulting fully prepared patients. We use a queueing theoretic and a discrete-event simulation approach to provide generic models that enable performance evaluations of the two policies for different parameter settings. These models can be used by managers of outpatient clinics to compare the two policies and choose a particular policy when redesigning the patient process. We use the models to analytically show that the DtP-policy is superior to the PtD-policy under the condition that the doctor’s travel time between rooms is lower than the patient’s preparation time. In addition, to calculate the required number of consultation rooms in the DtP-policy, we provide an expression for the fraction of consultations that are in immediate succession; or, in other words, the fraction of time the next patient is prepared and ready immediately after the doctor finishes a consultation. We apply our methods for a range of distributions and parameters and to a case study in a medium-sized general hospital that inspired this research.
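
    For room sizing, the fraction of consultations in immediate succession can also be estimated by simulation. The sketch below does this for the DtP-policy under exponential assumptions and cyclic room use; it is an illustration, not the closed-form expression derived in the paper.

```python
import random

def immediate_succession_fraction(n_rooms, prep_mean=4.0, consult_mean=8.0,
                                  travel_mean=1.0, n_patients=5000, seed=11):
    """Fraction of DtP consultations that start immediately, i.e. the next
    patient is already prepared when the doctor arrives (illustrative
    exponential times, rooms used cyclically)."""
    rng = random.Random(seed)
    t, ready = 0.0, 0
    room_free_at = [0.0] * n_rooms
    for k in range(n_patients):
        room = k % n_rooms
        t += rng.expovariate(1 / travel_mean)                 # walk to next room
        prep_done = room_free_at[room] + rng.expovariate(1 / prep_mean)
        if prep_done <= t:
            ready += 1                                        # no waiting needed
        t = max(t, prep_done) + rng.expovariate(1 / consult_mean)
        room_free_at[room] = t
    return ready / n_patients

for rooms in (2, 3, 4):
    print(rooms, "rooms:", round(immediate_succession_fraction(rooms), 3))
```

    The smallest number of rooms for which this fraction is judged close enough to one then gives a rough room requirement.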

    Modeling Stochastic Lead Times in Multi-Echelon Systems

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well defined, which is especially a problem for simulation modeling. In this paper, we use results from queueing theory to define a set of simple lead time processes guaranteeing that (a) orders do not cross and (b) prespecified means and variances of all lead times in the multi-echelon system are attained.
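
    One standard way to obtain non-crossing lead times from queueing results is to treat each lead time as a FIFO sojourn time. The sketch below, with an exponential 'service' stage chosen purely for illustration, demonstrates the non-crossing property; matching prespecified means and variances, as the paper does, would require calibrating that distribution.

```python
import random
import statistics as stats

def lead_times(order_times, service_mean=2.0, seed=5):
    """Lead times generated as FIFO sojourn times of a single-server queue
    fed by the order stream: delivery_n = max(delivery_{n-1}, order_n) + S_n,
    so deliveries preserve the order sequence and orders never cross."""
    rng = random.Random(seed)
    delivery_prev = float("-inf")
    leads = []
    for t in order_times:
        delivery = max(delivery_prev, t) + rng.expovariate(1 / service_mean)
        leads.append(delivery - t)
        delivery_prev = delivery
    return leads

# orders placed roughly every 3 time units (illustrative order process)
rng = random.Random(1)
orders, t = [], 0.0
for _ in range(10_000):
    t += rng.expovariate(1 / 3.0)
    orders.append(t)

L = lead_times(orders)
deliveries = [o + l for o, l in zip(orders, L)]
print("orders never cross:", all(b >= a for a, b in zip(deliveries, deliveries[1:])))
print("mean lead time:", round(stats.mean(L), 2), " variance:", round(stats.variance(L), 2))
```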

    Hyper-Scalable JSQ with Sparse Feedback

    Load balancing algorithms play a vital role in enhancing performance in data centers and cloud networks. Due to the massive size of these systems, scalability challenges, and especially the communication overhead associated with load balancing mechanisms, have emerged as major concerns. Motivated by these issues, we introduce and analyze a novel class of load balancing schemes where the various servers provide occasional queue updates to guide the load assignment. We show that the proposed schemes strongly outperform JSQ(d) strategies with comparable communication overhead per job, and can achieve a vanishing waiting time in the many-server limit with just one message per job, just like the popular JIQ scheme. However, the proposed schemes are particularly geared towards the sparse feedback regime with less than one message per job, where they outperform correspondingly sparsified JIQ versions. We investigate fluid limits for synchronous updates as well as asynchronous exponential update intervals. The fixed point of the fluid limit is identified in the latter case and used to derive the queue length distribution. We also demonstrate that in the ultra-low feedback regime the mean stationary waiting time tends to a constant in the synchronous case, but grows without bound in the asynchronous case.
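
    The sparse-feedback mechanism can be sketched as follows: a dispatcher routes each job to the server with the smallest locally held estimate, and servers push their true queue lengths at asynchronous exponential intervals. The rates and the plain smallest-estimate rule below are illustrative assumptions, not the exact scheme analysed in the paper.

```python
import heapq
import random

def sparse_feedback_jsq(n=50, lam_total=40.0, mu=1.0, update_rate=0.5,
                        horizon=2_000.0, seed=9):
    """Continuous-time toy: the dispatcher assigns each arriving job to the
    server with the smallest locally held estimate (incrementing it), and
    each server reports its true queue length at exponential intervals of
    rate `update_rate` -- below the per-server arrival rate, so there is
    less than one message per job on average."""
    rng = random.Random(seed)
    q = [0] * n                        # true queue lengths
    est = [0] * n                      # dispatcher's estimates
    events = [(rng.expovariate(lam_total), "arrival", -1)]
    for i in range(n):
        events.append((rng.expovariate(update_rate), "report", i))
    heapq.heapify(events)
    area = last = 0.0
    while events:
        t, kind, i = heapq.heappop(events)
        if t > horizon:
            break
        area += sum(q) * (t - last)
        last = t
        if kind == "arrival":
            j = min(range(n), key=lambda k: (est[k], rng.random()))
            q[j] += 1
            est[j] += 1
            if q[j] == 1:              # server was idle: schedule its departure
                heapq.heappush(events, (t + rng.expovariate(mu), "depart", j))
            heapq.heappush(events, (t + rng.expovariate(lam_total), "arrival", -1))
        elif kind == "depart":
            q[i] -= 1
            if q[i] > 0:
                heapq.heappush(events, (t + rng.expovariate(mu), "depart", i))
        else:                          # occasional report refreshes the estimate
            est[i] = q[i]
            heapq.heappush(events, (t + rng.expovariate(update_rate), "report", i))
    return area / last                 # time-average total number in system

print("average total queue length:", round(sparse_feedback_jsq(), 1))
```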