518 research outputs found

    Fluid Approximation of a Call Center Model with Redials and Reconnects

    In many call centers, callers may call multiple times. Some calls are re-attempts after abandonment (redials), and some are re-attempts after a completed call (reconnects). The combination of redials and reconnects has not been considered in staffing decisions, yet ignoring them inevitably leads to under- or overestimation of call volumes, which results in improper and hence costly staffing decisions. Motivated by this, in this paper we study call centers where customers may abandon, abandoned customers may redial, and a customer who finishes his conversation with an agent may reconnect. We use a fluid model to derive first-order approximations for the number of customers in the redial and reconnect orbits in heavy traffic, and we show that the fluid limit of such a model is the unique solution to a system of three differential equations. Furthermore, we use the fluid limit to calculate the expected total arrival rate, which is then given as input to the Erlang A model for the purpose of calculating service levels and abandonment rates. The performance of this procedure is validated for single intervals as well as multiple intervals with changing parameters.
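    The abstract does not give the three differential equations, but a fluid model of this kind can be sketched as coupled ODEs for the fluid number in system z(t), the redial orbit r(t), and the reconnect orbit w(t). The snippet below is a hypothetical instance (all rates, routing probabilities, and the Euler scheme are illustrative assumptions, not the paper's values):

```python
# Hypothetical fluid model with redial and reconnect orbits.
# z(t): fluid number in system, r(t): redial orbit, w(t): reconnect orbit.
# All parameter values are illustrative assumptions, not the paper's.

def fluid_trajectory(lam=10.0, s=12, mu=1.0, theta=0.5,
                     p_redial=0.5, delta_r=1.0,
                     p_reconnect=0.1, delta_w=0.5,
                     T=200.0, dt=0.01):
    z = r = w = 0.0
    for _ in range(int(T / dt)):
        served = mu * min(z, s)                   # service completion rate
        abandoned = theta * max(z - s, 0.0)       # abandonment rate
        inflow = lam + delta_r * r + delta_w * w  # total arrival rate
        dz = inflow - served - abandoned
        dr = p_redial * abandoned - delta_r * r   # abandoners who redial
        dw = p_reconnect * served - delta_w * w   # served who reconnect
        z += dz * dt; r += dr * dt; w += dw * dt
    return z, r, w, lam + delta_r * r + delta_w * w

z, r, w, total_rate = fluid_trajectory()
```

    With these illustrative parameters the system is underloaded, so abandonment (and hence the redial orbit) vanishes in equilibrium and the total arrival rate converges to lam / (1 - p_reconnect) ≈ 11.1 — the quantity that would then be fed into the Erlang A model.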

    Preemptive Resume Priority Call Center Model with Two Classes of MAP Arrivals

    Generally in call centers, voice calls (say Type 1 calls) are given higher priority than e-mails (say Type 2 calls). An arriving Type 1 call has preemptive priority over a Type 2 call in service, if any, and the preempted Type 2 call enters a retrial buffer of finite capacity. Any arriving call that cannot get into service immediately enters the pool of repeated calls, provided the buffer is not full; otherwise, the call is considered lost. The calls in the retrial pool are treated alike (like Type 1 calls): they compete for service after a random amount of time and can preempt a Type 2 call in service. We assume that the two types of calls arrive according to a Markovian arrival process (MAP) and that services are offered under a preemptive priority rule. Under the assumption that service times are exponentially distributed with possibly different rates, we analyze the model using matrix-analytic methods. Illustrative numerical examples are presented to bring out the qualitative aspects of the model under study.
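    As a concrete illustration of the arrival process: a MAP is specified by two matrices (D0, D1), where D0 governs phase transitions without arrivals, D1 governs transitions that generate an arrival, and D = D0 + D1 is an irreducible CTMC generator. The sketch below (with illustrative matrices, not ones from the paper) checks this structure and computes the fundamental arrival rate λ = πD1·1:

```python
import numpy as np

# Illustrative 2-phase MAP; the D0/D1 values are assumptions for the example.
D0 = np.array([[-3.0,  1.0],
               [ 0.0, -2.0]])  # phase changes without an arrival
D1 = np.array([[ 2.0,  0.0],
               [ 1.0,  1.0]])  # phase changes that trigger an arrival
D = D0 + D1                    # underlying CTMC generator

assert np.allclose(D.sum(axis=1), 0.0)  # generator rows must sum to zero

# Stationary phase distribution: pi D = 0, pi 1 = 1.
A = np.vstack([D.T[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)

lam = pi @ D1.sum(axis=1)  # fundamental arrival rate of the MAP
```

    Setting D0 to a diagonal matrix and D1 to a matrix of matching exponential rates recovers a plain Poisson process, which is a useful sanity check when experimenting with MAP parameters.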

    A Note on an M/M/s Queueing System with two Reconnect and two Redial Orbits

    A queueing system with two reconnect orbits, two redial (retrial) orbits, s servers, and two independent Poisson streams of customers is considered. An arriving customer of type i, i = 1, 2, is handled by an available server, if there is one; otherwise, he waits in an infinite buffer queue. A waiting customer of type i who does not get connected to a server loses patience and abandons after an exponentially distributed amount of time; an abandoning customer may leave the system (lost customer) or move into one of the redial orbits, from which he makes a new attempt to reach the primary queue. When a customer finishes his conversation with a server, he may come back to the system, joining one of the reconnect orbits, where he waits for another service. In this paper, a fluid model is used to derive a first-order approximation for the number of customers in the redial and reconnect orbits in heavy traffic. The fluid limit of such a model is the unique solution to a system of three differential equations.

    Essays on Service Information, Retrials and Global Supply Chain Sourcing

    In many service settings, customers have to join the queue without being fully aware of the parameters of the service provider (e.g., customers at check-out counters may not know the true service rate prior to joining). In such "blind queues", customers typically make their decisions based on limited information about the service provider's operational parameters, drawn from past experiences, reviews, etc. In the first essay, we analyze a firm serving customers who make decisions under arbitrary beliefs about the service parameters. We show that while revealing the service information to customers improves revenues under certain customer beliefs, it may destroy consumer welfare or social welfare. When consumers can self-organize the timing of service visits, they may avoid long queues and choose to retry later. In the second essay, we study an observable queue in which consumers make rational join, balk, and (costly) retry decisions. Retrial attempts can be costly due to factors such as transportation costs, retrial hassle, and visit fees. We characterize the equilibrium under such retrial behavior and study its welfare effects. With the additional option to retry, consumer welfare can worsen compared to the welfare in a system without retrials. Surprisingly, self-interested consumers retry too little (in equilibrium, compared to the socially optimal policy) when the retrial cost is low, and retry too much when the retrial cost is high. We also explore the impact of myopic consumers who may not have the flexibility to retry. In the third essay, we propose a comprehensive model framework for the global sourcing location decision process. For decades, off-shoring of manufacturing to China and other low-cost countries was a no-brainer decision for many U.S. companies. In recent years, however, this trend is being challenged by companies that re-shore manufacturing back to the U.S. or near-shore it to Mexico. Our model framework incorporates perspectives over the entire life cycle of a product, i.e., product design, manufacturing and delivery, and after-sale service support, and we use it to test the validity of various competing theories on global sourcing. We also provide numerical examples to support our findings from the model.
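    The second essay's retrial model cannot be reconstructed from the abstract, but the baseline join/balk logic in an observable queue follows Naor's classic argument: a customer who values service at R and pays waiting cost C per unit time joins an observable M/M/1 queue with service rate mu only if the expected sojourn cost of the n + 1 services ahead of them does not exceed R. A minimal sketch of that equilibrium threshold (illustrative parameters, and no retry option):

```python
import math

def join_threshold(R, C, mu):
    """Equilibrium balking threshold in an observable M/M/1 queue
    (Naor's model, no retrials): a customer finding n in the system
    joins iff (n + 1) * C / mu <= R, i.e. iff n < floor(R * mu / C)."""
    return math.floor(R * mu / C)

def joins(n_in_system, R, C, mu):
    # True if a customer arriving to find n_in_system customers joins.
    return n_in_system < join_threshold(R, C, mu)
```

    For R = 10, C = 1, mu = 1 the threshold is 10: a customer finding 10 or more customers present balks. The essay then layers a costly retry option on top of such join/balk decisions and compares the resulting equilibrium with the social optimum.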

    Resource orchestration strategies with retrials for latency-sensitive network slicing over distributed telco clouds

    New radio technologies (i.e., 5G and beyond) will enable a new generation of innovative services operated by vertical industries (e.g., robotic cloud, autonomous vehicles) with more stringent QoS requirements, especially in terms of end-to-end latency. Other technological changes, such as Network Function Virtualization (NFV) and Software-Defined Networking (SDN), will bring unique service capabilities to networks by enabling flexible network slicing that can be tailored to the needs of vertical services. However, effective orchestration strategies need to be put in place to minimize latency while also maximizing resource utilization, so that telco providers can address vertical requirements and increase their revenue. With this objective, this paper addresses a latency-sensitive orchestration problem by proposing different strategies for the coordinated selection of virtual resources (network, computational, and storage resources) in distributed data centers (DCs) while meeting vertical requirements (e.g., bandwidth demand) for network slicing. Three orchestration strategies are presented to minimize latency or blocking probability through effective resource utilization. To further reduce slice request blocking, the orchestration strategies also encompass a retrial mechanism applied to rejected slice requests. Regarding latency, two components are considered, namely processing latency and network latency. An extensive set of simulations was carried out over a wide and composite telco cloud infrastructure in which different types of data centers coexist, characterized by different network locations, sizes, and processing capacities. The results compare the behavior of the strategies in addressing latency minimization and service request fulfillment, also considering the impact of the retrial mechanism. This work was supported in part by the Department of Excellence in Robotics and Artificial Intelligence by Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) to Scuola Superiore Sant'Anna, and in part by the Project 5GROWTH under Agreement 856709.
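    The effect of such a retrial mechanism can be illustrated with a toy loss-system simulation (a sketch, not the paper's orchestration model): requests that find all capacity busy re-attempt after a random back-off, up to a maximum number of retries, and only requests that exhaust their retries count as blocked. All parameters below are illustrative assumptions:

```python
import heapq
import random

def simulate(lam, mu, servers, retry_rate, max_retries, n_requests, seed=1):
    """Estimate the final blocking probability of an M/M/c/c loss system
    in which rejected requests retry after an exponential back-off."""
    rng = random.Random(seed)
    events = []  # (time, seq, kind, retries_left); seq breaks ties
    t, seq = 0.0, 0
    for _ in range(n_requests):           # pre-generate primary arrivals
        t += rng.expovariate(lam)
        heapq.heappush(events, (t, seq, "arr", max_retries)); seq += 1
    busy = blocked = 0
    while events:
        t, _, kind, retries = heapq.heappop(events)
        if kind == "dep":
            busy -= 1                     # a service completes
        elif busy < servers:
            busy += 1                     # accept: schedule its departure
            heapq.heappush(events, (t + rng.expovariate(mu), seq, "dep", 0))
            seq += 1
        elif retries > 0:                 # rejected: retry after back-off
            heapq.heappush(events,
                           (t + rng.expovariate(retry_rate), seq, "arr",
                            retries - 1))
            seq += 1
        else:
            blocked += 1                  # retries exhausted: finally blocked
    return blocked / n_requests

p_no_retry = simulate(2.0, 1.0, 1, 1.0, 0, 5000)
p_retries = simulate(2.0, 1.0, 1, 1.0, 3, 5000)
```

    With no retries this reduces to an Erlang loss system; for one server and offered load 2 the Erlang B blocking probability is 2/3, which the simulation recovers. Retries change which attempts are finally blocked rather than simply lowering per-attempt blocking, which is why the paper evaluates the mechanism jointly with resource utilization.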

    Single server retrial queues with speed scaling: analysis and performance evaluation

    Recently, queues with speed scaling have received considerable attention due to their applicability to data centers, enabling a better balance between performance and energy consumption. This paper proposes a new model in which blocked customers must leave the service area and retry after a random time, with a retrial rate that either varies proportionally to the number of retrying customers (linear retrial rate) or does not vary (constant retrial rate). For both, we first study a basic case and subsequently incorporate the concepts of a setup time and a deactivation time in extended versions of the model. In all cases, we obtain a full characterization of the stationary queue length distribution. This allows us to evaluate performance in terms of the aforementioned balance between performance and energy, using an existing cost function as well as a newly proposed variant thereof. This paper presents the derivation of the stationary distribution as well as several numerical examples of the cost-based performance evaluation.
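    The paper's characterization is analytical; as a numerical cross-check, one can truncate the orbit and solve the stationary distribution of, say, the linear-retrial-rate case directly. The sketch below uses assumed parameters and omits the speed-scaling, setup, and deactivation features; a known sanity check is that the server-busy probability must equal λ/μ, since every accepted customer is eventually served exactly once:

```python
import numpy as np

def retrial_mm1(lam, mu, nu, N=60):
    """Stationary distribution of an M/M/1 retrial queue with linear
    retrial rate n * nu, orbit truncated at N. State (n, b): n customers
    in orbit, b = 1 if the server is busy."""
    size = 2 * (N + 1)
    idx = lambda n, b: 2 * n + b
    Q = np.zeros((size, size))
    for n in range(N + 1):
        Q[idx(n, 0), idx(n, 1)] += lam          # arrival finds server idle
        if n < N:
            Q[idx(n, 1), idx(n + 1, 1)] += lam  # arrival finds server busy
        Q[idx(n, 1), idx(n, 0)] += mu           # service completion
        if n > 0:
            Q[idx(n, 0), idx(n - 1, 1)] += n * nu  # successful retrial
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Solve pi Q = 0 with the normalization pi 1 = 1.
    A = np.vstack([Q.T, np.ones(size)])
    b = np.zeros(size + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    p_busy = pi[1::2].sum()
    return pi, p_busy

pi, p_busy = retrial_mm1(lam=0.5, mu=1.0, nu=0.5)
```

    Replacing the rate n * nu with a fixed rate nu (active only when n > 0) gives the constant-retrial-rate variant on the same state space.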

    Engineering Solution of a Basic Call-Center Model
