    A method for estimation of redial and reconnect probabilities in call centers

    In practice, many call center forecasters use the total inbound volume to make forecasts. In reality, besides the fresh calls (initial call attempts), there are many redials (re-attempts after abandonments) and reconnects (re-attempts after answered calls) in call centers. Neglecting redials and reconnects inevitably leads to inaccurate forecasts, which in turn lead to inaccurate staffing decisions. However, most call center data sets do not contain customer-identity information, which makes it difficult to identify how many calls are fresh. Motivated by this, the goal of this paper is to estimate the number of fresh calls and the redial and reconnect probabilities. To this end, we propose a model that estimates these three variables. We formulate our estimation model as a minimization problem, in which the actual redial and reconnect probabilities lead to the minimum objective value. We validate our estimation results on real call center data and simulated data.
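    A minimal sketch of how such a minimization could look in practice (an illustrative formulation under assumed inputs, not necessarily the paper's exact objective): given per-interval counts of total inbound, abandoned, and answered calls, a grid search over candidate redial and reconnect probabilities can select the pair whose implied fresh-call series follows the most stable weekly pattern.

```python
import numpy as np

def estimate_redial_reconnect(total, abandoned, answered, periods_per_week):
    """Grid-search sketch. For each candidate (p_redial, p_reconnect) the
    implied fresh-call series is
        fresh_t = total_t - p_redial * abandoned_t - p_reconnect * answered_t,
    and the objective is the squared deviation of fresh_t from its average
    weekly profile (fresh calls are assumed to follow a stable weekly pattern).
    Inputs are equal-length arrays whose length is a multiple of periods_per_week."""
    best = (None, None, np.inf)
    for p_r in np.linspace(0.0, 1.0, 101):        # candidate redial probability
        for p_c in np.linspace(0.0, 1.0, 101):    # candidate reconnect probability
            fresh = total - p_r * abandoned - p_c * answered
            if np.any(fresh < 0):                 # infeasible: more re-attempts than calls
                continue
            weeks = fresh.reshape(-1, periods_per_week)
            profile = weeks.mean(axis=0)          # average weekly fresh-call pattern
            obj = np.mean((weeks - profile) ** 2)
            if obj < best[2]:
                best = (p_r, p_c, obj)
    return best  # (estimated redial prob., reconnect prob., objective value)
```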

    Predictive Analytics to Increase Roster Robustness in an Inbound Call Center

    Staff rostering is a crucial task in inbound call centers, as personnel costs usually account for the largest share of operating costs. Uncertainty of capacity, such as the presence of agents, is often disregarded during rostering. This paper addresses this uncertainty by using predictive analytics to predict agent absences and thus increase roster robustness. Four years of operational data from a call center serve as the basis for our use case. Predictors include characteristics of the service agents, such as attendance history and regular working hours, as well as other factors such as the weekday. Of the prediction algorithms tested, decision trees outperform the other predictive modeling approaches. Evaluation based on an expected value framework shows that the predictive analytics approach performs best compared to the planned, unchanged roster and a general staff surcharge of 10%.
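    A minimal sketch of the kind of absence classifier this describes (the file name, column names, and tree depth are illustrative assumptions, not taken from the paper):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical roster history: one row per agent and planned shift.
df = pd.read_csv("roster_history.csv")  # assumed file
features = ["attendance_rate_90d", "contracted_hours", "weekday", "shift_length"]
X = pd.get_dummies(df[features], columns=["weekday"])
y = df["absent"]  # 1 if the agent did not show up for the planned shift

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# A shallow tree keeps the learned absence rules interpretable for planners.
model = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

    Predicted absence probabilities could then feed an expected-value comparison against a fixed staff surcharge, along the lines of the evaluation described above.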

    Feature selection strategies for improving data-driven decision support in bank telemarketing

    The usage of data mining techniques to unveil previously undiscovered knowledge has been applied in recent years to a wide range of domains, including banking and marketing. Raw data is the basic ingredient for successfully detecting interesting patterns. A key aspect of raw data manipulation is feature engineering, which concerns the correct characterization or selection of relevant features (or variables) that conceal relations with the target goal. This study focuses on feature engineering, aiming to unfold the features that best characterize the problem of selling long-term bank deposits through telemarketing campaigns. For the experimental setup, a case study from a Portuguese bank was addressed, covering the 2008-2013 period and encompassing the recent global financial crisis. To assess the relevance of this problem, a novel literature analysis using text mining and the latent Dirichlet allocation algorithm was conducted, confirming the existence of a research gap for bank telemarketing. Starting from a dataset containing typical telemarketing contacts and client information, the research followed three different and complementary strategies: first, enriching the dataset with social and economic context features; then, including customer lifetime value related features; finally, applying a divide-and-conquer strategy that splits the problem into smaller fractions, leading to optimized sub-problems. Each of the three approaches improved previous results in terms of model metrics related to prediction performance. The relevance of the proposed features was evaluated, confirming the obtained models as credible and valuable for telemarketing campaign managers.
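    A minimal sketch of how the relevance of such enriched features could be assessed (the file name, column names, and model choice are illustrative assumptions): fit a classifier on client and socio-economic context features, then rank the features by permutation importance.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical telemarketing contacts enriched with socio-economic context.
df = pd.read_csv("bank_telemarketing.csv")  # assumed file and columns
context = ["euribor_3m", "consumer_confidence", "unemployment_rate"]
client = ["age", "balance", "previous_contacts", "customer_lifetime_value"]
X = df[context + client]
y = df["subscribed"]  # 1 if the client subscribed a long-term deposit

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

# Rank the enriched features by how much shuffling each one degrades AUC.
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=10, random_state=1)
print(pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False))
```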

    Qualitative Strategy for Inbound Call Center Outsourcing

    An analysis of the various challenges of the call center industry, together with the challenges of outsourcing, revealed a need for a strategy that acts as a guide for organizations that are willing to outsource their call center operations. This research therefore develops a strategy for this purpose. The research first provides mitigation strategies for the challenges of outsourcing and the challenges of the call center industry, followed by a strategy for the outsourcing of call center services. Telephone call centers are an integral part of today's business world, serving as a primary channel for customer contact for organizations in many industries. Globalization, advancements in the telecommunication and technology industries, and the availability of cost-effective workforces around the world are compelling organizations to outsource their functions (call center services) to reap the benefits that come with outsourcing. Organizations outsource functions, especially functions that are not their core competence, for a multitude of reasons. These reasons may include cost savings, quality enhancement/improvement, reduced time to market, tax benefits, and risk management. Outsourcing also comes with its share of issues. A few examples of the challenges involved in outsourcing include cultural differences, knowledge transfer to suppliers while protecting intellectual property (IP), knowledge retention, language barriers, ethics, norms of behavior, distance and time zones, infrastructure, privacy and security, skill set/quality, objectivity, geopolitical climate, labor backlash, communication, end-user resistance, and governance. There are also many challenges associated with the call center industry, such as, but not limited to, deploying the right number of staff members with the right skills to the right schedules in order to meet an uncertain and time-varying demand for service, forecasting traffic, acquiring capacity, deploying resources, and managing service delivery. Therefore, despite the advancements in telecommunications and information technology, the challenges faced by client organizations that outsource their inbound call center services abound. When choosing outsourcing/offshoring as their strategy, an organization can avoid many of the disadvantages that arise due to these risks/issues by adopting a proactive and careful approach, such as the strategy developed in this research.

    Platelet inventory management in blood supply chain under demand and supply uncertainty

    Supply chain management of blood and its products is of paramount importance in medical treatment due to its perishable nature, uncertain demand, and lack of auxiliary substitutes. For example, red blood cells (RBCs) have a life span of approximately 40 days, whereas platelets have a shelf life of up to five days after extraction from the human body. According to the World Health Organization, approximately 112 million blood units are collected worldwide annually. However, nearly 20 percent of units in developed nations are discarded because they expire before final use. A similar trend is noticed in developing countries as well. On the other hand, blood shortages could lead to the cancellation of elective surgeries. Therefore, managing blood distribution and developing efficient blood inventory management is considered a critical issue in the supply chain domain. A standard blood supply chain (BSC) moves blood products (red blood cells, white blood cells, and platelets) from initial collection to final patients across several echelons. The first step comprises the donation of blood by donors at donation or mobile centers. The donation sites transport the blood units to blood centers, where several tests for infections are carried out. The blood centers then either store the whole blood units or segregate them into their individual products. Finally, they are distributed to the healthcare facilities when required. In this dissertation, an efficient forecasting model is developed to forecast the supply of blood. We leverage five years' worth of historical blood supply data from the Taiwan Blood Services Foundation (TBSF) to conduct our forecasting study. With supply and demand distributions generated from the historical supply and demand data as inputs, a single-objective stochastic model is developed to determine the number of platelet units to order and the time between orders at the hospitals. To reduce platelet shortage and outdating, a collaborative network between the blood centers and hospitals is proposed; the model is extended to determine the optimal ordering policy for a divergent network consisting of multiple blood centers and hospitals. It has been shown that a collaborative system of blood centers and hospitals is better than a decentralized system in which each hospital is supplied with blood only by its corresponding blood center. Furthermore, a mathematical model based on multi-criteria decision-making (MCDM) techniques is proposed, in which different conflicting objective functions are satisfied to generate an efficient and satisfactory solution for a blood supply chain comprising two hospitals and one blood center. This study also conducted a sensitivity analysis to examine the impacts of the coefficients of demand and supply variation and the settings of cost parameters on the average total cost and the performance measures (units of shortage, outdated units, inventory holding units, and purchased units) for both the blood center and the hospitals. The proposed models can also be applied to determine ordering policies for other perishable-product supply chains, such as perishable food or drug supply chains.
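    A minimal sketch of the single-hospital ordering problem described above (the order-up-to policy, Poisson demand, zero lead time, and parameter values are illustrative assumptions, not the dissertation's exact model): simulate a hospital that reviews platelet inventory every T days, orders up to level S, issues the oldest units first, and discards units after five days, recording shortage and outdating.

```python
import numpy as np

def simulate_platelets(order_up_to, review_period, mean_demand=8.0,
                       days=365, shelf_life=5, seed=0):
    """Periodic-review (T, S) platelet policy with FIFO issuing and Poisson
    daily demand. Returns (average daily shortage, average daily outdating)."""
    rng = np.random.default_rng(seed)
    inv = np.zeros(shelf_life, dtype=int)   # inv[0] = oldest units, inv[-1] = freshest
    shortage = outdated = 0
    for day in range(days):
        if day % review_period == 0:                    # review epoch: order up to S
            inv[-1] += max(order_up_to - inv.sum(), 0)  # fresh units arrive (zero lead time)
        demand = rng.poisson(mean_demand)
        for r in range(shelf_life):                     # issue oldest units first (FIFO)
            used = min(inv[r], demand)
            inv[r] -= used
            demand -= used
        shortage += demand                              # unmet demand today
        outdated += inv[0]                              # oldest remaining units expire tonight
        inv = np.append(inv[1:], 0)                     # surviving units age by one day
    return shortage / days, outdated / days

# Example: compare two order-up-to levels under a two-day review period.
for S in (20, 30):
    print(S, simulate_platelets(order_up_to=S, review_period=2))
```

    Sweeping S and T in this way is one simple route to the shortage/outdating trade-off that the stochastic model optimizes more formally.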

    Workshop sensing a changing world: proceedings workshop, November 19-21, 2008

    An energy-efficient adaptive sampling scheme for wireless sensor networks

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy that reduces the number of sampling nodes and/or the sampling frequency while maintaining high data quality. The majority of existing adaptive sampling approaches change their sampling frequency upon detecting (significant) changes in measurements. There are, however, applications that can tolerate (significant) changes in measurements as long as the measurements fall within a specific range. Using existing adaptive sampling approaches for these applications is not energy-efficient. Targeting this type of application, in this paper we propose an energy-efficient adaptive sampling technique that ensures a certain level of data quality. We compare our proposed technique with two existing adaptive sampling approaches in a simulation environment and show its superiority in terms of energy efficiency and data quality.
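    A minimal sketch of a range-based adaptive sampling loop in the spirit of the idea above (the sensor interface, interval bounds, and back-off factor are illustrative assumptions, not the paper's algorithm): the node lengthens its sleep interval while readings stay inside the application's tolerated range and reverts to fast sampling as soon as a reading leaves it.

```python
import random
import time

def read_sensor():
    """Stand-in for a real sensor driver (assumed interface)."""
    return 22.0 + random.gauss(0, 0.5)

def report(value):
    """Stand-in for transmitting an out-of-range reading to the sink."""
    print("out-of-range reading:", value)

def adaptive_sampling(low, high, steps=100,
                      base_interval=1.0, max_interval=60.0, backoff=2.0):
    """While readings fall within [low, high], the sampling interval grows
    geometrically (saving energy on sensing and radio use); once a reading
    leaves the range, the node returns to the base interval so the change
    is tracked at full resolution."""
    interval = base_interval
    for _ in range(steps):
        value = read_sensor()
        if low <= value <= high:
            interval = min(interval * backoff, max_interval)  # slow down inside the range
        else:
            interval = base_interval                          # speed back up
            report(value)
        time.sleep(interval)

# Short demo run with small intervals so it finishes quickly.
adaptive_sampling(low=20.0, high=24.0, steps=10, base_interval=0.1, max_interval=1.0)
```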