Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective
A call center is a service network in which agents provide telephone-based services. Customers that seek these services are delayed in tele-queues. This paper summarizes an analysis of a unique record of call center operations. The data comprise a complete operational history of a small banking call center, call by call, over a full year. Taking the perspective of queueing theory, we decompose the service process into three fundamental components: arrivals, customer abandonment behavior and service durations. Each component involves different basic mathematical structures and requires a different style of statistical analysis. Some of the key empirical results are sketched, along with descriptions of the varied techniques required. Several statistical techniques are developed for analysis of the basic components. One of these is a test that a point process is a Poisson process. Another involves estimation of the mean function in a nonparametric regression with lognormal errors. A new graphical technique is introduced for nonparametric hazard rate estimation with censored data. Models are developed and implemented for forecasting of Poisson arrival rates. We then survey how the characteristics deduced from the statistical analyses form the building blocks for theoretically interesting and practically useful mathematical models for call center operations. Key Words: call centers, queueing theory, lognormal distribution, inhomogeneous Poisson process, censored data, human patience, prediction of Poisson rates, Khintchine-Pollaczek formula, service times, arrival rate, abandonment rate, multiserver queues.
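One of the paper's building blocks is a test that a point process is Poisson. As a loose, minimal sketch of the underlying idea (not the paper's actual test), one can check whether interarrival times look exponential using a Kolmogorov-Smirnov distance on synthetic data; note that when the rate is estimated from the sample, the standard KS critical values no longer apply (a Lilliefors-type correction would be needed).

```python
import random
import math

def ks_statistic_exponential(interarrivals):
    """KS distance between the empirical CDF of interarrival times
    and an exponential CDF fitted with the sample-mean rate."""
    n = len(interarrivals)
    rate = n / sum(interarrivals)
    xs = sorted(interarrivals)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-rate * x)
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

random.seed(0)
# Poisson-like arrivals have exponential gaps; scheduled arrivals do not.
poisson_like = [random.expovariate(1.0) for _ in range(2000)]
regular_like = [1.0 + 0.01 * random.random() for _ in range(2000)]

d_pois = ks_statistic_exponential(poisson_like)
d_reg = ks_statistic_exponential(regular_like)
print(d_pois < d_reg)  # True: exponential gaps fit far better
```

In practice the paper works with an inhomogeneous Poisson process, so such a check would be applied after transforming time to make the rate locally constant.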
Data-Driven Robust Optimization
The last decade witnessed an explosion in the availability of data for
operations research applications. Motivated by this growing availability, we
propose a novel schema for utilizing data to design uncertainty sets for robust
optimization using statistical hypothesis tests. The approach is flexible and
widely applicable, and robust optimization problems built from our new sets are
computationally tractable, both theoretically and practically. Furthermore,
optimal solutions to these problems enjoy a strong, finite-sample probabilistic
guarantee. We describe concrete procedures for choosing an appropriate
set for a given application and for applying our approach to multiple uncertain
constraints. Computational evidence in portfolio management and queueing confirms
that our data-driven sets significantly outperform traditional robust
optimization techniques whenever data is available.
Comment: 38 pages, 15 page appendix, 7 figures. This version updated as of Oct. 201
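The core idea, building an uncertainty set from data and optimizing against its worst case, can be sketched in a toy form. The example below is an illustrative stand-in, not the paper's construction: it uses a per-coordinate confidence-interval box (the function names and the toy data are hypothetical) and solves the trivial long-only, single-asset robust portfolio, where the optimum simply takes the best lower bound.

```python
import statistics

def box_uncertainty_set(samples, z=1.96):
    """Per-coordinate box set from a normal-approximation confidence
    interval (a simple stand-in for hypothesis-test-based sets).
    Returns center and half-width per coordinate."""
    n = len(samples)
    means = [statistics.mean(col) for col in zip(*samples)]
    half = [z * statistics.stdev(col) / n ** 0.5 for col in zip(*samples)]
    return means, half

def robust_single_asset(means, half):
    """Maximize worst-case return over the box: with long-only weights
    summing to 1, the optimum puts all weight on the best lower bound."""
    lower = [m - h for m, h in zip(means, half)]
    return max(range(len(lower)), key=lambda i: lower[i])

# toy returns: asset 0 has the higher mean but a much wider spread
samples = [[0.12, 0.02], [-0.06, 0.021], [0.14, 0.019], [-0.04, 0.02]]
means, half = box_uncertainty_set(samples)
print(robust_single_asset(means, half))  # 1: the robust choice
```

The nominal (argmax-of-means) choice here is asset 0; the robust choice flips to asset 1 because its lower confidence bound is far better, which is the qualitative behavior the paper's sets deliver with rigorous finite-sample guarantees.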
CERN: Confidence-Energy Recurrent Network for Group Activity Recognition
This work is about recognizing human activities occurring in videos at
distinct semantic levels, including individual actions, interactions, and group
activities. The recognition is realized using a two-level hierarchy of Long
Short-Term Memory (LSTM) networks, forming a feed-forward deep architecture,
which can be trained end-to-end. In comparison with existing architectures of
LSTMs, we make two key contributions, which give our approach its name,
Confidence-Energy Recurrent Network (CERN). First, instead of using the common
softmax layer for prediction, we specify a novel energy layer (EL) for
estimating the energy of our predictions. Second, rather than finding the
common minimum-energy class assignment, which may be numerically unstable under
uncertainty, we specify that the EL additionally computes the p-values of the
solutions, and in this way estimates the most confident energy minimum. The
evaluation on the Collective Activity and Volleyball datasets demonstrates: (i)
advantages of our two contributions relative to the common softmax and
energy-minimization formulations and (ii) a superior performance relative to
the state-of-the-art approaches.
Comment: Accepted to IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
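The confidence idea can be illustrated very loosely: rather than taking the raw energy minimum, judge each candidate's energy against how extreme it is for that candidate. The sketch below is a toy illustration with hypothetical names and made-up reference samples, not the paper's formulation, which defines p-values within the energy layer of the network.

```python
def empirical_p_value(energy, reference):
    """Fraction of reference energies at or below this one; a tiny
    value marks an unusually low, i.e. confident, energy."""
    return sum(1 for r in reference if r <= energy) / len(reference)

def most_confident_minimum(class_energies, references):
    """Pick the class whose energy is most extreme relative to that
    class's own reference sample, rather than the raw argmin."""
    return min(range(len(class_energies)),
               key=lambda i: empirical_p_value(class_energies[i], references[i]))

# class 0's energy 0.8 is routine for it; class 1's energy 0.9 is
# exceptionally low for a class that usually scores 1.5 to 2.0
energies = [0.8, 0.9]
references = [[0.6, 0.7, 0.75, 0.8, 0.85, 0.9],
              [1.5, 1.6, 1.7, 1.8, 1.9, 2.0]]
print(most_confident_minimum(energies, references))  # 1, not the raw argmin 0
```

The point of the toy example is that the raw minimum-energy assignment (class 0) and the most confident one (class 1) can disagree, which is exactly the instability the paper's p-value mechanism is designed to address.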
Conformance checking and performance improvement in scheduled processes: A queueing-network perspective
Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root-causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root-causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
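The two performance measures the abstract targets, tardiness and flow time, are straightforward to compute from an event log. The following minimal sketch (with a hypothetical log layout, not the paper's RTLS format) shows the definitions: tardiness is lateness past the scheduled end, clipped at zero, and flow time is the elapsed time in the process.

```python
from statistics import median

def schedule_metrics(log):
    """log: list of (scheduled_end, actual_start, actual_end) tuples,
    one per activity instance. Returns median tardiness and median
    flow time."""
    tardiness = [max(0.0, end - sched) for sched, start, end in log]
    flow = [end - start for sched, start, end in log]
    return median(tardiness), median(flow)

# toy event log in minutes
log = [
    (10.0, 0.0, 12.0),   # finished 2 min late
    (20.0, 5.0, 19.0),   # on time: tardiness clipped to 0
    (30.0, 11.0, 36.0),  # finished 6 min late
]
print(schedule_metrics(log))  # (2.0, 14.0)
```

Medians are used here, matching the abstract's reporting of median tardiness and flow time, since event-log duration data is typically heavy-tailed.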
Empirical assessment of VoIP overload detection tests
The control of communication networks critically relies on procedures capable of detecting unanticipated load changes. In this paper we explore such techniques, in a setting in which each connection consumes roughly the same amount of bandwidth (with VoIP as a leading example). We focus on large-deviations-based techniques, developed in earlier work, that monitor the number of connections present and issue an alarm when this number abruptly changes. The proposed procedures are demonstrated using real traces from an operational environment. Our experiments show that our detection procedure is capable of adequately identifying load changes.
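A much simpler change detector on the monitored connection count conveys the flavor of such schemes. The sketch below is a one-sided CUSUM rule, a generic stand-in rather than the paper's large-deviations test, applied to a hypothetical trace of active-connection counts.

```python
def cusum_alarm(counts, baseline, drift=0.5, threshold=8.0):
    """One-sided CUSUM on the number of active connections: accumulate
    excursions above baseline + drift and raise an alarm whenever the
    running sum crosses the threshold (then reset)."""
    s, alarms = 0.0, []
    for t, c in enumerate(counts):
        s = max(0.0, s + (c - baseline - drift))
        if s > threshold:
            alarms.append(t)
            s = 0.0
    return alarms

# steady load of ~100 connections, then a sustained jump to ~105
counts = [100, 101, 99, 100, 100, 105, 106, 105, 107, 106, 105]
print(cusum_alarm(counts, baseline=100.0))  # [6, 8, 10]
```

The drift term makes the detector insensitive to small fluctuations around the baseline, while a sustained shift accumulates quickly; tuning drift and threshold trades detection delay against false alarms, the same trade-off the paper's tests address with large-deviations asymptotics.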
Modelling and simulation of an unreliable E2/E2/1/m queueing system
This paper is devoted to modelling and simulation of an E2/E2/1/m queueing system with a server subject to breakdowns. The paper introduces a mathematical model of the studied system and a simulation model created using the CPN Tools software, which is intended for modelling and simulation of coloured Petri nets. At the end of the paper, the outcomes obtained by both approaches are statistically evaluated.
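An E2/E2/1/m queue (Erlang-2 interarrival and service times, one server, system capacity m) can be simulated in a few lines without any Petri-net tooling. The sketch below is a minimal event-driven version that omits the server breakdowns studied in the paper and reports only the fraction of arrivals lost to a full system; the parameter values are illustrative.

```python
import random

def erlang2(rate):
    """Erlang-2 sample with overall rate `rate` (two exponential phases)."""
    return random.expovariate(2 * rate) + random.expovariate(2 * rate)

def simulate_e2_e2_1_m(lam, mu, m, n_arrivals, seed=1):
    """Event-driven E2/E2/1/m queue, server breakdowns omitted for
    brevity: returns the fraction of arrivals lost to a full system."""
    random.seed(seed)
    t, server_free_at, lost = 0.0, 0.0, 0
    departures = []  # departure times of customers currently in system
    for _ in range(n_arrivals):
        t += erlang2(lam)
        departures = [d for d in departures if d > t]
        if len(departures) >= m:       # system full: arrival is lost
            lost += 1
            continue
        start = max(t, server_free_at)  # FIFO single server
        server_free_at = start + erlang2(mu)
        departures.append(server_free_at)
    return lost / n_arrivals

print(simulate_e2_e2_1_m(lam=0.9, mu=1.0, m=5, n_arrivals=20000))
```

As a sanity check, raising the arrival rate above the service rate should sharply increase the loss fraction, and the Erlang-2 streams (less variable than exponential) should lose fewer customers than the corresponding M/M/1/m queue.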