Application-Based Statistical Approach for Identifying Appropriate Queuing Model
Queuing theory is the mathematical study of queues, or waiting lines. It is used to model many systems, simple and complex, across different fields. The key aim of a queuing-theoretic mathematical model is to improve the performance and productivity of an application. Queuing models are constructed to compute performance measures for applications and to predict waiting times and queue lengths. This thesis builds on previous papers that apply queuing theory to various applications; it analyzes the behavior of these applications and shows how to calculate the full set of queuing statistics determined by measures of variability (mean, variance, and coefficient of variation) for a variety of queuing systems, in order to identify the appropriate queuing model. Computer simulation is an easy and powerful tool for approximately estimating the proper queuing model and evaluating the performance measures of an application. This thesis presents a new simulation model for identifying the appropriate queuing models for applications and the parameters that affect their performance measures. It takes the mean, variance, and coefficient of variation of a real application, compares them to the corresponding characteristics of each queuing model, and, based on that comparison, approximately identifies the appropriate queuing model. The simulation model measures the performance of queuing models A/B/1, where A is the inter-arrival time distribution and B is the service time distribution, each of type Exponential, Erlang, Deterministic, or Hyper-exponential. The performance measures of a queuing model are:
*L: the expected number of customers in the system.
*Lq: the expected number of customers in the queue.
*W: the expected time a customer spends in the system.
*Wq: the expected time a customer spends in the queue.
*U: the server utilization.
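For the simplest such system, M/M/1 (exponential inter-arrival and service times, one server), all five measures have closed forms in the arrival rate and service rate. A minimal sketch, with illustrative parameter names `lam` and `mu` (not from the thesis):

```python
# Closed-form performance measures for an M/M/1 queue.
# lam = arrival rate, mu = service rate (illustrative names).
def mm1_measures(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable unless lam < mu")
    rho = lam / mu             # U: server utilization
    return {
        "L":  rho / (1 - rho),       # expected number in the system
        "Lq": rho**2 / (1 - rho),    # expected number in the queue
        "W":  1.0 / (mu - lam),      # expected time in the system
        "Wq": rho / (mu - lam),      # expected time in the queue
        "U":  rho,
    }

m = mm1_measures(lam=2.0, mu=5.0)
```

Note that the results satisfy Little's law, L = lam * W, which is a quick sanity check for any such implementation.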
Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective
A call center is a service network in which agents provide telephone-based services. Customers that seek these services are delayed in tele-queues. This paper summarizes an analysis of a unique record of call center operations. The data comprise a complete operational history of a small banking call center, call by call, over a full year. Taking the perspective of queueing theory, we decompose the service process into three fundamental components: arrivals, customer abandonment behavior and service durations. Each component involves different basic mathematical structures and requires a different style of statistical analysis. Some of the key empirical results are sketched, along with descriptions of the varied techniques required. Several statistical techniques are developed for analysis of the basic components. One of these is a test that a point process is a Poisson process. Another involves estimation of the mean function in a nonparametric regression with lognormal errors. A new graphical technique is introduced for nonparametric hazard rate estimation with censored data. Models are developed and implemented for forecasting of Poisson arrival rates. We then survey how the characteristics deduced from the statistical analyses form the building blocks for theoretically interesting and practically useful mathematical models for call center operations. Key Words: call centers, queueing theory, lognormal distribution, inhomogeneous Poisson process, censored data, human patience, prediction of Poisson rates, Khintchine-Pollaczek formula, service times, arrival rate, abandonment rate, multiserver queues.
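One classical device behind tests of the Poisson property is that, conditional on n arrivals in [0, T], a homogeneous Poisson process has arrival times distributed as i.i.d. Uniform(0, T), so a Kolmogorov-Smirnov distance to the uniform CDF can flag non-Poisson behavior. A sketch under that idea (the function, rate, and critical-value approximation here are illustrative, not the paper's exact test):

```python
import random
import math

def ks_uniform_statistic(arrival_times, T):
    """KS distance between the empirical CDF of arrival times and Uniform(0, T)."""
    xs = sorted(t / T for t in arrival_times)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d

# Simulate a homogeneous Poisson process with rate 2 on [0, 100].
random.seed(0)
t, times = 0.0, []
while True:
    t += random.expovariate(2.0)
    if t > 100.0:
        break
    times.append(t)

d = ks_uniform_statistic(times, 100.0)
# 1.36 / sqrt(n) approximates the 5% critical value for large n.
is_plausibly_poisson = d < 1.36 / math.sqrt(len(times))
```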
Scheduling, Characterization and Prediction of HPC Workloads for Distributed Computing Environments
As High Performance Computing (HPC) has grown considerably and is expected to grow even more, effective resource management for distributed computing systems is motivated more than ever. As computational workloads grow in quantity, it becomes more crucial to apply efficient resource management and workload scheduling, using resources efficiently while keeping computational performance reasonably good. The problem of efficiently scheduling workloads on resources while meeting performance standards is hard. Additionally, non-clairvoyance of job dimensions makes resource management even harder in real-world scenarios. Our research methodology investigates the scheduling problem for HPC and the challenges of deploying such scheduling in real-world scenarios using state-of-the-art machine learning and data science techniques. To this end, this Ph.D. dissertation makes the following core contributions: a) We perform a theoretical analysis of space-sharing, non-preemptive scheduling: we studied this scheduling problem and proposed scheduling algorithms with polynomial computation time, and we proved constant upper bounds on the performance of these algorithms. b) We studied the sensitivity of scheduling algorithms to the accuracy of runtime estimates and devised a meta-learning approach to estimate prediction accuracy for newly submitted jobs in an HPC system. c) We studied the runtime prediction problem for HPC applications. For this purpose, we studied the distribution of available public workloads and proposed two different solutions that can predict multi-modal distributions: switching state-space models and Mixture Density Networks. d) We studied the effectiveness of recent recurrent neural network models for CPU usage trace prediction, for individual VM traces as well as aggregate CPU usage traces.
In this dissertation, we explore solutions to improve the performance of scheduling workloads on distributed systems. We begin by looking at the problem from a theoretical perspective. Modeling the problem mathematically, we first propose a scheduling algorithm that finds a constant-factor approximation of the optimal solution in polynomial time. We prove that the performance of the algorithm (average completion time) is a constant-factor approximation of the performance of the optimal schedule. We next look at the problem in real-world scenarios. Considering High-Performance Computing (HPC) workload computing environments as the closest real-world equivalent of our mathematical model, we explore the problem of predicting application runtime. We propose an algorithm to handle the uncertainties present in the real world and demonstrate its effectiveness in terms of response time and resource utilization. After addressing the uncertainty problem, we focus on improving the accuracy of existing prediction approaches for HPC application runtime. We propose two solutions, one based on Kalman filters and one based on Mixture Density Networks. We showcase the effectiveness of our prediction approaches by comparing them with previous approaches in terms of prediction accuracy and impact on scheduling performance. Finally, we focus on predicting resource usage for individual applications during their execution, exploring the application of recurrent neural networks for predicting the resource usage of applications deployed on individual virtual machines. To validate our proposed models and solutions, we performed extensive trace-driven simulation and measured the effectiveness of our approaches.
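The average-completion-time objective above can be made concrete with a tiny single-machine example. This is an illustrative sketch, not the dissertation's space-sharing algorithm: non-preemptive Shortest-Job-First, which minimizes average completion time on one machine when runtimes are known:

```python
# Illustrative sketch (not the dissertation's algorithm): schedule jobs
# shortest-first on a single machine and compute average completion time,
# the metric the approximation guarantees above are stated in.
def average_completion_time(runtimes):
    t, total = 0.0, 0.0
    for r in sorted(runtimes):   # shortest job first
        t += r                   # this job completes at the current time + r
        total += t
    return total / len(runtimes)

# Jobs of length 3, 1, 2 run in order 1, 2, 3 -> completion times 1, 3, 6.
act = average_completion_time([3, 1, 2])
```

Running the same jobs in the order 3, 1, 2 gives completions 3, 4, 6 and a worse average, which illustrates why runtime estimates (and their accuracy) matter to the scheduler.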
Malware in the Future? Forecasting of Analyst Detection of Cyber Events
There have been extensive efforts in government, academia, and industry to
anticipate, forecast, and mitigate cyber attacks. A common approach is
time-series forecasting of cyber attacks based on data from network telescopes,
honeypots, and automated intrusion detection/prevention systems. This research
has uncovered key insights such as systematicity in cyber attacks. Here, we
propose an alternate perspective of this problem by performing forecasting of
attacks that are analyst-detected and -verified occurrences of malware. We call
these instances of malware cyber event data. Specifically, our dataset was
analyst-detected incidents from a large operational Computer Security Service
Provider (CSSP) for the U.S. Department of Defense, which rarely relies only on
automated systems. Our data set consists of weekly counts of cyber events over
approximately seven years. Since all cyber events were validated by analysts,
our dataset is unlikely to have false positives which are often endemic in
other sources of data. Further, the higher-quality data could be used for a
number of purposes, including resource allocation, estimation of security resources, and the
development of effective risk-management strategies. We used a Bayesian State
Space Model for forecasting and found that events one week ahead could be
predicted. To quantify bursts, we used a Markov model. Our findings of
systematicity in analyst-detected cyber attacks are consistent with previous
work using other sources. The advanced information provided by a forecast may
help with threat awareness by providing a probable value and range for future
cyber events one week ahead. Other potential applications for cyber event
forecasting include proactive allocation of resources and capabilities for
cyber defense (e.g., analyst staffing and sensor configuration) in CSSPs.
Enhanced threat awareness may improve cybersecurity.
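The kind of one-week-ahead forecast described above can be sketched with a local-level state space model filtered by a Kalman recursion. The paper's Bayesian state space model is richer; the noise variances `q` and `r` and the example counts below are illustrative assumptions:

```python
# Minimal local-level state space sketch (Kalman filter) for one-step-ahead
# forecasting of weekly event counts. q and r are assumed noise variances,
# not values from the paper.
def local_level_forecast(counts, q=1.0, r=4.0):
    level, p = counts[0], 1.0     # initial state mean and variance
    for y in counts[1:]:
        p += q                    # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        level += k * (y - level)  # update: move toward the observation
        p *= (1.0 - k)
    return level                  # forecast for the next week

# Hypothetical weekly cyber-event counts.
forecast = local_level_forecast([10, 12, 11, 15, 14, 16, 18])
```

Because the filtered level is always a convex combination of past observations, the forecast stays within the observed range; richer models add trend, seasonality, and full posterior uncertainty.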
STOCHASTIC MODELING AND TIME-TO-EVENT ANALYSIS OF VOIP TRAFFIC
Voice over IP (VoIP) systems are gaining popularity due to their cost-effectiveness, ease of management, and enhanced features and capabilities. Both enterprises and carriers are deploying VoIP systems to replace their TDM-based legacy voice networks. However, many researchers have noted the lack of engineering models for VoIP systems, especially for large-scale networks. The purpose of traffic engineering is to minimize call blocking probability and maximize resource utilization. Current traffic engineering models are inherited from the legacy PSTN world, and they fall short of capturing the characteristics of new traffic patterns. The objective of this research is to develop a traffic engineering model for modern VoIP networks. We studied the traffic on a large-scale VoIP network and collected information on several billion calls. Our analysis shows that the traditional traffic engineering approach, based on a Poisson call arrival process and exponential holding times, fails to capture modern telecommunication systems accurately. We developed a new framework that models call arrivals as a non-homogeneous Poisson process, and we further enhanced the model with a Gaussian approximation for heavy-traffic conditions on large-scale networks. In the second phase of the research, we followed a time-to-event survival analysis approach, modeling call holding time as a generalized gamma distribution, and introduced a Call Cease Rate function to model call durations. The statistical modeling behind the Call Arrival model and the Call Holding Time model was constructed, verified, and validated using hundreds of millions of real call records collected from an operational VoIP carrier network. The traffic data is a mixture of residential, business, and wireless traffic; therefore, our proposed models can be applied to any modern telecommunication system.
We also conducted sensitivity analysis of model parameters and performed statistical tests on the robustness of the models’ assumptions.
We implemented the models in a new simulation-based traffic engineering system called the VoIP Traffic Engineering Simulator (VSIM). Advanced statistical and stochastic techniques were used in building the VSIM system. The core of VSIM is a simulation system that consists of two different simulation engines: the NHPP parametric simulation engine and the non-parametric simulation engine. In addition, VSIM provides several subsystems for traffic data collection, processing, statistical modeling, model parameter estimation, graph generation, and traffic prediction. VSIM is capable of extracting traffic data from a live VoIP network, processing and storing the extracted information, and then feeding it into one of the simulation engines, which in turn produces resource optimization and quality-of-service reports.
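A standard way to simulate a non-homogeneous Poisson arrival process, as an NHPP engine like the one described would, is thinning (Lewis-Shedler): draw candidate arrivals at a constant majorizing rate and accept each with probability rate(t)/rate_max. The sinusoidal daily rate profile below is an illustrative stand-in for a fitted VoIP arrival-rate function, not the system's actual model:

```python
import random
import math

def simulate_nhpp(rate, rate_max, horizon, rng):
    """Thinning: candidates at rate_max, each kept with prob rate(t)/rate_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > horizon:
            return arrivals
        if rng.random() < rate(t) / rate_max:
            arrivals.append(t)

# Hypothetical calls-per-hour profile peaking mid-day, over a 24-hour horizon.
rate = lambda t: 50.0 + 40.0 * math.sin(math.pi * t / 24.0)
calls = simulate_nhpp(rate, rate_max=90.0, horizon=24.0, rng=random.Random(1))
```

Correctness requires rate(t) <= rate_max over the whole horizon; here the profile peaks at exactly 90 calls per hour at t = 12.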
Dynamic routing on stochastic time-dependent networks using real-time information
In just-in-time (JIT) manufacturing environments, on-time delivery is one of the key performance measures for dispatching and routing of freight vehicles. Both travel time delay and its variability impact the efficiency of JIT logistics operations, which are becoming more and more common in many industries, in particular the automotive industry. In this dissertation, we first propose a framework for dynamic routing of a single vehicle on a stochastic time-dependent transportation network using real-time information from Intelligent Transportation Systems (ITS). Then, we consider milk-run deliveries with several pickup and delivery destinations subject to time windows under the same network settings. Finally, we extend our dynamic routing models to account for arc traffic condition dependencies on the network.
Recurrent and non-recurrent congestion are the two primary causes of travel time delay and variability, and their impact on urban transportation networks has grown in recent decades. Hence, our routing methods explicitly account for both recurrent and non-recurrent congestion in the network. In our modeling framework, we develop alternative delay models for both congestion types based on historical data (e.g., velocity, volume, and parameters of incident events) and then integrate these models with the forward-looking routing models. The dynamic nature of our routing decisions exploits the real-time information available from various ITS sources, such as loop sensors.
The forward-looking traffic dynamic models for individual arcs are based on congestion states and state transitions driven by time-dependent Markov chains. We propose effective methods for estimation of the parameters of these Markov chains. Based on vehicle location, time of day, and current and projected network congestion states, we generate dynamic routing policies using stochastic dynamic programming formulations.
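A basic building block for such chains is maximum-likelihood estimation of a transition matrix from an observed sequence of congestion states: count transitions and normalize each row. This is a sketch with hypothetical state labels; the dissertation's chains are additionally time-dependent:

```python
from collections import Counter

def estimate_transition_matrix(states, labels):
    """MLE of a Markov transition matrix from one observed state sequence."""
    counts = Counter(zip(states, states[1:]))   # count consecutive pairs
    matrix = {}
    for a in labels:
        row_total = sum(counts[(a, b)] for b in labels)
        matrix[a] = {b: (counts[(a, b)] / row_total if row_total else 0.0)
                     for b in labels}
    return matrix

# Hypothetical congestion-state observations for one arc.
seq = ["free", "free", "congested", "congested", "free", "free"]
P = estimate_transition_matrix(seq, ["free", "congested"])
```

A time-dependent chain would repeat this estimation per time-of-day bin, so each bin gets its own transition matrix.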
All algorithms are tested on simulated networks of Southeast Michigan and Los Angeles, CA freeways and highways using historical traffic data from the Michigan ITS Center, Traffic.com, and Caltrans PEMS.