Filter Scheduling Function Model In Internet Server: Resource Configuration, Performance Evaluation And Optimal Scheduling
ABSTRACT
by
MINGHUA XU
August 2010
Advisor: Dr. Cheng-Zhong Xu
Major: Computer Engineering
Degree: Doctor of Philosophy
Internet traffic often exhibits rich high-order statistical structure, such as self-similarity and long-range dependence (LRD). This greatly complicates server performance modeling and optimization. At the same time, the popularity of the Internet has created numerous client-server and peer-to-peer applications, most of which, such as online payment, purchasing, trading, searching, publishing, and media streaming, are timing-sensitive and/or financially critical. The scheduling policy in Internet servers plays a central role in satisfying service level agreements (SLAs) and achieving operational savings and efficiency. The increasing popularity of high-volume, performance-critical Internet applications makes it challenging for servers to provide individual response-time guarantees. Existing tools such as queueing models in most cases support only mean-value analysis under the assumption of simplified traffic structures.
Considering the fact that most Internet applications can tolerate a small percentage of deadline misses, we define a decay function model that characterizes the relationship between the request delay constraint, deadline misses, and server capacity in a transfer-function based filter system. The model is general for any time-series based or measurement based process. Within the model framework, a formal relationship between server capacity, scheduling policy, and service deadline is established. Time-invariant (non-adaptive) resource allocation policies are designed and analyzed in the time domain. For an important class of fixed-time allocation policies, optimality conditions with respect to the correlation of input traffic are established. Upper bounds for server capacity and service level are derived with the general Chebyshev inequality, and tightened for unimodal distributions by using the Vysochanskij-Petunin inequality.
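The gap between the two concentration bounds can be checked numerically. The sketch below is generic and not the thesis's own derivation; the mapping from the deviation multiple k to capacity headroom is purely illustrative.

```python
import math

def chebyshev_bound(k: float) -> float:
    """Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2, for any distribution."""
    return 1.0 / k**2

def vysochanskij_petunin_bound(k: float) -> float:
    """Vysochanskij-Petunin: P(|X - mu| >= k*sigma) <= 4/(9*k^2)
    for unimodal distributions, valid when k > sqrt(8/3) ~= 1.633."""
    if k <= math.sqrt(8.0 / 3.0):
        raise ValueError("bound requires k > sqrt(8/3)")
    return 4.0 / (9.0 * k**2)

# To cap the deadline-miss rate at 1%, Chebyshev requires k = 10
# standard deviations of capacity headroom, while the unimodal bound
# only requires k = sqrt(400/9) ~= 6.67 -- a much tighter provisioning.
for k in (2.0, 3.0, 10.0):
    print(k, chebyshev_bound(k), vysochanskij_petunin_bound(k))
```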
For traffic with strong LRD, the decay function model is designed and analyzed in the frequency domain. Most Internet traffic has a monotonically decreasing strength-of-variation function over frequency. For this type of input traffic, it is proved that optimal schedulers must have a convex structure. Uniform resource allocation is an extreme case of this convexity and is proved to be optimal for Poisson traffic. By integrating the convex-structure principle, an enhanced GPS policy improves service quality significantly. Furthermore, it is shown that the presence of LRD in the input traffic shifts variation strength from high-frequency to lower-frequency bands, leading to a degradation of service quality.
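The shift of variation strength toward low frequencies can be illustrated with the standard asymptotic spectral density of fractional Gaussian noise, S(f) ~ f^(1-2H) as f -> 0. This is a textbook result used only as an illustration, not the thesis's own spectrum model.

```python
import numpy as np

def lrd_psd(f, H):
    """Asymptotic power spectral density of fractional Gaussian noise
    near f -> 0: S(f) ~ f^(1 - 2H).  H = 0.5 gives a flat, Poisson-like
    spectrum; H > 0.5 means long-range dependence."""
    return f ** (1.0 - 2.0 * H)

f = np.linspace(1e-3, 0.5, 500)   # low-frequency band (illustrative grid)
flat = lrd_psd(f, 0.5)            # short-range dependent: flat spectrum
lrd = lrd_psd(f, 0.9)             # strong LRD: power piles up at low f

# Fraction of the (discretised) spectral mass below f = 0.01:
low = f < 1e-2
print(flat[low].sum() / flat.sum())   # small for H = 0.5
print(lrd[low].sum() / lrd.sum())     # much larger for H = 0.9
```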
The model is also extended to support servers with different deadlines, and to derive an optimal time-variant (adaptive) resource allocation policy that minimizes server load variance and server resource demands. Simulation results show that the time-variant scheduling algorithm indeed outperforms the time-invariant optimal decay function scheduler.
Internet traffic has two major dynamic factors: the distribution of request sizes and the correlation of the request arrival process. When the decay function model is applied as a scheduler to a random point process, two corresponding influences on the server workload process are revealed: first, a sizing factor, the interaction between the request size distribution and the scheduling function; second, a correlation factor, the interaction between the power spectrum of the arrival process and the scheduling function. For the correlation factor, this thesis shows that a convex scheduling function minimizes its impact on the server workload. Under the assumption of a homogeneous scheduling function for all requests, uniform scheduling is shown to be optimal for the sizing factor. Furthermore, by analyzing the impact of queueing delay on the scheduling function, it is shown that queueing larger tasks rather than smaller ones yields less reduction in the sizing factor, but at the benefit of a greater decrease in the correlation factor of the server workload process. This reveals the origin of the optimality of the shortest remaining processing time (SRPT) scheduler.
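The SRPT discipline whose optimality is traced here can be sketched with a minimal preemptive single-server simulator. The job values are illustrative, and this is a behavioural sketch, not the thesis's analytical model.

```python
import heapq

def srpt_response_times(jobs):
    """jobs: list of (arrival_time, size).  Single unit-rate server under
    preemptive Shortest-Remaining-Processing-Time.  Returns response
    times (completion - arrival) in the original input order."""
    jobs = sorted(enumerate(jobs), key=lambda x: x[1][0])
    resp = [0.0] * len(jobs)
    ready = []            # heap of (remaining_work, job_id, arrival)
    t, i = 0.0, 0
    while i < len(jobs) or ready:
        if not ready:                       # server idle: jump ahead
            t = max(t, jobs[i][1][0])
        while i < len(jobs) and jobs[i][1][0] <= t:
            jid, (a, s) = jobs[i]
            heapq.heappush(ready, (s, jid, a))
            i += 1
        rem, jid, a = heapq.heappop(ready)
        # run the shortest job until it finishes or the next arrival
        horizon = jobs[i][1][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)
        t += run
        rem -= run
        if rem <= 1e-12:
            resp[jid] = t - a
        else:
            heapq.heappush(ready, (rem, jid, a))
    return resp

# Two jobs arrive together: SRPT serves the small one first, so the
# short job waits almost nothing while the long one absorbs the delay.
print(srpt_response_times([(0.0, 10.0), (0.0, 1.0)]))  # [11.0, 1.0]
```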
A study of self-similar traffic generation for ATM networks
This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, the generation of self-similar traffic is vital for the accurate simulation of ATM networks.
The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, the successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best in that it can generate accurate samples efficiently and on the fly.
After generating FBM sample traces, a further transformation needs to be conducted, with either the marginal distribution model or the storage model, to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and interpretation of physical meaning.
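One of the generators compared above, random midpoint displacement, can be sketched in a few lines. The displacement-variance schedule below is the standard textbook one, which only approximates fBm for H != 0.5; it is not the thesis's refined hybrid algorithm.

```python
import math
import random

def fbm_rmd(H: float, levels: int, seed: int = 0):
    """Approximate fractional Brownian motion on [0, 1] by random
    midpoint displacement (RMD).  Returns 2**levels + 1 samples.  Each
    interval's midpoint is the average of its endpoints plus a Gaussian
    offset whose std dev shrinks geometrically with the level, which
    targets Hurst parameter H."""
    rng = random.Random(seed)
    n = 2 ** levels
    b = [0.0] * (n + 1)
    b[n] = rng.gauss(0.0, 1.0)          # endpoint B(1) ~ N(0, 1)
    step = n
    for level in range(1, levels + 1):
        delta = (math.sqrt(1.0 - 2.0 ** (2.0 * H - 2.0))
                 * 2.0 ** (-level * H))
        half = step // 2
        for m in range(half, n, step):
            b[m] = 0.5 * (b[m - half] + b[m + half]) + rng.gauss(0.0, delta)
        step = half
    return b

# Increments of the fBm trace form fractional Gaussian noise, the
# usual raw material for a self-similar traffic trace.
trace = fbm_rmd(H=0.8, levels=10)
fgn = [b - a for a, b in zip(trace, trace[1:])]
print(len(fgn))  # 1024
```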
The suitability of selected Hurst estimators, the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the best of the three, the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable estimators of the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation.
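The variance-time estimator assessed above can be sketched directly from its definition: for a self-similar series the variance of m-aggregated means decays as m^(2H-2), so a log-log fit gives H. The block sizes below are arbitrary choices for illustration.

```python
import math
import random

def hurst_variance_time(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Variance-time estimate of the Hurst parameter: aggregate the
    series into non-overlapping blocks of size m, take block means, and
    fit log(variance of means) against log(m); the slope is 2H - 2."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        means = [sum(x[i:i + m]) / m
                 for i in range(0, len(x) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((v - mu) ** 2 for v in means) / len(means)
        logs_m.append(math.log(m))
        logs_v.append(math.log(var))
    n = len(logs_m)
    mx, my = sum(logs_m) / n, sum(logs_v) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(logs_m, logs_v))
             / sum((a - mx) ** 2 for a in logs_m))
    return 1.0 + slope / 2.0

# White noise is short-range dependent, so H should come out near 0.5.
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
print(round(hurst_variance_time(noise), 2))
```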
Analytical Modelling of Scheduling Schemes under Self-similar Network Traffic. Traffic Modelling and Performance Analysis of Centralized and Distributed Scheduling Schemes.
High-speed transmission over contemporary communication networks has attracted considerable research effort. Traffic scheduling schemes, which play a critical role in managing network transmission, have been extensively studied and widely implemented in various practical communication networks. In a sophisticated communication system, a variety of applications co-exist and require differentiated Quality-of-Service (QoS). Innovative scheduling schemes and hybrid scheduling disciplines that integrate multiple traditional scheduling mechanisms have emerged for QoS differentiation. This study aims to develop novel analytical models for widely used scheduling schemes in communication systems under more realistic network traffic, and to use these models to investigate issues in the design and development of traffic scheduling schemes.
In the open literature, it is commonly recognized that network traffic exhibits a self-similar nature, which has a serious impact on the performance of communication networks and protocols. To study self-similar traffic in depth, real-world traffic datasets are measured and evaluated in this study. The results reveal that self-similar traffic is a ubiquitous phenomenon in high-speed communication networks and highlight the importance of the developed analytical models under self-similar traffic.
Original analytical models are then developed for centralized scheduling schemes, including Deficit Round Robin (DRR), the hybrid PQGPS scheme, which integrates traditional Priority Queueing (PQ) and Generalized Processor Sharing (GPS), and the Automatic Repeat reQuest (ARQ) forward error control discipline, in the presence of self-similar traffic.
Most recently, research on innovative Cognitive Radio (CR) techniques in wireless networks has become popular. However, most existing analytical models still employ traditional Poisson traffic to examine the performance of CR-enabled systems. In addition, few studies have been reported on estimating the residual service left by primary users. Instead, extensive existing studies use an ON/OFF source to model the residual service regardless of the primary traffic. In this thesis, PQ theory is adopted to investigate and model the service left by self-similar primary traffic and to derive the queue length distribution of individual secondary users under a distributed random spectrum access protocol.
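The Deficit Round Robin discipline modelled above can be sketched as follows. This is a behavioural sketch of the mechanism, not the analytical model; the flow names, quantum, and packet sizes are made up.

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit Round Robin over per-flow FIFO queues.  `queues` maps a
    flow name to a deque of packet sizes.  Each round, a backlogged
    flow's deficit counter grows by `quantum`, and the flow may transmit
    head-of-line packets while they fit in the counter.  Returns the
    transmission order as (flow, packet_size) pairs."""
    deficit = {f: 0 for f in queues}
    order = []
    for _ in range(rounds):
        for f, q in queues.items():
            if not q:
                deficit[f] = 0          # idle flows keep no credit
                continue
            deficit[f] += quantum
            while q and q[0] <= deficit[f]:
                pkt = q.popleft()
                deficit[f] -= pkt
                order.append((f, pkt))
    return order

flows = {"a": deque([300, 300]), "b": deque([500, 100])}
print(drr_schedule(flows, quantum=500, rounds=2))
# [('a', 300), ('b', 500), ('a', 300), ('b', 100)]
```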
Contributions to modelling of internet traffic by fractal renewal processes.
The principle of parsimonious modelling of Internet traffic states that a minimal number of descriptors should be used for its characterization. Until the early 1990s, the conventional Markovian models for voice traffic had been considered suitable and parsimonious for data traffic as well. Later, with the discovery of strong correlations and increased burstiness in Internet traffic, various self-similar count models were proposed. But, in fact, such models are strictly mono-fractal and applicable only at coarse time scales, whereas Internet traffic modelling is about modelling traffic at fine and coarse time scales; modelling traffic which can be mono- or multi-fractal; modelling traffic at the interarrival-time and count levels; modelling traffic at the access and core tiers; and modelling all three structural components of Internet traffic, that is, packets, flows, and sessions.
The philosophy of this thesis can be described as “the renewal of renewal theory in Internet traffic modelling”. Renewal theory has great potential for modelling the statistical characteristics of Internet traffic belonging to individual users, access networks, and core networks. In this thesis, we develop an Internet traffic modelling framework based on fractal renewal processes, that is, renewal processes whose underlying distribution of interarrival times is heavy-tailed. The proposed renewal framework covers packets, flows, and sessions as structural components of Internet traffic and is applicable to modelling the traffic at fine and coarse time scales. The properties of superposed renewal processes can be used to model traffic in higher tiers of the Internet hierarchy. As the framework is based on renewal processes, Internet traffic can be modelled at both the interarrival-time and count levels.
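A fractal renewal process of the kind the framework builds on can be sampled with heavy-tailed (Pareto) interarrival times via inverse-CDF sampling. The parameter values below are illustrative only.

```python
import random

def pareto_interarrivals(alpha, xm, n, seed=0):
    """Sample n interarrival times from a Pareto distribution with tail
    P(X > x) = (xm / x)**alpha for x >= xm.  For 1 < alpha < 2 the
    variance is infinite, the defining heavy-tail property of a fractal
    renewal process."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()            # u in (0, 1], avoids u == 0
        out.append(xm * u ** (-1.0 / alpha))  # inverse-CDF transform
    return out

def arrival_times(interarrivals):
    """Cumulative sums: renewal epochs of the point process."""
    t, out = 0.0, []
    for dt in interarrivals:
        t += dt
        out.append(t)
    return out

gaps = pareto_interarrivals(alpha=1.5, xm=1.0, n=10000)
epochs = arrival_times(gaps)
print(min(gaps) >= 1.0, epochs[-1] >= 10000.0)  # True True
```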
Traffic Characteristics and Queueing Theory: Implications and Applications to Web Server Systems
Businesses rely increasingly on Internet services as the basis of their income. Downtime and poor performance of such services can therefore be directly translated into loss of revenue. In order to plan and design services capable of meeting minimum Quality of Service (QoS) requirements and Service Level Agreements (SLAs), an understanding of how network traffic and job service demand affect the system is necessary. Traditionally, arrival and service processes have been modelled as Poisson processes. However, research done over the years suggests that the assumption of Poisson traffic is fallible in many cases.
This work considers the performance of a web server under different traffic and service demand conditions. Moreover, we consider theoretical models of queues, response time formulas derived from these models, and their validity for a web server system. We take a simple approach to a complex problem by modelling a web server as one simple queueing system. In addition, we investigate the phenomenon known as self-similarity, which has been observed in web traffic inter-arrival processes.
We have found indications that traffic streams with identical expected inter-arrival and service times but different distribution types affect the response time differently. Moreover, classical queueing models are found to be unsuited for capacity planning. Instead we suggest a “worst case scenario” approach in order for service providers to meet service level targets. Much of the previous work within these areas is of a highly mathematical and theoretical nature; we investigate from a more pragmatic viewpoint.
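As a concrete instance of the kind of response-time formula discussed, the textbook M/M/1 mean response time is R = 1/(mu - lambda). This is a generic illustration of why mean-value formulas break down near saturation, not a formula specific to this work.

```python
def mm1_mean_response(lmbda: float, mu: float) -> float:
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda),
    valid only when utilisation rho = lambda / mu < 1."""
    if lmbda >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (mu - lmbda)

# A server completing 100 req/s: the mean response time explodes as the
# arrival rate approaches capacity -- one reason mean-value analysis
# understates risk under bursty, self-similar traffic.
for lam in (50.0, 90.0, 99.0):
    print(lam, mm1_mean_response(lam, 100.0))
```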
Telecommunications Networks
This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics, and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering, and Routing.
WAITING TIME AND PATIENTS’ SATISFACTION
In line with Vision 2021, the UAE’s National Agenda has six pillars, one of which is providing world-class healthcare. It is hence not surprising that the UAE healthcare industry is allocating substantial weight to the element of quality. Patient-centered care is internationally becoming part of the quality domain. Patient-centered quality may be defined as “providing the care that the patient needs in the manner the patient desires at the time the patient desires”. This requires substantially more attention to learning about patients’ preferences. One of the main dimensions of patient-centered quality is timely access to care, which includes shorter waiting times and efficient use of physicians’ time. Long waiting time is a globally challenging phenomenon that most healthcare systems face; it is the main topic of this thesis.
The thesis consists of two main studies. The first empirical study was conducted by interviewing a sample of 552 patients to assess their satisfaction with their waiting experience in UAE hospitals. The collected data allowed us to test several hypotheses, formulated on the basis of an extensive literature study, to better understand the relationship between waiting time and certain variables.
In the second study, a simulation model of a typical clinic was built from real data obtained from a public hospital in the emirate of Abu Dhabi, considering two types of patient arrivals, by appointment and walk-in, to test the effect of delayed arrivals and the number of resources on waiting time. The objective of the simulation study was to determine effective strategies for reducing patients’ waiting time. The results of both studies are presented and discussed, along with recommendations, managerial implications, and conclusions.
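The simulation setup described can be roughly sketched as a single-doctor appointment queue with late arrivals. Every parameter below is made up for illustration; none of it is the hospital's actual data or the thesis's model.

```python
import random

def clinic_waits(n_patients, slot, service, lateness, seed=0):
    """Single-doctor clinic: patient k is scheduled at k*slot minutes
    but arrives up to `lateness` minutes late (uniform); service times
    are exponential with the given mean.  Returns per-patient waiting
    times in minutes."""
    rng = random.Random(seed)
    free_at, waits = 0.0, []
    for k in range(n_patients):
        arrive = k * slot + rng.uniform(0.0, lateness)
        start = max(arrive, free_at)       # wait if the doctor is busy
        waits.append(start - arrive)
        free_at = start + rng.expovariate(1.0 / service)
    return waits

# A tight schedule (15-minute slots, 14-minute mean service) plus late
# arrivals produces a noticeable average wait.
w = clinic_waits(n_patients=200, slot=15.0, service=14.0, lateness=10.0)
print(round(sum(w) / len(w), 1))
```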