Modelling Internet Traffic Streams with Ga/M/1/K Queuing Systems under Self-similarity
High-intensity concurrent arrivals of request packets in Internet traffic can induce dependence between the inter-event times of the requests being served. The resulting process is non-memoryless and is modelled with heavy-tailed distributions, unlike conventional traffic. The performance of Internet traffic can be examined with analytical models in order to optimize the system and reduce its operating costs. Our study therefore examined a Ga/M/1/K Internet queue class (gamma arrival process, Ga; memoryless Poisson service process, M; a single server, 1; and a waiting room of size K) and derived its performance indicators. Real-life data from a corporate organisation's Internet server was monitored at both peak and off-peak periods for traffic analysis. Self-similarity of the arrival process was assessed using the Hurst parameter, H, and its standard deviation; H > 0.5, observed in the peak period only, indicates self-similarity. The performance of Ga/M/1/K was compared with various queuing models of Internet traffic from the existing literature. Results showed that the waiting-room size of Ga/M/1/K agrees most closely with a true self-similar model at peak-period usage of the Internet, indicating concurrent arrivals of clients' requests and heavier use of the waiting room, but with a light-tailed queue model at off-peak periods. The proposed Ga/M/1/K model can therefore assist in evaluating the performance of high-intensity self-similar Internet traffic.
Keywords: Internet traffic; self-similarity; Ga/M/1/K model; gamma distribution
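The self-similarity test described above hinges on estimating the Hurst parameter H from the arrival series. A minimal sketch of the classic rescaled-range (R/S) estimator follows; the function name, chunking scheme, and white-noise input are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst parameter H via rescaled-range (R/S) analysis.
    H > 0.5 suggests self-similarity / long-range dependence."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from mean
            r = dev.max() - dev.min()              # range of the deviation
            s = chunk.std()                        # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    # The slope of log(R/S) against log(chunk size) estimates H.
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h

# White noise has no long-range dependence, so H should come out near 0.5.
rng = np.random.default_rng(0)
h = hurst_rs(rng.normal(size=4096))
```

A peak-period arrival-count series fed through the same estimator would, per the abstract's finding, yield H > 0.5.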
Non-Intrusive Measurement in Packet Networks and its Applications
PhD

Network measurement is becoming increasingly important as a means to assess the performance of
packet networks. Network performance can involve different aspects such as availability, link
failure detection etc, but in this thesis, we will focus on Quality of Service (QoS). Among the
metrics used to define QoS, we are particularly interested in end-to-end delay performance.
Recently, the adoption of Service Level Agreements (SLA) between network operators and their
customers has become a major driving force behind QoS measurement: measurement is necessary to
produce evidence of fulfilment of the requirements specified in the SLA.
Many attempts at QoS-based packet-level measurement have been based on Active Measurement,
in which the properties of the end-to-end path are tested by injecting testing packets generated from
the sending end. The main drawback of active probing is its intrusive nature, which places an extra burden
on the network and has been shown to distort the measured condition of the network. The
other category of network measurement is known as Passive Measurement. In contrast to Active
Measurement, there are no testing packets injected into the network, therefore no intrusion is caused.
The applications proposed for Passive Measurement are currently quite limited, but Passive
Measurement may offer the potential for an entirely different perspective compared with Active
Measurement.
In this thesis, the objective is to develop a measurement methodology for the end-to-end delay
performance based on Passive Measurement. We assume that the nodes in a network domain are
accessible, for example a network domain operated by a single network operator. The novel idea is
to estimate the local per-hop delay distribution based on a hybrid approach (model and
measurement-based). With this approach, the storage requirement for measurement data can be greatly
reduced and the overhead placed on each local node can be minimized, thus maintaining the fast
switching operation in a local switch or router.
Per-hop delay distributions have been widely used to infer QoS at a single local node. However, the
end-to-end delay distribution is more appropriate when quantifying delays across an end-to-end path.
Our approach is to capture every local node's delay distribution, and then the end-to-end delay
distribution can be obtained by convolving the estimated delay distributions. In this thesis, our
algorithm is examined by comparing the proximity of the actual end-to-end delay distribution with
the estimated one obtained by our measurement method under various conditions, e.g. in the
presence of Markovian or Power-law traffic. Furthermore, the comparison between Active
Measurement and our scheme is also studied.
Network operators may find our scheme useful when measuring the end-to-end delay performance.
As stated earlier, our scheme has no intrusive effect. Furthermore, the measurement result in the
local node can be re-usable to deduce other paths' end-to-end delay behaviour as long as this local
node is included in the path. Thus our scheme is more scalable compared with active probing.
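The convolution step at the heart of this approach can be sketched directly: given independent per-hop delay distributions on a common time grid, the end-to-end distribution is their convolution. The per-hop PMFs below are illustrative placeholders, not measurements from the thesis:

```python
import numpy as np

# Hypothetical per-hop delay PMFs on a common grid (delays of 0, 1, 2 ms).
# These numbers are invented for illustration only.
hop1 = np.array([0.6, 0.3, 0.1])
hop2 = np.array([0.5, 0.4, 0.1])

# End-to-end delay distribution = convolution of independent per-hop PMFs.
end_to_end = np.convolve(hop1, hop2)

# Sanity check: means add under convolution of independent delays.
mean_e2e = np.dot(np.arange(len(end_to_end)), end_to_end)
mean_hops = np.dot(np.arange(3), hop1) + np.dot(np.arange(3), hop2)
```

For a path of k hops the same `np.convolve` is applied k-1 times; only the per-hop distributions need to be stored at each node, which is what keeps the scheme's local overhead small.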
Meeting Real-Time Constraint of Spectrum Management in TV Black-Space Access
The TV set feedback feature standardized in the next generation TV system,
ATSC 3.0, would enable opportunistic access of active TV channels in future
Cognitive Radio Networks. This new dynamic spectrum access approach is named
black-space access, as it is complementary to current TV white space, which
refers to inactive TV channels. TV black-space access can significantly
increase the available spectrum of Cognitive Radio Networks in populated urban
markets, where spectrum shortage is most severe while TV white space is very
limited. However, to enable TV black-space access, a secondary user has to
evacuate a TV channel in a timely manner when the TV user returns. Such a
strict real-time constraint is a unique challenge for the spectrum management
infrastructure of Cognitive Radio Networks. In this paper, the real-time
performance of spectrum management with regard to the degree of centralization
of infrastructure is modeled and tested. Based on collected empirical network
latency and database response time, we analyze the average evacuation time
under four structures of spectrum management infrastructure: full
distribution, city-wide centralization, nation-wide centralization, and
semi-national centralization. The results show that nation-wide
centralization may not meet the real-time requirement, while semi-national
centralization, which uses multiple co-located independent spectrum managers,
can achieve real-time performance while keeping most of the operational
advantages of a fully centralized structure.

Comment: 9 pages, 7 figures, Technical Report
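The structural comparison can be caricatured with a toy additive latency model. Every figure below is a hypothetical placeholder, not one of the paper's empirical measurements; the point is only the shape of the trade-off between centralization and evacuation time:

```python
# Toy model: evacuation time = incumbent sensing + network round trip to the
# spectrum manager + database decision time. All numbers are invented.
def evacuation_time(sensing_ms, network_rtt_ms, db_response_ms):
    """Total time for a secondary user to vacate an active TV channel."""
    return sensing_ms + network_rtt_ms + db_response_ms

DEADLINE_MS = 2000  # assumed real-time bound for vacating the channel

layouts = {
    "fully distributed":       evacuation_time(100, 5, 20),
    "city-wide centralized":   evacuation_time(100, 20, 50),
    "semi-national co-located": evacuation_time(100, 60, 80),
    "nation-wide centralized": evacuation_time(100, 150, 2000),  # overloaded DB
}
meets = {name: t <= DEADLINE_MS for name, t in layouts.items()}
```

Under these assumed latencies, only the nation-wide centralized layout misses the deadline, mirroring the paper's qualitative conclusion.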
Filter Scheduling Function Model In Internet Server: Resource Configuration, Performance Evaluation And Optimal Scheduling
ABSTRACT
by
MINGHUA XU
August 2010
Advisor: Dr. Cheng-Zhong Xu
Major: Computer Engineering
Degree: Doctor of Philosophy
Internet traffic often exhibits a structure with rich high-order statistical properties like
self-similarity and long-range dependency (LRD). This greatly complicates the problem of
server performance modeling and optimization. On the other hand, the popularity of the Internet
has created numerous client-server or peer-to-peer applications, with most of them,
such as online payment, purchasing, trading, searching, publishing and media streaming,
being timing sensitive and/or financially critical. The scheduling policy in Internet servers
plays a central role in satisfying service level agreements (SLAs) and achieving savings
and efficiency in operations. The increasing popularity of high-volume performance critical
Internet applications is a challenge for servers to provide individual response-time guarantees.
Existing tools like queuing models in most cases only hold in mean value analysis
under the assumption of simplified traffic structures.
Considering the fact that most Internet applications can tolerate a small percentage of
deadline misses, we define a decay function model that characterizes the relationship between
the request delay constraint, deadline misses, and server capacity in a transfer function
based filter system. The model is general for any time-series based or measurement based
processes. Within the model framework, a relationship between server capacity, scheduling
policy, and service deadline is established in formalism. Time-invariant (non-adaptive)
resource allocation policies are designed and analyzed in the time domain. For an important
class of fixed-time allocation policies, optimality conditions with respect to the correlation
of input traffic are established. Upper bounds for server capacity and service level are derived
with the general Chebyshev inequality, and extended to tighter bounds for unimodal
distributions by using the Vysochanskij-Petunin inequality.
For traffic with strong LRD, a design and analysis of the decay function model is done
in the frequency domain. Most Internet traffic has monotonically decreasing strength of
variation functions over frequency. For this type of input traffic, it is proved that optimal
schedulers must have a convex structure. Uniform resource allocation is an extreme case
of the convexity and is proved to be optimal for Poisson traffic. With an integration of
the convex-structural principle, an enhanced GPS policy improves the service quality significantly.
Furthermore, it is shown that the presence of LRD in the input traffic results
in a shift of variation strength from high-frequency to lower-frequency bands, leading to a
degradation of the service quality.
The model is also extended to support servers with different deadlines, and to derive
an optimal time-variant (adaptive) resource allocation policy that minimizes server load
variances and server resource demands. Simulation results show that the time-variant scheduling
algorithm indeed outperforms the time-invariant optimal decay function scheduler.
Internet traffic has two major dynamic factors: the distribution of request size and the
correlation of the request arrival process. When the decay function model is applied as a scheduler
to a random point process, two corresponding influences on the server workload process are revealed:
first, a sizing factor, the interaction between the request size distribution and the scheduling
function; second, a correlation factor, the interaction between the power spectrum of the arrival
process and the scheduling function. For the second factor, this thesis shows that a convex
scheduling function minimizes its impact on the server workload. Under the assumption of a
homogeneous scheduling function for all requests, uniform scheduling is shown to be optimal for
the sizing factor. Furthermore, by analyzing the impact of queueing delay on the scheduling
function, it is shown that queueing larger tasks rather than smaller ones leads to less reduction
in the sizing factor, but with the benefit of a greater decrease in the correlation factor in the
server workload process. This reveals the origin of the optimality of the shortest remaining
processing time (SRPT) scheduler.
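The bound-tightening step mentioned above (Chebyshev for arbitrary distributions, Vysochanskij-Petunin for unimodal ones) can be illustrated directly; here k is the assumed number of standard deviations between the mean delay and the service deadline:

```python
def chebyshev_bound(k):
    """P(|X - mu| >= k*sigma) <= 1/k^2, for any distribution with finite variance."""
    return 1.0 / k**2

def vysochanskij_petunin_bound(k):
    """Tighter bound 4/(9*k^2) for unimodal distributions, valid for k > sqrt(8/3)."""
    return 4.0 / (9.0 * k**2)

# Deadline set 3 standard deviations above the mean delay (illustrative choice):
k = 3.0
cheb = chebyshev_bound(k)            # 1/9: at most ~11.1% deadline misses
vp = vysochanskij_petunin_bound(k)   # 4/81: ~4.9% under the unimodality assumption
```

The factor-of-9/4 tightening is what lets a unimodal delay distribution support a given deadline-miss target with less server capacity.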
Analysis of GPRS Limitations
The General Packet Radio Service (GPRS) is a new standard for mobile
data communications, which is implemented under the existing
infrastructure of Global System for Mobile Communications (GSM). The
promised capability of handling Internet Protocol traffic enables instant
and constant connection to the global network regardless of location and
time. With its packet-based nature, the new technology facilitates new
applications in wireless communications that have not been available
previously. Nonetheless, there are a number of limitations that have to be
taken into consideration before this technology can be implemented
commercially. Despite all arguments and challenges, the GPRS system is
here to stay and is evolving towards the third generation of mobile
communications. This report covers the background of the GPRS and
discusses the issues involved in implementing this current technology
besides considering the deployment of third generation networks beyond
GPRS.