10,813 research outputs found
Towards Finding Efficient Tools for Measuring the Tail Index and Intensity of Long-range Dependent Network Traffic
Many researchers have discussed the effects of heavy-tailedness in network traffic patterns and shown that Internet traffic flows exhibit characteristics of self-similarity that can be explained by the heavy-tailedness of the various distributions involved. Self-similarity and heavy-tailedness are of great importance for network capacity planning purposes, in which researchers are interested in developing analytical methods for analysing traffic characteristics. Designers of computing and telecommunication systems are increasingly interested in employing heavy-tailed distributions to generate workloads for use in simulation, although simulations employing such workloads may show unusual characteristics. Congested Internet situations, where TCP/IP buffers start to fill, show long-range dependent (LRD) self-similar chaotic behaviour. Such chaotic behaviour has been found to be present in Internet traffic by many researchers. In this context, the 'Hurst exponent', H, is used as a measure of the degree of long-range dependence. A reliable estimator can yield good insight into traffic behaviour and may eventually lead to improved traffic engineering. In this paper, we describe some of the most useful mechanisms for estimating the tail index of Internet traffic, particularly for the power-law distributions observed in different contexts, and also evaluate the performance of the estimators for measuring the intensity of LRD traffic in terms of their accuracy and reliability.
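The abstract does not spell out the estimators themselves; as a rough illustration of the kind of tools it surveys, the Python sketch below (all names, data, and parameter choices are ours, not the paper's) computes a Hill estimate of the tail index and a rescaled-range (R/S) estimate of the Hurst exponent H on synthetic data.

import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha from the k largest observations."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]      # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])                # log-excesses over the (k+1)-th largest
    return 1.0 / logs.mean()

def hurst_rs(x, block_sizes):
    """Rescaled-range (R/S) estimate of the Hurst exponent H."""
    x = np.asarray(x, dtype=float)
    rs_means = []
    for m in block_sizes:
        rs = []
        for i in range(len(x) // m):
            y = x[i * m:(i + 1) * m]
            z = np.cumsum(y - y.mean())                # cumulative deviations from the block mean
            s = y.std(ddof=1)
            if s > 0:
                rs.append((z.max() - z.min()) / s)
        rs_means.append(np.mean(rs))
    # The slope of log(R/S) against log(m) estimates the Hurst exponent.
    slope, _ = np.polyfit(np.log(block_sizes), np.log(rs_means), 1)
    return slope

# Synthetic Pareto-tailed "flow sizes" (true alpha = 1.5) and i.i.d. noise (H ~ 0.5 expected).
rng = np.random.default_rng(0)
flow_sizes = rng.pareto(1.5, 50_000) + 1.0
print("Hill alpha:", hill_tail_index(flow_sizes, k=500))
print("R/S Hurst :", hurst_rs(rng.normal(size=50_000), [64, 128, 256, 512, 1024]))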
A Survey of Performance Evaluation and Control for Self-Similar Network Traffic
This paper surveys techniques for the recognition and treatment of self-similar network or internetwork traffic. Various researchers have reported traffic measurements that demonstrate considerable burstiness on a range of time scales with properties of self-similarity. Rapid technological development has widened the scope of network and Internet applications and, in turn, increased traffic volume. The exponential growth of the number of servers, as well as the number of users, causes Internet performance to be problematic as a result of the significant impact that long-range dependent traffic has on buffer requirements. Consequently, accurate and reliable measurement, analysis and control of Internet traffic are vital. The most significant techniques for performance evaluation include theoretical analysis, simulation, and empirical study based on measurement. In this research, we discuss existing and recent developments in performance evaluation and control tools used in network traffic engineering.
Non-Intrusive Measurement in Packet Networks and its Applications
Network measurement is becoming increasingly important as a means to assess the performance of
packet networks. Network performance can involve different aspects such as availability, link
failure detection, etc., but in this thesis we will focus on Quality of Service (QoS). Among the
metrics used to define QoS, we are particularly interested in end-to-end delay performance.
Recently, the adoption of Service Level Agreements (SLAs) between network operators and their
customers has become a major driving force behind QoS measurement: measurement is necessary to
produce evidence of fulfilment of the requirements specified in the SLA.
Many attempts at QoS-based packet-level measurement have been based on Active Measurement,
in which the properties of the end-to-end path are tested by injecting test packets from
the sending end. The main drawback of active probing is its intrusive nature, which causes extra burden
on the network and has been shown to distort the measured condition of the network. The
other category of network measurement is known as Passive Measurement. In contrast to Active
Measurement, there are no testing packets injected into the network, therefore no intrusion is caused.
The proposed applications using Passive Measurement are currently quite limited, but Passive
Measurement may offer the potential for an entirely different perspective compared with Active
Measurement.
In this thesis, the objective is to develop a measurement methodology for the end-to-end delay
performance based on Passive Measurement. We assume that the nodes in a network domain are
accessible, for example a network domain operated by a single network operator. The novel idea is
to estimate the local per-hop delay distribution based on a hybrid approach (model- and
measurement-based). With this approach, the storage requirement for measurement data can be greatly
reduced and the overhead placed on each local node can be minimized, preserving the fast
switching operation in a local switch or router.
Per-hop delay distributions have been widely used to infer QoS at a single local node. However, the
end-to-end delay distribution is more appropriate when quantifying delays across an end-to-end path.
Our approach is to capture every local node's delay distribution, and then the end-to-end delay
distribution can be obtained by convolving the estimated delay distributions. In this thesis, our
algorithm is examined by comparing the actual end-to-end delay distribution with
the estimated one obtained by our measurement method under various conditions, e.g. in the
presence of Markovian or power-law traffic. Furthermore, the comparison between Active
Measurement and our scheme is also studied.
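As a rough illustration of the convolution step described above (not the thesis's actual estimator), the Python sketch below discretises hypothetical per-hop delay samples into PMFs on a common bin width and convolves them to obtain an end-to-end delay distribution; the hop distributions, bin width, and sample sizes are all invented for the example.

import numpy as np

# Per-hop delay distributions as discrete PMFs on a common bin width (here 0.1 ms).
bin_width_ms = 0.1
bins = np.arange(0, 50, bin_width_ms)

def empirical_pmf(samples_ms):
    """Histogram a set of per-hop delay samples onto the common bins."""
    pmf, _ = np.histogram(samples_ms, bins=np.append(bins, bins[-1] + bin_width_ms))
    return pmf / pmf.sum()

rng = np.random.default_rng(1)
hop_samples = [rng.exponential(2.0, 100_000),   # hop 1 (illustrative)
               rng.exponential(3.5, 100_000),   # hop 2 (illustrative)
               rng.gamma(2.0, 1.5, 100_000)]    # hop 3 (illustrative)

# End-to-end PMF: convolve the per-hop PMFs, since delays add along the path.
end_to_end = empirical_pmf(hop_samples[0])
for s in hop_samples[1:]:
    end_to_end = np.convolve(end_to_end, empirical_pmf(s))

# Mean end-to-end delay recovered from the convolved PMF.
support = np.arange(len(end_to_end)) * bin_width_ms
print("estimated mean end-to-end delay (ms):", (support * end_to_end).sum())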
Network operators may find our scheme useful when measuring the end-to-end delay performance.
As stated earlier, our scheme has no intrusive effect. Furthermore, the measurement results at a
local node can be re-used to deduce other paths' end-to-end delay behaviour as long as this local
node is included in the path. Thus our scheme is more scalable than active probing.
Aeronautical engineering: A continuing bibliography with indexes, supplement 100
This bibliography lists 295 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System in August 1978.
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. When their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
STOCHASTIC MODELING AND TIME-TO-EVENT ANALYSIS OF VOIP TRAFFIC
Voice over IP (VoIP) systems are gaining increased popularity due to their cost effectiveness, ease of management, and enhanced features and capabilities. Both enterprises and carriers are deploying VoIP systems to replace their TDM-based legacy voice networks. However, the lack of engineering models for VoIP systems has been recognized by many researchers, especially for large-scale networks. The purpose of traffic engineering is to minimize call blocking probability and maximize resource utilization. The current traffic engineering models are inherited from the legacy PSTN world, and these models fall short of capturing the characteristics of new traffic patterns. The objective of this research is to develop a traffic engineering model for modern VoIP networks. We studied the traffic on a large-scale VoIP network and collected information on several billion calls. Our analysis shows that the traditional traffic engineering approach based on the Poisson call arrival process and exponential holding time fails to capture modern telecommunication systems accurately. We developed a new framework for modeling call arrivals as a non-homogeneous Poisson process, and we further enhanced the model by providing a Gaussian approximation for the case of heavy traffic conditions on large-scale networks. In the second phase of the research, we followed a new time-to-event survival analysis approach to model call holding time as a generalized gamma distribution, and we introduced a Call Cease Rate function to model the call durations. The modeling and statistical work of the Call Arrival model and the Call Holding Time model is constructed, verified and validated using hundreds of millions of real call records collected from an operational VoIP carrier network. The traffic data is a mixture of residential, business, and wireless traffic. Therefore, our proposed models can be applied to any modern telecommunication system. We also conducted sensitivity analysis of model parameters and performed statistical tests on the robustness of the models’ assumptions.
We implemented the models in a new simulation-based traffic engineering system called the VoIP Traffic Engineering Simulator (VSIM). Advanced statistical and stochastic techniques were used in building the VSIM system. The core of VSIM is a simulation system that consists of two different simulation engines: the NHPP parametric simulation engine and the non-parametric simulation engine. In addition, VSIM provides several subsystems for traffic data collection, processing, statistical modeling, model parameter estimation, graph generation, and traffic prediction. VSIM is capable of extracting traffic data from a live VoIP network, processing and storing the extracted information, and then feeding it into one of the simulation engines, which in turn provides resource optimization and quality of service reports.
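For illustration only, the Python sketch below simulates call arrivals from a non-homogeneous Poisson process via Lewis-Shedler thinning and draws holding times from a generalized gamma distribution; the rate function and parameter values are invented, not the fitted values from the carrier data or from VSIM.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative diurnal call-arrival rate (calls per minute); not the carrier's fitted model.
def rate(t_min):
    return 40 + 30 * np.sin(2 * np.pi * t_min / 1440)   # 24-hour cycle

rate_max = 70.0
horizon = 1440.0                                          # one day, in minutes

# Lewis-Shedler thinning: simulate a homogeneous process at rate_max,
# then keep each candidate point with probability rate(t) / rate_max.
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1.0 / rate_max)
    if t > horizon:
        break
    if rng.uniform() < rate(t) / rate_max:
        arrivals.append(t)

# Holding times from a generalized gamma: if G ~ Gamma(a, 1), then scale * G**(1/c)
# is generalized gamma with shape parameters (a, c). Values below are illustrative.
a, c, scale = 1.2, 0.8, 3.0
holding_times = scale * rng.gamma(a, size=len(arrivals)) ** (1.0 / c)

print(len(arrivals), "calls simulated, mean holding time %.2f min" % holding_times.mean())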
PREDICTING INTERNET TRAFFIC BURSTS USING EXTREME VALUE THEORY
Computer networks play an important role in today's organizations and people's lives.
These interconnected devices share a common medium and they tend to compete for
it. Quality of Service (QoS) comes into play to define what level of service users
get. Accurately defining the QoS metrics is thus important.
Bursts and serious deteriorations are omnipresent in the Internet and are considered an
important aspect of it. This thesis examines bursts and serious deteriorations in
Internet traffic and applies Extreme Value Theory (EVT) to their prediction and
modelling. EVT itself is a field of statistics that has been applied in fields like
hydrology and finance, with only a recent introduction to the field of
telecommunications. Model fitting is based on real traces from the Bellcore laboratory
along with some simulated traces based on fractional Gaussian noise and linear
fractional alpha stable motion. QoS traces from University of Napoli are also used in
the prediction stage.
Three methods from EVT are successfully used for the bursts prediction problem.
They are the Block Maxima (BM) method, the Peaks Over Threshold (POT) method, and the R-Largest
Order Statistics (RLOS) method. Bursts in Internet traffic are predicted using
the above three methods. A clear methodology was developed for the bursts
prediction problem. New metrics for QoS are suggested based on Return Level and
Return Period. Thus, robust QoS metrics can be defined. In turn, a superior QoS will
be obtained that would support mission-critical applications.
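As a hedged sketch of the peaks-over-threshold approach (one of the three EVT methods listed above), the Python code below fits a Generalized Pareto Distribution to the exceedances of a synthetic traffic trace and computes a return level; the trace, threshold choice, and parameters are illustrative stand-ins, not the Bellcore or Napoli data used in the thesis.

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
# Illustrative stand-in for a traffic trace, e.g. bytes per 10 ms slot.
traffic = rng.pareto(2.5, 200_000) * 1_000

# Peaks over threshold: fit a Generalized Pareto Distribution to exceedances of a high threshold.
u = np.quantile(traffic, 0.99)
excesses = traffic[traffic > u] - u
xi, _, sigma = genpareto.fit(excesses, floc=0)          # shape xi, scale sigma

def return_level(m, u, xi, sigma, zeta_u):
    """m-observation return level: the burst size exceeded on average once every m observations."""
    return u + (sigma / xi) * ((m * zeta_u) ** xi - 1)

zeta_u = excesses.size / traffic.size                   # empirical probability of exceeding u
print("burst level exceeded once per 10^6 slots:",
      return_level(1_000_000, u, xi, sigma, zeta_u))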
PoissonProb: A new rate-based available bandwidth measurement algorithm.
Accurate available bandwidth measurement is important for the design of network protocols and distributed programs, traffic optimization, capacity planning, and service verification. Research on measuring available bandwidth falls into two basic classes: network traffic modeling algorithms and self-induced algorithms. The self-induced algorithms are based on packet dispersion techniques. Existing available bandwidth measurement algorithms face the problems of measurement distortion on multi-hop paths, system resource limitations, probe traffic intrusiveness, and measurement accuracy. We have developed a new rate-based self-induced algorithm, PoissonProb. The intervals between its probe packets follow a Poisson process (exponentially distributed gaps), and the algorithm infers the available bandwidth from the average probe packet rate. The algorithm has been implemented as the PoissonProb Available Bandwidth (PAB) measurement tool. The PAB tool can be operated in either sender-based or receiver-based mode. (Abstract shortened by UMI.) Thesis (M.Sc.), University of Windsor (Canada), 2005.
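The abstract gives no implementation details, but a minimal sketch of the probe-spacing idea, assuming exponentially distributed inter-probe gaps (i.e. Poisson probe arrivals) and an arbitrary target rate and packet size, might look as follows in Python.

import numpy as np

rng = np.random.default_rng(4)

# Probe stream with Poisson arrivals: inter-probe gaps are exponential with mean 1/target_rate.
# target_rate_pps and packet_size_bits are illustrative, not the PAB tool's real parameters.
target_rate_pps = 200.0                                  # probes per second
packet_size_bits = 1500 * 8
gaps = rng.exponential(1.0 / target_rate_pps, size=1000)
send_times = np.cumsum(gaps)

# Average probing rate over the stream, in bits per second. The self-induced idea is to
# raise this rate until one-way delays start to grow, which signals the available bandwidth.
avg_rate_bps = packet_size_bits * len(send_times) / send_times[-1]
print("average probe rate: %.0f kbit/s" % (avg_rate_bps / 1e3))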
Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide quality access to the variety
of applications and services hosted on datacenters and to maximize performance, it is
necessary to use datacenter networks effectively and efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters and not to survey all
existing solutions (which is virtually impossible given the massive body of
existing research). We hope to provide readers with a wide range of options and
factors to consider when evaluating a variety of traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks that connect geographically
dispersed datacenters, which have been receiving increasing attention recently
and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys & Tutorials.)
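As one concrete example of the traffic-shaping mechanisms this paper surveys (a generic token bucket, not any specific scheme from the paper), a toy Python sketch is shown below; all rates and burst sizes are arbitrary.

# A toy token-bucket shaper: tokens accumulate at a fixed rate up to a burst limit,
# and a packet conforms only if enough tokens are available when it arrives.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # token fill rate (bits per second)
        self.burst = burst_bits       # bucket depth (maximum burst, in bits)
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now, packet_bits):
        """Return True if the packet conforms (may be sent now), else False (delay or drop)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# Illustrative values: a 1 Gbit/s shaping rate with a 15 kB burst allowance.
shaper = TokenBucket(rate_bps=1e9, burst_bits=15_000 * 8)
print(shaper.allow(0.0, 1500 * 8), shaper.allow(0.000001, 60_000 * 8))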