12,146 research outputs found

    Characteristics of WAP traffic

    This paper considers the characteristics of Wireless Application Protocol (WAP) traffic. We start by constructing a WAP traffic model by analysing the behaviour of users accessing public WAP sites via a monitoring system. A wide range of different traffic scenarios was considered, but most of them resolve to one of two basic types. The paper then uses this traffic model to consider the effects of large quantities of WAP traffic on the core network. One traffic characteristic of particular interest in network dimensioning is the degree of self-similarity, so the paper examines aggregated traffic with WAP, Web and packet-speech components to estimate its self-similarity. The results indicate that, while WAP traffic alone does not exhibit a significant degree of self-similarity, a combined load from various traffic sources retains almost the same degree of self-similarity as the most self-similar individual source.
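    As a rough illustration of how the degree of self-similarity of such a trace can be estimated, the sketch below applies the aggregated-variance method (the slope of log Var(X^(m)) against log m equals 2H - 2) to a per-interval packet-count series. This is not the paper's procedure; the synthetic series, block sizes and function name are illustrative assumptions.

        import numpy as np

        def hurst_aggregated_variance(counts, max_blocks=20):
            # Aggregated-variance estimate of the Hurst parameter H:
            # for a self-similar series, Var(X^(m)) ~ m^(2H - 2), so the
            # slope of log Var(X^(m)) versus log m equals 2H - 2.
            counts = np.asarray(counts, dtype=float)
            sizes = np.unique(np.logspace(0, np.log10(len(counts) // 10),
                                          max_blocks).astype(int))
            log_m, log_var = [], []
            for m in sizes:
                n_blocks = len(counts) // m
                if n_blocks < 5:
                    break
                block_means = counts[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
                log_m.append(np.log(m))
                log_var.append(np.log(block_means.var()))
            slope, _ = np.polyfit(log_m, log_var, 1)
            return 1.0 + slope / 2.0

        # A memoryless (Poisson) count series should give H close to 0.5,
        # whereas self-similar traffic gives H well above 0.5.
        rng = np.random.default_rng(0)
        print(hurst_aggregated_variance(rng.poisson(10, 100_000)))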

    Traffic measurement and analysis

    Measurement and analysis of real traffic is important to gain knowledge about the characteristics of the traffic. Without measurement, it is impossible to build realistic traffic models. Only recently was data traffic found to have self-similar properties. In this thesis, traffic captured on the network at SICS and on the Supernet is shown to have this fractal-like behaviour. The traffic is also examined with respect to which protocols and packet sizes are present and in what proportions. In the SICS trace most packets are small, TCP is shown to be the predominant transport protocol and NNTP the most common application. In contrast, large UDP packets sent between ports that are not well known dominate the Supernet traffic. Finally, characteristics of the client side of the WWW traffic are examined more closely. In order to extract useful information from the packet trace, web browsers' use of TCP and HTTP is investigated, including new features in HTTP/1.1 such as persistent connections and pipelining. Empirical probability distributions are derived describing session lengths, time between user clicks and the amount of data transferred due to a single user click. These probability distributions make up a simple model of WWW sessions.
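    Empirical distributions of this kind can be derived from a click log with only a few lines of code. The sketch below is an illustrative assumption, not the thesis's actual analysis programs: it computes the empirical CDF of inter-click (think) times and splits the click sequence into sessions at a fixed idle threshold.

        import numpy as np

        def empirical_cdf(samples):
            # Sorted sample values and their empirical CDF points.
            x = np.sort(np.asarray(samples, dtype=float))
            return x, np.arange(1, len(x) + 1) / len(x)

        def clicks_per_session(click_times, idle_threshold=600.0):
            # Split a time-ordered sequence of click timestamps into sessions
            # whenever the gap between clicks exceeds idle_threshold seconds.
            t = np.sort(np.asarray(click_times, dtype=float))
            boundaries = np.flatnonzero(np.diff(t) > idle_threshold)
            return np.diff(np.concatenate(([0], boundaries + 1, [len(t)])))

        # Illustration on synthetic click timestamps (seconds).
        rng = np.random.default_rng(1)
        clicks = np.cumsum(rng.exponential(120.0, 5000))
        think, cdf = empirical_cdf(np.diff(clicks))   # time between user clicks
        print("median think time:", think[np.searchsorted(cdf, 0.5)])
        print("clicks per session (first few):", clicks_per_session(clicks)[:5])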

    Performance evaluation of an open distributed platform for realistic traffic generation

    Network researchers have dedicated a notable part of their efforts to the area of traffic modeling and to the implementation of efficient traffic generators. We feel that there is a strong demand for traffic generators capable of reproducing realistic traffic patterns according to theoretical models while at the same time achieving high performance. This work presents an open distributed platform for traffic generation that we call the Distributed Internet Traffic Generator (D-ITG), capable of producing traffic (network, transport and application layer) at packet level and of accurately replicating appropriate stochastic processes for both the inter-departure time (IDT) and packet size (PS) random variables. We implemented two different versions of our distributed generator. In the first, a log server is in charge of recording the information transmitted by senders and receivers, and these communications are based on either TCP or UDP. In the other, senders and receivers make use of the MPI library. In this work a complete performance comparison of the centralized version and the two distributed versions of D-ITG is presented.
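    To make the IDT/PS parameterisation concrete, here is a minimal single-flow UDP sender in the same spirit. This is not D-ITG code or its API; the exponential IDT, uniform packet size, destination address and function name are all illustrative assumptions.

        import random
        import socket
        import time

        def generate_udp_flow(host, port, duration_s,
                              mean_idt_s=0.01, min_ps=64, max_ps=1400):
            # Send UDP packets whose inter-departure times (IDT) are exponential
            # and whose payload sizes (PS) are uniform -- the two random
            # variables a packet-level generator parameterises.
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sent = 0
            deadline = time.monotonic() + duration_s
            while time.monotonic() < deadline:
                payload = bytes(random.randint(min_ps, max_ps))
                sock.sendto(payload, (host, port))
                sent += 1
                time.sleep(random.expovariate(1.0 / mean_idt_s))
            sock.close()
            return sent

        if __name__ == "__main__":
            # Placeholder destination; point it at a traffic sink before running.
            print(generate_udp_flow("127.0.0.1", 9000, duration_s=2.0))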

    A statistical model of internet traffic.

    We present a method to extract a time series, the Number of Active Requests (NAR), from web cache logs, which serves as a transport-level measurement of internet traffic. This series also reflects the performance or Quality of Service of a web cache. Using time series modelling, we interpret the properties of this kind of internet traffic and its effect on the performance perceived by the cache user. Our preliminary analysis of NAR concludes that this dataset is suggestive of a long-memory self-similar process but is not heavy-tailed. Having carried out more in-depth analysis, we propose a three-stage modelling process for the time series: (i) a power transformation to normalise the data, (ii) a polynomial fit to approximate the general trend and (iii) modelling of the residuals from the polynomial fit. We analyse the polynomial and show that the residual dataset may be modelled as a FARIMA(p, d, q) process. Finally, we use Canonical Variate Analysis to determine the most significant defining properties of our measurements and draw conclusions to categorise the differences in traffic properties between the various caches studied. We show that the strongest illustration of differences between the caches is given by the short-memory parameters of the FARIMA fit. We compare the differences revealed between our studied caches and draw conclusions on them. Several programs have been written in the Perl and S programming languages for this analysis, including totalqd.pl for NAR calculation, fullanalysis for general statistical analysis of the data and armamodel for FARIMA modelling.
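    A rough sketch of the three stages on a synthetic NAR-like series is shown below. The thesis's own programs were written in Perl and S, so this Python version is purely illustrative, and the full FARIMA fit is replaced here by a simple Geweke-Porter-Hudak (GPH) estimate of the fractional-differencing parameter d of the residuals; the synthetic series and parameter choices are assumptions.

        import numpy as np
        from scipy.stats import boxcox

        def gph_fractional_d(x, n_freqs=None):
            # Geweke-Porter-Hudak estimate of the fractional-differencing
            # parameter d: regress the log periodogram on log(4 sin^2(w/2))
            # at the lowest Fourier frequencies; the slope is -d.
            x = np.asarray(x, dtype=float)
            n = len(x)
            m = n_freqs or int(np.sqrt(n))
            freqs = 2 * np.pi * np.arange(1, m + 1) / n
            pgram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
            slope, _ = np.polyfit(np.log(4 * np.sin(freqs / 2) ** 2),
                                  np.log(pgram), 1)
            return -slope

        # Synthetic stand-in for a NAR series (positive, trending, noisy).
        rng = np.random.default_rng(2)
        nar = 50 + np.cumsum(rng.normal(0, 0.01, 20_000)) + rng.gamma(2.0, 2.0, 20_000)

        transformed, _ = boxcox(nar)                          # (i) power transform
        t = np.linspace(0.0, 1.0, len(transformed))
        trend = np.polyval(np.polyfit(t, transformed, 3), t)  # (ii) polynomial trend
        residuals = transformed - trend                       # (iii) residual model
        print("estimated d of residuals:", gph_fractional_d(residuals))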

    Stochastic Dynamic Programming and Stochastic Fluid-Flow Models in the Design and Analysis of Web-Server Farms

    A Web-server farm is a specialized facility designed specifically for housing Web servers catering to one or more Internet-facing Web sites. In this dissertation, a stochastic dynamic programming technique is used to obtain the optimal admission control policy with different classes of customers, and stochastic fluid-flow models are used to compute the performance measures in the network. The two types of network traffic considered in this research are streaming (guaranteed bandwidth per connection) and elastic (shares available bandwidth equally among connections). We first obtain the optimal admission control policy using stochastic dynamic programming, in which, based on the number of requests of each type being served, a decision is made whether to allow or deny service to an incoming request. In this subproblem, we consider a fixed-bandwidth-capacity server, which allocates the requested bandwidth to the streaming requests and divides all of the remaining bandwidth equally among all of the elastic requests. The performance metric of interest in this case will be the blocking probability of streaming traffic, which will be computed in order to be able to provide Quality of Service (QoS) guarantees. Next, we obtain bounds on the expected waiting time in the system for elastic requests that enter the system. This will be done at the server level in such a way that the total available bandwidth for the requests is constant. Trace data will be converted to an ON-OFF source and fluid-flow models will be used for this analysis. The results are compared with both the mean waiting time obtained by simulating real data, and the expected waiting time obtained using traditional queueing models. Finally, we consider the network of servers and routers within the Web farm where data from servers flows and merges before getting transmitted to the requesting users via the Internet. We compute the waiting time of the elastic requests at intermediate and edge nodes by obtaining the distribution of the outflow of the upstream node. This outflow distribution is obtained by using a methodology based on minimizing the deviations from the constituent inflows. This analysis also helps us to compute waiting times at different bandwidth capacities, and hence obtain a suitable bandwidth to promise or satisfy the QoS guarantees. This research helps in obtaining performance measures for different traffic classes at a Web-server farm so as to be able to promise or provide QoS guarantees, while at the same time helping to utilize the resources of the server farms efficiently, thereby reducing operational costs and increasing energy savings.
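    As a toy illustration of the admission-control subproblem, and emphatically not the dissertation's actual formulation, the sketch below runs relative value iteration on a uniformised two-class Markov decision process in which streaming requests reserve a fixed bandwidth slice and elastic requests share what is left; every rate, reward and capacity value is a made-up assumption.

        import numpy as np

        # Toy two-class admission-control MDP: streaming requests reserve a fixed
        # bandwidth slice, elastic requests share whatever is left.  All rates,
        # rewards and capacities are made-up illustration values.
        C, B_STREAM = 10.0, 2.0     # link capacity, per-stream reservation
        S_MAX = int(C // B_STREAM)  # at most 5 concurrent streaming requests
        E_MAX = 15                  # truncation of the elastic-request count
        LAM_S, LAM_E = 1.0, 3.0     # Poisson arrival rates
        MU_S = 0.5                  # per-connection streaming completion rate
        MU_E = 0.4                  # elastic completion rate per spare bandwidth unit
        R_S, R_E = 5.0, 1.0         # reward earned per admitted request
        UNIF = LAM_S + LAM_E + S_MAX * MU_S + C * MU_E   # uniformization constant

        def elastic_rate(s, e):
            # Total elastic departure rate: spare bandwidth shared by elastic flows.
            return (C - s * B_STREAM) * MU_E if e > 0 else 0.0

        V = np.zeros((S_MAX + 1, E_MAX + 1))
        for _ in range(2000):                          # relative value iteration
            newV = np.empty_like(V)
            for s in range(S_MAX + 1):
                for e in range(E_MAX + 1):
                    stay = V[s, e]
                    admit_s = R_S + V[s + 1, e] if s < S_MAX else stay
                    admit_e = R_E + V[s, e + 1] if e < E_MAX else stay
                    total = LAM_S * max(stay, admit_s) + LAM_E * max(stay, admit_e)
                    if s > 0:
                        total += s * MU_S * V[s - 1, e]
                    if e > 0:
                        total += elastic_rate(s, e) * V[s, e - 1]
                    total += (UNIF - LAM_S - LAM_E
                              - s * MU_S - elastic_rate(s, e)) * stay
                    newV[s, e] = total / UNIF
            V = newV - newV[0, 0]                      # keep the iterates bounded

        # Admit an arriving streaming request in state (s, e) iff it pays off.
        admit_stream = np.array([[int(s < S_MAX and R_S + V[s + 1, e] >= V[s, e])
                                  for e in range(E_MAX + 1)]
                                 for s in range(S_MAX + 1)])
        print(admit_stream)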

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, `programmed' AI and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour, and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule or state based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
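    As a toy example of the rule-based event correlation discussed above (the event format, rule and thresholds are invented for illustration and are not taken from the report), a correlator might raise an alert when several failed authentications from one source are followed by a success within a short window:

        from collections import defaultdict, deque

        # Toy rule-based correlator: alert when a single source produces N_FAIL
        # failed authentications within WINDOW seconds and then succeeds.
        N_FAIL, WINDOW = 3, 60.0

        def correlate(events):
            # events: time-ordered iterable of (timestamp, source, outcome) tuples.
            recent_failures = defaultdict(deque)
            alerts = []
            for ts, src, outcome in events:
                fails = recent_failures[src]
                while fails and ts - fails[0] > WINDOW:   # expire stale failures
                    fails.popleft()
                if outcome == "fail":
                    fails.append(ts)
                elif outcome == "success" and len(fails) >= N_FAIL:
                    alerts.append((ts, src, f"{len(fails)} failures then success"))
                    fails.clear()
            return alerts

        events = [(0, "a", "fail"), (5, "a", "fail"), (9, "a", "fail"),
                  (12, "a", "success"), (100, "b", "success")]
        print(correlate(events))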

    Self-Similarity in a multi-stage queueing ATM switch fabric

    Recent studies of digital network traffic have shown that arrival processes in such an environment are more accurately modeled as a statistically self-similar process, rather than as a Poisson-based one. We present a simulation of a combination shared-output queueing ATM switch fabric, sourced by two models of self-similar input. The effect of self-similarity on the average queue length and cell loss probability for this multi-stage queue is examined for varying load, buffer size, and internal speedup. The results using two self-similar input models, Pareto-distributed interarrival times and a Poisson-Zeta ON-OFF model, are compared with each other and with results using Poisson interarrival times and an ON-OFF bursty traffic source with Geometrically distributed burst lengths. The results show that at a high utilization and a high degree of self-similarity, switch performance improves only slowly with increasing buffer size and speedup, as compared to the improvement using Poisson-based traffic.
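    A much-reduced sketch of this kind of experiment is given below: a single output-queued port with a finite buffer is fed by independent ON/OFF sources whose ON bursts are Pareto distributed, and the cell-loss probability is measured for a few buffer sizes. The parameter values, and the reduction from a multi-stage switch fabric to one queue, are illustrative assumptions rather than the paper's setup.

        import random

        def cell_loss(buffer_size, n_sources=16, total_load=0.9,
                      alpha=1.5, slots=200_000, seed=0):
            # One output-queued port, one cell served per slot, fed by n_sources
            # independent ON/OFF sources: Pareto-distributed ON bursts (one cell
            # per slot while ON) and exponential OFF gaps chosen so that the
            # offered load is roughly total_load.
            rng = random.Random(seed)
            mean_on = alpha / (alpha - 1)            # mean of Pareto(alpha), x_m = 1
            per_src = total_load / n_sources
            mean_off = mean_on * (1 - per_src) / per_src
            # state[i] > 0: remaining ON slots; state[i] <= 0: remaining OFF slots
            state = [-rng.expovariate(1.0 / mean_off) for _ in range(n_sources)]
            q = lost = arrived = 0
            for _ in range(slots):
                for i in range(n_sources):
                    if state[i] > 0:                 # ON: emit one cell this slot
                        arrived += 1
                        if q < buffer_size:
                            q += 1
                        else:
                            lost += 1
                        state[i] -= 1
                        if state[i] <= 0:            # burst over, draw an OFF gap
                            state[i] = -rng.expovariate(1.0 / mean_off)
                    else:                            # OFF: count down the gap
                        state[i] += 1
                        if state[i] >= 0:            # gap over, draw a new burst
                            state[i] = rng.paretovariate(alpha)
                if q > 0:                            # serve one cell per slot
                    q -= 1
            return lost / max(arrived, 1)

        for b in (16, 64, 256):
            print(b, cell_loss(b))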