
    On the Relevance of Long-Range Dependence in Network Traffic

    There is mounting experimental evidence that network traffic processes exhibit ubiquitous properties of self-similarity and long-range dependence (LRD), i.e. of correlations over a wide range of time scales. However, there is still considerable debate about how to model such processes and about their impact on network and application performance. In this paper, we argue that much recent modeling work has failed to consider the impact of two important parameters: the finite range of time scales of interest in performance evaluation and prediction problems, and the first-order statistics of the process, such as its marginal distribution. We introduce and evaluate a model in which these parameters can be easily controlled. Specifically, our model is a modulated fluid traffic model in which the correlation function of the fluid rate is asymptotically second-order self-similar with a given Hurst parameter, then drops to zero at a cutoff time lag. We develop a very efficient numerical procedure to evaluate the performance of a single-server queue fed with the above fluid input process. We use this procedure to examine the fluid loss rate for a wide range of marginal distributions, Hurst parameters, cutoff lags, and buffer sizes. Our main results are as follows. First, we find that the amount of correlation that needs to be taken into account for performance evaluation depends not only on the correlation structure of the source traffic, but also on time scales specific to the system under study. For example, the time scale associated with a queueing system is a function of the maximum buffer size. Thus, for finite-buffer queues, we find that the impact of correlation in the arrival process on loss becomes nil beyond a time scale we refer to as the correlation horizon. Second, we find that loss depends in a crucial way on the marginal distribution of the fluid rate process. Third, our results suggest that reducing loss by buffering is hard. We advocate the use of source traffic control and statistical multiplexing instead.
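
    The paper's numerical queueing procedure is not reproduced here, but the shape of its truncated correlation function is easy to sketch. A minimal Python illustration, assuming the fluid-rate correlation follows the standard fractional Gaussian noise form with Hurst parameter H and is forced to zero beyond the cutoff lag (the function names and parameter values are ours, not the paper's):

        import numpy as np

        def fgn_autocorrelation(k, hurst):
            # Standard fGn autocorrelation:
            # rho(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))
            k = np.abs(np.asarray(k, dtype=float))
            return 0.5 * ((k + 1) ** (2 * hurst)
                          - 2 * k ** (2 * hurst)
                          + np.abs(k - 1) ** (2 * hurst))

        def truncated_autocorrelation(lags, hurst, cutoff_lag):
            # Asymptotically second-order self-similar correlation that
            # drops to zero at the cutoff lag, as in the paper's model.
            rho = fgn_autocorrelation(lags, hurst)
            return np.where(np.asarray(lags) <= cutoff_lag, rho, 0.0)

        lags = np.arange(10_000)
        rho = truncated_autocorrelation(lags, hurst=0.8, cutoff_lag=1_000)

    Sweeping cutoff_lag against the buffer size is, in miniature, the experiment behind the correlation-horizon result.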

    Revisiting an old friend: On the observability of the relation between Long Range Dependence and Heavy Tail

    Taqqu's Theorem plays a fundamental role in Internet traffic modeling, for two reasons: first, its theoretical formulation matches closely, and in a meaningful manner, some of the key network mechanisms controlling traffic characteristics; second, it offers a plausible explanation for the origin of the long-range dependence property in relation to the heavy-tailed nature of the traffic components. Numerous attempts have since been made to observe its predictions empirically, either from real Internet traffic data or from numerical simulations based on popular traffic models, yet rarely has this resulted in a satisfactory quantitative agreement. This has raised a number of comments and questions in the literature, ranging from the adequacy of the theorem to real-world data to the relevance of the statistical tools involved in practical analyses. The present contribution aims to study the conditions under which this fundamental theorem can actually be seen at work on real or simulated data. To do so, numerical simulations based on standard traffic models are analyzed in a wavelet framework. The key time scales involved are derived, enabling a discussion of the origin and nature of the difficulties encountered in attempts to empirically observe Taqqu's Theorem.
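
    Taqqu's Theorem predicts, for the superposition of many ON/OFF sources whose ON/OFF durations are heavy-tailed with index alpha (1 < alpha < 2), long-range dependence with Hurst parameter H = (3 - alpha)/2. A minimal Python sketch of the kind of numerical experiment described above (the discretisation and parameter values are our own illustrative choices, not the paper's setup):

        import numpy as np

        rng = np.random.default_rng(0)

        def pareto_duration(alpha, scale=1.0):
            # Heavy-tailed duration with tail index alpha (Pareto I).
            return scale * (1.0 + rng.pareto(alpha))

        def onoff_source(n_slots, alpha):
            # One ON/OFF source: rate 1 while ON, 0 while OFF.
            rate = np.zeros(n_slots)
            t, on = 0, True
            while t < n_slots:
                d = int(np.ceil(pareto_duration(alpha)))
                if on:
                    rate[t:t + d] = 1.0
                t += d
                on = not on
            return rate

        alpha = 1.5
        aggregate = sum(onoff_source(2 ** 16, alpha) for _ in range(100))
        print("predicted Hurst parameter:", (3 - alpha) / 2)  # 0.75

    Estimating H from such an aggregate, e.g. with the wavelet-based tools the paper uses, is precisely where the observability difficulties arise.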

    A critical look at power law modelling of the Internet

    This paper takes a critical look at the usefulness of power law models of the Internet. The twin focuses of the paper are Internet traffic and topology generation. The aim of the paper is twofold. Firstly, it summarises the state of the art in power law modelling, giving particular attention to existing open research questions. Secondly, it provides insight into the failings of such models and where progress needs to be made for power law research to feed through to actual improvements in network performance.
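
    One recurring failing this literature points to is exponent estimation by eyeballing log-log plots. A Python sketch of the standard continuous maximum-likelihood estimator (Clauset, Shalizi and Newman), applied for instance to a degree sequence; x_min is assumed known here, though choosing it is itself one of the open questions:

        import numpy as np

        def power_law_alpha_mle(samples, x_min):
            # Continuous MLE for the power-law exponent:
            # alpha_hat = 1 + n / sum(ln(x_i / x_min)), over x_i >= x_min.
            x = np.asarray(samples, dtype=float)
            x = x[x >= x_min]
            return 1.0 + x.size / np.sum(np.log(x / x_min))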

    Optimizing Traffic Lights in a Cellular Automaton Model for City Traffic

    We study the impact of global traffic light control strategies in a recently proposed cellular automaton model for vehicular traffic in city networks. The model combines basic ideas of the Biham-Middleton-Levine model for city traffic and the Nagel-Schreckenberg model for highway traffic. The city network has a simple square lattice geometry. All streets and intersections are treated equally, i.e., there are no dominant streets. Starting from a simple synchronized strategy, we show that the capacity of the network strongly depends on the cycle times of the traffic lights. Moreover, we point out that the optimal time periods are determined by the geometric characteristics of the network, i.e., the distance between the intersections. In the case of synchronized traffic lights, the derivation of the optimal cycle times in the network can be reduced to a simpler problem: the flow optimization of a single street with one traffic light operating as a bottleneck. In order to obtain an enhanced throughput in the model, improved global strategies are tested, e.g., green wave and random switching strategies, which lead to surprising results.
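
    The single-street reduction mentioned above is straightforward to prototype. A minimal Nagel-Schreckenberg ring road with one traffic light acting as a bottleneck, sketched in Python (the cycle time, slowdown probability and maximum speed are illustrative choices, not the paper's values):

        import numpy as np

        rng = np.random.default_rng(1)

        def nasch_step(pos, vel, L, v_max, p_slow, green, light_pos):
            # One parallel Nagel-Schreckenberg update on a ring of length L;
            # a red light is treated as an extra obstacle on the road.
            n = len(pos)
            new_vel = vel.copy()
            for i in range(n):
                gap = (pos[(i + 1) % n] - pos[i] - 1) % L
                if not green:
                    gap = min(gap, (light_pos - pos[i] - 1) % L)
                v = min(vel[i] + 1, v_max, gap)      # accelerate, then brake
                if v > 0 and rng.random() < p_slow:  # random slowdown
                    v -= 1
                new_vel[i] = v
            return (pos + new_vel) % L, new_vel

        L, n_cars, cycle = 100, 20, 40               # green for cycle//2 steps
        pos = np.sort(rng.choice(L, size=n_cars, replace=False))
        vel = np.zeros(n_cars, dtype=int)
        flow = 0
        for t in range(10_000):
            green = (t % cycle) < cycle // 2
            pos, vel = nasch_step(pos, vel, L, 5, 0.3, green, light_pos=0)
            flow += int(np.sum(vel))                 # total distance moved
        print("mean flow per step:", flow / 10_000)

    Sweeping cycle and recording the flow reproduces, on a single street, the cycle-time optimization the abstract describes.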

    Traffic measurement and analysis

    Measurement and analysis of real traffic are important for gaining knowledge about the characteristics of the traffic. Without measurement, it is impossible to build realistic traffic models. Data traffic was only recently found to have self-similar properties. In this thesis work, traffic captured on the network at SICS and on the Supernet is shown to have this fractal-like behaviour. The traffic is also examined with respect to which protocols and packet sizes are present, and in what proportions. In the SICS trace most packets are small, TCP is shown to be the predominant transport protocol, and NNTP the most common application. In contrast, large UDP packets sent between ports that are not well known dominate the Supernet traffic. Finally, characteristics of the client side of WWW traffic are examined more closely. In order to extract useful information from the packet trace, web browsers' use of TCP and HTTP is investigated, including new features in HTTP/1.1 such as persistent connections and pipelining. Empirical probability distributions are derived describing session lengths, times between user clicks, and the amount of data transferred due to a single user click. These probability distributions make up a simple model of WWW sessions.
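
    The session model described above is built from empirical probability distributions. A minimal Python sketch of deriving one such distribution, e.g. times between user clicks, from trace-derived values (the numbers below are made-up placeholders, not data from the thesis):

        import numpy as np

        def empirical_cdf(samples):
            # Empirical CDF: F(x_(i)) = i / n over the sorted samples.
            x = np.sort(np.asarray(samples, dtype=float))
            return x, np.arange(1, x.size + 1) / x.size

        inter_click_s = [0.8, 2.1, 5.0, 12.4, 33.0, 60.2]  # placeholder values
        x, F = empirical_cdf(inter_click_s)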