
    Global Modeling and Prediction of Computer Network Traffic

    We develop a probabilistic framework for global modeling of the traffic over a computer network. This model integrates existing single-link (single-flow) traffic models with the routing over the network to capture the global traffic behavior. It arises from a limit approximation of the traffic fluctuations as the time scale and the number of users sharing the network grow. The resulting probability model comprises a Gaussian and/or a stable, infinite-variance component; both can be succinctly described and handled by certain 'space-time' random fields. The model is validated against simulated and real data. It is then applied to predict traffic fluctuations over unobserved links from a limited set of observed links. Finally, applications to anomaly detection and network management are briefly discussed.
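    As an illustration of the prediction task in this abstract: under the Gaussian component of such a model, traffic on an unobserved link can be predicted from observed links by standard Gaussian conditioning, with the link covariance induced by the routing. The minimal Python sketch below uses an invented routing matrix, per-flow variances, and observations; it is not the paper's estimator.

        import numpy as np

        # Routing matrix: rows = links, columns = flows (1 if the flow crosses the link).
        # The matrix and the per-flow variances below are illustrative assumptions.
        A = np.array([[1, 1, 0],
                      [0, 1, 1],
                      [1, 0, 1]], dtype=float)
        flow_var = np.diag([4.0, 1.0, 2.25])        # assumed per-flow traffic variances
        Sigma = A @ flow_var @ A.T                  # link covariance induced by routing

        obs, tgt = [0, 1], [2]                      # links 0 and 1 observed; link 2 not
        S_oo = Sigma[np.ix_(obs, obs)]
        S_to = Sigma[np.ix_(tgt, obs)]

        y_obs = np.array([5.3, 4.1])                # observed deviations from mean rates
        # Conditional mean of the unobserved link's deviation given the observed links:
        y_pred = S_to @ np.linalg.solve(S_oo, y_obs)
        print(y_pred)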

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users' performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship among the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., 'burstiness'), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulas that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic, these formulas reduce to C = M + αV, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already at a relatively low aggregation level, the Gaussianity assumption is justified. As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small timescales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and 'inverts' this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
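    To make the formula concrete, here is a minimal Python sketch of the dimensioning recipe C = M + αV: estimate M and V from a trace aggregated at the timescale of interest, choose α from the performance target, and plug in. The synthetic trace, the timescale, and the choice α = sqrt(−2 ln ε) for an assumed exceedance probability ε are illustrative, not the paper's exact procedure.

        import numpy as np

        rng = np.random.default_rng(1)
        trace = rng.gamma(shape=8.0, scale=12.5, size=100_000)  # synthetic per-sample loads

        T = 100                                     # timescale of interest, in samples
        blocks = trace[: len(trace) // T * T].reshape(-1, T).mean(axis=1)

        M = blocks.mean()                           # mean offered load
        V = blocks.std(ddof=1)                      # fluctuation around the mean at scale T

        eps = 1e-3                                  # assumed target P(traffic > C)
        alpha = np.sqrt(-2 * np.log(eps))           # Gaussian tail factor (an assumption)
        C = M + alpha * V
        print(f"M={M:.1f}  V={V:.2f}  alpha={alpha:.2f}  required capacity C={C:.1f}")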

    Estimation of traffic matrices in the presence of long memory traffic

    The estimation of traffic matrices in a communications network on the basis of a set of traffic measurements on the network links is a well-known problem, for which a number of solutions have been proposed when the traffic does not show dependence over time, as in the case of the Poisson process. However, extensive measurement campaigns conducted on IP networks have shown that the traffic exhibits long range dependence. Here a method is proposed for the estimation of traffic matrices in the case of long range dependence, and its theoretical properties are studied. Its merits are then evaluated via a simulation study. Finally, an application to real data is provided.
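    For context, the underlying inverse problem can be written as y = Ax: link measurements y relate to the unknown origin-destination (OD) traffic x through the routing matrix A, and x must be recovered even though the system is typically underdetermined. A minimal Python sketch with an invented routing matrix and OD flows, using ridge-regularized least squares rather than the long-memory method proposed in the paper:

        import numpy as np

        A = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]], dtype=float)  # 3 links, 4 OD flows: underdetermined
        x_true = np.array([10.0, 4.0, 7.0, 2.0])   # hypothetical OD traffic
        y = A @ x_true + np.random.default_rng(2).normal(0.0, 0.1, size=3)

        lam = 1e-2                                  # ridge penalty for the rank-deficient system
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ y)
        print(x_hat)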

    Systematic inference of the long-range dependence and heavy-tail distribution parameters of ARFIMA models

    Long-Range Dependence (LRD) and heavy-tailed distributions are ubiquitous in natural and socio-economic data. Such data can be self-similar, whereby both LRD and heavy-tailed distributions contribute to the self-similarity as measured by the Hurst exponent. Some methods widely used in the physical sciences estimate these two parameters separately, which can lead to estimation bias; those that do simultaneous estimation are based on frequentist methods such as Whittle's approximate maximum likelihood estimator. Here we present a new and systematic Bayesian framework for the simultaneous inference of the LRD and heavy-tailed distribution parameters of a parametric ARFIMA model with non-Gaussian innovations. As innovations we use the α-stable and t-distributions, which have power-law tails. Our algorithm also provides parameter uncertainty estimates. We test our algorithm using synthetic data, and also data from the Geostationary Operational Environmental Satellite system (GOES) solar X-ray time series. These tests show that our algorithm is able to accurately and robustly estimate the LRD and heavy-tailed distribution parameters.
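    As a small illustration of the model class, the sketch below simulates an ARFIMA(0, d, 0) series with heavy-tailed Student-t innovations by truncating the moving-average expansion (1 − B)^(−d) = Σ_k ψ_k B^k, with ψ_0 = 1 and ψ_k = ψ_{k−1}(k − 1 + d)/k. The parameter values and truncation length are illustrative assumptions; this is not the paper's Bayesian sampler.

        import numpy as np

        def arfima_0d0(n, d, df, n_psi=2000, seed=3):
            # Truncated MA(inf) weights of the fractional integrator (1 - B)^(-d).
            rng = np.random.default_rng(seed)
            psi = np.empty(n_psi)
            psi[0] = 1.0
            for k in range(1, n_psi):
                psi[k] = psi[k - 1] * (k - 1 + d) / k
            eps = rng.standard_t(df, size=n + n_psi)   # power-law-tailed innovations
            x = np.convolve(eps, psi, mode="valid")    # x_t = sum_k psi_k * eps_{t-k}
            return x[:n]

        x = arfima_0d0(n=5000, d=0.3, df=3.0)          # d and df are illustrative
        print(x.mean(), x.std())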

    The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena

    The Internet is the most complex system ever created in human history. Its dynamics and traffic therefore, unsurprisingly, take on a rich variety of complex dynamics, self-organization, and other phenomena that have been researched for years. This paper is a review of the complex dynamics of Internet traffic. Departing from the usual treatments, we take a view from both the network engineering and physics perspectives, showing the strengths and weaknesses of each as well as the insights both offer. In addition, we cover many less-discussed phenomena, such as traffic oscillations, the large-scale effects of worm traffic, and comparisons between the Internet and biological models.

    Comment: 63 pages, 7 figures, 7 tables; submitted to Advances in Complex Systems

    Variable bit rate video time-series and scene modeling using discrete-time statistically self-similar systems

    This thesis investigates the application of discrete-time statistically self-similar (DTSS) systems to the modeling of variable bit rate (VBR) video traffic data. The work is motivated by the fact that, while VBR video has been characterized as self-similar by various researchers, models based on self-similarity considerations have not previously been studied. Given the relationship between self-similarity and long-range dependence, the potential for using the DTSS model in applications involving the modeling of VBR MPEG video traffic is presented. The thesis first explores the characteristic properties of the model and then establishes relationships between the discrete-time self-similar model and fractional-order transfer function systems. Using white noise as the input, the modeling approach fits the output autocorrelations to the correlations of various VBR video trace sequences by least squares. This measure is used to compare the model's performance with that of other existing models, such as Markovian, long-range dependent, and M/G/∞ models. The study shows that, using heavy-tailed inputs, the output of these models can match both the scene time-series correlations and the scene density functions. Furthermore, the discrete-time self-similar model is applied to scene classification in VBR MPEG video, demonstrating a potential application of discrete-time self-similar models to modeling self-similar and long-range dependent data. Simulation results show that the proposed modeling technique is a better approach than several earlier ones and finds application in areas such as automatic scene classification, estimation of motion intensity, and metadata generation for MPEG-7 applications.
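    The least-squares correlation matching described above can be illustrated in miniature: fit a self-similar autocorrelation function to a trace's empirical autocorrelations and read off the fitted parameter. The sketch below uses the fractional-Gaussian-noise form of the autocorrelation and a white-noise stand-in trace; it is not the thesis's DTSS model, and no MPEG data are involved.

        import numpy as np

        def fgn_acf(k, H):
            # Autocorrelation of fractional Gaussian noise at integer lags k >= 1.
            k = np.asarray(k, dtype=float)
            return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

        def empirical_acf(x, max_lag):
            x = x - x.mean()
            c0 = float(np.dot(x, x))
            return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, max_lag + 1)])

        trace = np.random.default_rng(4).normal(size=20_000)   # stand-in for a video trace
        lags = np.arange(1, 51)
        r_emp = empirical_acf(trace, 50)

        # Least-squares fit of the model ACF to the empirical ACF over a grid of H.
        H_grid = np.linspace(0.5, 0.99, 491)
        sse = [np.sum((fgn_acf(lags, H) - r_emp) ** 2) for H in H_grid]
        print("fitted H:", H_grid[int(np.argmin(sse))])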