
    A critical look at power law modelling of the Internet

    This paper takes a critical look at the usefulness of power law models of the Internet. The twin focuses of the paper are Internet traffic and topology generation. The aim of the paper is twofold. Firstly, it summarises the state of the art in power law modelling, giving particular attention to existing open research questions. Secondly, it provides insight into the failings of such models and into where progress needs to be made for power law research to feed through to actual improvements in network performance. (To appear in Computer Communications.)
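
    As a concrete, hedged instance of the class of models the paper examines (not code from the paper itself), the Python sketch below draws Pareto-distributed flow sizes, a standard power-law ingredient of heavy-tailed Internet traffic models, and builds the empirical complementary CDF whose near-straight-line appearance on log-log axes is how power-law claims are commonly argued. The function name and the parameters alpha and x_min are illustrative choices.

        import numpy as np

        def pareto_flow_sizes(n, alpha=1.2, x_min=1.0, rng=None):
            """Draw n flow sizes with P(X > x) = (x_min / x)**alpha for x >= x_min."""
            rng = np.random.default_rng() if rng is None else rng
            u = rng.uniform(size=n)
            return x_min / u ** (1.0 / alpha)   # inverse-CDF sampling

        # Empirical complementary CDF; on log-log axes it is close to a
        # straight line of slope -alpha, the usual visual power-law check.
        sizes = pareto_flow_sizes(100_000)
        ccdf_x = np.sort(sizes)
        ccdf_y = 1.0 - np.arange(1, sizes.size + 1) / sizes.size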

    Using Digital Filtration for Hurst Parameter Estimation

    We present a new method to estimate the Hurst parameter. The method exploits the form of the autocorrelation function for second-order self-similar processes and is based on one-pass digital filtration. We compare the performance and properties of the new method with those of the most common methods.
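
    The paper's filter-based estimator is not reproduced here, but the following sketch illustrates the underlying idea of reading H off the autocorrelation function: for exactly second-order self-similar increments (e.g. fractional Gaussian noise), rho(1) = 2^(2H-1) - 1, so a single pass that accumulates the sums needed for the lag-1 autocorrelation yields H in closed form. The function name and the single-pass bookkeeping are illustrative assumptions, not the paper's digital filter.

        import math

        def hurst_from_lag1_acf(x):
            """Single-pass sketch: invert rho(1) = 2**(2H-1) - 1 for H.

            Valid only under the assumed exactly second-order self-similar
            model; not the filtration method of the paper."""
            n = len(x)
            s = s2 = cross = 0.0
            prev = None
            for v in x:                       # one pass over the series
                v = float(v)
                s += v
                s2 += v * v
                if prev is not None:
                    cross += prev * v
                prev = v
            mean = s / n
            var = s2 / n - mean * mean
            rho1 = (cross / (n - 1) - mean * mean) / var
            return 0.5 * (1.0 + math.log2(1.0 + rho1))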

    CoLoRaDe: A Novel Algorithm for Controlling Long-Range Dependent Network Traffic

    Long-range dependence characteristics have been observed in many natural or physical phenomena. In particular, a significant impact on data network performance has been shown in several papers. Congested Internet situations, where TCP/IP buffers start to fill, show long-range dependent (LRD) self-similar chaotic behaviour. The exponential growth of the number of servers, as well as the number of users, causes the performance of the Internet to be problematic, since LRD traffic has a significant impact on buffer requirements. The Internet is a large-scale, wide-area network for which measurement and analysis of traffic are vital. The intensity of long-range dependence in communications network traffic can be measured using the Hurst parameter. A variety of techniques (such as R/S analysis, aggregated variance-time analysis, periodogram analysis, the Whittle estimator, Higuchi's method, wavelet-based estimators, the absolute moment method, etc.) exist for estimating the Hurst exponent, but the accuracy of the estimation is still a complicated and controversial issue. Earlier research (Rezaul et al., 2006) introduced a novel estimator, the Hurst exponent from the autocorrelation function (HEAF), and showed why lag 2 (i.e. HEAF(2)) is used when estimating the LRD of network traffic. HEAF estimates H by a process which is simple, quick and reliable. In this research we extend these concepts by introducing a novel algorithm for controlling the long-range dependence of network traffic, named CoLoRaDe, which is shown to reduce the LRD of packet sequences at the router buffer.
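
    The CoLoRaDe algorithm itself is not described here in enough detail to reproduce, but the idea behind the HEAF(2) estimator it builds on can be sketched: equate the sample lag-2 autocorrelation of the series with the theoretical second-order self-similar form and solve for H. The bisection solver and names below are an illustrative stand-in, not the exact procedure of Rezaul et al. (2006).

        import numpy as np

        def heaf2(x, lo=0.50, hi=0.999, tol=1e-6):
            """Sketch of the HEAF(2) idea: solve
               rho(k) = 0.5*((k+1)**(2H) - 2*k**(2H) + (k-1)**(2H)) at k = 2
            for H, given the sample lag-2 autocorrelation of the series."""
            x = np.asarray(x, dtype=float)
            x = x - x.mean()
            rho2 = float(np.dot(x[:-2], x[2:]) / np.dot(x, x))   # sample ACF, lag 2

            def rho_model(h, k=2):
                return 0.5 * ((k + 1) ** (2 * h) - 2 * k ** (2 * h) + (k - 1) ** (2 * h))

            # rho_model(., 2) rises from 0 to 1 as H runs over (0.5, 1), so a
            # simple bisection recovers H for positively correlated traffic.
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if rho_model(mid) < rho2:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)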

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users’ performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship among the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., ‘burstiness’), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulas that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic, these formulas reduce to C = M + αV, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already for a relatively low aggregation level, the Gaussianity assumption is justified. As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small time-scales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and ‘inverts’ this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
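
    As a hedged illustration of how a rule of the shape C = M + αV might be applied to a measured trace (this is not the paper's buffer-sampling procedure, and α and V are given simple stand-in definitions): treat the trace as byte counts per interval of length T, take M as the mean offered rate, V as the standard deviation of the rate at that timescale, and α = sqrt(-2 ln ε) for a target exceedance probability ε, as in a Gaussian tail approximation.

        import math
        import numpy as np

        def dimension_link(byte_counts, T, epsilon):
            """Sketch of a mean-plus-safety-margin rule C = M + alpha * V.

            Assumptions (not from the paper): byte_counts are per-interval
            counts at timescale T seconds, V is the standard deviation of the
            offered rate at that timescale, and alpha = sqrt(-2 ln(epsilon))
            for a Gaussian-style exceedance probability epsilon."""
            rates = np.asarray(byte_counts, dtype=float) / T   # offered rate per interval
            M = rates.mean()                                   # mean offered load
            V = rates.std(ddof=1)                              # burstiness at timescale T
            alpha = math.sqrt(-2.0 * math.log(epsilon))        # performance target
            return M + alpha * V

        # e.g. capacity such that the 100 ms offered rate exceeds C in at most
        # 1% of intervals (under the Gaussian assumption):
        # C = dimension_link(counts_per_100ms, T=0.1, epsilon=0.01)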

    Renegotiation-based dynamic bandwidth allocation for self-similar VBR traffic

    The provision of QoS to application traffic depends heavily on how different traffic types are categorized and classified, and on how the prioritization of these applications is managed. Bandwidth is the scarcest network resource. Therefore, there is a need for a method or system that distributes the available bandwidth in a network among different applications in such a way that each class or type of traffic receives its required QoS. In this dissertation, a new renegotiation-based dynamic resource allocation method for variable bit rate (VBR) traffic is presented. First, the pros and cons of available off-line methods used to estimate the self-similarity level (represented by the Hurst parameter) of a VBR traffic trace are empirically investigated, and criteria to select measurement parameters for online resource management are developed. It is shown that wavelet analysis based methods are the strongest tools for estimating the Hurst parameter, with low computational complexity compared to the variance-time method and the R/S pox plot. Therefore, the temporal energy distribution of a traffic data arrival counting process among different frequency sub-bands is considered as a traffic descriptor, and a robust traffic rate predictor is developed using Haar wavelet analysis. The empirical results show that the new on-line dynamic bandwidth allocation scheme for VBR traffic is superior to traditional dynamic bandwidth allocation methods based on adaptive algorithms such as Least Mean Square, Recursive Least Square and Mean Square Error, in terms of high utilization and low queuing delay. A method is also developed to minimize the number of bandwidth renegotiations, so as to decrease signaling costs on traffic schedulers (e.g. WFQ) and networks (e.g. ATM). It is also quantified that the introduced renegotiation-based bandwidth management scheme decreases the heavy-tailedness of queue size distributions, which is an inherent impact of traffic self-similarity. The new design improves on the utilization levels achieved in the literature, satisfies given queue size constraints and minimizes the number of renegotiations simultaneously. This renegotiation-based design is online and practically embeddable into QoS management blocks, edge routers, Digital Subscriber Line Access Multiplexers (DSLAMs) and rate-adaptive DSL modems.
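
    The dissertation's predictor is not reproduced here; the sketch below only illustrates the kind of descriptor it starts from, namely the per-sub-band energy of a Haar wavelet decomposition of an arrival-count series. The function name and level-by-level loop are illustrative assumptions; for long-range dependent traffic these energies scale as a power law across octaves, which is what wavelet-based Hurst estimators exploit.

        import numpy as np

        def haar_subband_energies(counts, levels):
            """Energy of the Haar detail coefficients of an arrival-count
            series at each decomposition level (a simple 'temporal energy
            distribution' descriptor; illustrative sketch only)."""
            a = np.asarray(counts, dtype=float)
            energies = []
            for _ in range(levels):
                if len(a) < 2:
                    break
                a = a[: len(a) - (len(a) % 2)]                # even length at this scale
                approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # Haar approximation branch
                detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # Haar detail branch
                energies.append(float(np.mean(detail ** 2)))  # sub-band energy
                a = approx
            return energies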

    A Survey of Performance Evaluation and Control for Self-Similar Network Traffic

    This paper surveys techniques for the recognition and treatment of self-similar network or internetwork traffic. Various researchers have reported traffic measurements that demonstrate considerable burstiness on a range of time scales, with properties of self-similarity. Rapid technological development has widened the scope of network and Internet applications and, in turn, increased traffic volume. The exponential growth of the number of servers, as well as the number of users, causes Internet performance to be problematic as a result of the significant impact that long-range dependent traffic has on buffer requirements. Consequently, accurate and reliable measurement, analysis and control of Internet traffic are vital. The most significant techniques for performance evaluation include theoretical analysis, simulation, and empirical study based on measurement. In this research, we discuss existing techniques and recent developments in performance evaluation and control tools used in network traffic engineering.

    A study of self-similar traffic generation for ATM networks

    This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, generation of self-similar traffic is vital for the accurate simulation of ATM networks. The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best in that it can generate accurate samples efficiently and on-the-fly. After generating FBM sample traces, a further transformation needs to be conducted, with either the marginal distribution model or the storage model, to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and interpretation of physical meaning. The suitability of selected Hurst estimators, the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the best of these estimators, the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable methods for estimating the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation.
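
    Of the generators compared above, random midpoint displacement is the simplest to state. The sketch below is an illustrative implementation of plain RMD, not the thesis's hybrid dFGN/RMD scheme; the function name, the recursion-depth parameter and the unit-interval normalisation are choices made for the example.

        import numpy as np

        def fbm_rmd(hurst, levels, rng=None):
            """Approximate fractional Brownian motion on [0, 1] via random
            midpoint displacement (RMD).  RMD is fast but only approximately
            self-similar, which is why the thesis pairs it with the more
            accurate dFGN generator in its hybrid approach."""
            rng = np.random.default_rng() if rng is None else rng
            n = 2 ** levels
            x = np.zeros(n + 1)
            x[n] = rng.normal(0.0, 1.0)        # endpoint B(1) ~ N(0, 1)
            scale, step = 1.0, n
            for _ in range(levels):
                half = step // 2
                scale *= 0.5 ** hurst          # displacement std shrinks by 2**-H per level
                sd = scale * np.sqrt(1.0 - 2.0 ** (2.0 * hurst - 2.0))
                for start in range(0, n, step):
                    mid = start + half
                    x[mid] = 0.5 * (x[start] + x[start + step]) + rng.normal(0.0, sd)
                step = half
            return x                           # np.diff(x) approximates fractional Gaussian noise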