
    Maximum Queue Length of a Fluid Model with a Gaussian Input

    A fractional Brownian queueing model, that is, a fluid model whose input is a fractional Brownian motion, was proposed in the 1990s to capture the self-similarity and long-range dependence observed in Internet traffic. Since then, the Gaussian queueing model, a queueing model whose input is a continuous Gaussian process, has received much attention. In this dissertation, a Gaussian queueing model is discussed and the maximum queue length over a time interval [0, t] is analyzed. Under some mild assumptions, it is shown that the suitably normalized maximum queue length converges to a limit determined by a suitable function of the asymptotic variance of the Gaussian input. Several Gaussian queueing models, such as a queue fed by several independent fractional Brownian motions and a queue fed by an integrated Ornstein-Uhlenbeck process, are discussed as examples. For the fractional Brownian queueing model, the main results extend related results known in the literature. The results on the maximum queue length provide insight into the occurrence of large excursions, also called congestion events, in a queueing process. In the context of a fractional Brownian queueing model, the temporal properties of congestion events, such as their duration and the inter-congestion-event time, are analyzed. A new method based on a Poisson clumping approximation is proposed to evaluate these properties. Comparison with simulation results shows that the proposed methodology produces satisfactory estimates of the temporal properties of congestion events in a fractional Brownian queueing model.
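    To make the setting concrete, the sketch below simulates a discrete-time fractional Brownian fluid queue and records the maximum queue length over the horizon. It is an illustration only, not the dissertation's method: the exact-covariance Cholesky generator, the Lindley recursion for the queue, and all parameter values (H, drain rate c, trace length) are assumptions chosen for the example.

```python
import numpy as np

def fgn(n, H, rng):
    """Exact fractional Gaussian noise via Cholesky factorisation of the
    covariance matrix (O(n^3); fine for illustrative sample sizes)."""
    k = np.arange(n, dtype=float)
    gamma = 0.5 * (np.abs(k - 1)**(2*H) - 2*k**(2*H) + (k + 1)**(2*H))
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return np.linalg.cholesky(gamma[idx]) @ rng.standard_normal(n)

def max_queue_length(work, c):
    """Lindley recursion for a discrete-time fluid queue drained at rate c;
    returns the maximum queue length seen over the whole horizon."""
    q = qmax = 0.0
    for a in work:
        q = max(0.0, q + a - c)
        qmax = max(qmax, q)
    return qmax

rng = np.random.default_rng(1)
H, mean_rate, sigma, c = 0.8, 1.0, 0.5, 1.2   # illustrative parameters only
arrivals = mean_rate + sigma * fgn(4096, H, rng)
print(f"max queue length over [0, t]: {max_queue_length(arrivals, c):.3f}")
```

    Repeating the experiment over many traces would show how the maximum grows with the horizon t, which is the quantity the dissertation's normalization result describes.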

    A study of self-similar traffic generation for ATM networks

    This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic so as to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, the generation of self-similar traffic is vital for the accurate simulation of ATM networks. The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, the successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best, in that it can generate accurate samples efficiently and on the fly. After generating FBM sample traces, a further transformation needs to be conducted, with either the marginal distribution model or the storage model, to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and an interpretation of the physical meaning. The suitability of selected Hurst estimators, namely the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the best estimator, the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable estimators of the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation.
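    As one example of the algorithms the thesis compares, the sketch below implements the random midpoint displacement (RMD) idea for approximating FBM on [0, 1]. It is a minimal illustration under assumed parameters (level count, H, seed), not the thesis's implementation, and it inherits RMD's known trade-off: fast, but only approximately self-similar.

```python
import numpy as np

def fbm_rmd(levels, H, seed=None):
    """Approximate fractional Brownian motion on [0, 1] with 2**levels steps
    via random midpoint displacement."""
    rng = np.random.default_rng(seed)
    n = 2**levels
    b = np.zeros(n + 1)
    b[n] = rng.standard_normal()          # endpoint B(1) ~ N(0, 1)
    for level in range(1, levels + 1):
        step = n // 2**level
        # displacement standard deviation shrinks by 2**(-H) per level
        sd = np.sqrt((1.0 - 2.0**(2*H - 2)) / 2.0**(2*level*H))
        for mid in range(step, n, 2*step):
            b[mid] = 0.5 * (b[mid - step] + b[mid + step]) \
                     + sd * rng.standard_normal()
    return b

trace = fbm_rmd(levels=12, H=0.8, seed=42)   # 4097 points of approximate FBM
```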

    Reactive traffic control mechanisms for communication networks with self-similar bandwidth demands

    Communication network architectures are being redesigned so that many different services are integrated within the same network. Due to this integration, traffic management algorithms need to balance the requirements of the traffic they directly control against the Quality of Service (QoS) requirements of the other classes of traffic encountered in the network. Of particular interest is one class of traffic, termed elastic traffic, that responds to dynamic feedback from the network regarding the amount of available resources. Examples of this type of traffic include the Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks and connections using the Transmission Control Protocol (TCP) in the Internet. Both aim to utilise available bandwidth within a network. Reactive traffic management, like that which occurs in the ABR service and TCP, depends explicitly on the dynamic bandwidth requirements of the other traffic currently using the network. In particular, there is significant evidence that a wide range of network traffic, including Ethernet, World Wide Web, Variable Bit Rate video and signalling traffic, is self-similar. The term self-similar refers to the characteristic of network traffic of remaining bursty over a wide range of time scales. A closely associated characteristic of self-similar traffic is its long-range dependence (LRD), which refers to the significant correlations that occur within the traffic. By utilising these correlations, greater predictability of network traffic can be achieved, and hence the performance of reactive traffic management algorithms can be enhanced. A predictive rate control algorithm, called PERC (Predictive Explicit Rate Control), is proposed in this thesis and targeted at the ABR service in ATM networks. By incorporating the LRD stochastic structure of background traffic, measurements of the bandwidth requirements of background traffic, and the delay associated with a particular ABR connection, a predictive algorithm is defined which provides explicit rate information that is conveyed to ABR sources. An enhancement to PERC is also described. This algorithm, called PERC+, uses previous control information to correct prediction errors that occur for connections with larger round-trip delay. These algorithms have been extensively analysed with regard to their network performance, and simulation results show that queue lengths and cell loss rates are significantly reduced when they are deployed. An adaptive version of PERC has also been developed using real-time parameter estimates of self-similar traffic; it performs excellently compared with standard ABR rate control algorithms such as ERICA. Since PERC and its enhancement PERC+ explicitly utilise the index of self-similarity, known as the Hurst parameter, the sensitivity of these algorithms to this parameter can be determined analytically. The research described in this thesis shows that the algorithms have an asymmetric sensitivity to the Hurst parameter, with significant sensitivity in the region where the parameter is underestimated as being close to 0.5. Simulation results reveal the same bias in the performance of the algorithms with regard to the Hurst parameter.
    In contrast, PERC is insensitive to estimates of the mean, using the sample mean estimator, and to estimates of the traffic variance, because the algorithm primarily utilises the correlation structure of the traffic to predict future bandwidth requirements. Sensitivity analysis falls into the area of investigative research, but it naturally leads to the area of robust control, where algorithms are designed so that uncertainty in traffic parameter estimation or modelling can be accommodated. An alternative robust design approach to the standard maximum entropy approach is proposed in this thesis; it uses the maximum likelihood function to develop the predictive rate controller. The likelihood function quantifies the proximity of a specific traffic model to the traffic data, and hence gives a measure of the performance of a chosen model. Maximising the likelihood function leads to optimising robust performance, and it is shown through simulations that the resulting system performance is close to optimal when compared with maximising the spectral entropy. There is still debate regarding the influence of LRD on network performance. This thesis also considers the question of the influence of LRD on traffic predictability, and demonstrates that predictive rate control algorithms that use only short-term correlations perform nearly as well as algorithms that utilise long-term correlations. It is noted that predictors based on LRD still outperform those which use short-term correlations, but that there is potential simplification in the design of predictors, since traffic predictability can be achieved using short-term correlations. This thesis forms a substantial contribution to the understanding of control in the case where self-similar processes form part of the overall system. Rather than doggedly pursuing self-similar control, a broader view has been taken in which the performance of the algorithms is considered from a number of perspectives. A number of different research avenues lead on from this work, and these are outlined.
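    The abstract does not give PERC's equations, but its core idea, predicting background bandwidth from the LRD correlation structure and granting ABR sources the remainder of the link, can be sketched as a linear minimum mean-square-error predictor built from the fractional Gaussian noise autocorrelation. Everything below (function names, the one-step horizon, the parameter values) is an assumed illustration in that spirit, not the thesis's algorithm.

```python
import numpy as np

def fgn_acf(lags, H):
    """Autocorrelation of fractional Gaussian noise at the given lags."""
    k = np.abs(np.asarray(lags, dtype=float))
    return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

def one_step_predictor(H, p):
    """Coefficients of the optimal linear one-step predictor from p past
    samples, obtained by solving the normal (Yule-Walker) equations."""
    lags = np.arange(p)
    R = fgn_acf(lags[:, None] - lags[None, :], H)  # Toeplitz correlation matrix
    return np.linalg.solve(R, fgn_acf(np.arange(1, p + 1), H))

# Hypothetical explicit-rate computation in the spirit of PERC:
H, p, capacity = 0.8, 16, 155.0            # assumed Hurst, memory, Mbit/s
w = one_step_predictor(H, p)
history = np.full(p, 60.0)                 # measured background bandwidth, oldest first
predicted_background = w @ history[::-1]   # predictor expects newest sample first
explicit_rate = max(0.0, capacity - predicted_background)
```

    Truncating the predictor memory p corresponds to the short-term-correlation predictors discussed at the end of the abstract; the weights for large lags decay slowly precisely because of LRD.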

    Application of learning algorithms to traffic management in integrated services networks

    SIGLE. Available from the British Library Document Supply Centre (DSC: DXN027131) / BLDSC - British Library Document Supply Centre, GB, United Kingdom

    Modelling of self-similar teletraffic for simulation

    Recent studies of real teletraffic data in modern computer networks have shown that teletraffic exhibits self-similar (or fractal) properties over a wide range of time scales. The properties of self-similar teletraffic are very different from those of the traditional models of teletraffic based on Poisson, Markov-modulated Poisson, and related processes. The use of traditional models in networks characterised by self-similar processes can lead to incorrect conclusions about the performance of the analysed networks. These include serious over-estimations of the performance of computer networks, insufficient allocation of communication and data processing resources, and difficulties in ensuring the quality of service expected by network users. Thus, a full understanding of the self-similar nature of teletraffic is an important issue. Due to the growing complexity of modern telecommunication networks, simulation has become the only feasible paradigm for their performance evaluation. In this thesis, we make some contributions to the discrete-event simulation of networks with strongly-dependent, self-similar teletraffic. First, we evaluated the most commonly used methods for estimating the self-similarity parameter H using appropriately long sequences of data. After assessing the properties of the available H estimators, we identified the most efficient estimators for practical studies of self-similarity. Next, the generation of arbitrarily long sequences of pseudo-random numbers possessing specific stochastic properties was considered. Various generators of pseudo-random self-similar sequences have been proposed. They differ in computational complexity and in the accuracy of the self-similar sequences they generate. In this thesis, we propose two new generators of self-similar teletraffic: (i) a generator based on Fractional Gaussian Noise and Daubechies Wavelets (FGN-DW), one of the fastest and most accurate generators proposed so far; and (ii) a generator based on the Successive Random Addition (SRA) algorithm. Our comparative study of sequential and fixed-length self-similar pseudo-random teletraffic generators showed that the FFT, FGN-DW and SRP-FGN generators are the most efficient, in the sense of both accuracy and speed. To conduct simulation studies of telecommunication networks, self-similar processes often need to be transformed into suitable self-similar processes with arbitrary marginal distributions. Thus, the next problem addressed was how well the self-similarity and the autocorrelation function of an original self-similar process are preserved when its sequences are converted into self-similar processes with arbitrary marginal distributions. We also show how pseudo-random self-similar sequences can be applied to produce a model of teletraffic associated with the transmission of VBR JPEG/MPEG video. A combined gamma/Pareto model based on the FGN-DW generator was used to synthesise VBR JPEG/MPEG video traffic. Finally, the effects of self-similarity on the behaviour of queueing systems have been investigated. Using M/M/1/∞ as a reference queueing system with no long-range dependence, we have investigated how self-similarity and long-range dependence in arrival processes affect the length of sequential simulations executed to obtain steady-state results with a required level of statistical error.
    Our results show that the finite buffer overflow probability of a queueing system with self-similar input is much greater than that of an equivalent queueing system with a Poisson or short-range dependent input process, and that the overflow probability increases as the self-similarity parameter approaches one.
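    Of the Hurst estimators this kind of study evaluates, the variance-time plot is the simplest to state: for a self-similar series, the variance of the block means decays as m^(2H-2) with block size m, so a log-log regression recovers H from the slope. The sketch below is a minimal illustration under assumed scale choices, not the thesis's evaluation code; as the earlier ATM thesis notes, the VT plot can misrepresent the actual Hurst value.

```python
import numpy as np

def hurst_variance_time(x, num_scales=12):
    """Variance-time estimate of H: Var(block mean of size m) ~ m**(2H - 2),
    so a log-log regression of block-mean variance on m has slope 2H - 2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.logspace(0, np.log10(n // 10), num_scales).astype(int))
    variances = [x[:(n // m) * m].reshape(-1, m).mean(axis=1).var()
                 for m in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0
```

    Applied to a generated trace with H = 0.8, the estimate should land near 0.8 on average, with the scatter illustrating why Whittle's MLE is preferred for serious work.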

    Traffic Characteristics and Queueing Theory: Implications and Applications to Web Server Systems

    Businesses rely increasingly on Internet services as the basis of their income. Downtime and poor performance of such services can therefore be directly translated into loss of revenue. In order to plan and design services capable of meeting minimum Quality of Service (QoS) requirements and Service Level Agreements (SLAs), an understanding of how network traffic and job service demand affect the system is necessary. Traditionally, arrival and service processes have been modelled as Poisson processes. However, research done over the years suggests that the assumption of Poisson traffic is fallible in many cases. This work considers the performance of a web server under different traffic and service demand conditions. Moreover, we consider theoretical models of queues, response time formulas derived from these models, and their validity for a web server system. We take a simple approach to a complex problem by modelling a web server as a single queueing system. In addition, we investigate the phenomenon known as self-similarity, which has been observed in web traffic inter-arrival processes. We have found indications that traffic streams with identical expected inter-arrival and service times but different distribution types affect the response time differently. Moreover, classical queueing models are found to be unsuited for capacity planning. Instead, we suggest a "worst case scenario" approach that allows service providers to meet service level targets. Much of the previous work within these areas is of a highly mathematical and theoretical nature. We investigate from a more pragmatic viewpoint.
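    As a concrete instance of the response time formulas such classical models yield, the sketch below evaluates the M/M/1 mean response time. M/M/1 is the canonical Poisson-arrival, exponential-service model and is assumed here purely for illustration; the abstract's point is precisely that formulas like this can mislead when real traffic is not Poisson.

```python
def mm1_mean_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda).
    Valid only for utilisation rho = lambda / mu < 1."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Illustrative numbers: 80 requests/s offered to a server handling 100 requests/s.
print(mm1_mean_response_time(80.0, 100.0))   # 0.05 s mean response time
```

    Under self-similar arrivals the same mean rates can produce far longer response times, which motivates the "worst case scenario" approach suggested above.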