
    Video traffic modeling and delivery

    Video is becoming a major component of network traffic, and there has thus been great interest in modeling it. It is known that video traffic possesses short-range dependence (SRD) and long-range dependence (LRD) properties, which can drastically affect network performance. By decomposing a video sequence into three parts according to its motion activity, a Markov-modulated self-similar process model is first proposed to capture the autocorrelation function (ACF) characteristics of MPEG video traffic. Furthermore, a generalized Beta distribution is proposed to model the probability density functions (PDFs) of MPEG video traffic. It is observed that the ACF of MPEG video traffic fluctuates around three envelopes, reflecting the fact that different coding methods reduce the data dependency by different amounts. This observation has led to a more accurate model, the structurally modulated self-similar process model, which captures the ACF of the traffic, both SRD and LRD, by exploiting the MPEG structure. This model is subsequently simplified by directly modulating three self-similar processes, resulting in a much simpler model with the same accuracy as the structurally modulated model. To validate the proposed models for video transmission, the cell loss ratios (CLRs) of a server with a limited buffer size driven by the empirical trace are compared to those driven by the proposed models. The differences are within one order of magnitude, an accuracy hardly achieved by other models, even for JPEG video traffic. In the second part of this dissertation, two dynamic bandwidth allocation algorithms are proposed for pre-recorded and real-time video delivery, respectively. One is based on scene-change identification, and the other on frame differences. The proposed algorithms can increase bandwidth utilization by a factor of two to five compared with the constant bit rate (CBR) service using peak-rate assignment.
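
    The PDF-fitting step can be illustrated with a short, hedged sketch (not the dissertation's own code): scipy's four-parameter Beta distribution plays the role of the generalized Beta, and the frame-size data below is a synthetic stand-in for a real MPEG trace.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Stand-in frame sizes in bytes; replace with a real MPEG frame-size trace.
        frame_sizes = rng.gamma(shape=3.0, scale=2000.0, size=5000)

        # scipy's four-parameter beta (shapes a, b plus loc and scale) is a
        # generalized Beta distribution on the interval [loc, loc + scale].
        a, b, loc, scale = stats.beta.fit(frame_sizes)

        # Rough goodness-of-fit check against the fitted distribution.
        ks_stat, p_value = stats.kstest(frame_sizes, 'beta', args=(a, b, loc, scale))
        print(f"shapes=({a:.2f}, {b:.2f}), "
              f"support=[{loc:.0f}, {loc + scale:.0f}], KS={ks_stat:.3f}")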

    Variable bit rate video time-series and scene modeling using discrete-time statistically self-similar systems

    This thesis investigates the application of discrete-time statistically self-similar (DTSS) systems to the modeling of variable bit rate (VBR) video traffic data. The work is motivated by the fact that, while VBR video has been characterized as self-similar by various researchers, models based on self-similarity considerations have not previously been studied. Given the relationship between self-similarity and long-range dependence, the potential for using the DTSS model in applications involving the modeling of VBR MPEG video traffic data is presented. This thesis initially explores the characteristic properties of the model and then establishes relationships between the discrete-time self-similar model and fractional-order transfer function systems. Using white noise as the input, the modeling approach is presented as a least-squares fit of the output autocorrelations to the correlations of various VBR video trace sequences. This measure is used to compare the model's performance with that of other existing models, such as Markovian, long-range dependent, and M/G/∞ models. The study shows that, using heavy-tailed inputs, the output of these models can be made to match both the scene time-series correlations and the scene density functions. Furthermore, the discrete-time self-similar model is applied to scene classification in VBR MPEG video to demonstrate its potential for modeling self-similar and long-range dependent data. Simulation results show that the proposed modeling technique is a better approach than several earlier ones and finds application in areas such as automatic scene classification, estimation of motion intensity, and metadata generation for MPEG-7 applications.
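
    The correlation-matching idea can be sketched roughly as follows (an illustrative stand-in, not the DTSS model itself): a self-similar autocorrelation parameterised only by a Hurst parameter H is least-squares-fitted to the empirical ACF of a placeholder trace.

        import numpy as np
        from scipy.optimize import least_squares

        def empirical_acf(x, max_lag):
            # Sample autocorrelation at lags 1..max_lag.
            x = x - x.mean()
            denom = np.dot(x, x)
            return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])

        def fgn_acf(H, lags):
            # Autocorrelation of fractional Gaussian noise with Hurst parameter H.
            k = lags.astype(float)
            return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                          + np.abs(k - 1) ** (2 * H))

        rng = np.random.default_rng(1)
        trace = rng.lognormal(mean=9.0, sigma=0.5, size=4000)  # stand-in VBR trace
        lags = np.arange(1, 101)
        r_hat = empirical_acf(trace, 100)

        # Least-squares fit of the model ACF to the empirical ACF.
        res = least_squares(lambda H: fgn_acf(H[0], lags) - r_hat,
                            x0=[0.7], bounds=([0.5], [0.999]))
        print("fitted Hurst parameter:", res.x[0])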

    Renegotiation-based dynamic bandwidth allocation for self-similar VBR traffic

    The provision of QoS to application traffic depends heavily on how different traffic types are categorized and classified, and on how the prioritization of these applications is managed. Bandwidth is the scarcest network resource; therefore, there is a need for a method that distributes the available bandwidth among different applications so that each class of traffic receives its required QoS. In this dissertation, a new renegotiation-based dynamic resource allocation method for variable bit rate (VBR) traffic is presented. First, the pros and cons of available off-line methods for estimating the self-similarity level (represented by the Hurst parameter) of a VBR traffic trace are empirically investigated, and criteria for selecting measurement parameters for online resource management are developed. It is shown that wavelet-analysis-based methods are the strongest tools for estimating the Hurst parameter, owing to their low computational complexity compared with the variance-time method and the R/S pox plot. Therefore, the temporal energy distribution of the traffic arrival counting process across frequency sub-bands is used as a traffic descriptor, and a robust traffic rate predictor is developed using Haar wavelet analysis. The empirical results show that the new online dynamic bandwidth allocation scheme for VBR traffic is superior, in terms of higher utilization and lower queuing delay, to traditional dynamic bandwidth allocation methods based on adaptive algorithms such as Least Mean Square, Recursive Least Square, and Mean Square Error. A method is also developed to minimize the number of bandwidth renegotiations, reducing the signaling cost on traffic schedulers (e.g., WFQ) and networks (e.g., ATM). It is also quantified that the introduced renegotiation-based bandwidth management scheme decreases the heavy-tailedness of queue-size distributions, an inherent impact of traffic self-similarity. The new design simultaneously increases utilization beyond the levels achieved in the literature, provisions for given queue-size constraints, and minimizes the number of renegotiations. This renegotiation-based design is online and can be practically embedded into QoS management blocks, edge routers, Digital Subscriber Line Access Multiplexers (DSLAMs), and rate-adaptive DSL modems.
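
    A minimal sketch of the wavelet-based descriptor and predictor idea is given below; it assumes the PyWavelets package and is not the dissertation's algorithm, so the window length, decomposition depth and safety margin are illustrative choices.

        import numpy as np
        import pywt

        def haar_subband_energies(counts, level=5):
            # Energy of the arrival-count process in each Haar detail sub-band.
            coeffs = pywt.wavedec(np.asarray(counts, dtype=float), 'haar', level=level)
            # coeffs = [approximation, detail_level, ..., detail_1]
            return [float(np.sum(c ** 2)) for c in coeffs[1:]]

        def allocate_bandwidth(counts, window=64, margin=1.0):
            # Renegotiated rate = coarse-scale mean rate + margin for burstiness.
            recent = np.asarray(counts[-window:], dtype=float)
            coeffs = pywt.wavedec(recent, 'haar', level=3)
            smooth = coeffs[0].mean() / 2 ** (3 / 2)      # approx. mean rate
            burstiness = np.std(np.concatenate(coeffs[1:]))
            return smooth + margin * burstiness

        rng = np.random.default_rng(2)
        arrivals = rng.poisson(lam=100, size=1024)        # stand-in arrival counts
        print(haar_subband_energies(arrivals)[:3])
        print("next allocation:", allocate_bandwidth(arrivals))

    A deployable version would also rate-limit renegotiations, as the abstract notes, to keep the signaling cost on WFQ schedulers and ATM networks acceptable.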

    Modelling of self-similar teletraffic for simulation

    Recent studies of real teletraffic data in modern computer networks have shown that teletraffic exhibits self-similar (or fractal) properties over a wide range of time scales. The properties of self-similar teletraffic are very different from those of traditional teletraffic models based on Poisson, Markov-modulated Poisson, and related processes. The use of traditional models in networks characterised by self-similar processes can lead to incorrect conclusions about the performance of the analysed networks, including serious over-estimation of the performance of computer networks, insufficient allocation of communication and data processing resources, and difficulties in ensuring the quality of service expected by network users. Thus, a full understanding of the self-similar nature of teletraffic is an important issue. Due to the growing complexity of modern telecommunication networks, simulation has become the only feasible paradigm for their performance evaluation. In this thesis, we make some contributions to the discrete-event simulation of networks with strongly dependent, self-similar teletraffic. First, we evaluated the most commonly used methods for estimating the self-similarity parameter H using appropriately long sequences of data. After assessing the properties of available H estimators, we identified the most efficient estimators for practical studies of self-similarity. Next, the generation of arbitrarily long sequences of pseudo-random numbers possessing specific stochastic properties was considered. Various generators of pseudo-random self-similar sequences have been proposed; they differ in computational complexity and in the accuracy of the self-similar sequences they generate. In this thesis, we propose two new generators of self-similar teletraffic: (i) a generator based on Fractional Gaussian Noise and Daubechies Wavelets (FGN-DW), which is one of the fastest and most accurate generators proposed so far; and (ii) a generator based on the Successive Random Addition (SRA) algorithm. Our comparative study of sequential and fixed-length self-similar pseudo-random teletraffic generators showed that the FFT, FGN-DW and SRP-FGN generators are the most efficient in terms of both accuracy and speed. To conduct simulation studies of telecommunication networks, self-similar processes often need to be transformed into self-similar processes with arbitrary marginal distributions. Thus, the next problem addressed was how well the self-similarity and autocorrelation function of an original self-similar process are preserved when its sequences are converted to arbitrary marginal distributions. We also show how pseudo-random self-similar sequences can be applied to produce a model of teletraffic associated with the transmission of VBR JPEG/MPEG video. A combined gamma/Pareto model based on the FGN-DW generator was used to synthesise VBR JPEG/MPEG video traffic. Finally, the effects of self-similarity on the behaviour of queueing systems have been investigated. Using M/M/1/∞ as a reference queueing system with no long-range dependence, we investigated how self-similarity and long-range dependence in arrival processes affect the length of sequential simulations executed to obtain steady-state results with the required level of statistical error. Our results show that the finite-buffer overflow probability of a queueing system with self-similar input is much greater than that of an equivalent queueing system with a Poisson or short-range dependent input process, and that the overflow probability increases as the self-similarity parameter approaches one.
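
    The marginal-transformation step can be sketched as follows, assuming a Gaussian self-similar input sequence is already available from some generator (FGN-DW or otherwise); the gamma/Pareto parameters and the body/tail split quantile are illustrative, not the thesis's values.

        import numpy as np
        from scipy import stats

        def gamma_pareto_marginals(fgn, gamma_shape=3.0, gamma_scale=500.0,
                                   pareto_alpha=1.5, split_q=0.97):
            # Probability-integral transform: normal scores -> uniforms -> target
            # quantiles. The rank ordering of the input is preserved; how well the
            # ACF survives is exactly the preservation question studied above.
            z = (fgn - fgn.mean()) / fgn.std()
            u = stats.norm.cdf(z)
            cut = stats.gamma.ppf(split_q, gamma_shape, scale=gamma_scale)
            body = stats.gamma.ppf(u, gamma_shape, scale=gamma_scale)
            qt = np.clip((u - split_q) / (1.0 - split_q), 0.0, 1.0 - 1e-12)
            tail = stats.pareto.ppf(qt, pareto_alpha, scale=cut)  # tail starts at cut
            return np.where(u <= split_q, body, tail)

        # Stand-in input: H = 0.5 fGn is plain white Gaussian noise; in practice
        # the input would come from an fGn generator such as FGN-DW.
        fgn = np.random.default_rng(3).standard_normal(4096)
        frames = gamma_pareto_marginals(fgn)
        print(frames.mean(), frames.max())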

    Modeling and Dynamic Resource Allocation for High Definition and Mobile Video Streams

    Video streaming traffic has been surging in the last few years, and its share of Internet traffic has been increasing daily. The importance of video streaming management has been emphasized with the advent of High Definition (HD) video streaming, which by its nature requires more network resources. In this dissertation, we provide better support for managing HD video traffic over both wireless and wired networks through several contributions. We present a simple, general and accurate video source model: the Simplified Seasonal ARIMA Model (SAM). SAM is capable of capturing the statistical characteristics of video traces with less than 5% difference from their calculated optimal models. SAM is shown to be capable of modeling video traces encoded with the MPEG-4 Part 2, MPEG-4 Part 10, and Scalable Video Codec (SVC) standards, using various encoding settings. We also provide a large and publicly available collection of HD video traces along with their analysis results. These analyses include a full statistical analysis of HD videos, in addition to modeling, factor and cluster analyses. The results show that by using SAM, we can achieve up to 50% improvement in video traffic prediction accuracy. In addition, we developed several video tools, including an HD video traffic generator based on our model. Finally, to improve HD video streaming resource management, we present a SAM-based delay-guaranteed dynamic resource allocation (DRA) scheme that can provide up to 32.4% improvement in bandwidth utilization.
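
    Since SAM's exact orders are not given in the abstract, the sketch below simply fits a low-order seasonal ARIMA with statsmodels, taking the GoP length as the seasonal period; the orders, the GoP length of 16 and the synthetic trace are assumptions for illustration, not the SAM specification itself.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(4)
        gop = 16
        # Stand-in trace: a periodic I/P/B-like frame-size pattern plus noise.
        pattern = np.tile(np.r_[40000, np.full(gop - 1, 12000)], 64)
        trace = pattern + rng.normal(0, 2000, size=pattern.size)

        # Seasonal ARIMA with the GoP length as the seasonal period.
        model = SARIMAX(trace, order=(1, 0, 1), seasonal_order=(0, 1, 1, gop))
        fit = model.fit(disp=False)

        # One-GoP-ahead forecast, the kind of prediction a delay-guaranteed
        # dynamic resource allocation scheme could act on.
        forecast = fit.forecast(steps=gop)
        print(fit.aic, forecast[:4])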

    Dynamic bandwidth allocation in ATM networks

    This thesis investigates bandwidth allocation methodologies for transporting new emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the inevitable congestion that results from the bursty traffic of these new emerging services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it to traditional static bandwidth allocation schemes.
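
    A toy comparison (not taken from the thesis) makes the static-versus-dynamic distinction concrete: static allocation reserves the trace's peak rate for its whole lifetime, whereas dynamic allocation renegotiates a new peak per window; the trace and window length below are placeholders.

        import numpy as np

        rng = np.random.default_rng(5)
        rate = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)   # stand-in cell rate

        static_alloc = rate.max()                                # peak-rate CBR
        window = 200
        dynamic_alloc = np.repeat(
            rate.reshape(-1, window).max(axis=1), window)        # per-window peak

        print("static utilization :", rate.mean() / static_alloc)
        print("dynamic utilization:", rate.mean() / dynamic_alloc.mean())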

    Reactive traffic control mechanisms for communication networks with self-similar bandwidth demands

    Communication network architectures are in the process of being redesigned so that many different services are integrated within the same network. Due to this integration, traffic management algorithms need to balance the requirements of the traffic they directly control with the Quality of Service (QoS) requirements of other classes of traffic encountered in the network. Of particular interest is one class of traffic, termed elastic traffic, that responds to dynamic feedback from the network regarding the amount of available resources. Examples of this type of traffic include the Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks and connections using the Transmission Control Protocol (TCP) in the Internet. Both aim to utilise available bandwidth within a network. Reactive traffic management, like that which occurs in the ABR service and TCP, depends explicitly on the dynamic bandwidth requirements of the other traffic currently using the network. In particular, there is significant evidence that a wide range of network traffic, including Ethernet, World Wide Web, Variable Bit Rate video and signalling traffic, is self-similar. The term self-similar refers to the characteristic of network traffic to remain bursty over a wide range of time scales. A closely associated characteristic of self-similar traffic is its long-range dependence (LRD), which refers to the significant correlations that persist in the traffic over long time lags. By utilising these correlations, greater predictability of network traffic can be achieved, and hence the performance of reactive traffic management algorithms can be enhanced. A predictive rate control algorithm called PERC (Predictive Explicit Rate Control), targeted at the ABR service in ATM networks, is proposed in this thesis. By incorporating the LRD stochastic structure of background traffic, measurements of the bandwidth requirements of background traffic, and the delay associated with a particular ABR connection, a predictive algorithm is defined which provides explicit rate information that is conveyed to ABR sources. An enhancement to PERC is also described. This algorithm, called PERC+, uses previous control information to correct prediction errors that occur for connections with larger round-trip delay. These algorithms have been extensively analysed with regard to their network performance, and simulation results show that queue lengths and cell loss rates are significantly reduced when they are deployed. An adaptive version of PERC has also been developed using real-time parameter estimates of self-similar traffic; it performs excellently compared with standard ABR rate control algorithms such as ERICA. Since PERC and its enhancement PERC+ explicitly utilise the index of self-similarity, known as the Hurst parameter, the sensitivity of these algorithms to this parameter can be determined analytically. Research described in this thesis shows that the algorithms have an asymmetric sensitivity to the Hurst parameter, with significant sensitivity in the region where the parameter is underestimated as being close to 0.5. Simulation results reveal the same bias in the performance of the algorithms with respect to the Hurst parameter. In contrast, PERC is insensitive to estimates of the mean (using the sample-mean estimator) and of the traffic variance, because the algorithm primarily utilises the correlation structure of the traffic to predict future bandwidth requirements. Sensitivity analysis falls into the area of investigative research, but it naturally leads to the area of robust control, where algorithms are designed so that uncertainty in traffic parameter estimation or modelling can be accommodated. An alternative robust design approach to the standard maximum-entropy approach is proposed in this thesis, one that uses the likelihood function to develop the predictive rate controller. The likelihood function measures the proximity of a specific traffic model to the traffic data, and hence gives a measure of the performance of a chosen model. Maximising the likelihood function leads to optimising robust performance, and it is shown through simulations that the resulting system performance is close to optimal, compared with maximising the spectral entropy. There is still debate regarding the influence of LRD on network performance. This thesis also considers the question of the influence of LRD on traffic predictability, and demonstrates that predictive rate control algorithms that use only short-term correlations perform nearly as well as algorithms that utilise long-term correlations. It is noted that predictors based on LRD still out-perform those that use short-term correlations, but that there is potential for simplification in the design of predictors, since useful traffic predictability can be achieved using short-term correlations alone. This thesis forms a substantial contribution to the understanding of control in the case where self-similar processes form part of the overall system. Rather than doggedly pursuing self-similar control, a broader view has been taken in which the performance of the algorithms has been considered from a number of perspectives. A number of research avenues lead on from this work, and these are outlined.
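
    The flavour of an LRD-aware explicit-rate computation can be sketched as follows; this is not the PERC algorithm, only a generic optimal linear one-step predictor built from an assumed fGn correlation structure, with the link capacity, Hurst parameter and history length chosen arbitrarily for illustration.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def fgn_acf(H, max_lag):
            # Autocorrelation of fractional Gaussian noise at lags 0..max_lag.
            k = np.arange(max_lag + 1, dtype=float)
            return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                          + np.abs(k - 1) ** (2 * H))

        def predictor_weights(H, p):
            # Yule-Walker-style weights: solve R w = r for the correlation
            # matrix R (Toeplitz) and lag-1..p correlation vector r.
            r = fgn_acf(H, p)
            return solve_toeplitz(r[:p], r[1:p + 1])

        def explicit_rate(background_history, capacity, H=0.8, p=32):
            # Predict next-interval background traffic, offer the remainder.
            w = predictor_weights(H, p)
            recent = np.asarray(background_history[-p:], dtype=float)
            mean = recent.mean()
            predicted = mean + np.dot(w, (recent - mean)[::-1])  # newest first
            return max(capacity - predicted, 0.0)

        rng = np.random.default_rng(6)
        background = 60 + 10 * rng.standard_normal(512)          # Mb/s, stand-in
        print("explicit rate offered:", explicit_rate(background, capacity=100.0))

    PERC additionally accounts for the delay associated with each ABR connection before conveying the explicit rate to sources, a step this sketch omits.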