148 research outputs found

    Investigation of delay jitter of heterogeneous traffic in broadband networks

    Scope and Methodology of Study: A critical challenge for both wired and wireless networking vendors and carrier companies is to accurately estimate the quality of service (QoS) that will be provided, based on the network architecture, router/switch topology, and protocol applied. This thesis therefore focuses on the theoretical analysis of QoS parameters, in terms of inter-arrival jitter, in differentiated services networks, deploying analytic/mathematical modeling techniques and queueing theory; the analytic model is expressed as a set of equations that can be solved to yield the desired delay jitter parameter. In wireless networks with homogeneous traffic, the effect of the priority control scheme for ARQ traffic on delay jitter is evaluated for two cases: 1) the ARQ traffic has priority over the original transmission traffic; and 2) the ARQ traffic has no priority over the original transmission traffic. In wired broadband networks with heterogeneous traffic, the jitter analysis is conducted and an algorithm to control its effect is developed.
    Findings and Conclusions: First, the results show that high-priority packets always maintain the minimum inter-arrival jitter, which remains unaffected even under heavy load. Second, Gaussian traffic modeling is applied using the MVA approach to conduct the queue length analysis, and then the jitter analysis in heterogeneous broadband networks is investigated; for wireless networks with homogeneous traffic, a binomial distribution is used for the queue length analysis, which is sufficient and comparatively simple. Third, a service discipline called tagged-stream adaptive distortion-reducing peak-output-rate enforcing is developed to prevent the delay jitter from growing without bound in heterogeneous broadband networks.
    Finally, the analysis shows that differentiated services are not only viable but also effective in controlling delay jitter. The analytic models serve as guidelines to assist network system designers in controlling the QoS requested by customers in terms of delay jitter.
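The thesis's models are analytic, but its headline finding — high-priority packets keep the minimum inter-arrival jitter even under heavy load — can be illustrated with a small event-driven sketch. All parameters, the periodic (CBR) arrival streams, and the exponential-service assumption below are illustrative choices of mine, not the thesis's model:

```python
import random
import statistics

def simulate(t_hi=8.0, t_lo=0.5, mean_svc=0.4, horizon=20000.0,
             priority=True, seed=7):
    """Single server fed by two periodic (CBR) streams; service times are
    exponential to model variable packet sizes. Returns the departure
    times of the high-priority stream, under either non-preemptive
    priority or plain FIFO scheduling."""
    rng = random.Random(seed)
    hi = [0.13 + k * t_hi for k in range(int(horizon / t_hi))]
    lo = [k * t_lo for k in range(int(horizon / t_lo))]
    i = j = 0
    free = 0.0                          # time the server next becomes idle
    dep_hi = []
    while i < len(hi) or j < len(lo):
        nh = hi[i] if i < len(hi) else float("inf")
        nl = lo[j] if j < len(lo) else float("inf")
        start = max(free, min(nh, nl))  # next service-start epoch
        # Priority: a waiting hi packet is served first; FIFO: earliest arrival.
        if nh <= start and (priority or nh <= nl):
            i += 1
            free = start + rng.expovariate(1.0 / mean_svc)
            dep_hi.append(free)
        else:
            j += 1
            free = start + rng.expovariate(1.0 / mean_svc)
    return dep_hi

def jitter(departures):
    """Std-dev of inter-departure gaps; zero for an undisturbed CBR flow."""
    return statistics.pstdev(b - a for a, b in zip(departures, departures[1:]))
```

With the low-priority load near saturation, the high-priority stream's jitter under priority scheduling stays close to the service-time variability alone, while under FIFO it inherits the full queue fluctuations — mirroring the thesis's first finding.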

    Simulation and performance of a statistical multiplexer in an ATM network

    This report examines some of the issues arising in the implementation of statistical multiplexing in a broadband integrated services digital network (B-ISDN) by analysis and simulation. The B-ISDN concept is introduced and described. A review of the current areas of research is given, along with some of the important issues as they relate to telephone traffic. The report then focuses on the problem of multiplexing voice traffic. A typical voice source is analysed and the resulting traffic characteristics are described. The concept of statistical multiplexing is introduced. A review of the current literature on the problems of analysing multiplexed sources is given, with particular reference to the concept of cell-level and burst-level queues being separate and disparate components requiring different analytical approaches. Several models are introduced, including a 3-state model not previously described in the literature. The queue behaviour resulting from a large number of superposed lines is analysed as a simplified Markov process, and the results are used to argue that it is not feasible to provide buffers for nodes which multiplex a large number of low-intensity sources. The problem of scaling small models up to realistic situations is discussed. An approach to simulating the problem is described, along with algorithms for implementing the basic elements. A series of results derived from the described simulation are presented and analysed. The report concludes that statistical multiplexing is feasible, but with certain limits as to the type of traffic which can be supported.
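The burst-level behaviour such a report analyses can be sketched with a discrete-time slot simulation of superposed on/off voice sources feeding a finite buffer. Source count, transition probabilities, link rate, and buffer size below are illustrative choices, not the report's:

```python
import random

def multiplexer_loss(n_src=25, link=12, buf=40, p_on=0.00667, p_off=0.01,
                     slots=200000, seed=3):
    """Each slot: every ON source emits one cell, the link drains `link`
    cells, and cells beyond `buf` are dropped. Sources flip ON/OFF with
    geometric sojourns (the discrete analogue of exponential talkspurts).
    Returns (cells lost, cells offered)."""
    rng = random.Random(seed)
    act = p_on / (p_on + p_off)                # stationary activity ~ 0.4
    on = [rng.random() < act for _ in range(n_src)]
    q = lost = offered = 0
    for _ in range(slots):
        arr = sum(on)                          # cells from all ON sources
        offered += arr
        q += arr
        if q > buf:                            # buffer overflow: drop excess
            lost += q - buf
            q = buf
        q = max(0, q - link)                   # serve up to `link` cells
        on = [(rng.random() > p_off) if s else (rng.random() < p_on)
              for s in on]
    return lost, offered
```

With 25 sources of mean activity 0.4, the average offered load is about 10 cells per slot against a link of 12, so losses occur only during burst-level excursions where more than 12 sources talk at once — the cell-level/burst-level separation the report emphasises.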

    A hybrid queueing model for fast broadband networking simulation

    This research investigates a fast simulation method for broadband telecommunication networks, such as ATM networks and IP networks. As a result of this research, a hybrid simulation model is proposed which combines analytical modelling and event-driven simulation modelling to speed up the overall simulation. The division between foreground and background traffic, and the way these different types of traffic are handled to reduce simulation time, is the major contribution reported in this thesis. Background traffic is present to ensure that proper buffering behaviour is included during the simulation experiments, but, unlike traditional simulation techniques, only the foreground traffic of interest is simulated. Foreground and background traffic are dealt with differently. To avoid the extra events on the event list, and the processing overhead, associated with the background traffic, the novel technique investigated in this research is to remove the background traffic completely and adjust the service time of the queues to compensate (in most cases, the service time for the foreground traffic will increase). By removing the background traffic from the event-driven simulator, the number of cell-processing events is reduced drastically. Validation of this approach shows that, overall, the method works well, but simulations using it differ somewhat from experimental results on a testbed, mainly because of the assumptions that make the analytical model tractable. Hence, the analytical model needs to be adjusted. This is done by training a neural network to learn the relationship between the input traffic parameters and the difference between the proposed model and the testbed.
    Following this training, simulations can be run using the output of the neural network to adjust the analytical model for those particular traffic conditions. The approach is applied to cell-scale and burst-scale queueing to simulate an ATM switch, and it is also used to simulate an IP router. In all these applications, the method provides a fast simulation as well as accurate results.
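In the simplest setting — Poisson background traffic and exponential service — the compensation idea reduces to serving the foreground-only queue at the residual rate mu - lambda_bg, and the foreground delay distribution is then preserved exactly. The M/M/1 setting and all rates below are my simplifying assumptions for illustration; the thesis's actual adjustment is more elaborate and neural-network corrected:

```python
import random
import statistics

def fg_sojourns(lam_fg, lam_bg, mu, n_arrivals, seed=5):
    """Lindley recursion for a FIFO M/M/1 queue fed by foreground and
    background Poisson streams; returns the foreground sojourn times."""
    rng = random.Random(seed)
    lam = lam_fg + lam_bg
    w = 0.0                               # waiting time of the current packet
    out = []
    for _ in range(n_arrivals):
        s = rng.expovariate(mu)           # service time
        if rng.random() < lam_fg / lam:   # Poisson splitting: tag foreground
            out.append(w + s)
        a = rng.expovariate(lam)          # gap to the next arrival
        w = max(0.0, w + s - a)
    return out

# Full model: foreground + background, 70000 packet events to process.
full = fg_sojourns(lam_fg=0.1, lam_bg=0.6, mu=1.0, n_arrivals=70000)
# Reduced model: background removed, service rate cut to mu - lam_bg;
# only 10000 packet events for a similar number of foreground samples.
reduced = fg_sojourns(lam_fg=0.1, lam_bg=0.0, mu=0.4, n_arrivals=10000)
print(statistics.mean(full), statistics.mean(reduced))  # both near 1/0.3
```

The reduced run processes a seventh of the events yet yields statistically matching foreground delays, which is the speed-up mechanism the thesis describes.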

    Analysis of discrete-time queueing systems with multidimensional state space


    Reduction of bandwidth requirement by traffic dispersion in ATM networks

    The problem of bandwidth allocation and routing in Virtual Path (VP) based Asynchronous Transfer Mode (ATM) networks was studied. The VP concept has been proposed in the literature as an efficient way to facilitate network management: traffic control and resource management are simplified in VP-based networks. However, a priori reservation of resources for VPs also reduces the statistical multiplexing gain, resulting in increased Call Blocking Probability (CBP).
    The focus of this study is on how to reduce CBP (or, equivalently, how to improve bandwidth utilization for a given CBP requirement) through effective bandwidth allocation and routing algorithms. The equivalent capacity concept was used to calculate the bandwidth required by a call, with each call represented as bursty, heterogeneous multimedia traffic.
    First, the effect of traffic dispersion was explored to achieve more statistical gain. The study shows how this effect varies with traffic characteristics and the number of paths. An efficient routing algorithm, CED, was designed. Since traffic dispersion requires resequencing and extra signaling to set up multiple VCs, it should be used only when it gives significant benefits; this was the basic idea behind the design of CED. The algorithm finds an optimal dispersion factor for a call, at which the gain balances the dispersion cost. A simulation study showed that CBP can be significantly reduced by CED.
    Next, this study analyses the statistical behavior of the traffic seen by an individual VP as a result of traffic dispersion. This analysis is essential for accurately estimating the required capacity of a VP when both multimedia traffic and traffic dispersion are taken into account, and analytical models have been formulated for it.
    The cost-effective design and engineering of VP networks requires accurate and tractable mathematical models which capture the important statistical properties of traffic. This study also revealed that the load distribution estimated by equivalent capacity is Gaussian, being the sum of two jointly Gaussian random variables. For the analysis of the load distribution when CED is used, multiple paths were simplified to identical paths using the idea of Approximation by a Single Abstract Path (ASAP), and the characteristics of the traffic seen by an individual VP were approximated. The developed analytical models and approximations were validated in the sense that they agreed with simulation results.
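The equivalent-capacity calculation such allocation schemes rely on can be sketched with the standard fluid approximation for a single on/off source (the well-known Guerin–Ahmadi–Naghshineh formula, used here purely as an illustration; the study's own traffic parameters are not reproduced):

```python
import math

def equivalent_capacity(peak, rho, burst, buf, eps):
    """Fluid-approximation equivalent capacity of one on/off source.
    peak: peak rate; rho: activity (mean/peak ratio, 0 < rho < 1);
    burst: mean burst length; buf: buffer size;
    eps: target buffer-overflow probability."""
    y = math.log(1.0 / eps) * burst * (1.0 - rho) * peak
    x = buf
    return peak * (y - x + math.sqrt((y - x) ** 2 + 4.0 * x * rho * y)) / (2.0 * y)
```

The result always lies between the mean rate rho*peak and the peak rate: a small buffer pushes the allocation toward peak-rate allocation, a large buffer toward mean-rate allocation — precisely the statistical-multiplexing trade-off that dispersing a call over multiple VPs exploits.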

    An Adaptive Scheme for Admission Control in ATM Networks

    This paper presents a real-time front-end admission control scheme for ATM networks. A call management scheme is presented which uses the burstiness associated with traffic sources in a heterogeneous ATM environment to effect dynamic assignment of bandwidth. In the proposed scheme, call acceptance is based on an on-line evaluation of the upper bound on cell loss probability, which is derived from the estimated distribution of the number of calls arriving. Using this scheme, the negotiated quality of service will be assured when there is no estimation error. The control mechanism is effective when the number of calls is large, and it tolerates loose bandwidth enforcement and loose policing control. The proposed approach is very effective in the connection-oriented transport of ATM networks, where the decision to admit new traffic is based on a priori knowledge of the state of the route taken by the traffic.
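The admission test can be sketched as follows: admit a call only if an upper bound on the overflow (and hence cell-loss) probability, computed from the distribution of active calls, stays below the target. The sketch uses a Chernoff bound on a homogeneous binomial on/off model — my simplification, not the paper's estimator:

```python
import math

def overflow_bound(n, p, peak, link):
    """Chernoff bound on P(aggregate rate of n on/off calls > link),
    with each call active with probability p at rate `peak`.
    Uses P(X >= a*n) <= exp(-n * D(a || p)) for X ~ Binomial(n, p)."""
    a = link / (peak * n)        # fraction of calls the link can carry at once
    if a >= 1.0:
        return 0.0               # even all-on traffic fits
    if a <= p:
        return 1.0               # mean load already exceeds the link
    d = a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))
    return math.exp(-n * d)

def admit(n_current, p, peak, link, eps):
    """Accept the new call only if the bound still meets the loss target."""
    return overflow_bound(n_current + 1, p, peak, link) <= eps
```

Because the bound tightens as n grows, the mechanism is most effective with many calls, matching the paper's observation.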

    Analysis of jitter due to call-level fluctuations

    In communication networks used by constant bit rate applications, call-level dynamics (i.e., calls entering and leaving) lead to fluctuations in the load, and therefore also to fluctuations in the delay (jitter). By intentionally delaying the packets at the destination, one can transform the perturbed packet stream back into the original periodic stream; in other words, there is a trade-off between jitter and delay, in that jitter can be removed at the expense of delay. Consequently, for streaming applications in which the packet delay should remain below some predefined threshold, it is desirable that the jitter remain small. This paper presents a set of procedures to compute the jitter due to call-level variations. We consider a network resource shared by a fluctuating set of constant bit rate applications (modelled as periodic sources). As a first step we study the call-level dynamics: supposing that a tagged call sees n0 calls when entering the system, we compute the probability that at the end of its duration (consisting of, say, i packets) ni calls are present, of which n0,i stem from the original n0 calls. As a second step, we show how to compute the jitter for given n0, ni, and n0,i; in this analysis generalized ballot problems have to be solved. We find an iterative exact solution to these, together with explicit approximations and bounds. Then, as a final step, the (packet-level) results of the second step are weighted with the (call-level) probabilities of the first step, resulting in the probability distribution of the jitter experienced within the call duration. An explicit Gaussian approximation is proposed. Extensive numerical experiments validate the accuracy of the approximations and bounds.
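The first step — the joint behaviour of ni and n0,i — has a simple sampling counterpart when call arrivals are Poisson and holding times exponential (an M/M/inf call model; the rates below are illustrative, not the paper's): each of the original n0 calls survives to time t independently with probability exp(-mu*t), while the fresh calls still present at t are Poisson distributed.

```python
import math
import random

def poisson_sample(rng, mean):
    """Knuth's product-of-uniforms Poisson sampler."""
    if mean == 0.0:
        return 0
    limit = math.exp(-mean)
    k, prod = 0, 1.0
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k - 1

def call_level_sample(n0, lam, mu, t, rng):
    """Sample (survivors of the original n0 calls, fresh calls still
    present) after time t in an M/M/inf call model with arrival rate
    lam and mean holding time 1/mu."""
    p = math.exp(-mu * t)
    survivors = sum(rng.random() < p for _ in range(n0))
    fresh = poisson_sample(rng, (lam / mu) * (1.0 - p))
    return survivors, fresh
```

Averaging many samples recovers the means n0*exp(-mu*t) and (lam/mu)*(1 - exp(-mu*t)); the paper then weights its packet-level ballot-problem results with exactly such call-level probabilities.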

    Traffic control mechanisms with cell rate simulation for ATM networks.

    PhD thesis. Abstract not available.
