
    Multiplexing regulated traffic streams: design and performance

    The main network solutions for supporting QoS rely on traffic policing (conditioning, shaping). In particular, for IP networks the IETF has developed Intserv (individual flows regulated) and Diffserv (only aggregates regulated). The proposed regulator could be based on the (dual) leaky-bucket mechanism. This explains the interest in network element performance (loss, delay) for leaky-bucket regulated traffic. This paper describes a novel approach to the above problem. Explicitly using the correlation structure of the sources’ traffic, we derive approximations for both small and large buffers. Importantly, for small (large) buffers the short-term (long-term) correlations are dominant. The large-buffer result decomposes the traffic stream into a stream of constant rate and a periodic impulse stream, allowing direct application of the Brownian bridge approximation. Combining the small- and large-buffer results by a concave majorization, we propose a simple, fast and accurate technique to statistically multiplex homogeneous regulated sources. To address heterogeneous inputs, we present similarly efficient techniques to evaluate the performance of multiple classes of traffic, each with distinct characteristics and QoS requirements. These techniques, applicable under more general conditions, are based on optimal resource (bandwidth and buffer) partitioning. They can also be directly applied to set GPS (Generalized Processor Sharing) weights and buffer thresholds in a shared-resource system.
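
    As a point of reference for the (dual) leaky-bucket regulator discussed above, here is a minimal Python sketch of a dual token-bucket conformance check. It is a generic illustration, not the paper's model; the class name and parameters (peak rate p, cell-delay-variation tolerance cdvt, sustainable rate r, burst size b) are assumptions chosen for the example.

```python
class DualLeakyBucket:
    """Dual token-bucket regulator: a peak-rate bucket (rate p, depth cdvt)
    in series with a sustainable-rate bucket (rate r, depth b).
    A packet conforms only if both buckets hold enough tokens."""

    def __init__(self, p, cdvt, r, b):
        self.p, self.r = p, r              # token fill rates (bytes/s)
        self.cap_p, self.cap_r = cdvt, b   # bucket depths (bytes)
        self.tok_p, self.tok_r = cdvt, b   # current token levels
        self.last = 0.0                    # time of last update

    def conforms(self, t, size):
        # Refill both buckets for the elapsed time, capped at their depths.
        dt = t - self.last
        self.tok_p = min(self.cap_p, self.tok_p + self.p * dt)
        self.tok_r = min(self.cap_r, self.tok_r + self.r * dt)
        self.last = t
        if size <= self.tok_p and size <= self.tok_r:
            self.tok_p -= size
            self.tok_r -= size
            return True                    # packet is conforming
        return False                       # would be dropped or shaped


if __name__ == "__main__":
    reg = DualLeakyBucket(p=10e6, cdvt=1500, r=1e6, b=16000)
    print(reg.conforms(0.0000, 1500))   # True: both buckets start full
    print(reg.conforms(0.0001, 1500))   # False: peak bucket has refilled only 1000 bytes
```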

    From burstiness characterisation to traffic control strategy: a unified approach to integrated broadband networks

    The major challenge in the design of an integrated network is the integration and support of a wide variety of applications. To provide the requested performance guarantees, a traffic control strategy has to allocate network resources according to the characteristics of the input traffic. The definition of the traffic characterisation is therefore central to network design. In this thesis, a traffic stream is characterised based on a virtual queue principle. This approach provides the necessary link between network resource allocation and traffic control. It is difficult to guarantee performance without prior knowledge of the worst behaviour under statistical multiplexing. Accordingly, we investigate the worst-case scenarios in a statistical multiplexer. We evaluate upper bounds on the probability of buffer overflow in a multiplexer and on the data loss of an input stream. It is found that in networks without traffic control, simply controlling the utilisation of a multiplexer does not improve the ability to guarantee performance. Instead, the available buffer capacity and the degree of correlation among the input traffic dominate the loss performance. The leaky bucket mechanism has been proposed to prevent ATM networks from performance degradation due to congestion. We study the leaky bucket mechanism as a regulation element that protects an input stream. We evaluate the optimal parameter settings and analyse the worst-case performance. To investigate its effectiveness, we analyse the delay performance of a leaky-bucket-regulated multiplexer. Numerical results show that the leaky bucket mechanism can provide well-behaved traffic with a guaranteed delay bound in the presence of misbehaving traffic. Using the leaky bucket mechanism, a general strategy based on burstiness characterisation, called the LB-Dynamic policy, is developed for packet scheduling. This traffic control strategy is closely related to the allocation of both bandwidth and buffer in each switching node. In addition, the LB-Dynamic policy monitors the allocated network resources and guarantees the network performance of each established connection, irrespective of the traffic intensity and arrival patterns of incoming packets. Simulation studies demonstrate that the LB-Dynamic policy is able to provide the requested service quality for heterogeneous traffic in integrated broadband networks.
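
    The virtual-queue principle used for the characterisation above can be illustrated with a short, generic sketch: feed an arrival trace through a fictitious constant-rate queue (the Lindley recursion) and read the peak backlog as a burstiness measure at that rate. This is not the thesis's exact formulation; the function name, the trace and the rate below are invented for the example.

```python
def virtual_queue_burstiness(arrivals, rate):
    """Characterise a traffic stream by its virtual queue: the backlog of a
    fictitious queue fed by the stream and drained at a constant rate.
    `arrivals[k]` is the work arriving in slot k; `rate` is the work served
    per slot.  Returns the backlog trajectory and its peak, which can be
    read as the burstiness of the stream at that service rate."""
    backlog, trajectory = 0.0, []
    for a in arrivals:
        backlog = max(0.0, backlog + a - rate)   # Lindley recursion
        trajectory.append(backlog)
    return trajectory, max(trajectory)


if __name__ == "__main__":
    trace = [3, 0, 0, 5, 5, 0, 1, 0]             # invented per-slot arrivals
    traj, burst = virtual_queue_burstiness(trace, rate=2.0)
    print(traj, burst)                           # peak backlog = burstiness at rate 2
```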

    Statistical multiplexing and connection admission control in ATM networks

    Asynchronous Transfer Mode (ATM) technology is widely employed for the transport of network traffic, and has the potential to be the base technology for the next generation of global communications. Connection Admission Control (CAC) is the effective traffic control mechanism which is necessary in ATM networks in order to avoid possible congestion at each network node and to achieve the Quality-of-Service (QoS) requested by each connection. CAC determines whether or not the network should accept a new connection. A new connection will only be accepted if the network has sufficient resources to meet its QoS requirements without affecting the QoS commitments already made by the network for existing connections. The design of a high-performance CAC is based on an in-depth understanding of the statistical characteristics of the traffic sources.
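
    A minimal sketch of the admission rule described above, under the common assumption that each connection is summarised by an effective-bandwidth estimate: admit the new connection only if the aggregate estimate, including the newcomer, fits on the link. The function and all numbers below are placeholders, not the thesis's CAC scheme.

```python
def admit(new_eb, existing_ebs, link_capacity):
    """Effective-bandwidth CAC rule: accept the new connection only if the
    aggregate effective bandwidth, including the newcomer, fits on the link.
    `new_eb` and `existing_ebs` are per-connection effective-bandwidth
    estimates (e.g. in Mbit/s); how they are computed is left open here."""
    return sum(existing_ebs) + new_eb <= link_capacity


if __name__ == "__main__":
    carried = [12.0, 30.5, 7.2]          # illustrative estimates in Mbit/s
    print(admit(20.0, carried, 100.0))   # True: 69.7 <= 100
    print(admit(55.0, carried, 100.0))   # False: 104.7 > 100
```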

    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources with a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. It studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed; it is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic conditions, and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation indicates that CAC schemes based on the effective bandwidth approximation can be very conservative and prevent optimal use of network resources. A modified effective bandwidth CAC approach is therefore proposed to overcome this drawback. To account for statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov-modulated Poisson process obtained by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and of the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. Finally, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed; it refines the original effective bandwidth approximation and can lead to higher link utilisation.
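
    For the simplest case of a single two-state (on-off) Markov fluid source, the large-deviations effective bandwidth mentioned above has a well-known closed form: alpha(s) = [s*h - a - b + sqrt((s*h - a - b)^2 + 4*a*s*h)] / (2s), i.e. the largest eigenvalue of Q + sH divided by s. The sketch below evaluates this standard formula as a sanity check; it is not the thesis's two-state MMPP fit, and the parameter values are illustrative.

```python
import math


def eb_onoff_fluid(peak, a, b, s):
    """Effective bandwidth alpha(s) of a two-state Markov on-off fluid source
    with peak rate `peak` while on, transition rates a (off->on) and
    b (on->off), and space parameter s > 0.  alpha(s) rises from the mean
    rate peak*a/(a+b) as s -> 0 towards the peak rate as s grows."""
    x = s * peak - a - b
    return (x + math.sqrt(x * x + 4.0 * a * s * peak)) / (2.0 * s)


if __name__ == "__main__":
    # Illustrative source: 10 Mbit/s peak, mean on-time 1/b = 0.4 s,
    # mean off-time 1/a = 0.6 s, hence mean rate 10 * 0.4 = 4 Mbit/s.
    for s in (1e-3, 0.1, 1.0, 10.0):
        print(s, eb_onoff_fluid(peak=10.0, a=1 / 0.6, b=1 / 0.4, s=s))
```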

    Architecture for Guaranteed Delay Service in High Speed Networks

    The increasing importance of network connections coupled with the lack of abundant link capacity suggests that the day when service guarantees are required by individual connections is not far off. In this dissertation we describe a networking architecture that can efficiently provide end-to-end delay guarantees on a per-connection basis. In order to provide any kind of service guarantee, it is imperative for the source traffic to be accurately characterized at the ingress to the network. Furthermore, this characterization should be enforceable through the use of a traffic shaper (or similar device). We go one step further and assume an extensive use of traffic shapers at each of the network elements. Reshaping makes the traffic at each node more predictable and therefore simplifies the task of providing efficient delay guarantees to individual connections. The use of per-connection reshapers to regulate traffic at each hop in the network is referred to as a Rate Controlled Service (RCS) discipline. By exploiting some properties of traffic shapers, we demonstrate that per-hop reshaping does not increase the bound on the end-to-end delay experienced by a connection. In particular, we show that an appropriate choice of traffic shaper parameters enables the RCS discipline to provide better end-to-end delay guarantees than any other service discipline known today. The RCS discipline can provide efficient end-to-end delay guarantees to a connection; however, by definition it is not work-conserving. This fact may increase the average delay that is observed by a connection even if there is no congestion in the network. We outline a mechanism by which an RCS discipline can be modified to be work-conserving without sacrificing the efficient end-to-end delay guarantees that can be provided to individual connections. Using the notion of service curves to bound the service process at each network element, we are able to provide an upper bound on the buffers required to ensure zero loss at the network element. Finally, we examine how the RCS discipline can be used in the context of the Guaranteed Services specification that is currently in the process of being standardized by the Internet Engineering Task Force.
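
    The service-curve argument summarised above has textbook closed forms for the common special case of a (sigma, rho) token-bucket constrained connection served by a rate-latency server: the delay bound is T + sigma/R and the zero-loss buffer bound is sigma + rho*T. The sketch below reproduces these standard network-calculus bounds rather than the dissertation's specific RCS derivation; the parameter values are made up.

```python
def rate_latency_bounds(sigma, rho, R, T):
    """Textbook network-calculus bounds for a (sigma, rho) token-bucket
    constrained flow served by a rate-latency server beta(t) = R*(t - T)+,
    assuming R >= rho:
      delay bound   D = T + sigma / R
      backlog bound B = sigma + rho * T   (buffer needed for zero loss)"""
    if R < rho:
        raise ValueError("service rate must be at least the sustained rate")
    return T + sigma / R, sigma + rho * T


if __name__ == "__main__":
    # Illustrative connection: 15 kbit burst, 1 Mbit/s sustained rate,
    # served at 2 Mbit/s with 2 ms scheduling latency.
    D, B = rate_latency_bounds(sigma=15e3, rho=1e6, R=2e6, T=2e-3)
    print(D, B)   # 9.5 ms delay bound, 17 kbit of buffer
```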

    On the time scales in video traffic characterization for queueing behavior

    To guarantee quality of service (QoS) in future integrated-services networks, traffic sources must be characterized in a way that captures the traffic characteristics relevant to network performance. Recent studies reveal that multimedia traffic shows burstiness over multiple time scales and long-range dependence (LRD). While researchers agree on the importance of traffic correlation, there is no agreement on how much correlation should be incorporated into a traffic model for performance estimation and network dimensioning. In this article, we present an approach for defining a relevant time scale for the characterization of VBR video traffic in the sense of queueing delay. We first consider the Reich formula and characterize traffic by the Piecewise Linear Arrival Envelope Function (PLAEF). We then define the cutoff interval above which the correlation does not affect the queue buildup. The cutoff interval is the upper bound of the time scale required for the estimation of queue size and thus for the characterization of VBR video traffic. We also give a procedure to approximate the empirical PLAEF with a concave function; this significantly simplifies the estimation of the cutoff interval and the delay bound with little loss of accuracy. We quantify the relationship between the time scale in the correlation of video traffic and the queue buildup using a set of experiments with traces of MPEG/JPEG-compressed video. We show that the critical interval, i.e., the range of correlation relevant to the queueing delay, depends on the traffic load: as the traffic load increases, the range of the time scale required for the estimation of queueing delay also increases. These results offer further insights into the implications of LRD in VBR video traffic. (C) 1999 Elsevier Science B.V. All rights reserved.
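
    To make the envelope-based delay estimate concrete, here is a generic sketch that builds an empirical arrival envelope from a frame-size trace and bounds the FIFO delay at a constant-rate server as the largest horizontal gap between the envelope and the service line, in the spirit of the Reich backlog formula. It is not the paper's PLAEF construction or its concave approximation; the trace, units and rate are invented.

```python
def empirical_envelope(frames, max_lag):
    """Empirical arrival envelope of a frame trace: env[k] is the largest
    amount of traffic observed in any window of k consecutive frame slots."""
    env = [0.0] * (max_lag + 1)
    for k in range(1, max_lag + 1):
        env[k] = max(sum(frames[i:i + k]) for i in range(len(frames) - k + 1))
    return env


def delay_bound(env, rate):
    """FIFO delay bound (in frame slots) at a constant-rate server fed by
    traffic respecting the envelope: the largest horizontal gap between the
    envelope and the service line rate * t."""
    return max(0.0, max((env[k] - rate * k) / rate for k in range(1, len(env))))


if __name__ == "__main__":
    trace = [8, 3, 3, 12, 4, 4, 9, 3, 3, 11, 4, 4]   # invented frame sizes (kbit)
    env = empirical_envelope(trace, max_lag=6)
    print(env)
    print(delay_bound(env, rate=6.0))                # delay bound in frame slots
```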

    Theories and Models for Internet Quality of Service

    We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated services, and streaming media playback delays. We also present mechanisms and architecture for scalable support of guaranteed services in the Internet, based on the concept of a stateless core. Methods for scalable control operations are also briefly discussed. We then turn our attention to statistical performance guarantees, and describe several new probabilistic results that can be used for a statistical dimensioning of differentiated services. Lastly, we review recent proposals and results in supporting performance guarantees in a best effort context. These include models for elastic throughput guarantees based on TCP performance modeling, techniques for some quality of service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support.
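
    One ingredient of the best-effort results mentioned above, elastic throughput guarantees derived from TCP performance modeling, is commonly the square-root loss formula, throughput ≈ (MSS/RTT)·C/√p with C ≈ 1.22 for a Reno-style sender. The sketch below uses that generic model, not anything specific to the survey, and its parameter values are illustrative.

```python
import math


def tcp_throughput(mss_bytes, rtt_s, loss_prob, c=math.sqrt(1.5)):
    """Square-root TCP throughput model: rate ~ (MSS/RTT) * C / sqrt(p),
    with C ~ 1.22 for a Reno-style sender under random periodic loss.
    Returns bytes per second."""
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_prob)


def loss_for_target(mss_bytes, rtt_s, target_bps, c=math.sqrt(1.5)):
    """Invert the model: the loss probability that lets a flow with the given
    MSS and RTT sustain the target rate (bytes per second)."""
    return (c * mss_bytes / (rtt_s * target_bps)) ** 2


if __name__ == "__main__":
    print(tcp_throughput(1460, 0.1, 1e-3))      # ~565 kB/s at 0.1% loss, 100 ms RTT
    print(loss_for_target(1460, 0.1, 1.25e6))   # loss rate needed for ~10 Mbit/s
```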

    Performance Management in ATM Networks

    ATM is representative of the connection-oriented resource provisioning class of protocols. The ATM network is expected to provide end-to-end QoS guarantees to connections in the form of bounds on delays, errors and/or losses. Performance management involves measurement of QoS parameters, and application of control measures (if required) to improve the QoS provided to connections, or to improve the resource utilization at switches. QoS provisioning is very important for real-time connections, in which losses are irrecoverable and delays cause interruptions in service. The QoS of connections on a node is a direct function of the queueing and scheduling on the switch. Most scheduling architectures provide static allocation of resources (scheduling priority, maximum buffer) at connection setup time. End-to-end bounds are obtainable for some schedulers; however, these are precluded for heterogeneously composed networks. The resource allocation does not adapt to the QoS provided on connections in real time. In addition, mechanisms to measure the QoS of a connection in real time are scarce. In this thesis, a novel framework for performance management is proposed. It provides QoS guarantees to real-time connections. It comprises in-service QoS monitoring mechanisms, a hierarchical scheduling algorithm based on dynamic priorities that are adaptive to measurements, and methods to tune the schedulers at individual nodes based on the end-to-end measurements. Also, a novel scheduler is introduced for scheduling maximum-delay-sensitive traffic. The worst-case analysis for leaky-bucket constrained traffic arrivals is presented for this scheduler. This scheduler is also implemented on a switch and its practical aspects are analyzed. In order to understand the implementability of complex scheduling mechanisms, a comprehensive survey of the state-of-the-art technology used in the industry is performed. The thesis also introduces a method of measuring the one-way delay and jitter in a connection using in-service monitoring by special cells.
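
    The in-service monitoring idea at the end of the abstract, special cells carrying timestamps from which one-way delay and jitter are derived, can be illustrated generically. The sketch below assumes synchronised sender and receiver clocks and uses an RFC 3550-style smoothed jitter estimator; neither assumption is claimed to be the thesis's actual mechanism, and the probe values are invented.

```python
def delay_and_jitter(samples):
    """One-way delay and jitter from timestamped probe cells.  Each sample is
    (send_time, receive_time) with both clocks assumed synchronised.  Delay is
    receive - send; jitter is the RFC 3550-style exponentially smoothed mean
    of the absolute change in delay between consecutive probes."""
    delays, jitter = [], 0.0
    for send, recv in samples:
        d = recv - send
        if delays:
            jitter += (abs(d - delays[-1]) - jitter) / 16.0
        delays.append(d)
    return delays, jitter


if __name__ == "__main__":
    probes = [(0.000, 0.012), (0.020, 0.031), (0.040, 0.055), (0.060, 0.070)]
    delays, jitter = delay_and_jitter(probes)
    print(delays)   # per-probe one-way delays
    print(jitter)   # smoothed delay variation
```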