
    Estimation of the parameters of token-buckets in multi-hop environments

    Full text link
    Bandwidth verification in shaping scenarios receives much attention from both operators and clients because of its impact on Quality of Service (QoS). As a result, measuring shapers’ parameters, namely the Committed Information Rate (CIR), Peak Information Rate (PIR) and Maximum Burst Size (MBS), is a relevant issue when it comes to assessing QoS. In this paper, we present a novel algorithm, TBCheck, which accurately measures such parameters with minimal intrusiveness. These measurements are the cornerstone of the validation of Service Level Agreements (SLAs) along an end-to-end path with multiple shaping elements. As a further outcome of this measurement method, we define a formal taxonomy of multi-hop shaping scenarios. A thorough performance evaluation covering this taxonomy shows the advantages of TBCheck over other state-of-the-art tools, yielding more accurate results even in the presence of cross-traffic. Additionally, our findings show that MBS estimation is unfeasible when the link load is high, regardless of the measurement technique, because the token bucket will always be empty. Consequently, we propose an estimation policy which maximizes accuracy by measuring CIR during busy hours, and PIR and MBS during off-peak hours. This work was partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under the project Tráfica (MINECO/FEDER TEC2015-69417-C2-1-R).
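The three parameters named in the abstract can be made concrete with a small sketch. Below is a minimal two-rate, two-bucket conformance meter, assuming CIR and PIR in bytes per second, MBS as the depth of the committed bucket, and a one-MTU peak bucket; the class layout and names are illustrative assumptions, not TBCheck itself.

```python
class DualTokenBucket:
    """Illustrative two-rate meter for the parameters in the abstract:
    CIR, PIR and MBS. The peak-bucket depth of one MTU is an assumption,
    not a detail taken from the paper."""

    def __init__(self, cir, pir, mbs, mtu=1500.0):
        self.cir = float(cir)      # Committed Information Rate (bytes/s)
        self.pir = float(pir)      # Peak Information Rate (bytes/s)
        self.mbs = float(mbs)      # Maximum Burst Size = committed bucket depth
        self.mtu = float(mtu)      # peak bucket depth (assumed one MTU)
        self.c_tokens = self.mbs   # committed bucket starts full
        self.p_tokens = self.mtu   # peak bucket starts full
        self.last = 0.0            # timestamp of the last refill (s)

    def conforms(self, pkt_len, now):
        """True iff a packet of pkt_len bytes conforms at time `now` (s)."""
        dt = now - self.last
        self.last = now
        # replenish both buckets at their rates, capped at their depths
        self.c_tokens = min(self.mbs, self.c_tokens + self.cir * dt)
        self.p_tokens = min(self.mtu, self.p_tokens + self.pir * dt)
        if pkt_len <= self.c_tokens and pkt_len <= self.p_tokens:
            self.c_tokens -= pkt_len
            self.p_tokens -= pkt_len
            return True
        return False
```

Under sustained high load the committed bucket hovers near empty, which is why the abstract reports that MBS cannot be estimated then: the burst allowance never accumulates, so no observable behaviour depends on the bucket depth.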

    Token Bucket-based Throughput Constraining in Cross-layer Schedulers

    Full text link
    In this paper we consider constraining users' service rates from above and below in a slotted, cross-layer scheduler context. Such schedulers often cannot guarantee these bounds, despite their usefulness in adhering to Quality of Service (QoS) requirements, aiding the admission control system, or providing different levels of service to users. We approach this problem with a low-complexity algorithm that is easily integrated in any utility function-based cross-layer scheduler. The algorithm modifies the weights of the associated Network Utility Maximization problem, rather than, for example, applying a token bucket to the scheduler's output or adding constraints in the physical layer. We study the efficacy of the algorithm through simulations with various schedulers from the literature and mixes of traffic. The metrics we consider show that we can bound the average service rate within about five slots for most schedulers. Schedulers whose weights are very volatile are more difficult to constrain. Comment: 11 pages, 10 figures. Presented at 6th International Conference on Computer Science, Engineering and Information. Published in AIRCC http://airccse.org/csit/V9N13.htm
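The weight-modification idea can be sketched at its simplest: each slot, compare a user's smoothed service rate against its bounds and scale that user's NUM weight accordingly. The multiplicative update rule and step size below are our illustrative assumptions, not the paper's algorithm.

```python
def adjust_weight(weight, avg_rate, r_min, r_max, step=0.5):
    """Boost the NUM weight of an under-served user and suppress the
    weight of an over-served one; leave in-range users untouched."""
    if avg_rate < r_min:
        return weight * (1.0 + step)   # below the lower bound: prioritize
    if avg_rate > r_max:
        return weight * (1.0 - step)   # above the upper bound: de-prioritize
    return weight
```

Because the scheduler itself is untouched, an update of this shape composes with any utility-based scheduler, which is the point of acting on the weights rather than shaping the scheduler's output.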

    Theories and Models for Internet Quality of Service

    Get PDF
    We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated services, and streaming media playback delays. We also present mechanisms and architecture for scalable support of guaranteed services in the Internet, based on the concept of a stateless core. Methods for scalable control operations are also briefly discussed. We then turn our attention to statistical performance guarantees, and describe several new probabilistic results that can be used for a statistical dimensioning of differentiated services. Lastly, we review recent proposals and results in supporting performance guarantees in a best effort context. These include models for elastic throughput guarantees based on TCP performance modeling, techniques for some quality of service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support.
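The deterministic side of network calculus mentioned above yields closed-form delay bounds. For a flow constrained by a token-bucket arrival curve alpha(t) = b + r*t crossing a rate-latency server beta(t) = R*max(t - T, 0), the classic worst-case delay bound is D = T + b/R, valid when r <= R. The helper below simply evaluates that bound; the function name is ours.

```python
def delay_bound(b, r, R, T):
    """Worst-case delay of a (b, r) token-bucket-constrained flow through
    a rate-latency server with rate R and latency T (network calculus)."""
    assert r <= R, "bound requires the service rate to cover the arrival rate"
    return T + b / R

# e.g. burst b = 10 kbit, rate r = 1 Mbit/s, server R = 2 Mbit/s, T = 1 ms:
# delay_bound(10_000, 1e6, 2e6, 0.001) -> 0.006 s
```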

    Low-Latency Hard Real-Time Communication over Switched Ethernet

    Get PDF
    With the upsurge in the demand for high-bandwidth networked real-time applications in cost-sensitive environments, a key issue is to take advantage of developments of commodity components that offer a multiple of the throughput of classical real-time solutions. It was the starting hypothesis of this dissertation that, with fine-grained traffic shaping as the only means of node cooperation, it should be possible to achieve lower guaranteed delays and higher bandwidth utilization than with traditional approaches, even though Switched Ethernet does not support policing in the switches as other network architectures do. This thesis presents the application of traffic shaping to Switched Ethernet and validates the hypothesis. It shows, both theoretically and practically, how commodity Switched Ethernet technology can be used for low-latency hard real-time communication, and what operating-system support is needed for an efficient implementation.

    Advances in Internet Quality of Service

    Get PDF
    We describe recent advances in theories and architecture that support performance guarantees needed for quality of service networks. We start with deterministic computations and give applications to integrated services, differentiated services, and playback delays. We review the methods used for obtaining a scalable integrated services support, based on the concept of a stateless core. New probabilistic results that can be used for a statistical dimensioning of differentiated services are explained; some are based on classical queuing theory, while others capitalize on the deterministic results. Then we discuss performance guarantees in a best effort context; we review methods to provide some quality of service in a pure best effort environment; methods to provide some quality of service differentiation without access control; and methods that allow an application to control the performance it receives, in the absence of network support.

    Proactive measurement techniques for network monitoring in heterogeneous environments

    Full text link
    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones, 201

    Resource dimensioning in a mixed traffic environment

    Get PDF
    An important goal of modern data networks is to support multiple applications over a single network infrastructure. The combination of data, voice, video and conference traffic, each requiring a unique Quality of Service (QoS), makes resource dimensioning a very challenging task. To guarantee QoS by mere over-provisioning of bandwidth is not viable in the long run, as network resources are expensive. The aim of proper resource dimensioning is to provide the required QoS while making optimal use of the allocated bandwidth. Dimensioning parameters used by service providers today are based on best practice recommendations, and are not necessarily optimal. This dissertation focuses on resource dimensioning for the DiffServ network architecture. Four predefined traffic classes, i.e. Real Time (RT), Interactive Business (IB), Bulk Business (BB) and General Data (GD), needed to be dimensioned in terms of bandwidth allocation and traffic regulation. To perform this task, a study was made of the DiffServ mechanism and the QoS requirements of each class. Traffic generators were required for each class to perform simulations. Our investigations show that the dominating Transport Layer protocol for the RT class is UDP, while TCP is mostly used by the other classes. This led to a separate analysis and requirement for traffic models for UDP and TCP traffic. Analysis of real-world data shows that modern network traffic is characterized by long-range dependency, self-similarity and a very bursty nature. Our evaluation of various traffic models indicates that the Multi-fractal Wavelet Model (MWM) is best for TCP due to its ability to capture long-range dependency and self-similarity. The Markov Modulated Poisson Process (MMPP) is able to model occasional long OFF-periods and burstiness present in UDP traffic. Hence, these two models were used in simulations. A test bed was implemented to evaluate performance of the four traffic classes defined in DiffServ. 
Traffic was sent through the test bed while delay and loss were measured. For single-class simulations, dimensioning values were obtained while conforming to the QoS specifications. Multi-class simulations investigated the effects of statistical multiplexing on the obtained values. Simulation results for various numerical provisioning factors (PF) were obtained. These factors are used to determine the link data rate as a function of the required average bandwidth and QoS. The use of class-based differentiation for QoS showed that strict delay and loss bounds can be guaranteed, even in the presence of very high (up to 90%) bandwidth utilization. Simulation results showed small deviations from best-practice recommendation PF values: a value of 4 is currently used for both the RT and IB classes, while 2 is used for the BB class. This dissertation indicates that 3.89 for RT, 3.81 for IB and 2.48 for BB achieve the prescribed QoS more accurately. It was concluded that either the bandwidth distribution among classes or the quality guarantees for the BB class should be adjusted, since the RT and IB classes over-performed while BB under-performed. The results contribute to the process of resource dimensioning by adding value to dimensioning parameters through simulation rather than mere intuition or educated guessing. Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. Electrical, Electronic and Computer Engineering.
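The provisioning factors reported above translate directly into link dimensioning: the capacity reserved for a class is its average offered bandwidth scaled by the class PF. A minimal sketch, using the dissertation's reported PF values and an illustrative function name of our own:

```python
# PF values reported by the dissertation for three of the traffic classes
PF = {"RT": 3.89, "IB": 3.81, "BB": 2.48}

def provisioned_rate(avg_bandwidth_mbps, traffic_class):
    """Link rate (Mbit/s) to reserve for a class, given its average load."""
    return PF[traffic_class] * avg_bandwidth_mbps
```

For example, a Real Time aggregate averaging 10 Mbit/s would be provisioned roughly 38.9 Mbit/s of link capacity under these factors.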