    TCP throughput guarantee in the DiffServ Assured Forwarding service: what about the results?

    Since the IETF proposed its Quality of Service architectures, the interaction between TCP and QoS services has been studied intensively. This paper reviews the results obtained in terms of TCP throughput guarantees in the DiffServ Assured Forwarding (DiffServ/AF) service and presents an overview of the different proposals to solve the problem. It has been demonstrated that the standardized IETF DiffServ conditioners, such as the token bucket color marker and the time sliding window color marker, are not good TCP traffic descriptors. Starting from this point, several proposals have been made, most of which introduce new marking schemes intended to replace or improve the traditional token bucket color marker. The main problem is that TCP congestion control is not designed to work with the AF service; indeed, the two mechanisms are antagonistic. TCP shares the bottleneck bandwidth fairly between flows, whereas a DiffServ network provides a controllable and predictable level of service. In this paper, we build a classification of the proposals made in recent years and compare them. We show that these conditioning schemes can be separated into three levels of action, and that conditioning at the network edge is the most widely adopted. We conclude that the problem remains unsolved and that TCP, conditioned or not, remains ill-suited to the DiffServ/AF service.
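
    To make the conditioning problem concrete, the sketch below implements a color-blind single-rate three-color marker in Python, in the spirit of the standardized token bucket conditioners the paper criticizes (an RFC 2697-style marker; the class name, parameters and the use of Python are illustrative assumptions, not material from the paper):

        import time

        class SRTCM:
            """Minimal single-rate three-color marker sketch (RFC 2697 spirit).

            cir: committed information rate in bytes/s; cbs/ebs: committed and
            excess burst sizes in bytes. Names and defaults are illustrative.
            """
            def __init__(self, cir, cbs, ebs):
                self.cir, self.cbs, self.ebs = cir, cbs, ebs
                self.tc, self.te = cbs, ebs            # both buckets start full
                self.last = time.monotonic()

            def _refill(self):
                now = time.monotonic()
                tokens = (now - self.last) * self.cir
                self.last = now
                # Fill the committed bucket first, spill the rest into the excess bucket.
                spill = max(0.0, self.tc + tokens - self.cbs)
                self.tc = min(self.cbs, self.tc + tokens)
                self.te = min(self.ebs, self.te + spill)

            def mark(self, pkt_len):
                """Return 'green', 'yellow' or 'red' for a packet of pkt_len bytes."""
                self._refill()
                if self.tc >= pkt_len:
                    self.tc -= pkt_len
                    return "green"     # in-profile, lowest drop precedence
                if self.te >= pkt_len:
                    self.te -= pkt_len
                    return "yellow"
                return "red"           # out-of-profile, highest drop precedence

    Markers of this kind describe a token bucket profile well but, as the survey argues, they describe TCP's bursty, window-driven sending pattern poorly.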

    Implementation and performance analysis of a QoS-aware TFRC mechanism

    This paper deals with improving transport protocol behaviour over the DiffServ Assured Forwarding (AF) class. The Assured Service (AS) provides a minimum throughput guarantee that classical congestion control mechanisms, whether window-based as in TCP or equation-based as in TCP-Friendly Rate Control (TFRC), are not able to use efficiently. In response, this paper presents a performance analysis of a QoS-aware congestion control mechanism, named gTFRC, which improves the delivery of continuous streams. The gTFRC (guaranteed TFRC) mechanism has been integrated into an Enhanced Transport Protocol (ETP) that allows protocol mechanisms to be dynamically managed and controlled. After comparing an ns-2 simulation and our implementation of the basic TFRC mechanism, we show that the ETP/gTFRC extension is able to reach a minimum throughput guarantee regardless of the flow's RTT, target rate (TR), and network provisioning conditions.
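
    As a rough illustration of the idea the abstract describes, a QoS-aware sender can simply refuse to send below the negotiated guarantee. The sketch below assumes this is all gTFRC does at the rate-decision point; the function name is hypothetical and this is not the ETP code:

        def gtfrc_allowed_rate(tfrc_rate, target_rate):
            """Rate a QoS-aware sender may use: the TFRC equation-based rate,
            floored at the minimum guarantee negotiated with the DiffServ/AF
            network. Illustrative sketch only."""
            return max(tfrc_rate, target_rate)

        # Example: with a 500 kbit/s guarantee, a TFRC estimate of 300 kbit/s is
        # raised to the target, while an estimate of 800 kbit/s is left as is.
        print(gtfrc_allowed_rate(300e3, 500e3))   # 500000.0
        print(gtfrc_allowed_rate(800e3, 500e3))   # 800000.0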

    GTFRC, a TCP friendly QoS-aware rate control for diffserv assured service

    This study addresses end-to-end congestion control support over the DiffServ Assured Forwarding (AF) class. The resulting Assured Service (AS) provides a minimum throughput guarantee. In this context, this article describes a new end-to-end mechanism for continuous transfers based on TCP-Friendly Rate Control (TFRC). The proposed approach modifies TFRC to take the negotiated QoS into account. This mechanism, named gTFRC, is able to reach the minimum throughput guarantee regardless of the flow's RTT and target rate. Simulation measurements and an implementation over a real QoS testbed demonstrate the efficiency of this mechanism in both over-provisioned and exactly-provisioned networks. In addition, we show that the gTFRC mechanism can be used in the same DiffServ/AF class with TCP or TFRC flows.
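
    For reference, TFRC's equation-based rate and the guarantee-aware variant described above can be written as follows; the first formula is the standard TFRC throughput equation (RFC 3448), while expressing gTFRC as a simple maximum is our paraphrase of the abstract rather than a quotation of the paper:

        \[
        X_{\mathrm{TFRC}} = \frac{s}{R\sqrt{2bp/3} + t_{\mathrm{RTO}}\left(3\sqrt{3bp/8}\right)p\left(1+32p^{2}\right)},
        \qquad
        X_{\mathrm{gTFRC}} = \max\bigl(X_{\mathrm{TFRC}},\ \mathrm{TR}\bigr),
        \]

    where s is the packet size, R the round-trip time, p the loss event rate, t_RTO the retransmission timeout, b the number of packets acknowledged per ACK, and TR the negotiated target rate.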

    Differentiated Predictive Fair Service for TCP Flows

    The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low, since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows. National Science Foundation (CAREER ANI-0096045, MRI EIA-9871022).
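
    A minimal sketch of the edge classification the paper argues for is given below; the byte threshold and the per-flow bookkeeping are our own illustrative assumptions (the paper isolates flows by lifetime/size but does not prescribe this code):

        SHORT_FLOW_BYTES = 100 * 1024      # illustrative threshold, not from the paper

        class EdgeClassifier:
            """Toy two-class classifier: packets of a flow are marked 'short'
            until the flow has sent more than a size threshold, then 'long'.
            Core routers then keep one queue per class (per-class state only)."""
            def __init__(self, threshold=SHORT_FLOW_BYTES):
                self.threshold = threshold
                self.bytes_seen = {}           # per-flow byte counters at the edge

            def classify(self, flow_id, pkt_len):
                total = self.bytes_seen.get(flow_id, 0) + pkt_len
                self.bytes_seen[flow_id] = total
                return "short" if total <= self.threshold else "long"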

    QoSatAr: a cross-layer architecture for E2E QoS provisioning over DVB-S2 broadband satellite systems

    This article presents QoSatAr, a cross-layer architecture developed to provide end-to-end quality of service (QoS) guarantees for Internet Protocol (IP) traffic over Digital Video Broadcasting - Second Generation (DVB-S2) satellite systems. The architecture design is based on a cross-layer optimization between the physical layer and the network layer to provide QoS provisioning based on the bandwidth available in the DVB-S2 satellite channel. Our design is developed at the satellite-independent layers, in compliance with the ETSI-BSM-QoS standards. The architecture is set up inside the gateway; it includes a Re-Queuing Mechanism (RQM) to enhance the goodput of the EF and AF traffic classes and an adaptive IP scheduler to guarantee the high-priority traffic classes, taking into account channel conditions affected by rain events. One of the most important aspects of the design is that QoSatAr is able to guarantee the QoS requirements of specific traffic flows by considering a single parameter: the bandwidth availability, which is determined at the physical layer (through adaptive coding and modulation) and passed to the network layer by means of a cross-layer optimization. The architecture has been evaluated using the ns-2 simulator. In this article, we present evaluation metrics, extensive simulation results and conclusions about the performance of QoSatAr over a DVB-S2 satellite scenario. The key results show that the architecture makes it possible to keep the satellite system load under control while guaranteeing the QoS levels of the high-priority traffic classes, even when bandwidth variations due to rain events occur. Moreover, the RQM mechanism improves the user's quality of experience while keeping delay and jitter low for the high-priority traffic classes. In particular, the AF goodput is enhanced by around 33% on average over the drop-tail scheme.
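
    The toy allocation loop below illustrates the general shape of a priority-driven IP scheduler keyed to a single cross-layer parameter, the bandwidth the physical layer currently reports under adaptive coding and modulation; it is a sketch under our own assumptions, not the actual QoSatAr RQM or adaptive scheduler:

        def allocate(available_bw, demands):
            """Strict-priority allocation over EF, AF and BE demands (bit/s),
            capped by the bandwidth reported by the physical layer (which shrinks
            under rain fading). Illustrative only."""
            remaining = available_bw
            grants = {}
            for cls in ("EF", "AF", "BE"):     # high to low priority
                grants[cls] = min(demands.get(cls, 0), remaining)
                remaining -= grants[cls]
            return grants

        # Example: when rain reduces the reported capacity, best-effort traffic
        # is squeezed first while EF and AF keep their guarantees.
        print(allocate(10e6, {"EF": 2e6, "AF": 6e6, "BE": 5e6}))
        # {'EF': 2000000.0, 'AF': 6000000.0, 'BE': 2000000.0}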

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider across a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)
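
    As one concrete instance of the load-balancing and multipathing techniques such a survey covers, the sketch below hashes a flow's 5-tuple onto one of several equal-cost paths (classic ECMP-style flow hashing; the field order and choice of hash are our own illustrative assumptions):

        import hashlib

        def ecmp_path(five_tuple, n_paths):
            """Map a flow's 5-tuple (src IP, dst IP, protocol, src port, dst port)
            to one of n_paths equal-cost paths. Illustrative sketch."""
            key = "|".join(map(str, five_tuple)).encode()
            return int(hashlib.sha256(key).hexdigest(), 16) % n_paths

        # All packets of the same flow hash to the same path, avoiding reordering
        # while spreading distinct flows across the fabric.
        print(ecmp_path(("10.0.0.1", "10.0.0.2", 6, 12345, 80), n_paths=8))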