90 research outputs found

    Downstream Bandwidth Management for Emerging DOCSIS-based Networks

    In this dissertation, we consider downstream bandwidth management in the context of emerging DOCSIS-based cable networks. The latest DOCSIS 3.1 standard for cable access networks represents a significant change to cable networks. For the downstream, the current 6 MHz channel size is replaced by a much larger 192 MHz channel that can potentially provide data rates up to 10 Gbps. Further, the current standard requires equipment to support a relatively new form of active queue management (AQM) referred to as delay-based AQM. Given that more than 50 million households (and climbing) use cable for Internet access, a clear understanding of the impacts of the bandwidth management strategies used in these emerging networks is crucial. Further, given the scope of the change brought by emerging cable systems, now is the time to develop and introduce innovative new methods for managing bandwidth. With this motivation, we address research questions pertaining to the next generation of cable access networks. The cable industry has had to deal with the problem of a small number of subscribers who utilize the majority of network resources, a problem that will grow as access rates increase to gigabits per second. Fundamentally, this is a problem of how to manage data flows fairly and provide protection. A well-known performance issue in the Internet, referred to as bufferbloat, has received significant attention recently. High-throughput network flows need sufficiently large buffers to keep the pipe full and absorb occasional burstiness. Standard practice, however, has led to equipment offering very large unmanaged buffers that can result in sustained queue levels that increase packet latency. One reason why these problems continue to plague cable access networks is the desire for low-complexity bandwidth management that is easily explainable to access network subscribers and to the Federal Communications Commission.
This research begins by evaluating modern delay-based AQM algorithms in downstream DOCSIS 3.0 environments, with a focus on the fairness and application performance capabilities of single-queue AQMs. We are especially interested in delay-based AQM schemes that have been proposed to combat the bufferbloat problem. Our evaluation involves a variety of scenarios that include tiered services and application workloads. Based on our results, we show that in scenarios involving realistic workloads, modern delay-based AQMs can effectively mitigate bufferbloat; however, they do not address the related problem of managing fairness. To address the combined problem of fairness and bufferbloat, we propose a novel approach to bandwidth management that provides a compromise among the conflicting requirements. We introduce a flow quantization method referred to as adaptive bandwidth binning, in which flows observed to consume similar levels of bandwidth are grouped together, with the system managed through a hierarchical scheduler designed to approximate weighted fairness while addressing bufferbloat. Based on a simulation study that considers many experimental parameters, including workloads and network configurations, we provide evidence of the efficacy of the idea. Our results suggest that the scheme is able to provide long-term fairness and low delay with performance close to that of a reference approach based on fair queueing. A further contribution is our idea of replacing 'tiered' levels of service based on service rates with tiering based on weights. The application of our bandwidth binning scheme offers a timely and innovative alternative to broadband service that leverages the potential offered by emerging DOCSIS-based cable systems.
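The grouping step described above can be sketched as follows. This is a hypothetical illustration of the bandwidth-binning idea only, not the dissertation's actual algorithm: the bin edges, weights, and function names are all invented for the example.

```python
# Illustrative sketch of adaptive bandwidth binning: flows with similar
# measured rates are grouped into bins, and a weighted scheduler divides
# link capacity across the bins (weights per bin, even split inside a bin).

def assign_bins(flow_rates, bin_edges):
    """Map each flow to the index of the bin its measured rate falls in."""
    bins = {}
    for flow, rate in flow_rates.items():
        idx = sum(1 for edge in bin_edges if rate > edge)
        bins.setdefault(idx, []).append(flow)
    return bins

def bin_shares(bins, weights, capacity):
    """Split capacity across non-empty bins by weight, then evenly per flow."""
    active = {i: flows for i, flows in bins.items() if flows}
    total_w = sum(weights[i] for i in active)
    shares = {}
    for i, flows in active.items():
        per_bin = capacity * weights[i] / total_w
        for flow in flows:
            shares[flow] = per_bin / len(flows)
    return shares

rates = {"a": 1.0, "b": 1.2, "c": 40.0}       # Mbps, measured per flow
bins = assign_bins(rates, bin_edges=[10.0])    # light vs. heavy users
print(bin_shares(bins, weights={0: 3, 1: 1}, capacity=100.0))
# → {'a': 37.5, 'b': 37.5, 'c': 25.0}
```

Giving light-user bins a higher weight is one way such a scheme could protect the majority of subscribers from a few heavy hitters while still serving the heavy flows.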

    Providing Fairness Through Detection and Preferential Dropping of High Bandwidth Unresponsive Flows

    Stability of the Internet today depends largely on cooperation between end hosts that employ the TCP (Transmission Control Protocol) in the transport layer and network routers along an end-to-end path. However, in the past several years, various types of traffic, including streaming media applications, have been increasingly deployed over the Internet. Such traffic is mostly based on UDP (User Datagram Protocol) and usually employs either no end-to-end congestion or flow control mechanisms, or only very limited ones. Such applications can unfairly consume a greater amount of bandwidth than competing responsive flows such as TCP traffic, leading to unfairness and possibly congestion collapse. To avoid substantial memory requirements and complexity, fair Active Queue Management (AQM) schemes utilizing no or only partial flow state information have been proposed in the past several years to solve these problems. These schemes, however, exhibit several problems under different circumstances. This dissertation presents two fair AQM mechanisms, BLACK and AFC, that overcome the problems and limitations of the existing schemes. Both BLACK and AFC need to store only a small amount of state information to maintain and exercise their fairness mechanisms. Extensive simulation studies show that both schemes outperform the other schemes in terms of throughput fairness under a large number of scenarios. Not only are they able to handle multiple unresponsive flows, but they also improve fairness among TCP connections with different round-trip delays.
AFC, with slightly more overhead than BLACK, provides additional advantages: the ability to achieve good fairness in scenarios with traffic of different packet sizes and bursty traffic, and smoother transfer rates for unresponsive flows, which usually carry real-time traffic. This research also includes a comparative study of the existing techniques for estimating the number of active flows, a crucial component of some fair AQM schemes, including BLACK and AFC. A further contribution presented in this dissertation is the first comprehensive evaluation of fair AQM schemes in the presence of various types of TCP-friendly traffic.
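The general idea behind detecting and preferentially dropping high-bandwidth unresponsive flows with only partial state can be sketched as below. This is an invented illustration of the approach, not BLACK's or AFC's actual mechanism; the cache-eviction rule, the `fair_share` threshold, and all names are assumptions made for the example.

```python
# Sketch of preferential dropping with partial flow state: a small cache
# tracks byte counts for the heaviest flows only, and flows observed above
# a fair share are dropped in proportion to their excess.

class PartialStateDropper:
    def __init__(self, cache_size=8, fair_share=1000):
        self.cache_size = cache_size   # entries of per-flow state kept
        self.fair_share = fair_share   # bytes per interval deemed fair
        self.counts = {}               # flow_id -> bytes seen this interval

    def observe(self, flow_id, nbytes):
        """Account an arriving packet; evict the lightest flow when full."""
        if flow_id in self.counts or len(self.counts) < self.cache_size:
            self.counts[flow_id] = self.counts.get(flow_id, 0) + nbytes
        else:
            victim = min(self.counts, key=self.counts.get)
            if self.counts[victim] < nbytes:
                # Persistent heavy hitters displace light cache entries.
                del self.counts[victim]
                self.counts[flow_id] = nbytes

    def drop_probability(self, flow_id):
        """Flows above the fair share are dropped proportionally to excess."""
        count = self.counts.get(flow_id, 0)
        if count <= self.fair_share:
            return 0.0
        return min(1.0, (count - self.fair_share) / count)
```

With `fair_share=100`, a flow that has sent 400 bytes in the interval would see a drop probability of 0.75, while a 50-byte flow is never dropped; an unresponsive UDP flow is thus throttled toward the fair share without keeping state for every flow.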

    Fair and efficient router congestion control

    Congestion is a natural phenomenon in any network queuing system, and is unavoidable if the queuing system is operated near capacity. In this paper we study how to set the rules of a queuing system so that all the users have a self-interest in controlling congestion when it happens. Routers in the Internet respond to local congestion by dropping packets. But if packets are dropped indiscriminately, the effect can be to encourage senders to actually increase their transmission rates, worsening the congestion and destabilizing the system. Alternatively, and only slightly more preferably, the effect can be to arbitrarily let a few insistent senders take over most of the router capacity. We approach this problem from first principles: a router packet-dropping protocol is a mechanism that sets up a game between the senders, who are in turn competing for link capacity. Our task is to design this mechanism so that the game equilibrium is desirable: a high total rate is achieved and is shared widely among all senders. In addition, equilibrium should be reestablished quickly in response to changes in transmission rates. Our solution is based upon auction theory: in case of congestion, we drop packets of the highest-rate sender (in principle, although not always in practice). We prove the game-theoretic merits of our method, and we also describe a variant with some further advantages that is supported by network simulations.
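A toy version of the dropping rule can be sketched as follows. This is only an illustration of the stated principle (drop from the highest-rate sender on congestion), not the paper's mechanism; here a sender's queue occupancy stands in as a crude proxy for its rate.

```python
from collections import deque

# Toy bounded queue: when full, the arriving packet displaces a packet of
# whichever sender currently holds the most packets in the queue.

class HighestRateDropQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # holds sender ids, one per queued packet

    def enqueue(self, sender):
        """Enqueue a packet; on overflow, drop one packet of the heaviest
        sender and return that sender's id (None if nothing was dropped)."""
        dropped = None
        if len(self.queue) >= self.capacity:
            counts = {}
            for s in self.queue:
                counts[s] = counts.get(s, 0) + 1
            heaviest = max(counts, key=counts.get)
            self.queue.remove(heaviest)   # drop one of its packets
            dropped = heaviest
        self.queue.append(sender)
        return dropped

q = HighestRateDropQueue(capacity=3)
for s in ["a", "b", "a"]:
    q.enqueue(s)
print(q.enqueue("c"))  # → 'a' (sender "a" held the most queued packets)
```

Under such a rule, a sender that floods the link mainly loses its own packets, so the self-interested strategy is to back off, which is the game-theoretic point of the paper's design.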

    Pricing and Unresponsive Flows Purging for Global Rate Enhancement


    A study on fairness and latency issues over high speed networks and data center networks

    Newly emerging computer networks, such as high speed networks and data center networks, are characterized by high bandwidth and high burstiness, which make it difficult to address issues such as fairness, queuing latency and link utilization. In this study, we first conduct an extensive experimental evaluation of the performance of 10 Gbps high speed networks. We find that inter-protocol unfairness and large queuing latency are two outstanding issues in high speed networks and data center networks. There have been several proposals to address fairness and latency issues at the switch level via queuing schemes. These queuing schemes have been fairly successful in addressing either the fairness issue or the latency issue, but not both at the same time. We propose a new queuing scheme called Approximated-Fair and Controlled-Delay (AFCD) queuing that meets the following goals for high speed networks: approximated fairness, controlled low queuing delay, high link utilization and simple implementation. The design of AFCD utilizes a novel synergistic approach by forming an alliance between approximated fair queuing and controlled delay queuing. AFCD maintains a very small amount of state information for estimating the sending rates of flows and makes drop decisions based on a per-flow target delay. We then present FaLL, a Fair and Low Latency queuing scheme that meets the stringent performance requirements of data center networks: fair share of bandwidth, low queuing latency, high throughput, and ease of deployment. FaLL uses an efficiency module, a fairness module and a target-delay-based dropping scheme to meet these goals. Through rigorous experiments on a real testbed, we show that FaLL outperforms various peer solutions under a variety of network conditions in data center networks.
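The combination of rate estimation and a per-flow target delay can be sketched as below. This is a hedged illustration in the spirit of AFCD, not its published algorithm: the EWMA weight, the base target of 5 ms, and the rule that faster-than-fair flows get a tighter delay budget are all assumptions made for the example.

```python
# Sketch of a target-delay drop test: estimate each flow's sending rate
# with an EWMA, then drop a packet when its queuing delay exceeds a
# per-flow target that shrinks as the flow's rate exceeds the fair rate.

def estimate_rate(prev_rate, nbytes, interval, alpha=0.9):
    """Exponentially weighted moving average of a flow's sending rate."""
    return alpha * prev_rate + (1 - alpha) * (nbytes / interval)

def should_drop(queuing_delay, flow_rate, fair_rate, base_target=0.005):
    """Faster-than-fair flows get a proportionally tighter delay budget."""
    if flow_rate <= fair_rate:
        target = base_target
    else:
        target = base_target * fair_rate / flow_rate
    return queuing_delay > target

# A flow at twice the fair rate has its target cut to 2.5 ms, so a 4 ms
# queuing delay triggers a drop, while a fair-rate flow is left alone.
print(should_drop(queuing_delay=0.004, flow_rate=2.0, fair_rate=1.0))  # → True
print(should_drop(queuing_delay=0.004, flow_rate=1.0, fair_rate=1.0))  # → False
```

Keeping only a rate estimate per flow is what makes this kind of scheme cheap at switch level: fairness pressure is applied through the delay target rather than through full per-flow queues.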

    An adaptive active queue management algorithm in Internet

    This thesis does not contain an abstract.

    Minimizing queueing delays in computer networks

    Ph.D. (Doctor of Philosophy)
