
    Understanding CHOKe: throughput and spatial characteristics

    A recently proposed active queue management scheme, CHOKe, is stateless, simple to implement, yet surprisingly effective in protecting TCP from UDP flows. We present an equilibrium model of TCP/CHOKe. We prove that, provided the number of TCP flows is large, the UDP bandwidth share peaks at (e+1)^{-1} ≈ 0.269 when the UDP input rate is slightly larger than the link capacity, and drops to zero as the UDP input rate tends to infinity. We clarify the spatial characteristics of the leaky buffer under CHOKe that produce this throughput behavior. Specifically, we prove that, as the UDP input rate increases, even though the total number of UDP packets in the queue increases, their spatial distribution becomes more and more concentrated near the tail of the queue, and drops rapidly to zero toward the head of the queue. In stark contrast to a non-leaky FIFO buffer, where the UDP bandwidth share would approach 1 as its input rate increases without bound, under CHOKe, UDP simultaneously maintains a large number of packets in the queue and receives a vanishingly small bandwidth share; this is the mechanism through which CHOKe protects TCP flows.
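
    As a concrete illustration of the CHOKe mechanism described above, the sketch below implements the per-arrival matched-drop rule on a FIFO buffer. It is a minimal, simplified rendering rather than the paper's equilibrium model: the ChokeQueue class, the packet flow-identifier attribute, and the use of the instantaneous (rather than averaged) queue length for the RED-style drop decision are all assumptions made for brevity.

        import random
        from collections import deque

        class ChokeQueue:
            """Minimal sketch of a CHOKe leaky FIFO buffer (illustrative only)."""

            def __init__(self, capacity, red_min, red_max):
                self.buf = deque()
                self.capacity = capacity      # maximum queue length in packets
                self.red_min = red_min        # RED-style minimum threshold
                self.red_max = red_max        # RED-style maximum threshold

            def arrive(self, pkt):
                # pkt is assumed to carry a hashable flow identifier, e.g. (src, dst).
                if self.buf:
                    victim = random.choice(self.buf)        # draw one queued packet at random
                    if victim.flow_id == pkt.flow_id:       # same flow: drop both packets
                        self.buf.remove(victim)
                        return False
                q = len(self.buf)
                if q >= self.capacity or q >= self.red_max: # simplified RED: hard drop above max
                    return False
                if q > self.red_min:                        # simplified RED: probabilistic drop
                    p_drop = (q - self.red_min) / (self.red_max - self.red_min)
                    if random.random() < p_drop:
                        return False
                self.buf.append(pkt)
                return True

            def depart(self):
                return self.buf.popleft() if self.buf else None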

    Counter-intuitive throughput behaviors in networks under end-to-end control

    It has been shown that as long as traffic sources adapt their rates to the aggregate congestion measure in their paths, they implicitly maximize certain utility functions. In this paper we study some counter-intuitive throughput behaviors in such networks, pertaining to whether a fair allocation is always inefficient and whether increasing capacity always raises aggregate throughput. A bandwidth allocation policy can be defined in terms of a class of utility functions parameterized by a scalar α that can be interpreted as a quantitative measure of fairness. An allocation is fair if α is large and efficient if aggregate throughput is large. All examples in the literature suggest that a fair allocation is necessarily inefficient. We characterize exactly the tradeoff between fairness and throughput in general networks. The characterization allows us both to produce the first counter-example and to trivially explain all the previous supporting examples. Surprisingly, our counter-example has the property that a fairer allocation is always more efficient. In particular, it implies that max-min fairness can achieve a higher throughput than proportional fairness. Intuitively, we might expect that increasing link capacities always raises aggregate throughput. We show that not only can throughput be reduced when some link increases its capacity; more strikingly, it can also be reduced when all links increase their capacities by the same amount. If all links increase their capacities proportionally, however, throughput will indeed increase. These examples demonstrate the intricate interactions among sources in a network setting that are missing in a single-link topology.
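
    For reference, the scalar α discussed above is conventionally attached to the standard α-fair utility family, under which each allocation solves a network utility maximization problem. The LaTeX below states that family as a reminder of the setup; the routing matrix R and capacity vector c are the usual notation, not quoted from this abstract. Here α = 0 recovers throughput maximization, α = 1 proportional fairness, and α → ∞ max-min fairness, with aggregate throughput T(α) = Σ_i x_i(α).

        \[
        U_\alpha(x) \;=\;
        \begin{cases}
          \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[6pt]
          \log x, & \alpha = 1,
        \end{cases}
        \qquad
        x(\alpha) \;=\; \arg\max_{x \ge 0} \sum_i U_\alpha(x_i)
        \quad \text{subject to } R x \le c,
        \qquad
        T(\alpha) \;=\; \sum_i x_i(\alpha).
        \]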

    Application-Oriented Flow Control: Fundamentals, Algorithms and Fairness

    This paper is concerned with flow control and resource allocation problems in computer networks in which real-time applications may have hard quality of service (QoS) requirements. Recent optimal flow control approaches are unable to deal with these problems, since QoS utility functions in real-time applications generally do not satisfy the strict concavity condition. For elastic traffic, we show that bandwidth allocations under the existing optimal flow control strategy can be quite unfair. If we consider the different QoS requirements among network users, it may be undesirable to allocate bandwidth simply according to traditional max-min fairness or proportional fairness. Instead, a network should have the ability to allocate bandwidth resources to various users according to their real utility requirements. For these reasons, this paper proposes a new distributed flow control algorithm for multiservice networks, where an application's utility is only assumed to be continuously increasing in the available bandwidth. We show that the algorithm converges, and that at convergence the utility achieved by each application is well balanced in a proportionally (or max-min) fair manner.
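
    To make the idea of balancing utilities concrete, the toy sketch below computes a utility max-min fair allocation on a single shared link by equalizing the users' achieved utilities via bisection. It assumes strictly increasing utilities with U_i(0) = 0 and known inverses, and it only illustrates the fairness notion, not the paper's distributed algorithm; the function name and parameters are hypothetical.

        def utility_max_min_single_link(utilities, inverses, capacity, iters=60):
            """Toy illustration of utility max-min fairness on one shared link.

            Assumes each U_i is strictly increasing with U_i(0) = 0, and that
            inverses[i] is the inverse function of utilities[i]. Bisects on the
            common utility level u so that the rates U_i^{-1}(u) fill the link.
            """
            lo = 0.0                                   # utility level reachable by everyone (all rates 0)
            hi = min(U(capacity) for U in utilities)   # no user can exceed U_i(capacity) on this link
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                demand = sum(inv(mid) for inv in inverses)
                if demand > capacity:
                    hi = mid                           # level too high: rates overflow the link
                else:
                    lo = mid                           # level feasible: try a higher common utility
            return [inv(lo) for inv in inverses]

        # Example with two hypothetical users: one concave (elastic), one linear.
        # rates = utility_max_min_single_link(
        #     utilities=[lambda x: x ** 0.5, lambda x: 0.1 * x],
        #     inverses=[lambda u: u ** 2,    lambda u: 10.0 * u],
        #     capacity=10.0)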

    Modelling and stability of FAST TCP

    We introduce a discrete-time model of FAST TCP that fully captures the effect of self-clocking and compare it with the traditional continuous-time model. While the continuous-time model predicts instability for homogeneous sources sharing a single link when the feedback delay is large, experiments suggest otherwise. Using the discrete-time model, we prove that FAST TCP is locally asymptotically stable in general networks when all sources have a common round-trip feedback delay, no matter how large the delay is. We also prove global stability for a single bottleneck link in the absence of feedback delay. The techniques developed here are new and applicable to other protocols.
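
    For context, the discrete-time view corresponds to the per-update FAST TCP window iteration sketched below. The parameter names (α packets targeted in the buffers per flow, γ the smoothing step size) follow common descriptions of FAST TCP; the default values and the helper function itself are illustrative assumptions, not taken from this paper.

        def fast_tcp_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
            """One discrete-time FAST TCP window update (illustrative sketch).

            w        : current congestion window (packets)
            base_rtt : minimum observed round-trip time (propagation delay estimate)
            rtt      : most recently observed round-trip time
            alpha    : target number of packets buffered in the network per flow
            gamma    : smoothing step size in (0, 1]
            """
            target = (base_rtt / rtt) * w + alpha          # window that would buffer about alpha packets
            w_next = (1.0 - gamma) * w + gamma * target    # smoothed move toward the target window
            return min(2.0 * w, w_next)                    # cap growth at doubling per update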