
    A duality model of TCP and queue management algorithms

    We propose a duality model of end-to-end congestion control and apply it to understanding the equilibrium properties of TCP and active queue management schemes. The basic idea is to regard source rates as primal variables and congestion measures as dual variables, and congestion control as a distributed primal-dual algorithm carried out over the Internet to maximize aggregate utility subject to capacity constraints. The primal iteration is carried out by TCP algorithms such as Reno or Vegas, and the dual iteration is carried out by queue management algorithms such as DropTail, RED, or REM. We present these algorithms and their generalizations, derive their utility functions, and study their interaction.
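
    To make the primal-dual picture concrete, here is a minimal numerical sketch of the iteration the abstract describes, assuming log utilities (proportional fairness) and a three-source, two-link network; the routing matrix, capacities, weights, and step size are illustrative assumptions, not values from the paper.

        import numpy as np

        # Sketch of the primal-dual iteration: sources update rates x (primal),
        # links update congestion prices p (dual). The log utility
        # U_s(x) = w_s * log(x) is an assumed stand-in; the paper instead derives
        # the utilities implicit in Reno/Vegas and DropTail/RED/REM.
        R = np.array([[1, 1, 0],        # R[l, s] = 1 if source s crosses link l
                      [0, 1, 1]], dtype=float)
        c = np.array([1.0, 2.0])        # link capacities (assumed)
        w = np.array([1.0, 1.0, 1.0])   # utility weights (assumed)
        p = np.ones(2)                  # dual variables: link prices
        gamma = 0.05                    # dual step size (assumed)

        for _ in range(2000):
            q = R.T @ p                             # price along each source's path
            x = w / np.maximum(q, 1e-6)             # primal update: x_s = U_s'^{-1}(q_s)
            y = R @ x                               # aggregate flow on each link
            p = np.maximum(p + gamma * (y - c), 0)  # dual update: gradient ascent

        print("equilibrium rates:", x, "link prices:", p)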

    An empirical validation of a duality model of TCP and queue management algorithms

    In this paper, we validate through simulations a duality model of TCP and active queue management (AQM) proposed earlier. In this model, TCP and AQM are modeled as carrying out a distributed primal-dual algorithm over the Internet to maximize aggregate source utility. TCP congestion avoidance algorithms, such as Reno and Vegas, iterate on source rates, the primal variable. AQM algorithms, such as RED and REM, iterate on marking probability, the dual variable.
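
    As a concrete instance of the dual iteration, the sketch below follows a REM-style update, in which the link price grows with a weighted sum of backlog and rate mismatch and is mapped exponentially to a marking probability; the constants and the traffic trace are illustrative assumptions.

        # Sketch of a REM-style marking-probability (dual) update at one link.
        # gamma, alpha, phi, the capacity, and the offered load are assumptions.
        gamma, alpha, phi = 0.001, 0.1, 1.001
        capacity = 100.0                 # packets per update interval (assumed)
        price, backlog = 0.0, 0.0

        for t in range(500):
            arrivals = 110.0 if t < 250 else 80.0   # assumed overload, then relief
            backlog = max(backlog + arrivals - capacity, 0.0)
            # the price rises when backlog or input rate exceeds the link's targets
            price = max(price + gamma * (alpha * backlog + arrivals - capacity), 0.0)
            mark_prob = 1.0 - phi ** (-price)       # REM's exponential marking law

        print(f"price={price:.3f}, marking probability={mark_prob:.4f}")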

    Distributed Large Scale Network Utility Maximization

    Recent work by Zymnis et al. proposes an efficient primal-dual interior-point method, using a truncated Newton method, for solving the network utility maximization (NUM) problem. This method has shown superior performance relative to the traditional dual-decomposition approach. Other recent work by Bickson et al. shows how to compute the Newton step, the main computational bottleneck of the Newton method, efficiently and distributively using the Gaussian belief propagation algorithm. In the current work, we combine both approaches to create an efficient distributed algorithm for solving the NUM problem. Unlike the work of Zymnis et al., which uses a centralized approach, our new algorithm is easily distributed. Using an empirical evaluation, we show that our new method outperforms previous approaches, including the truncated Newton method and dual-decomposition methods. As an additional contribution, this is the first work that evaluates the performance of the Gaussian belief propagation algorithm against the preconditioned conjugate gradient method for a large-scale problem.
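
    A minimal sketch of the bottleneck in question: the Newton step of a log-barrier NUM formulation, computed matrix-free with plain conjugate gradient (the kind of solver the paper compares Gaussian belief propagation against). Log utilities, the tiny network, the barrier weight, and the step control are assumptions for illustration.

        import numpy as np

        R = np.array([[1, 1, 0],
                      [0, 1, 1]], dtype=float)   # routing matrix (assumed)
        c = np.array([1.0, 2.0])                 # link capacities (assumed)
        t = 100.0                                # barrier weight (assumed)

        def f(x):                                # barrier objective (minimized)
            s = c - R @ x
            if np.any(x <= 0) or np.any(s <= 0):
                return np.inf
            return -np.sum(np.log(x)) - np.sum(np.log(s)) / t

        def grad(x):
            s = c - R @ x
            return -1.0 / x + R.T @ (1.0 / (t * s))

        def hess_vec(x, v):                      # Hessian-vector product, matrix-free
            s = c - R @ x
            return v / x**2 + R.T @ ((R @ v) / (t * s**2))

        def cg(x, b, iters=50):                  # plain CG on H(x) d = b
            d, r = np.zeros_like(b), b.copy()
            p, rs = r.copy(), r @ r
            for _ in range(iters):
                if rs < 1e-12:
                    break
                Hp = hess_vec(x, p)
                a = rs / (p @ Hp)
                d, r = d + a * p, r - a * Hp
                rs_new = r @ r
                p, rs = r + (rs_new / rs) * p, rs_new
            return d

        x = np.array([0.3, 0.3, 0.3])            # strictly feasible start
        for _ in range(25):                      # damped Newton iterations
            dx = cg(x, -grad(x))
            step = 1.0
            while f(x + step * dx) > f(x):       # backtrack: stay feasible, descend
                step *= 0.5
            x = x + step * dx

        print("barrier-optimal rates:", x)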

    Heterogeneous Congestion Control: Efficiency, Fairness and Design

    When heterogeneous congestion control protocols that react to different pricing signals (e.g., packet loss, queueing delay, or ECN marking) share the same network, the current theory based on utility maximization fails to predict the network behavior. Unlike in a homogeneous network, the bandwidth allocation now depends on router parameters and flow arrival patterns. It can be non-unique, inefficient, and unfair. This paper has two objectives. First, we demonstrate the intricate behaviors of a heterogeneous network through simulations and present a rigorous framework to help understand its equilibrium efficiency and fairness properties. By identifying an optimization problem associated with every equilibrium, we show that every equilibrium is Pareto efficient and provide an upper bound on the efficiency loss due to pricing heterogeneity. On fairness, we show that intra-protocol fairness is still decided by a utility maximization problem, while inter-protocol fairness is the part over which we don't have direct control. However, we show that any desirable inter-protocol fairness can be achieved by properly choosing protocol parameters. Second, we propose a simple slow-timescale source-based algorithm to decouple bandwidth allocation from router parameters and flow arrival patterns, and prove its feasibility. The scheme needs only local information.
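
    The dependence on router parameters can be seen in a toy fixed-point model: one bottleneck link shared by a loss-reactive (Reno-like) flow and a delay-reactive (Vegas-like) flow, where moving a RED-like marking threshold shifts the bandwidth split. Every functional form and constant below is an assumption for illustration, not the paper's model.

        import numpy as np

        def equilibrium(threshold, capacity=100.0, iters=5000):
            q = 10.0                                   # queue length (packets)
            for _ in range(iters):
                loss = min(max((q - threshold) / 100.0, 0.0), 1.0)  # loss-style price
                delay = q / capacity                                # delay-style price
                x_loss = 30.0 / np.sqrt(loss + 1e-6)   # loss-reactive flow (Reno-like)
                x_delay = 8.0 / (delay + 1e-3)         # delay-reactive flow (Vegas-like)
                q = max(q + 0.1 * (x_loss + x_delay - capacity), 0.0)
            return x_loss, x_delay

        for th in (20.0, 80.0):                        # same flows, different router knob
            a, b = equilibrium(th)
            print(f"threshold={th}: loss-based={a:.1f}, delay-based={b:.1f}")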

    Reverse Engineering TCP/IP-like Networks using Delay-Sensitive Utility Functions

    TCP/IP can be interpreted as a distributed primal-dual algorithm to maximize aggregate utility over source rates. It has recently been shown that an equilibrium of TCP/IP, if it exists, maximizes the same delay-insensitive utility over both source rates and routes, provided pure congestion prices are used as link costs in the shortest-path calculation of IP. In practice, however, pure dynamic routing is never used, and link costs are weighted sums of both static and dynamic components. In this paper, we introduce delay-sensitive utility functions and identify a class of utility functions that such a TCP/IP equilibrium optimizes. We exhibit some counter-intuitive properties that any class of delay-sensitive utility functions optimized by TCP/IP necessarily possesses. We prove a sufficient condition for global stability of routing updates for general networks. We construct example networks that defy conventional wisdom on the effect of link cost parameters on network stability and utility.
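
    The cost model in question can be sketched as follows, assuming each link cost is a*phi_l + b*p_l for a static weight phi_l and a dynamic congestion price p_l; the four-node graph and all numbers are made up to show how the chosen route shifts once congestion prices enter the cost.

        import heapq

        def dijkstra(adj, src, dst):
            """Standard shortest path over a weighted adjacency dict."""
            dist, heap = {src: 0.0}, [(0.0, src, [src])]
            while heap:
                d, u, path = heapq.heappop(heap)
                if u == dst:
                    return d, path
                for v, w in adj[u]:
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v, path + [v]))
            return float("inf"), []

        static = {("s", "a"): 1.0, ("a", "d"): 1.0, ("s", "b"): 1.5, ("b", "d"): 1.5}
        price = {("s", "a"): 0.0, ("a", "d"): 2.0, ("s", "b"): 0.0, ("b", "d"): 0.0}

        def build(a, b):                 # link cost = a * static + b * price
            adj = {n: [] for n in "sabd"}
            for (u, v), phi in static.items():
                adj[u].append((v, a * phi + b * price[(u, v)]))
            return adj

        for a, b in ((1.0, 0.0), (1.0, 1.0)):   # pure static vs. mixed costs
            cost, path = dijkstra(build(a, b), "s", "d")
            print(f"a={a}, b={b}: path={path}, cost={cost:.1f}")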

    Necessary and sufficient conditions for optimal flow control in multirate multicast networks

    The authors consider the optimal flow control problem in multirate multicast networks, where receivers of the same multicast group can receive service at different rates with different QoS. The objective is to achieve fair transmission rates that maximise the total receiver utility under the link capacity constraints. They first propose necessary and sufficient conditions for the optimal solution to the problem, and then derive a new optimal flow control strategy using the Lagrange multiplier method. As in the unicast case, the basic algorithm consists of a link algorithm that updates the link price, and a receiver algorithm that adapts the transmission rate according to the link prices along its path. In particular, if a group contains only one receiver and thus becomes unicast, the algorithm degenerates to the authors' previously proposed unicast algorithm.
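
    A minimal sketch of this decomposition, assuming log receiver utilities and the usual multirate load model in which a link carries the maximum rate of its downstream receivers; the topology, capacities, and step size are illustrative assumptions.

        import numpy as np

        # One group: source -> link 0 -> {link 1 -> receiver 0, link 2 -> receiver 1}
        paths = [[0, 1], [0, 2]]          # links on each receiver's path (assumed)
        c = np.array([3.0, 1.0, 2.0])     # link capacities (assumed)
        w = np.array([1.0, 1.0])          # receiver utility weights (assumed)
        p = np.ones(3)                    # link prices
        gamma = 0.05                      # price step size (assumed)

        for _ in range(3000):
            q = np.array([p[ls].sum() for ls in paths])   # price along each path
            x = w / np.maximum(q, 1e-6)                   # receiver: x_r = U_r'^{-1}(q_r)
            y = np.zeros(3)
            for r, ls in enumerate(paths):                # multirate load: a link carries
                for l in ls:                              # the max rate among receivers
                    y[l] = max(y[l], x[r])                # downstream of it
            p = np.maximum(p + gamma * (y - c), 0.0)      # link: price update

        print("receiver rates:", np.round(x, 3), "link prices:", np.round(p, 3))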