
    On-Line End-to-End Congestion Control

    Congestion control in the current Internet is accomplished mainly by TCP/IP. To understand the macroscopic network behavior that results from TCP/IP and similar end-to-end protocols, one main analytic technique is to show that the protocol maximizes some global objective function of the network traffic. Here we analyze a particular end-to-end MIMD (multiplicative-increase, multiplicative-decrease) protocol. We show that if all users of the network use the protocol, and all connections last for at least logarithmically many rounds, then the total weighted throughput (the value of all packets received) is near the maximum possible. Our analysis includes round-trip times and, in contrast to most previous analyses, gives explicit convergence rates, allows connections to start and stop, and allows capacities to change.
    Comment: Proceedings IEEE Symp. Foundations of Computer Science, 200
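
    To make the MIMD rule concrete, here is a minimal sketch in Python of one sender in a synchronous round model. The increase/decrease constants, the binary "congested" feedback, and the fixed-capacity link are illustrative assumptions of mine, not the exact parameters or network model analyzed in the paper.

```python
# Sketch of an MIMD (multiplicative-increase, multiplicative-decrease) sender.
# INCREASE/DECREASE and the binary congestion signal are illustrative only.

INCREASE = 1.1   # multiply the rate up when no congestion is signaled
DECREASE = 0.5   # multiply the rate down on a congestion signal

def mimd_step(rate: float, congested: bool) -> float:
    """Return the next sending rate from binary congestion feedback."""
    return rate * (DECREASE if congested else INCREASE)

def simulate(capacity: float, rounds: int, rate: float = 1.0) -> list[float]:
    """Run one flow against a fixed-capacity link and record its rate."""
    history = []
    for _ in range(rounds):
        rate = mimd_step(rate, congested=(rate > capacity))
        history.append(rate)
    return history

if __name__ == "__main__":
    # The rate climbs geometrically, then oscillates in a band around capacity.
    print(simulate(capacity=10.0, rounds=30)[-5:])
```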

    A Fast Distributed Stateless Algorithm for $\alpha$-Fair Packing Problems

    Over the past two decades, fair resource allocation problems have received considerable attention in a variety of application areas. However, little progress has been made in the design of distributed algorithms with convergence guarantees for general and commonly used $\alpha$-fair allocations. In this paper, we study weighted $\alpha$-fair packing problems, that is, the problems of maximizing the objective functions (i) $\sum_j w_j x_j^{1-\alpha}/(1-\alpha)$ when $\alpha > 0$, $\alpha \neq 1$ and (ii) $\sum_j w_j \ln x_j$ when $\alpha = 1$, over linear constraints $Ax \leq b$, $x \geq 0$, where $w_j$ are positive weights and $A$ and $b$ are non-negative. We consider the distributed computation model that was used for packing linear programs and network utility maximization problems. Under this model, we provide a distributed algorithm for general $\alpha$ that converges to an $\varepsilon$-approximate solution in time (number of distributed iterations) that has an inverse polynomial dependence on the approximation parameter $\varepsilon$ and poly-logarithmic dependence on the problem size. This is the first distributed algorithm for weighted $\alpha$-fair packing with poly-logarithmic convergence in the input size. The algorithm uses simple local update rules and is stateless (namely, it allows asynchronous updates, is self-stabilizing, and allows incremental and local adjustments). We also obtain a number of structural results that characterize $\alpha$-fair allocations as the value of $\alpha$ is varied. These results deepen our understanding of fairness guarantees in $\alpha$-fair packing allocations, and also provide insight into the behavior of $\alpha$-fair allocations in the asymptotic cases $\alpha \rightarrow 0$, $\alpha \rightarrow 1$, and $\alpha \rightarrow \infty$.
    Comment: Added structural results for asymptotic cases of $\alpha$-fairness ($\alpha$ approaching 0, 1, or infinity), improved presentation, and revised throughout.
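
    To make the objective concrete, here is a small Python sketch that evaluates the weighted $\alpha$-fair utility of a given allocation; the function name and the NumPy-based implementation are my own illustration, not the paper's algorithm (which is a distributed, stateless update scheme rather than a centralized evaluation).

```python
import numpy as np

def alpha_fair_objective(x, w, alpha):
    """Weighted alpha-fair utility of allocation x > 0 with positive weights w."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    if alpha == 1.0:
        # Case (ii): proportional fairness.
        return float(np.sum(w * np.log(x)))
    # Case (i): alpha > 0, alpha != 1.
    return float(np.sum(w * x**(1.0 - alpha) / (1.0 - alpha)))

# alpha -> 0 recovers utilitarian welfare sum_j w_j x_j, alpha = 1 is
# proportional fairness, and alpha -> infinity approaches max-min fairness.
print(alpha_fair_objective([1.0, 2.0], w=[1.0, 1.0], alpha=2.0))
```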

    Computational Aspects of Game Theory and Microeconomics

    The purpose of this thesis is to study algorithmic questions that arise in the context of game theory and microeconomics. In particular, we investigate the computational complexity of various economic solution concepts by using and advancing methodologies from the fields of combinatorial optimization and approximation algorithms. We first study the problem of allocating a set of indivisible goods to a set of agents, who express preferences over combinations of items through their utility functions. Several objectives have been considered in the economic literature in different contexts. In fair division theory, a desirable outcome is to minimize the envy or the envy-ratio between any pair of players. We use tools from the theory of linear and integer programming as well as combinatorics to derive new approximation algorithms and hardness results for various types of utility functions. A different objective that has been considered in the context of auctions is to find an allocation that maximizes the social welfare, i.e., the total utility derived by the agents. We construct reductions from multi-prover proof systems to obtain inapproximability results, given standard assumptions for the utility functions of the agents. We then consider equilibrium concepts in games. We derive the first subexponential algorithm for computing approximate Nash equilibria in 2-player noncooperative games and extend our result to multi-player games. We further propose a second algorithm based on solving polynomial equations over the reals. Both algorithms improve the previously known upper bounds on the complexity of the problem. Finally, we study game-theoretic models that have been introduced recently to address incentive issues in Internet routing. A polynomial-time algorithm is obtained for computing equilibria in such games, i.e., routing schemes and payoff allocations from which no subset of agents has an incentive to deviate. Our algorithm is based on linear programming duality theory. We also obtain generalizations when the agents have nonlinear utility functions.
    Ph.D. thesis. Committee Chair: Lipton, Richard; Committee Members: Ding, Yan; Duke, Richard; Randall, Dana; Vazirani, Vijay.
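
    As a concrete illustration of the fair-division objectives mentioned above, the following Python sketch computes the maximum pairwise envy and envy-ratio of an allocation of indivisible goods. The additive-utility assumption and all function names are mine for illustration; the thesis treats more general utility classes.

```python
# Hedged sketch: pairwise envy and envy-ratio under additive utilities.

def bundle_value(utilities, bundle):
    """Value an agent assigns to a bundle, assuming additive utilities."""
    return sum(utilities[g] for g in bundle)

def max_envy_and_ratio(utilities, allocation):
    """utilities[i][g]: agent i's value for good g; allocation[i]: i's bundle."""
    envy, ratio = 0.0, 1.0
    for i, u in enumerate(utilities):
        own = bundle_value(u, allocation[i])
        for j, bundle in enumerate(allocation):
            if i == j:
                continue
            other = bundle_value(u, bundle)
            envy = max(envy, other - own)       # additive envy toward agent j
            if own > 0:
                ratio = max(ratio, other / own)  # multiplicative envy-ratio
    return envy, ratio

# Two agents, three goods: agent 0 gets good 0, agent 1 gets goods 1 and 2.
utilities = [{0: 5, 1: 3, 2: 3}, {0: 4, 1: 2, 2: 2}]
print(max_envy_and_ratio(utilities, allocation=[{0}, {1, 2}]))  # (1.0, 1.2)
```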
