
    Tradeoffs in worst-case equilibria

    We investigate the problem of routing traffic through a congested network in an environment of non-cooperative users. We use the worst-case coordination ratio suggested by Koutsoupias and Papadimitriou to measure the performance degradation due to the lack of a centralized traffic-regulating authority. We provide a full characterization of the worst-case coordination ratio in the restricted assignment and unrelated parallel links models. In particular, we quantify the tradeoff between the "negligibility" of the traffic controlled by each user and the worst-case coordination ratio. We analyze both pure and mixed strategy systems and identify the range where their performance is similar.
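    For reference, the quantity being bounded can be written as follows; this is a generic formulation of the worst-case coordination ratio, with SC for the expected social cost of a Nash equilibrium and OPT for the optimal assignment cost (our notation, not the abstract's):

```latex
% Worst-case coordination ratio: the worst expected social cost over Nash
% equilibria P of the routing game, relative to the optimal assignment cost.
\[
  \mathrm{CR} \;=\; \sup_{P \,\in\, \text{Nash}} \frac{\mathrm{SC}(P)}{\mathrm{OPT}}
\]
```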

    A new model for selfish routing

    In this work, we introduce and study a new, potentially rich model for selfish routing over non-cooperative networks, as an interesting hybridization of the two prevailing such models, namely the KP model [E. Koutsoupias, C.H. Papadimitriou, Worst-case equilibria, in: G. Meinel, S. Tison (Eds.), Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, in: Lecture Notes in Computer Science, vol. 1563, Springer-Verlag, 1999, pp. 404–413] and the W model [J.G. Wardrop, Some theoretical aspects of road traffic research, Proceedings of the Institute of Civil Engineers 1 (Pt. II) (1952) 325–378]. In the hybrid model, each of n users uses a mixed strategy to ship its unsplittable traffic over a network consisting of m parallel links. In a Nash equilibrium, no user can unilaterally improve its Expected Individual Cost. To evaluate Nash equilibria, we introduce Quadratic Social Cost as the sum of the expectations of the latencies incurred by the squares of the accumulated traffic. This modeling is unlike the KP model, where Social Cost [E. Koutsoupias, C.H. Papadimitriou, Worst-case equilibria, in: G. Meinel, S. Tison (Eds.), Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, in: Lecture Notes in Computer Science, vol. 1563, Springer-Verlag, 1999, pp. 404–413] is the expectation of the maximum latency incurred by the accumulated traffic; but it is like the W model, since the Quadratic Social Cost can be expressed as a weighted sum of Expected Individual Costs. We use the Quadratic Social Cost to define the Quadratic Coordination Ratio. Here are our main findings:
    • Quadratic Social Cost can be computed in polynomial time. This is unlike the #P-completeness [D. Fotakis, S. Kontogiannis, E. Koutsoupias, M. Mavronicolas, P. Spirakis, The structure and complexity of Nash equilibria for a selfish routing game, in: P. Widmayer, F. Triguero, R. Morales, M. Hennessy, S. Eidenbenz, R. Conejo (Eds.), Proceedings of the 29th International Colloquium on Automata, Languages and Programming, in: Lecture Notes in Computer Science, vol. 2380, Springer-Verlag, 2002, pp. 123–134] of computing Social Cost for the KP model.
    • For the case of identical users and identical links, the fully mixed Nash equilibrium [M. Mavronicolas, P. Spirakis, The price of selfish routing, Algorithmica 48 (1) (2007) 91–126], where each user assigns positive probability to every link, maximizes Quadratic Social Cost.
    • As our main result, we present a comprehensive collection of tight, constant (that is, independent of m and n), strictly less than 2, lower and upper bounds on the Quadratic Coordination Ratio for several interesting special cases. Some of the bounds stand in contrast to corresponding super-constant bounds on the Coordination Ratio previously shown in [A. Czumaj, B. Vöcking, Tight bounds for worst-case equilibria, ACM Transactions on Algorithms 3 (1) (2007); E. Koutsoupias, M. Mavronicolas, P. Spirakis, Approximate equilibria and ball fusion, Theory of Computing Systems 36 (6) (2003) 683–693; E. Koutsoupias, C.H. Papadimitriou, Worst-case equilibria, in: G. Meinel, S. Tison (Eds.), Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, in: Lecture Notes in Computer Science, vol. 1563, Springer-Verlag, 1999, pp. 404–413; M. Mavronicolas, P. Spirakis, The price of selfish routing, Algorithmica 48 (1) (2007) 91–126] for the KP model.
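    The first finding (polynomial-time computability) is intuitive once users are assumed to randomize independently, because the expectation of each squared link traffic decomposes into a mean and a variance term. The sketch below illustrates this for identical links with unit-rate latencies; the link model and all names are our own simplifying assumptions, not the paper's algorithm:

```python
# Minimal sketch (illustrative, not the paper's algorithm): a quadratic social
# cost for mixed strategies on identical parallel links, assuming users pick
# links independently. For link l with traffic T_l = sum_i w_i * X_il, where
# X_il ~ Bernoulli(p_il), we have
#   E[T_l^2] = (sum_i w_i * p_il)^2 + sum_i w_i^2 * p_il * (1 - p_il),
# so summing over links takes O(n*m) time -- no enumeration of link choices.

def quadratic_social_cost(weights, probs):
    """weights: list of n user traffics; probs[i][l]: prob. that user i picks link l."""
    n, m = len(weights), len(probs[0])
    total = 0.0
    for l in range(m):
        mean = sum(weights[i] * probs[i][l] for i in range(n))
        var = sum(weights[i] ** 2 * probs[i][l] * (1 - probs[i][l]) for i in range(n))
        total += mean ** 2 + var  # E[T_l^2] = Var[T_l] + E[T_l]^2
    return total

# Example: 2 identical users, 2 identical links, fully mixed strategies.
print(quadratic_social_cost([1.0, 1.0], [[0.5, 0.5], [0.5, 0.5]]))  # -> 3.0
```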

    The Price of Anarchy for Minsum Related Machine Scheduling

    We address the classical uniformly related machine scheduling problem with minsum objective. The problem is solvable in polynomial time by the algorithm of Horowitz and Sahni. In that solution, each machine sequences its jobs shortest first. However, when jobs may choose the machine on which they are processed, while keeping the same sequencing rule per machine, the resulting Nash equilibria are in general not optimal. The price of anarchy measures this optimality gap. By means of a new characterization of the optimal solution, we show that the price of anarchy in this setting is bounded from above by 2. We also give a lower bound of e/(e-1). This complements recent results on the price of anarchy for the more general unrelated machine scheduling problem, where the price of anarchy equals 4. Interestingly, as Nash equilibria coincide with shortest processing time first (SPT) schedules, the same bounds hold for SPT schedules. Thereby, our work also fills a gap in the literature.
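    As a small illustration of the objective involved, the sketch below evaluates the minsum (total completion time) cost of a given job-to-machine assignment on uniformly related machines, sequencing each machine's jobs shortest first. It is an illustrative sketch under our own conventions, not the Horowitz-Sahni algorithm itself:

```python
# Minsum objective of an assignment on uniformly related machines, with each
# machine running its jobs in shortest-processing-time-first (SPT) order.

def minsum_cost(assignment, speeds):
    """assignment[j]: processing requirements of the jobs placed on machine j;
    speeds[j]: speed of machine j (processing time = requirement / speed)."""
    total = 0.0
    for jobs, s in zip(assignment, speeds):
        t = 0.0
        for p in sorted(jobs):   # SPT order on each machine
            t += p / s           # completion time of this job
            total += t           # add it to the sum of completion times
    return total

# Example: two machines with speeds 1 and 2.
print(minsum_cost([[1.0, 3.0], [2.0, 2.0, 4.0]], [1.0, 2.0]))  # -> (1+4) + (1+2+4) = 12.0
```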

    Exact Price of Anarchy for Weighted Congestion Games with Two Players

    This paper gives a complete analysis of worst-case equilibria for various versions of weighted congestion games with two players and affine cost functions. The results are exact price of anarchy bounds which are parametric in the weights of the two players, and establish exactly how the primitives of the game enter into the quality of equilibria. Interestingly, some of the worst cases are attained when the players' weights differ only slightly. Our findings also show that sequential play improves the price of anarchy in all cases; however, this effect vanishes with an increasing difference in the players' weights. Methodologically, we obtain exact price of anarchy bounds by a duality-based proof mechanism, built on a compact linear programming formulation that computes worst-case instances. This mechanism yields duality-based optimality certificates which can eventually be turned into purely algebraic proofs.
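    For small instances the price of anarchy can also be computed by brute force, which the sketch below does for a two-player weighted congestion game with affine resource costs. This is not the paper's LP-duality mechanism, the example instance is made up, and the cost conventions (a player pays the sum of resource costs evaluated at the total weight on each chosen resource; the social cost is the weight-weighted sum of player costs) are assumptions on our part:

```python
# Brute-force price of anarchy for a tiny weighted congestion game with affine
# resource costs c_r(x) = a_r * x + b_r (illustrative sketch only).
from itertools import product

def poa(strategies, weights, costs):
    """strategies[i]: list of strategies (sets of resource indices) for player i;
    weights[i]: weight of player i; costs[r] = (a_r, b_r)."""
    def player_costs(profile):
        load = {}
        for i, strat in enumerate(profile):
            for r in strat:
                load[r] = load.get(r, 0.0) + weights[i]
        return [sum(costs[r][0] * load[r] + costs[r][1] for r in strat)
                for strat in profile]

    def social(profile):
        return sum(w * c for w, c in zip(weights, player_costs(profile)))

    profiles = list(product(*strategies))
    opt = min(social(p) for p in profiles)
    # A profile is a pure Nash equilibrium if no player has an improving deviation.
    worst_ne = max(
        (social(p) for p in profiles
         if all(player_costs(p)[i] <= player_costs(p[:i] + (alt,) + p[i + 1:])[i]
                for i in range(len(p)) for alt in strategies[i])),
        default=float("nan"))
    return worst_ne / opt

# Example: two unit-weight players, two resources with cost c(x) = x each.
print(poa([[{0}, {1}], [{0}, {1}]], [1.0, 1.0], {0: (1.0, 0.0), 1: (1.0, 0.0)}))  # -> 1.0
```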

    Efficiency analysis of load balancing games with and without activation costs

    In this paper, we study two models of resource allocation games: the classical load-balancing game and a new variant involving resource activation costs. The resources we consider are identical and the social cost of each game is utilitarian, i.e., the average of all individual players' costs. Using these social costs, we assess the quality of pure Nash equilibria in terms of the price of anarchy (PoA) and the price of stability (PoS). For each game problem, we identify suitable problem parameters and provide a parametric bound on the PoA and the PoS. In the case of the load-balancing game, the parametric bounds we provide are sharp and asymptotically tight.
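    For reference, with a utilitarian social cost SC and the set PNE of pure Nash equilibria, the two measures used here can be written as follows (generic definitions in our own notation, with sigma* an optimal assignment; the paper's parametric bounds refine these quantities):

```latex
% Price of anarchy (worst pure equilibrium) and price of stability (best pure
% equilibrium), both relative to an optimal assignment \sigma^{*}.
\[
  \mathrm{PoA} = \frac{\max_{\sigma \in \mathrm{PNE}} \mathrm{SC}(\sigma)}{\mathrm{SC}(\sigma^{*})},
  \qquad
  \mathrm{PoS} = \frac{\min_{\sigma \in \mathrm{PNE}} \mathrm{SC}(\sigma)}{\mathrm{SC}(\sigma^{*})}
\]
```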

    Non-clairvoyant Scheduling Games

    In a scheduling game, each player owns a job and chooses a machine to execute it. While the social cost is the maximal load over all machines (makespan), the cost (disutility) of each player is the completion time of its own job. In the game, players may follow selfish strategies to optimize their cost, and therefore their behaviors do not necessarily lead the game to an equilibrium. Even when an equilibrium exists, its makespan might be much larger than the social optimum, and this inefficiency is measured by the price of anarchy: the worst ratio between the makespan of an equilibrium and the optimum. Coordination mechanisms aim to reduce the price of anarchy by designing scheduling policies that specify how jobs assigned to the same machine are to be scheduled. Typically, these policies define the schedule according to the processing times as announced by the jobs. One could wonder if there are policies that do not require this knowledge and still provide a good price of anarchy. This would allow processing times to remain private information and avoid the problem of truthfulness. In this paper we study these so-called non-clairvoyant policies. In particular, we study the RANDOM policy, which schedules the jobs in a random order without preemption, and the EQUI policy, which schedules the jobs in parallel using time-multiplexing, assigning each job an equal fraction of CPU time.
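    To make the two policies concrete, the sketch below computes, for jobs sharing a single machine, exact completion times under EQUI (equal-share time-multiplexing) and expected completion times under RANDOM (a uniformly random non-preemptive order, so each other job precedes a given job with probability 1/2). It is an illustrative single-machine sketch under our own simplifying assumptions, not the paper's analysis:

```python
# Completion times on one machine under the two non-clairvoyant policies.

def equi_completion_times(proc_times):
    """Completion time of each job when all unfinished jobs share the CPU equally."""
    order = sorted(range(len(proc_times)), key=lambda i: proc_times[i])
    finish, t, done = [0.0] * len(proc_times), 0.0, 0.0
    for rank, i in enumerate(order):
        alive = len(proc_times) - rank        # jobs still running
        t += alive * (proc_times[i] - done)   # elapsed time until job i finishes
        finish[i], done = t, proc_times[i]
    return finish

def random_expected_completion_times(proc_times):
    """Expected completion time of each job under a uniformly random order."""
    total = sum(proc_times)
    return [p + 0.5 * (total - p) for p in proc_times]

jobs = [1.0, 2.0, 4.0]
print(equi_completion_times(jobs))             # [3.0, 5.0, 7.0]
print(random_expected_completion_times(jobs))  # [4.0, 4.5, 5.5]
```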