
    Computing Equilibria in Anonymous Games

    We present efficient approximation algorithms for finding Nash equilibria in anonymous games, that is, games in which the players' utilities, though different, do not differentiate between other players. Our results pertain to such games with many players but few strategies. We show that any such game has an approximate pure Nash equilibrium, computable in polynomial time, with approximation O(s^2 L), where s is the number of strategies and L is the Lipschitz constant of the utilities. Finally, we show that there is a PTAS for finding an epsilon-approximate Nash equilibrium when the number of strategies is two.
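
    The notions involved are easy to make concrete. Below is a minimal sketch (not the paper's algorithm): a brute-force check of whether a pure strategy profile of an anonymous game is an eps-approximate Nash equilibrium. The congestion-style payoff function `utility` is a hypothetical example; anonymity means a player's payoff sees only her own strategy and the counts of the other players' strategies.

    from collections import Counter
    from itertools import product

    def utility(player, own_strategy, counts):
        # Hypothetical anonymous payoff: players prefer less-crowded strategies.
        # Only the counts of the *other* players' strategies matter.
        return -counts[own_strategy]

    def is_approx_pure_nash(profile, num_strategies, eps):
        for i, s_i in enumerate(profile):
            counts = Counter(profile)
            counts[s_i] -= 1                      # counts of the other players
            current = utility(i, s_i, counts)
            for dev in range(num_strategies):     # can player i gain > eps by deviating?
                if utility(i, dev, counts) > current + eps:
                    return False
        return True

    def find_approx_pure_nash(n, num_strategies, eps):
        # Exhaustive search, exponential in n; the paper's point is that a
        # polynomial-time algorithm achieves approximation O(s^2 L) instead.
        for profile in product(range(num_strategies), repeat=n):
            if is_approx_pure_nash(profile, num_strategies, eps):
                return profile
        return None

    print(find_approx_pure_nash(n=4, num_strategies=2, eps=0.0))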

    Sparse Covers for Sums of Indicators

    For all n, epsilon > 0, we show that the set of Poisson Binomial distributions on n variables admits a proper epsilon-cover in total variation distance of size n^2 + n * (1/epsilon)^{O(log^2(1/epsilon))}, which can also be computed in polynomial time. We discuss the implications of our construction for approximation algorithms and the computation of approximate Nash equilibria in anonymous games. Comment: PTRF, to appear
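
    For intuition, here is a minimal sketch of the objects the result is about (illustrative only, with made-up parameter values): a Poisson Binomial distribution is the law of a sum of independent, non-identical Bernoulli indicators; its PMF can be computed by convolution, and total variation distance is the metric in which the cover is stated.

    def poisson_binomial_pmf(ps):
        # PMF of a sum of independent Bernoulli(p_i) indicators, by convolution.
        pmf = [1.0]
        for p in ps:
            new = [0.0] * (len(pmf) + 1)
            for k, q in enumerate(pmf):
                new[k] += q * (1 - p)    # this indicator is 0
                new[k + 1] += q * p      # this indicator is 1
            pmf = new
        return pmf

    def total_variation(pmf1, pmf2):
        length = max(len(pmf1), len(pmf2))
        pmf1 = pmf1 + [0.0] * (length - len(pmf1))
        pmf2 = pmf2 + [0.0] * (length - len(pmf2))
        return 0.5 * sum(abs(a - b) for a, b in zip(pmf1, pmf2))

    # Nearby parameter vectors give distributions that are close in total
    # variation; an epsilon-cover is a small set of such distributions with
    # every Poisson Binomial on n variables within epsilon of one of them.
    p = poisson_binomial_pmf([0.31, 0.52, 0.77])
    q = poisson_binomial_pmf([0.30, 0.50, 0.75])
    print(total_variation(p, q))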

    Discretized Multinomial Distributions and Nash Equilibria in Anonymous Games

    We show that there is a polynomial-time approximation scheme for computing Nash equilibria in anonymous games with any fixed number of strategies (a very broad and important class of games), extending the two-strategy result of Daskalakis and Papadimitriou 2007. The approximation guarantee follows from a probabilistic result of more general interest: the distribution of the sum of n independent unit vectors with values ranging over {e1, e2, ..., ek}, where ei is the unit vector along dimension i of the k-dimensional Euclidean space, can be approximated by the distribution of the sum of another set of independent unit vectors whose probabilities of obtaining each value are multiples of 1/z for some integer z, so that the total variation distance between the two distributions is at most eps, where eps is bounded by an inverse polynomial in z and a function of k, with no dependence on n. Our probabilistic result specifies the construction of a surprisingly sparse eps-cover, under the total variation distance, of the set of distributions of sums of independent unit vectors, which is of interest in its own right. Comment: In the 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008
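
    The rounding step the abstract describes can be sketched as follows (an illustration with hypothetical helper names and made-up data, not the paper's construction): each variable's probability vector over {e1, ..., ek} is snapped to the grid of multiples of 1/z, and the two sum-distributions are compared in total variation distance.

    def round_to_grid(probs, z):
        # Snap a probability vector to multiples of 1/z; the rounding error is
        # dumped on the first coordinate (a real construction is more careful).
        rounded = [round(p * z) / z for p in probs]
        rounded[0] += 1.0 - sum(rounded)
        return rounded

    def sum_distribution(prob_vectors):
        # Distribution of a sum of independent unit vectors over {e_1, ..., e_k},
        # as a dict mapping count tuples (#e_1, ..., #e_k) to probabilities.
        k = len(prob_vectors[0])
        dist = {tuple([0] * k): 1.0}
        for probs in prob_vectors:
            new = {}
            for counts, q in dist.items():
                for i, p in enumerate(probs):
                    c = list(counts)
                    c[i] += 1
                    new[tuple(c)] = new.get(tuple(c), 0.0) + q * p
            dist = new
        return dist

    def total_variation(d1, d2):
        keys = set(d1) | set(d2)
        return 0.5 * sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in keys)

    vectors = [[0.13, 0.29, 0.58], [0.41, 0.17, 0.42], [0.25, 0.25, 0.50]]
    discretized = [round_to_grid(v, z=10) for v in vectors]
    print(total_variation(sum_distribution(vectors), sum_distribution(discretized)))

    The paper's result says the distance achievable this way can be driven below eps with z polynomial in 1/eps (for fixed k), independently of how many vectors are being summed.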

    On the Complexity of Dynamic Mechanism Design

    We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer's current valuation and, based on the bid, makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation (arguably the simplest meaningful dynamic mechanism design problem imaginable) is NP-hard. We also prove several positive results, among them a polynomial-time, linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap-talk-like interactions.
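
    The mechanism's structure is concrete enough to evaluate directly. A minimal sketch, with a made-up joint distribution and a hypothetical candidate menu: the seller posts two take-it-or-leave-it prices as a function of the reported first-period valuation, and the buyer picks the report that maximizes her expected utility (so the incentive constraints are checked rather than assumed).

    # Made-up example data: a joint distribution over (v1, v2) valuation pairs.
    joint = {(1, 1): 0.25, (1, 3): 0.25, (3, 1): 0.25, (3, 3): 0.25}
    v1_values = sorted({v1 for v1, _ in joint})

    def cond_v2(v1):
        # Conditional distribution of the future valuation v2 given v1.
        total = sum(p for (a, _), p in joint.items() if a == v1)
        return {b: p / total for (a, b), p in joint.items() if a == v1}

    def expected_revenue(menu):
        # menu maps a reported v1 to a pair of prices (p1 now, p2 later).
        revenue = 0.0
        for v1 in v1_values:
            prob_v1 = sum(p for (a, _), p in joint.items() if a == v1)
            v2_dist = cond_v2(v1)

            def util(report):
                # Buyer knows v1 but only the conditional distribution of v2;
                # she can decline either offer, hence the max(..., 0) terms.
                p1, p2 = menu[report]
                u_now = max(v1 - p1, 0)
                u_later = sum(q * max(v2 - p2, 0) for v2, q in v2_dist.items())
                return u_now + u_later

            report = max(menu, key=util)          # buyer best-responds
            p1, p2 = menu[report]
            rev_now = p1 if v1 >= p1 else 0
            rev_later = sum(q * (p2 if v2 >= p2 else 0) for v2, q in v2_dist.items())
            revenue += prob_v1 * (rev_now + rev_later)
        return revenue

    # One hypothetical candidate menu: prices depend on the reported v1.
    print(expected_revenue({1: (1, 1), 3: (3, 3)}))

    Already in this toy instance the high type prefers the low type's prices, which is the kind of interaction among menu entries that makes searching for the optimum deterministic menu hard.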

    Cycles in adversarial regularized learning

    Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science. A natural question that arises in these settings is how regularized learning algorithms behave when they face one another. We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games. We show that the system's behavior is Poincaré recurrent, implying that almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often. This cycling behavior is robust to the agents' choice of regularization mechanism (each agent could be using a different regularizer) and to positive-affine transformations of the agents' utilities, and it persists in the case of networked competition, i.e., for zero-sum polymatrix games. Comment: 22 pages, 4 figures
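
    The cycling is easy to observe in simulation. A minimal sketch, assuming a standard instantiation the abstract alludes to: both agents run multiplicative weights (follow-the-regularized-leader with an entropic regularizer) in Matching Pennies, a 2x2 zero-sum game; the learning rate and horizon below are arbitrary choices. The recurrence theorem concerns the underlying continuous-time dynamics, so discrete steps trace the cycle only approximately.

    import math

    A = [[1, -1], [-1, 1]]    # row player's payoffs in Matching Pennies; column gets -A
    eta = 0.05                # learning rate (arbitrary choice)

    def softmax(scores):
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    sx = [0.0, 0.0]           # cumulative payoffs of the row player's actions
    sy = [0.0, 0.0]           # cumulative payoffs of the column player's actions
    for t in range(2001):
        x = softmax([eta * s for s in sx])    # multiplicative-weights strategies
        y = softmax([eta * s for s in sy])
        for i in range(2):
            sx[i] += sum(A[i][j] * y[j] for j in range(2))
            sy[i] += sum(-A[j][i] * x[j] for j in range(2))
        if t % 400 == 0:
            print(f"t={t:4d}  x={x[0]:.3f}  y={y[0]:.3f}")
    # The printed probabilities orbit the equilibrium value 0.5 rather than
    # converging to it.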