    A "Quantal Regret" Method for Structural Econometrics in Repeated Games

    We suggest a general method for inferring players' values from their actions in repeated games. The method extends and improves upon the recent suggestion of Nekipelov et al. (EC 2015) and is based on the assumption that players are more likely to exhibit sequences of actions that have lower regret. We evaluate this "quantal regret" method on two datasets from experiments on repeated games with controlled player values: those of Selten and Chmura (AER 2008) on a variety of two-player 2x2 games, and our own ad-auction experiment (Noti et al., WWW 2014). We find that the quantal regret method is consistently and significantly more precise than either "classic" econometric methods based on Nash equilibria or the "min-regret" method of Nekipelov et al. (EC 2015).
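
    As a concrete illustration of the idea, here is a minimal Python sketch of quantal-regret estimation. It assumes a hypothetical stage-game utility callable utility(value, action, t) and finite grids of candidate values and actions; the Monte Carlo normaliser is my own stand-in for the paper's actual estimation procedure, not a reproduction of it.

```python
import numpy as np

def regret(value, actions, utility, action_grid):
    """Regret of an observed action sequence for a hypothesised private value:
    best fixed action in hindsight minus the realised cumulative utility."""
    T = len(actions)
    realised = sum(utility(value, a_t, t) for t, a_t in enumerate(actions))
    best_fixed = max(sum(utility(value, a, t) for t in range(T))
                     for a in action_grid)
    return best_fixed - realised

def quantal_regret_estimate(actions, utility, value_grid, action_grid,
                            lam=1.0, n_mc=2000, seed=0):
    """Pick the candidate value maximising the quantal-regret likelihood
    P(sequence | v) ~ exp(-lam * regret_v(sequence)). The normaliser Z(v)
    is estimated by Monte Carlo over random action sequences; accounting
    for Z(v) is what distinguishes this from plain regret minimisation."""
    rng = np.random.default_rng(seed)
    T = len(actions)
    best_v, best_ll = None, -np.inf
    for v in value_grid:
        z = np.mean([np.exp(-lam * regret(v, rng.choice(action_grid, T),
                                          utility, action_grid))
                     for _ in range(n_mc)])
        ll = -lam * regret(v, actions, utility, action_grid) - np.log(z + 1e-300)
        if ll > best_ll:
            best_v, best_ll = v, ll
    return best_v
```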

    On Algorithmic Statistics for space-bounded algorithms

    Algorithmic statistics studies explanations of observed data that are good in the algorithmic sense: an explanation should be simple, i.e., have small Kolmogorov complexity, and should capture all the algorithmically discoverable regularities in the data. However, this idea cannot be used in practice because Kolmogorov complexity is not computable. In this paper we develop algorithmic statistics using space-bounded Kolmogorov complexity. We prove an analogue of one of the main results of `classic' algorithmic statistics (about the connection between optimality and randomness deficiencies). The main tool of our proof is the Nisan-Wigderson generator. (Accepted to CSR 2017.)
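
    Since Kolmogorov complexity is not computable, any executable illustration has to substitute a real compressor. The following toy two-part-code sketch uses zlib as a computable stand-in for a resource-bounded description length; the function name and the use of a preset dictionary for the "conditional" part are my own illustration, not the paper's construction.

```python
import zlib

def two_part_description_length(model: bytes, data: bytes) -> int:
    """Length of a two-part code: complexity of the explanation (model)
    plus the length of the data compressed with the model as context.
    zlib compressed size stands in for bounded Kolmogorov complexity."""
    k_model = len(zlib.compress(model, 9))
    # conditional part: compress the data with the model as a preset dictionary
    comp = zlib.compressobj(9, zlib.DEFLATED, 15, 9,
                            zlib.Z_DEFAULT_STRATEGY, zdict=model)
    k_data_given_model = len(comp.compress(data) + comp.flush())
    return k_model + k_data_given_model

# A "good" explanation keeps both parts small: it is simple, yet it captures
# the regularities, so the residual part of the data compresses well.
patterned = b"abcabcabc" * 100
print(two_part_description_length(b"abc", patterned))
```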

    Weak Parity

    We study the query complexity of Weak Parity: the problem of computing the parity of an n-bit input string, where one only has to succeed on a 1/2+eps fraction of input strings, but must do so with high probability on those inputs where one does succeed. It is well known that n randomized queries and n/2 quantum queries are needed to compute parity on all inputs. But surprisingly, we give a randomized algorithm for Weak Parity that makes only O(n/log^0.246(1/eps)) queries, as well as a quantum algorithm that makes only O(n/sqrt(log(1/eps))) queries. We also prove a lower bound of Omega(n/log(1/eps)) in both cases, and, using extremal combinatorics, prove lower bounds of Omega(log n) in the randomized case and Omega(sqrt(log n)) in the quantum case for any eps>0. We show that improving our lower bounds is intimately related to two longstanding open problems about Boolean functions: the Sensitivity Conjecture, and the relationships between query complexity and polynomial degree.
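
    To make the success criterion concrete, here is a small brute-force harness for tiny n: it measures on what fraction of inputs a deterministic query strategy outputs the correct parity. The names are illustrative, and the baseline shown is the trivial one, not the paper's O(n/log^0.246(1/eps)) algorithm.

```python
from itertools import product

def parity(x):
    return sum(x) % 2

def success_fraction(strategy, n):
    """Fraction of all n-bit inputs on which `strategy`, given oracle
    access to the bits, outputs the correct parity."""
    hits = 0
    for x in product((0, 1), repeat=n):
        oracle = lambda i, x=x: x[i]
        if strategy(oracle, n) == parity(x):
            hits += 1
    return hits / 2 ** n

# Trivial baseline: query the first n-1 bits and guess the last bit is 0.
# It is correct exactly when the last bit really is 0, i.e. on half of all
# inputs; Weak Parity asks to beat 1/2 by eps using far fewer queries.
baseline = lambda oracle, n: sum(oracle(i) for i in range(n - 1)) % 2
print(success_fraction(baseline, 6))   # 0.5
```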

    Online Ascending Auctions for Gradually Expiring Items

    In this paper we consider online auction mechanisms for the allocation of M items that are identical to each other except that they have different expiration times; each item must be allocated before it expires. Players arrive at different times and wish to buy one item before their deadline. The main difficulty is that players act "selfishly" and may mis-report their values, deadlines, or arrival times. We begin by showing that the usual notion of truthfulness (where players follow a single dominant strategy) cannot be used in this case, since any deterministic truthful auction cannot obtain better than an M-approximation of the social welfare. Therefore, instead of designing auctions in which players should follow a single strategy, we design two auctions that perform well under a wide class of selfish "semi-myopic" strategies. Every combination of such strategies associates the auction with a different algorithm, giving a family of "semi-myopic" algorithms. We show that any algorithm in this family obtains a 3-approximation, and conclude that our auctions perform well under any choice of such semi-myopic behaviors. We then provide a game-theoretic justification for acting in such a semi-myopic way. We suggest a new notion of "Set-Nash" equilibrium, where we cannot pin-point a single best-response strategy but rather only a set of possible best-response strategies. We show that our auctions have a Set-Nash equilibrium in which all strategies are semi-myopic, hence guaranteeing a 3-approximation. We believe that this notion is of independent interest.
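
    A simplified greedy stand-in for the allocation logic behind such a family, assuming reported (arrival, deadline, value) triples and items identified by their expiration times; this is my own illustration, without the paper's ascending prices or incentive analysis.

```python
import heapq

def greedy_expiring_allocation(item_expiries, players):
    """players: list of (arrival, deadline, value) triples. Items are handled
    in order of expiry; each goes to the highest-value player who has already
    arrived and whose own deadline has not yet passed."""
    item_expiries = sorted(item_expiries)
    players = sorted(enumerate(players), key=lambda p: p[1][0])  # by arrival
    active, allocation, i = [], {}, 0
    for expiry in item_expiries:
        # admit every player who has arrived by this item's expiration time
        while i < len(players) and players[i][1][0] <= expiry:
            pid, (arrival, deadline, value) = players[i]
            heapq.heappush(active, (-value, deadline, pid))
            i += 1
        # award the item to the highest-value player still able to use it
        while active:
            neg_v, deadline, pid = heapq.heappop(active)
            if deadline >= expiry:
                allocation[pid] = expiry
                break
    return allocation    # player id -> expiration time of the item won
```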

    Truthful approximation mechanisms for restricted combinatorial auctions

    When attempting to design a truthful mechanism for a computationally hard problem such as combinatorial auctions, one is faced with the problem that most efficiently computable heuristics cannot be embedded in any truthful mechanism (e.g., VCG-like payment rules will not ensure truthfulness). We develop a set of techniques for constructing efficiently computable truthful mechanisms for combinatorial auctions in the special case where each bidder desires a specific known subset of items and only the valuation is unknown to the mechanism (the single-parameter case). For this case we extend the work of Lehmann, O'Callaghan, and Shoham, who presented greedy heuristics. We show how to use If-Then-Else constructs, perform a partial search, and use the LP relaxation. We apply these techniques to several canonical types of combinatorial auctions, obtaining truthful mechanisms with provable approximation ratios.
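
    A sketch of the greedy rule of Lehmann, O'Callaghan, and Shoham that this line of work builds on, together with the critical-value payment that turns a monotone allocation into a truthful mechanism; the (value, item set) bid format is an assumed interface.

```python
from math import sqrt

def greedy_winners(bids):
    """bids: list of (value, frozenset_of_items). Rank bids by
    value / sqrt(|S|) and accept greedily when disjoint from earlier
    winners; the rule is monotone in each declared value."""
    order = sorted(range(len(bids)),
                   key=lambda i: -bids[i][0] / sqrt(len(bids[i][1])))
    taken, winners = set(), []
    for i in order:
        value, items = bids[i]
        if not (items & taken):
            winners.append(i)
            taken |= items
    return winners

def critical_payment(bids, i):
    """Smallest declared value at which bidder i would still win (computed
    by rerunning the greedy order without i); charging this critical value,
    rather than the bid, is what yields truthfulness."""
    order = sorted((j for j in range(len(bids)) if j != i),
                   key=lambda j: -bids[j][0] / sqrt(len(bids[j][1])))
    taken, (_, items_i) = set(), bids[i]
    for j in order:
        value, items = bids[j]
        if items & taken:
            continue                         # j loses anyway, cannot block i
        if items & items_i:                  # first would-be winner blocking i
            return value * sqrt(len(items_i)) / sqrt(len(items))
        taken |= items
    return 0.0
```

    Monotone allocation plus critical-value payments is the standard recipe for truthfulness in single-parameter settings, which is why the abstract's techniques can focus on designing better monotone algorithms.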

    The Query Complexity of Correlated Equilibria

    We consider the complexity of finding a correlated equilibrium of an n-player game in a model that allows the algorithm to make queries on players' payoffs at pure strategy profiles. Randomized regret-based dynamics are known to yield an approximate correlated equilibrium efficiently, namely, in time that is polynomial in the number of players n. Here we show that both randomization and approximation are necessary: no efficient deterministic algorithm can reach even an approximate correlated equilibrium, and no efficient randomized algorithm can reach an exact correlated equilibrium. The results are obtained by bounding from below the number of payoff queries that are needed.
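
    A compact version of the randomized regret-based dynamics referred to above: a simplified regret-matching loop in the payoff-query model, where payoff_query(player, profile) is the assumed oracle. The empirical distribution of the played profiles approaches an approximate correlated equilibrium.

```python
import numpy as np

def regret_matching(payoff_query, n_players, n_actions, T=10_000, seed=0):
    """Simplified Hart/Mas-Colell regret matching. Each round queries every
    player's payoff at the current profile and at each unilateral deviation,
    then players re-draw actions proportionally to positive cumulative regret."""
    rng = np.random.default_rng(seed)
    regrets = np.zeros((n_players, n_actions))
    profile = rng.integers(n_actions, size=n_players)
    history = []
    for _ in range(T):
        history.append(profile.copy())
        for p in range(n_players):
            u = payoff_query(p, profile)
            for a in range(n_actions):
                deviation = profile.copy()
                deviation[p] = a
                regrets[p, a] += payoff_query(p, deviation) - u
        nxt = profile.copy()
        for p in range(n_players):
            pos = np.maximum(regrets[p], 0.0)
            if pos.sum() > 0:
                nxt[p] = rng.choice(n_actions, p=pos / pos.sum())
        profile = nxt
    return history    # its empirical distribution is the approximate CE
```

    Each round costs every player about n_actions payoff queries, so the total query count is polynomial in the number of players; the paper's lower bounds show that neither the randomization nor the approximation in such dynamics can be removed.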

    Communication Complexity of Cake Cutting

    We study classic cake-cutting problems, but in discrete models rather than with infinite-precision real values, focusing specifically on their communication complexity. Using general discrete simulations of classical infinite-precision protocols (Robertson-Webb and moving-knife), we roughly partition the various fair-allocation problems into three classes: "easy" (a constant number of rounds of logarithmically many bits), "medium" (poly-logarithmic total communication), and "hard". Our main technical result concerns two of the "medium" problems (perfect allocation for 2 players and equitable allocation for any number of players), which we prove are not in the "easy" class. Our main open problem is to separate the "hard" from the "medium" classes. (Added an efficient communication protocol for the monotone crossing problem.)
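
    A toy discrete cut-and-choose protocol, illustrating the flavor of the "easy" class: Alice announces a k-bit halving point and Bob answers with a single bit, for k + 1 bits of communication in total. It assumes each player's valuation is given as a monotone callable value(x) for the prefix [0, x]; names and interface are illustrative.

```python
def cut_and_choose(alice_value, bob_value, k=16):
    """alice_value, bob_value: monotone callables mapping x in [0, 1] to the
    player's value of the piece [0, x], with value(0) = 0 and value(1) = 1.
    Returns the allocation and the number of bits communicated."""
    # Alice: binary search on a 2^k grid for a point where her value is 1/2.
    lo, hi = 0, 1 << k
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if alice_value(mid / (1 << k)) < 0.5:
            lo = mid
        else:
            hi = mid
    x = hi / (1 << k)                       # Alice's k-bit message
    bob_takes_left = bob_value(x) >= 0.5    # Bob's 1-bit reply
    left, right = ("Bob", "Alice") if bob_takes_left else ("Alice", "Bob")
    return {left: (0.0, x), right: (x, 1.0)}, k + 1

# Uniform valuations: the cut lands at 1/2 and each player gets half.
print(cut_and_choose(lambda x: x, lambda x: x))
```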