
    The Pareto Frontier for Random Mechanisms

    We study the trade-offs between strategyproofness and other desiderata, such as efficiency or fairness, that often arise in the design of random ordinal mechanisms. We use approximate strategyproofness to define manipulability, a measure that quantifies the incentive properties of non-strategyproof mechanisms, and we introduce the deficit, a measure that quantifies the performance of mechanisms with respect to another desideratum. When this desideratum is incompatible with strategyproofness, mechanisms that trade off manipulability and deficit optimally form the Pareto frontier. Our main contribution is a structural characterization of this Pareto frontier, and we present algorithms that exploit this structure to compute it. To illustrate its shape, we apply our results to two different desiderata, namely Plurality and Veto scoring, in settings with 3 alternatives and up to 18 agents. (Working paper.)
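    To make the two measures concrete, here is a small Python sketch (ours, purely illustrative) that evaluates a strategyproof baseline, random dictatorship, on all profiles with 3 alternatives and 3 agents: manipulability is the largest expected-utility gain from a misreport under assumed cardinal utilities consistent with the ordinal ranks, and the deficit is the worst-case shortfall in expected Plurality score. The utility values (1, 0.5, 0) and the mechanism are hypothetical choices, not taken from the paper.

        from itertools import permutations, product

        ALTS = ("a", "b", "c")
        PROFILES = list(product(list(permutations(ALTS)), repeat=3))  # 3 agents

        def random_dictatorship(profile):
            # Strategyproof baseline: return a uniformly random agent's top choice.
            lottery = {x: 0.0 for x in ALTS}
            for pref in profile:
                lottery[pref[0]] += 1.0 / len(profile)
            return lottery

        def expected_utility(lottery, pref, utils=(1.0, 0.5, 0.0)):
            # Assumed cardinal utilities consistent with the ordinal ranking.
            return sum(lottery[x] * utils[pref.index(x)] for x in ALTS)

        def manipulability(mech):
            # Largest expected-utility gain any agent can obtain by misreporting.
            worst = 0.0
            for profile in PROFILES:
                for i, truth in enumerate(profile):
                    honest = expected_utility(mech(profile), truth)
                    for lie in permutations(ALTS):
                        deviated = profile[:i] + (lie,) + profile[i + 1:]
                        worst = max(worst,
                                    expected_utility(mech(deviated), truth) - honest)
            return worst

        def plurality_deficit(mech):
            # Worst-case gap between optimal and achieved expected Plurality score.
            worst = 0.0
            for profile in PROFILES:
                score = {x: sum(1 for p in profile if p[0] == x) for x in ALTS}
                achieved = sum(mech(profile)[x] * score[x] for x in ALTS)
                worst = max(worst, max(score.values()) - achieved)
            return worst

        print(manipulability(random_dictatorship))     # 0.0: strategyproof
        print(plurality_deficit(random_dictatorship))  # > 0: Plurality incompatible

    A mechanism on the Pareto frontier is then one for which no other mechanism achieves both lower manipulability and lower deficit.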

    Privacy and Truthful Equilibrium Selection for Aggregative Games

    We study a very general class of games, multi-dimensional aggregative games, which in particular generalize both anonymous games and weighted congestion games. For any such game that is also large, we solve the equilibrium selection problem in a strong sense. In particular, we give an efficient weak mediator: a mechanism which has only the power to listen to reported types and provide non-binding suggested actions, such that (a) it is an asymptotic Nash equilibrium for every player to truthfully report their type to the mediator and then follow its suggested action; and (b) when players do so, they end up coordinating on a particular asymptotic pure-strategy Nash equilibrium of the induced complete-information game. In fact, truthful reporting is an ex-post Nash equilibrium of the mediated game, so our solution applies even in settings of incomplete information, and even when player types are arbitrary or worst-case (i.e., not drawn from a common prior). We achieve this by giving an efficient differentially private algorithm for computing a Nash equilibrium in such games. The rates of convergence to equilibrium in all of our results are inverse polynomial in the number of players n. We also apply our main results to a multi-dimensional market game. Our results can be viewed as giving, for a rich class of games, a more robust version of the Revelation Principle, in that we work with weaker informational assumptions (no common prior) yet provide a stronger solution concept (ex-post Nash versus Bayes Nash equilibrium). In comparison to previous work, our main conceptual contribution is showing that weak mediators are a game-theoretic object that exists in a wide variety of games; previously, they were only known to exist in traffic routing games.
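    The following toy sketch (ours, not the paper's algorithm) conveys the central idea in miniature: players best-respond to a noisily published aggregate rather than to individual reports, so no single report noticeably changes what anyone else sees, which is what drives both the differential privacy and the truthfulness guarantees in large games. The cost function, noise scale, and update schedule are illustrative assumptions.

        import random

        random.seed(0)
        N = 200         # players; the paper's guarantees are for large games
        ROUNDS = 2000
        EPS = 1.0       # hypothetical privacy parameter for the noisy aggregate

        def cost(action, agg_frac):
            # Congestion-style cost: choosing 1 gets costlier as more players
            # choose 1; choosing 0 has a flat outside-option cost.
            return agg_frac if action == 1 else 0.4

        def noisy_fraction(actions):
            # The "mediator" releases only a Laplace-perturbed aggregate, so a
            # single player's report barely moves the published statistic.
            noise = random.expovariate(EPS * N) - random.expovariate(EPS * N)
            return min(1.0, max(0.0, sum(actions) / N + noise))

        actions = [random.randint(0, 1) for _ in range(N)]
        for _ in range(ROUNDS):
            agg = noisy_fraction(actions)
            i = random.randrange(N)                        # one player updates
            actions[i] = min((0, 1), key=lambda a: cost(a, agg))

        print("fraction choosing 1:", sum(actions) / N)    # settles near 0.4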

    On the Efficiency of the Walrasian Mechanism

    Central results in economics guarantee the existence of efficient equilibria for various classes of markets. An underlying assumption in early work is that agents are price-takers, i.e., agents honestly report their true demand in response to prices. A line of research in economics, initiated by Hurwicz (1972), is devoted to understanding how such markets perform when agents are strategic about their demands. This is captured by the Walrasian mechanism, which proceeds by collecting reported demands, finding clearing prices in the reported market via an ascending-price tâtonnement procedure, and returning the resulting allocation. Similar mechanisms are used, for example, in the daily opening of the New York Stock Exchange and the call market for copper and gold in London. In practice, it is commonly observed that agents in such markets reduce their demand, leading to behaviors resembling bargaining and to inefficient outcomes. We ask how inefficient the equilibria can be. Our main result is that the welfare of every pure Nash equilibrium of the Walrasian mechanism is at least one quarter of the optimal welfare, when players have gross substitute valuations and do not overbid. Previous analyses of the Walrasian mechanism have resorted to large-market assumptions to show convergence to efficiency in the limit. Our result shows that approximate efficiency is guaranteed regardless of the size of the market.
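    To make the mechanism concrete, here is a minimal ascending-price tâtonnement sketch for a toy unit-demand market (a special case of gross substitutes). The reported valuations, the unit price increment, and the tie-breaking are hypothetical choices of ours: prices rise on over-demanded items until the reported demands clear.

        # Hypothetical reported unit-demand valuations (gross substitutes).
        values = {"alice": {"x": 8, "y": 5}, "bob": {"x": 9, "y": 4}}
        prices = {"x": 0, "y": 0}

        def demand(vals, prices):
            # Reported demand: the utility-maximizing item, or None if every
            # item yields negative utility at current prices.
            best = max(vals, key=lambda g: vals[g] - prices[g])
            return best if vals[best] - prices[best] >= 0 else None

        while True:
            wants = {g: [b for b, v in values.items() if demand(v, prices) == g]
                     for g in prices}
            over = [g for g, buyers in wants.items() if len(buyers) > 1]
            if not over:
                break
            for g in over:   # ascending tatonnement: raise over-demanded prices
                prices[g] += 1

        print(prices)  # {'x': 4, 'y': 0}
        print(wants)   # clearing allocation: {'x': ['bob'], 'y': ['alice']}

    Strategic agents can shade the demands they report to such a procedure, which is exactly the behavior whose welfare loss the result above bounds.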

    The Core of the Participatory Budgeting Problem

    In participatory budgeting, communities collectively decide on the allocation of public tax dollars for local public projects. In this work, we consider the question of fairly aggregating the preferences of community members to determine an allocation of funds to projects. This problem is different from standard fair resource allocation because of public goods: the allocated goods benefit all users simultaneously. Fairness is crucial in participatory decision making, since generating equitable outcomes is an important goal of these processes. We argue that the classic game-theoretic notion of the core captures fairness in this setting. To compute the core, we first develop a novel characterization of a public goods market equilibrium called the Lindahl equilibrium, which is always a core solution. We then provide the first (to our knowledge) polynomial-time algorithm for computing such an equilibrium for a broad set of utility functions; our algorithm also generalizes (in a non-trivial way) the well-known concept of proportional fairness. We use our theoretical insights to perform experiments on real participatory budgeting voting data. We empirically show that the core can be efficiently computed for utility functions that naturally model our practical setting, and examine the relation of the core to the familiar welfare objective. Finally, we address concerns of incentives and mechanism design by developing a randomized, approximately dominant-strategy truthful mechanism building on the exponential mechanism from differential privacy.
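    For intuition about the proportional-fairness connection, the sketch below directly maximizes the Nash welfare objective, the sum over voters of log(utility), on a small hypothetical instance with divisible funding and linear utilities, using plain exponentiated-gradient ascent. The paper's algorithm and the utility classes it covers are more general; the instance and step size here are our own.

        import math

        # Hypothetical instance: 3 voters, 4 projects, budget normalized to 1.
        # w[i][j] = utility voter i derives per dollar of funding for project j.
        w = [
            [1.0, 0.0, 0.2, 0.0],
            [0.0, 1.0, 0.2, 0.0],
            [0.0, 0.0, 0.2, 1.0],
        ]
        BUDGET = 1.0
        x = [BUDGET / 4] * 4            # start from an even split

        for _ in range(5000):           # exponentiated gradient on sum_i log u_i
            utils = [sum(wi[j] * x[j] for j in range(4)) for wi in w]
            grad = [sum(w[i][j] / utils[i] for i in range(3)) for j in range(4)]
            x = [xj * math.exp(0.05 * g) for xj, g in zip(x, grad)]
            total = sum(x)
            x = [BUDGET * xj / total for xj in x]   # stay on the budget simplex

        print([round(v, 3) for v in x])  # ~[0.333, 0.333, 0.0, 0.333]

    Each voter ends up with an equal share of the budget spent on a project they value, the flavor of outcome the core demands: no coalition could redirect its proportional share of the budget and make all of its members better off.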

    PeerNomination: a novel peer selection algorithm to handle strategic and noisy assessments

    In peer selection, a group of agents must choose a subset of themselves as winners of, e.g., peer-reviewed grants or prizes. We take a Condorcet view of this aggregation problem, assuming that there is an objective ground-truth ordering over the agents. We study agents that have a noisy perception of this ground truth and give assessments that, even when truthful, can be inaccurate. Our goal is to select the best set of agents according to the underlying ground truth by looking at the potentially unreliable assessments of the peers. Besides being potentially unreliable, agents may also be self-interested, attempting to influence the outcome of the decision in their favour. Hence, we are focused on tackling the problem of impartial (or strategyproof) peer selection: how do we prevent agents from manipulating their reviews while still selecting the most deserving individuals, all in the presence of noisy evaluations? We propose a novel impartial peer selection algorithm, PeerNomination, that aims to fulfil these desiderata. We provide a comprehensive theoretical analysis of the recall of PeerNomination and prove various properties, including impartiality and monotonicity. We also provide empirical results based on computer simulations to show its effectiveness compared to state-of-the-art impartial peer selection algorithms. We then investigate the robustness of PeerNomination to various levels of noise in the reviews. In order to maintain good performance under such conditions, we extend PeerNomination with weights for reviewers which, informally, capture some notion of the reviewer's reliability. We show, theoretically, that the new algorithm preserves strategyproofness and, empirically, that the weights help identify noisy reviewers and hence increase selection performance.
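    As a rough structural illustration (the bundle assignment, noise model, nomination quota, and selection threshold below are simplifications of ours, not the exact procedure from the paper), the sketch shows the shape of a nomination-based impartial rule: each agent reviews a bundle that never contains itself, each reviewer nominates roughly a k*m/n share of its bundle, and an agent is selected when at least half of its reviewers nominate it, so an agent's own report can never influence its own selection.

        import random

        random.seed(1)
        N, M, K = 12, 4, 3    # agents, reviews per agent, target winners

        quality = [random.random() for _ in range(N)]    # hidden ground truth

        # Cyclic assignment: agent i reviews the next M agents, never itself,
        # so every agent is also reviewed by exactly M others.
        bundles = {i: [(i + s) % N for s in range(1, M + 1)] for i in range(N)}

        def noisy_score(truth):
            return truth + random.gauss(0, 0.1)          # imperfect assessment

        # Each reviewer nominates the top ~K*M/N reviewees in its bundle.
        quota = round(K * M / N)
        nominations = {j: 0 for j in range(N)}
        for i, bundle in bundles.items():
            ranked = sorted(bundle, key=lambda j: noisy_score(quality[j]),
                            reverse=True)
            for j in ranked[:quota]:
                nominations[j] += 1

        # Select agents nominated by at least half of their M reviewers. An
        # agent's own report never touches its own count: impartiality.
        selected = [j for j in range(N) if nominations[j] >= M / 2]
        print(sorted(selected))
        print(sorted(range(N), key=lambda j: -quality[j])[:K])  # true top K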

    Learning and Robustness With Applications To Mechanism Design

    The design of economic mechanisms, especially auctions, is an increasingly important part of the modern economy. A particularly important property for a mechanism is strategyproofness: the mechanism must be robust to strategic manipulation, so that participants have no incentive to lie. Yet in the important case where the mechanism designer's goal is to maximize their own revenue, the design of optimal strategyproof mechanisms has proved immensely difficult, with very little progress after decades of research. Recently, to escape this impasse, a number of works have parameterized auction mechanisms as deep neural networks and used gradient descent to learn approximately optimal and approximately strategyproof mechanisms. We present several improvements on these techniques. When an auction mechanism is represented as a neural network mapping bids to outcomes, strategyproofness can be thought of as a type of adversarial robustness. Making this connection explicit, we design a modified architecture for learning auctions which is amenable to integer-programming-based certification techniques from the adversarial robustness literature. Existing baselines are empirically strategyproof, but with no way to be certain how strong that guarantee really is; by contrast, we are able to provide perfectly tight bounds on the degree to which strategyproofness is violated at any given point. Existing neural networks for auctions learn to maximize revenue subject to strategyproofness, yet in many auctions fairness is also an important concern, in particular fairness with respect to the items in the auction, which may represent, for instance, ad impressions for different protected demographic groups. With our new architecture, ProportionNet, we impose fairness constraints in addition to the strategyproofness constraints, and find approximately fair, approximately optimal mechanisms which outperform baselines. With PreferenceNet, we extend this approach to notions of fairness that are learned from possibly vague human preferences. Existing network architectures can represent additive and unit-demand auctions, but are unable to impose more complex exactly-k constraints on the allocations made to the bidders. By using the Sinkhorn algorithm to add differentiable matching constraints, we produce a network which can represent valid allocations in such settings. Finally, we present a new auction architecture which is a differentiable version of affine maximizer auctions, modified to offer lotteries in order to potentially increase revenue. This architecture is always perfectly strategyproof (avoiding the Lagrangian-based constrained optimization of RegretNet); to achieve this, however, we must accept that we cannot in general represent the optimal auction.
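    Of the components described above, the Sinkhorn step is the easiest to show in isolation: repeated row and column normalization turns arbitrary network scores into a nearly doubly stochastic matrix, i.e., a fractional matching in which each bidder receives one unit and each item is allocated once (the k = 1 case of an exactly-k constraint). This minimal NumPy sketch is ours; the architecture in the text embeds such a step differentiably inside the allocation network.

        import numpy as np

        rng = np.random.default_rng(0)
        logits = rng.normal(size=(4, 4))    # raw network scores: bidders x items

        def sinkhorn(logits, iters=200):
            # Alternate row/column normalization; the iteration converges to a
            # doubly stochastic matrix and is differentiable end to end.
            P = np.exp(logits)
            for _ in range(iters):
                P /= P.sum(axis=1, keepdims=True)  # each bidder's row sums to 1
                P /= P.sum(axis=0, keepdims=True)  # each item's column sums to 1
            return P

        P = sinkhorn(logits)
        print(P.round(3))
        print(P.sum(axis=0).round(3), P.sum(axis=1).round(3))  # all close to 1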