
    Mechanisms that play a game, not toss a coin

    Randomized mechanisms can have good normative properties compared to their deterministic counterparts. However, randomized mechanisms are problematic in several ways, such as in their verifiability. We propose here to derandomize such mechanisms by having agents play a game instead of tossing a coin. The game is designed so that an agent's best action is to play randomly, and this play then injects "randomness" into the mechanism. This derandomization retains many of the good normative properties of the original randomized mechanism but gives a mechanism that is deterministic and easy, for instance, to audit. We consider three related methods to derandomize randomized mechanisms in six different domains: voting, facility location, task allocation, school choice, peer selection, and resource allocation. We propose a number of novel derandomized mechanisms for these six domains with good normative properties. Each mechanism has a mixed Nash equilibrium in which agents play a modular arithmetic game with a uniform mixed strategy. In all but one of these mixed Nash equilibria, agents report their preferences over the original problem sincerely. The derandomized methods are thus "quasi-strategyproof". In one domain, we additionally show that a new and desirable normative property emerges as a result of derandomization.
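    A minimal sketch of the modular arithmetic idea described above, assuming one natural realization: each agent submits an integer alongside its preference report, and the mechanism uses the sum of these integers modulo the number of outcomes in place of a coin toss. If at least one agent mixes uniformly, the resulting draw is uniform. The function names and the tie-breaking application are illustrative, not the paper's exact constructions.

```python
# Sketch only: a modular arithmetic game replacing a coin toss.
# If any single agent reports uniformly at random, the sum mod m is uniform
# regardless of what the other agents do, so uniform play is an equilibrium
# strategy and the resulting "draw" is auditable from the submitted reports.

import random

def derandomized_draw(game_reports: list[int], m: int) -> int:
    """Deterministically map the agents' game reports to a value in {0, ..., m-1}."""
    return sum(r % m for r in game_reports) % m

def derandomized_tie_break(candidates: list[str], game_reports: list[int]) -> str:
    """Break a tie among candidates using the agents' game reports instead of a coin."""
    index = derandomized_draw(game_reports, len(candidates))
    return candidates[index]

if __name__ == "__main__":
    # Three agents play the modular game; the first one mixing uniformly
    # already makes the draw uniform over the three candidates.
    reports = [random.randrange(3), 2, 1]
    print(derandomized_tie_break(["a", "b", "c"], reports))
```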

    Deterministic Impartial Selection with Weights

    In the impartial selection problem, a subset of agents of size at most $k$ among a group of $n$ is to be chosen based on votes cast by the agents themselves. A selection mechanism is impartial if no agent can influence its own chance of being selected by changing its vote. It is $\alpha$-optimal if, for every instance, the votes received by the selected subset are at least a fraction $\alpha$ of the votes received by the subset of size $k$ with the highest number of votes. We study deterministic impartial mechanisms in a more general setting with arbitrarily weighted votes and provide the first approximation guarantee, roughly $1/\lceil 2n/k\rceil$. When the number of agents to select is large enough compared to the total number of agents, this yields an improvement on the previously best known approximation ratio of $1/k$ for the unweighted setting. We further show that our mechanism can be adapted to the impartial assignment problem, in which multiple sets of up to $k$ agents are to be selected, with a loss in the approximation ratio of $1/2$. (To appear in the Proceedings of the 19th Conference on Web and Internet Economics, WINE 2023.)
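    The paper's weighted mechanism is not spelled out in the abstract, so the sketch below only illustrates the impartiality constraint itself with a deliberately simple fixed-partition rule (which wastes votes and is far weaker than the roughly $1/\lceil 2n/k\rceil$ guarantee above): candidates come from one fixed half of the agents and are scored only by votes from the other half, so no agent's own vote affects its chance of winning.

```python
# Toy illustration of impartiality with weighted votes (not the paper's mechanism):
# only members of a fixed half A can win, and their scores count only votes cast
# by the other half B. A-members' votes are ignored and B-members can never win,
# so no agent's reported votes influence its own chance of being selected.

def impartial_fixed_partition_winner(weighted_votes: dict[tuple[int, int], float],
                                     n: int) -> int:
    """weighted_votes maps (voter, candidate) pairs to non-negative weights."""
    half_a = set(range(n // 2))          # eligible candidates
    half_b = set(range(n // 2, n))       # their electorate
    score = {a: 0.0 for a in half_a}
    for (voter, candidate), weight in weighted_votes.items():
        if voter in half_b and candidate in half_a:
            score[candidate] += weight
    return max(score, key=score.get)

if __name__ == "__main__":
    votes = {(3, 0): 2.0, (4, 1): 1.5, (5, 0): 0.5, (0, 1): 9.0}  # (0, 1) is ignored
    print(impartial_fixed_partition_winner(votes, n=6))           # -> 0
```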

    PeerNomination: a novel peer selection algorithm to handle strategic and noisy assessments

    In peer selection, a group of agents must choose a subset of themselves as winners of, e.g., peer-reviewed grants or prizes. We take a Condorcet view of this aggregation problem, assuming that there is an objective ground-truth ordering over the agents. We study agents that have a noisy perception of this ground truth and give assessments that, even when truthful, can be inaccurate. Our goal is to select the best set of agents according to the underlying ground truth by looking at the potentially unreliable assessments of the peers. Besides being potentially unreliable, agents may also be self-interested, attempting to influence the outcome of the decision in their favour. Hence, we focus on the problem of impartial (or strategyproof) peer selection: how do we prevent agents from manipulating their reviews while still selecting the most deserving individuals, all in the presence of noisy evaluations? We propose a novel impartial peer selection algorithm, PeerNomination, that aims to fulfil these desiderata. We provide a comprehensive theoretical analysis of the recall of PeerNomination and prove various properties, including impartiality and monotonicity. We also provide empirical results based on computer simulations to show its effectiveness compared to state-of-the-art impartial peer selection algorithms. We then investigate the robustness of PeerNomination to various levels of noise in the reviews. In order to maintain good performance under such conditions, we extend PeerNomination by using weights for reviewers which, informally, capture some notion of the reliability of each reviewer. We show, theoretically, that the new algorithm preserves strategyproofness and, empirically, that the weights help identify noisy reviewers and hence increase selection performance.
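    The abstract does not describe PeerNomination's selection rule in detail; the following is a hypothetical nomination-quota sketch in its spirit rather than the published algorithm. Each reviewer grades only the agents assigned to it, an agent is nominated by a reviewer if it falls in that reviewer's top slots (a quota proportional to $k/n$), and an agent is selected when a majority of its reviewers nominate it; impartiality rests on no agent ever reviewing itself.

```python
# Hypothetical nomination-quota sketch (not necessarily the published rule).
# reviews[r] maps each agent assigned to reviewer r to r's score for it; the
# review assignment is assumed to contain no self-reviews.

import math

def nomination_winners(reviews: dict[int, dict[int, float]], n: int, k: int) -> list[int]:
    """Select agents nominated by at least half of their reviewers."""
    nominated_by: dict[int, int] = {}
    reviewed_by: dict[int, int] = {}
    for reviewer, scores in reviews.items():
        m = len(scores)
        quota = math.ceil(k * m / n)                        # slots this reviewer may fill
        ranked = sorted(scores, key=scores.get, reverse=True)
        for agent in scores:
            reviewed_by[agent] = reviewed_by.get(agent, 0) + 1
        for agent in ranked[:quota]:                        # reviewer's nominations
            nominated_by[agent] = nominated_by.get(agent, 0) + 1
    return [agent for agent, count in reviewed_by.items()
            if nominated_by.get(agent, 0) >= count / 2]
```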

    Impartial selection with prior information

    We study the problem of impartial selection, a topic that lies at the intersection of computational social choice and mechanism design. The goal is to select the most popular individual among a set of community members. The input can be modeled as a directed graph, where each node represents an individual and a directed edge indicates the nomination or approval of one community member by another. An impartial mechanism is robust to potential selfish behavior of the individuals and provides appropriate incentives to voters to report their true preferences by ensuring that the chance of a node becoming a winner does not depend on its outgoing edges. The goal is to design impartial mechanisms that select a node with an in-degree that is as close as possible to the highest in-degree. We measure the efficiency of such a mechanism by the difference of these in-degrees, known as its additive approximation. In particular, we study the extent to which prior information on voters' preferences can be useful in the design of efficient deterministic impartial selection mechanisms with good additive approximation guarantees. We consider three models of prior information, which we call the opinion poll, the a priori popularity, and the uniform model. We analyze the performance of a natural selection mechanism that we call approval voting with default (AVD) and show that it achieves an $O(\sqrt{n\ln n})$ additive guarantee for opinion poll inputs and an $O(\ln^2 n)$ guarantee for a priori popularity inputs, where $n$ is the number of individuals. We consider this polylogarithmic bound our main technical contribution. We complement this last result with an $\Omega(\ln n)$ lower bound, showing that our analysis is close to tight. This lower bound holds in the uniform model, which is the simplest of the three models.
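    The additive approximation measure defined above is easy to state in code: it is the gap between the highest in-degree in the nomination graph and the in-degree of the node the mechanism selects. The edge-list encoding below is just one convenient representation.

```python
# Additive approximation of a selection on a nomination graph, exactly as
# defined in the abstract: max in-degree minus the selected node's in-degree.

def additive_approximation(edges: list[tuple[int, int]], selected: int) -> int:
    """edges are directed (voter, nominee) pairs; smaller return values are better."""
    in_degree: dict[int, int] = {}
    for _, nominee in edges:
        in_degree[nominee] = in_degree.get(nominee, 0) + 1
    best = max(in_degree.values(), default=0)
    return best - in_degree.get(selected, 0)

if __name__ == "__main__":
    g = [(1, 0), (2, 0), (3, 0), (0, 2), (1, 2)]
    print(additive_approximation(g, selected=2))  # node 0 has in-degree 3, node 2 has 2 -> gap 1
```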

    Incentive Compatible Mechanisms without Money

    Mechanism design arises in environments where a set of strategic agents should achieve a common goal, but this goal may be affected by the selfish behavior of the agents. A popular tool for mitigating this impact is incentive compatibility: designing mechanisms in such a way that strategic agents are motivated to act honestly. Often this can be done using payments, with the mechanism implementing monetary transfers that give the agents the right incentives to reveal their true colors. However, there are cases where such payments are not applicable for moral, legal, or practical reasons. In this thesis, we focus on problems where payments are prohibited and propose incentive compatible solutions that respect this constraint. We concentrate on two main problems: impartial selection and truthful budget aggregation. In both problems, strategic agents need to come up with a joint decision, but their selfish behavior may lead them to highly sub-optimal solutions. Our goal is to design mechanisms that provide the agents with proper incentives to act sincerely. Unfortunately, we can only achieve this by sacrificing the quality of the solution, in the sense that the solutions we obtain are not as good as those we could obtain in an environment where the agents were not strategic. We therefore compare our mechanisms with ideal, non-strategic outcomes and provide worst-case approximation guarantees.
    The first problem we confront, impartial selection, involves the selection of an influential member of a community of individuals. This community can be described by a directed graph, where the nodes represent the individuals and the directed edges represent nominations. The task is, given this graph, to select the node with the highest number of nominations. However, the community members are selfish agents; their reported nominations cannot be trusted, and this seemingly trivial task becomes challenging. Impartiality, a property requiring that no node can influence its own selection probability, provides proper incentives for the agents to act honestly. Recent progress in the literature has identified impartial selection rules with optimal approximation ratios, i.e., optimal ratios between the maximum in-degree and the in-degree of the selected node. However, it has been noted that the worst-case instances are graphs with small in-degrees. Motivated by this fact, we deviate from this trend and propose the study of additive approximation, the difference between the highest number of nominations and the number of nominations of the selected member, as an alternative measure of the quality of impartial selection mechanisms. The first part of this thesis is concerned with the design of impartial selection mechanisms with small additive approximation guarantees. On the positive side, we design two randomized impartial selection mechanisms whose additive guarantees are sub-linear in the community size, for two well-studied models in the literature. We complement our positive results with negative results for various cases. We then continue our investigation of the impartial selection problem from another direction. Drawing inspiration from the design of auction and posted-pricing mechanisms with good approximation guarantees for welfare and profit maximization, we study, in an enhanced model, the extent to which prior information on voters' preferences can help in the design of efficient deterministic impartial selection mechanisms with good additive approximation guarantees. First, we define a hierarchy of three models of prior information, which we call the opinion poll, the a priori popularity, and the uniform models. Then, we analyze the performance of a natural mechanism that we call Approval Voting with Default and show that it achieves a sub-linear additive guarantee for opinion poll inputs and a polylogarithmic guarantee for a priori popularity inputs. We consider the polylogarithmic bound the leading technical contribution of this part. Finally, we complement this last result by showing that our analysis is close to tight.
    We then turn our attention to the truthful budget aggregation problem, in which strategic voters wish to split a divisible budget among different projects by aggregating their proposals into a single budget division. Unfortunately, it is well known that the straightforward rule that divides the budget proportionally is susceptible to manipulation. While sophisticated incentive compatible mechanisms have been proposed in the literature, their outcomes are often far from fair. To capture this loss of fairness imposed by the need for truthfulness, we propose a quantitative framework that evaluates a budget aggregation mechanism according to its worst-case distance from the proportional allocation. We study this measure for the recently proposed class of incentive compatible mechanisms called moving phantom mechanisms and provide approximation guarantees. For two projects, we show that the well-known Uniform Phantom mechanism is optimal among all truthful mechanisms. For three projects, we propose the proportional Piecewise Uniform mechanism, which is optimal among all moving phantom mechanisms. Finally, we provide impossibility results regarding the approximability of moving phantom mechanisms and of budget aggregation mechanisms in general.
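    For the two-project case, budget aggregation is effectively one-dimensional, and phantom-based truthful mechanisms amount to taking a median of the reported shares pooled with fixed "phantom" values, which is strategyproof for single-peaked preferences. The sketch below uses equally spaced phantoms as an assumption for illustration; it is not claimed to reproduce the thesis's Uniform Phantom mechanism exactly.

```python
# Sketch of the phantom-median idea for two projects. Each voter reports the
# fraction of the budget it wants on project 1; the mechanism returns the median
# of those reports pooled with fixed phantom values. The equally spaced phantoms
# k/n are an illustrative assumption, not necessarily the Uniform Phantom choice.

import statistics

def phantom_median_two_projects(reports: list[float]) -> tuple[float, float]:
    """reports[i] in [0, 1] is voter i's proposed share for project 1."""
    n = len(reports)
    phantoms = [k / n for k in range(1, n)]            # n - 1 fixed phantoms
    share_project_1 = statistics.median(reports + phantoms)
    return share_project_1, 1.0 - share_project_1

if __name__ == "__main__":
    # Three voters propose 0%, 60%, and 100% of the budget for project 1.
    print(phantom_median_two_projects([0.0, 0.6, 1.0]))  # -> (0.6, 0.4)
```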