
    Incentive Compatible Mechanisms without Money

    Mechanism design arises in environments where a set of strategic agents should achieve a common goal, but this goal may be affected by the selfish behavior of the agents. A popular tool to mitigate this impact is incentive compatibility: the design of mechanisms in such a way that strategic agents are motivated to act honestly. Often this can be achieved using payments: the mechanism can implement monetary transactions that give the agents the right incentives to reveal their true preferences. However, there are cases where such payments are not applicable for moral, legal, or practical reasons. In this thesis, we focus on problems where payments are prohibited and propose incentive compatible solutions that respect this constraint. We concentrate on two main problems: impartial selection and truthful budget aggregation. In both problems, strategic agents need to reach a joint decision, but their selfish behavior may lead them to highly sub-optimal outcomes. Our goal is to design mechanisms that give the agents proper incentives to act sincerely. Unfortunately, we can only achieve this by sacrificing the quality of the solution: the solutions we obtain are not as good as those we could get in an environment where the agents were not strategic. Therefore, we compare our mechanisms with ideal, non-strategic outcomes, providing worst-case approximation guarantees.

    The first problem we confront, impartial selection, involves the selection of an influential member of a community of individuals. The community can be described by a directed graph, where the nodes represent the individuals and the directed edges represent nominations. Given this graph, the task is to select the node with the highest number of nominations. However, the community members are selfish agents; their reported nominations cannot be trusted, and this seemingly trivial task becomes challenging. Impartiality, a property requiring that no node can influence its own selection probability, provides the agents with proper incentives to act honestly. Recent progress in the literature has identified impartial selection rules with optimal approximation ratios, i.e., the ratio between the maximum in-degree and the in-degree of the selected node. However, it has been noted that worst-case instances are graphs with small in-degrees. Motivated by this fact, we deviate from this trend and propose the study of additive approximation, the difference between the highest number of nominations and the number of nominations of the selected member, as an alternative measure of the quality of impartial selection mechanisms. The first part of this thesis is concerned with the design of impartial selection mechanisms with small additive approximation guarantees. On the positive side, we design two randomized impartial selection mechanisms with additive approximation guarantees that are sublinear in the community size, for two well-studied models in the literature. We complement these positive results with negative results for various cases.

    We then continue our investigation of impartial selection from another direction. Drawing inspiration from the design of auction and posted-pricing mechanisms with good approximation guarantees for welfare and profit maximization, we study the extent to which prior information on voters' preferences can help in the design of efficient deterministic impartial selection mechanisms with good additive approximation guarantees. First, we define a hierarchy of three models of prior information, which we call the opinion poll, the a priori popularity, and the uniform models. Then, we analyze the performance of a natural mechanism that we call Approval Voting with Default and show that it achieves a sublinear additive guarantee for opinion poll inputs and a polylogarithmic one for a priori popularity inputs. We consider the polylogarithmic bound the leading technical contribution of this part. Finally, we complement this last result by showing that our analysis is close to tight.

    We then turn our attention to the truthful budget aggregation problem. In this problem, strategic voters wish to split a divisible budget among different projects by aggregating their proposals into a single budget division. Unfortunately, it is well known that the straightforward rule that divides the budget proportionally is susceptible to manipulation. While sophisticated incentive compatible mechanisms have been proposed in the literature, their outcomes are often far from fair. To capture the loss of fairness imposed by the need for truthfulness, we propose a quantitative framework that evaluates a budget aggregation mechanism according to its worst-case distance from the proportional allocation. We study this measure for a recently proposed class of incentive compatible mechanisms, called moving phantom mechanisms, and provide approximation guarantees. For two projects, we show that the well-known Uniform Phantom mechanism is optimal among all truthful mechanisms. For three projects, we propose the proportional Piecewise Uniform mechanism, which is optimal among all moving phantom mechanisms. Finally, we provide impossibility results regarding the approximability of moving phantom mechanisms, and of budget aggregation mechanisms in general.
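    To make the budget aggregation setting above concrete, here is a minimal sketch for two projects, assuming each voter reports the fraction of the budget they want for project 1. It contrasts the manipulable proportional (averaging) rule with a median-with-phantoms rule, which is truthful because voter preferences over the split are single-peaked; the phantom placements are arbitrary illustrative values, not the Uniform Phantom or Piecewise Uniform mechanisms studied in the thesis.

```python
def proportional(reports):
    """Proportional rule for two projects: average the reported fractions.
    Manipulable: a voter can pull the average toward its own ideal split."""
    return sum(reports) / len(reports)

def median_with_phantoms(reports, phantoms):
    """Truthful rule for two projects: the median of the n reports together
    with n+1 fixed phantom values (preferences over the split are
    single-peaked, which makes median-style rules strategyproof).
    The phantom values are illustrative only."""
    pool = sorted(list(reports) + list(phantoms))
    return pool[len(pool) // 2]            # middle element of the 2n+1 values

reports = [0.0, 0.2, 0.9]                  # fractions requested for project 1
phantoms = [0.25, 0.5, 0.5, 0.75]          # n+1 = 4 fixed phantoms (example)
print(proportional(reports))               # 0.3667; project 2 gets the rest
print(median_with_phantoms(reports, phantoms))  # truthful share for project 1
```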

    Impartial selection with prior information

    We study the problem of impartial selection, a topic that lies at the intersection of computational social choice and mechanism design. The goal is to select the most popular individual among a set of community members. The input can be modeled as a directed graph, where each node represents an individual and a directed edge indicates the nomination or approval of one community member by another. An impartial mechanism is robust to potential selfish behavior of the individuals and provides appropriate incentives to voters to report their true preferences, by ensuring that the chance of a node becoming the winner does not depend on its outgoing edges. The goal is to design impartial mechanisms that select a node whose in-degree is as close as possible to the highest in-degree. We measure the efficiency of such a mechanism by the difference of these in-degrees, known as its additive approximation. In particular, we study the extent to which prior information on voters' preferences can be useful in the design of efficient deterministic impartial selection mechanisms with good additive approximation guarantees. We consider three models of prior information, which we call the opinion poll, the a priori popularity, and the uniform models. We analyze the performance of a natural selection mechanism that we call Approval Voting with Default (AVD) and show that it achieves an $O(\sqrt{n \ln n})$ additive guarantee for opinion poll inputs and an $O(\ln^2 n)$ guarantee for a priori popularity inputs, where $n$ is the number of individuals. We consider this polylogarithmic bound our main technical contribution. We complement this last result by showing that our analysis is close to tight, giving an $\Omega(\ln n)$ lower bound that holds even in the uniform model, which is the simplest of the three models.
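    As a concrete illustration of the definitions above rather than of the mechanisms studied in the paper, the sketch below runs a simple randomized two-partition rule, a standard example of an impartial mechanism, on a hypothetical nomination graph and reports its additive approximation error, i.e., the gap between the maximum in-degree and the in-degree of the selected node.

```python
import random
from collections import defaultdict

def partition_mechanism(nominations, rng=None):
    """Randomized two-partition rule (a standard impartial mechanism):
    split agents into sets A and B uniformly at random, count only
    nominations cast by members of A for members of B, and select the
    most-nominated agent in B. An agent's outgoing edges never affect
    its own chance of being selected, so the rule is impartial."""
    rng = rng or random.Random(0)
    agents = list(nominations)
    in_B = {a: rng.random() < 0.5 for a in agents}
    score = defaultdict(int)
    for voter, targets in nominations.items():
        if not in_B[voter]:                 # only votes cast from A ...
            for t in targets:
                if in_B.get(t, False):      # ... toward members of B count
                    score[t] += 1
    eligible = [a for a in agents if in_B[a]]
    return max(eligible, key=lambda a: score[a], default=None)

def additive_error(nominations, winner):
    """Additive approximation: maximum in-degree minus the winner's in-degree."""
    indegree = defaultdict(int)
    for targets in nominations.values():
        for t in targets:
            indegree[t] += 1
    return max(indegree.values(), default=0) - indegree.get(winner, 0)

# Hypothetical nomination graph: agent -> set of agents it nominates.
votes = {1: {3}, 2: {3}, 3: {4}, 4: {3}, 5: {3, 4}}
winner = partition_mechanism(votes)
print(winner, additive_error(votes, winner))
```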

    Incentive-Compatible Selection for One or Two Influentials

    Selecting influentials in networks in the presence of strategic manipulation has attracted much attention from researchers and has many practical applications. Here, we aim to select one or two influentials in terms of progeny (their influence power) while preventing agents from manipulating their edges (incentive compatibility). Existing studies have mostly focused on selecting a single influential in this setting. Zhang et al. [2021] studied the problem of selecting one agent and proved an upper bound of 1/(1+ln 2) on how well the optimal selection can be approximated. In this paper, we first design a mechanism that actually reaches this bound. We then move on to choosing two agents and propose a mechanism that achieves an approximation ratio of (3+ln 2)/(4(1+ln 2)) (approx. 0.54). Comment: To appear at IJCAI 202
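    For illustration only: "progeny" here can be read as the number of agents whose chain of nominations leads to a given agent, i.e., the size of its set of direct and indirect supporters in the nomination structure. The sketch below computes that quantity on a hypothetical forest where each agent nominates at most one other agent; it shows the objective being maximized, not the paper's mechanism.

```python
from collections import defaultdict

def progeny(edges):
    """Progeny of each agent: the number of agents that reach it through a
    chain of nominations (direct and indirect supporters).
    `edges` maps each agent to the single agent it nominates, or None.
    Assumes the nomination structure is a forest (no cycles)."""
    children = defaultdict(list)
    for voter, target in edges.items():
        if target is not None:
            children[target].append(voter)

    counts = {}
    def count(node):
        if node in counts:
            return counts[node]
        counts[node] = sum(1 + count(c) for c in children[node])
        return counts[node]

    for agent in edges:
        count(agent)
    return counts

# Hypothetical forest: agents 2 and 3 nominate 1; agent 4 nominates 3.
print(progeny({1: None, 2: 1, 3: 1, 4: 3}))
# agent 1 has progeny 3, agent 3 has progeny 1, agents 2 and 4 have progeny 0
```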

    Deterministic Impartial Selection with Weights

    In the impartial selection problem, a subset of agents of size up to a fixed $k$, from a group of $n$ agents, is to be chosen based on votes cast by the agents themselves. A selection mechanism is impartial if no agent can influence its own chance of being selected by changing its vote. It is $\alpha$-optimal if, for every instance, the number of votes received by the selected subset is at least a fraction $\alpha$ of the votes received by the subset of size $k$ with the highest number of votes. We study deterministic impartial mechanisms in a more general setting with arbitrarily weighted votes and provide the first approximation guarantee, roughly $1/\lceil 2n/k \rceil$. When the number of agents to select is large enough compared to the total number of agents, this improves on the previously best known approximation ratio of $1/k$ for the unweighted setting. We further show that our mechanism can be adapted to the impartial assignment problem, in which multiple sets of up to $k$ agents are to be selected, with a loss of a factor of $1/2$ in the approximation ratio. Comment: To appear in the Proceedings of the 19th Conference on Web and Internet Economics (WINE 2023)
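    As a quick worked comparison (a check of the abstract's claim, not an additional result): the new guarantee $1/\lceil 2n/k \rceil$ beats the earlier $1/k$ exactly when $\lceil 2n/k \rceil < k$, i.e., roughly when $k > \sqrt{2n}$. For instance, with $n = 200$ and $k = 40$, the guarantee is $1/\lceil 400/40 \rceil = 1/10$, better than $1/k = 1/40$; with $k = 10$, it is only $1/40$, worse than $1/10$.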

    Strategic Behavior is Bliss: Iterative Voting Improves Social Welfare

    Full text link
    Recent work in iterative voting has defined the additive dynamic price of anarchy (ADPoA) as the difference in social welfare between the truthful profile and the worst-case equilibrium profile resulting from repeated strategic manipulations. While iterative plurality has been shown to only return alternatives with at most one fewer initial vote than the truthful winner, it is less well understood how agents' welfare changes in equilibrium. To this end, we differentiate agents' utility from their manipulation mechanism and determine iterative plurality's ADPoA in the worst and average cases. We first prove that the worst-case ADPoA is linear in the number of agents. To overcome this negative result, we study the average-case ADPoA and prove that equilibrium winners have a constant-order welfare advantage over the truthful winner in expectation. Our positive results illustrate the prospect for social welfare to increase due to strategic manipulation. Comment: 21 pages, 5 figures, in NeurIPS 202
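    For context on the dynamics being analyzed, the sketch below implements a common reading of iterative plurality best-response dynamics: starting from the truthful profile, one voter at a time changes their plurality vote when doing so makes a candidate they strictly prefer the new winner, until no voter wants to move. The tie-breaking rule and the example profile are hypothetical, and this is an illustration rather than the paper's model.

```python
def plurality_winner(votes):
    """Plurality winner with lexicographic tie-breaking."""
    tally = {}
    for v in votes:
        tally[v] = tally.get(v, 0) + 1
    return min(tally, key=lambda c: (-tally[c], c))

def iterative_plurality(prefs):
    """Best-response dynamics for plurality voting (illustrative sketch).
    prefs[i] ranks candidates from most to least preferred; each voter starts
    by voting truthfully, then voters repeatedly switch their vote whenever
    doing so elects a candidate they strictly prefer to the current winner."""
    votes = [p[0] for p in prefs]                # truthful starting profile
    changed = True
    while changed:
        changed = False
        for i, ranking in enumerate(prefs):
            current = plurality_winner(votes)
            for candidate in ranking:
                if ranking.index(candidate) >= ranking.index(current):
                    break                        # no strictly better option left
                trial = votes[:i] + [candidate] + votes[i + 1:]
                if plurality_winner(trial) == candidate:
                    votes[i] = candidate         # improving move found
                    changed = True
                    break
    return votes, plurality_winner(votes)

# Hypothetical profile over candidates 'a', 'b', 'c'.
prefs = [['a', 'b', 'c'], ['a', 'b', 'c'], ['b', 'c', 'a'],
         ['c', 'b', 'a'], ['c', 'b', 'a']]
print(iterative_plurality(prefs))
```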