
    Improved Metric Distortion for Deterministic Social Choice Rules

    In this paper, we study the metric distortion of deterministic social choice rules that choose a winning candidate from a set of candidates based on voter preferences. Voters and candidates are located in an underlying metric space. A voter's cost equals her distance to the winning candidate. Ordinal social choice rules only have access to the ordinal preferences of the voters, which are assumed to be consistent with the metric distances. Our goal is to design an ordinal social choice rule with minimum distortion, which is the worst-case ratio, over all consistent metrics, between the social cost of the rule and that of the optimal omniscient rule with knowledge of the underlying metric space. The distortion of the best deterministic social choice rule was known to be between 3 and 5. It had been conjectured that any rule that only looks at the weighted tournament graph on the candidates cannot have distortion better than 5. In our paper, we disprove this conjecture by presenting a weighted tournament rule with distortion of 4.236. We design this rule by generalizing the classic notion of uncovered sets, and further show that this class of rules cannot have distortion better than 4.236. We then propose a new voting rule, via an alternative generalization of uncovered sets. We show that if a candidate satisfying the criterion of this voting rule exists, then choosing such a candidate yields a distortion bound of 3, matching the lower bound. We present a combinatorial conjecture that implies distortion of 3, and verify it for small numbers of candidates and voters by computer experiments. Using our framework, we also show that selecting any candidate guarantees distortion of at most 3 when the weighted tournament graph is cyclically symmetric. Comment: EC 201
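
    The rule described above builds on the classic notion of an uncovered set in the pairwise-majority (tournament) graph: a candidate x is covered by y if y beats x head-to-head and also beats every candidate that x beats, and the uncovered set is the set of candidates covered by no one. The Python sketch below computes pairwise margins and the classic uncovered set for a small ordinal profile; it illustrates only the standard notion, not the paper's generalizations, and the function names are illustrative. The margins it computes are exactly the data a weighted tournament rule has access to.

        def pairwise_margins(profile, candidates):
            """margins[x][y] = (#voters preferring x to y) - (#voters preferring y to x)."""
            margins = {x: {y: 0 for y in candidates} for x in candidates}
            for ranking in profile:                      # ranking: most to least preferred
                for i, x in enumerate(ranking):
                    for y in ranking[i + 1:]:
                        margins[x][y] += 1
                        margins[y][x] -= 1
            return margins

        def uncovered_set(profile, candidates):
            """Candidates not covered by anyone: y covers x if y beats x and
            every candidate beaten by x is also beaten by y."""
            m = pairwise_margins(profile, candidates)
            beats = {x: {y for y in candidates if m[x][y] > 0} for x in candidates}
            def covers(y, x):
                return m[y][x] > 0 and beats[x] <= beats[y]
            return [x for x in candidates
                    if not any(covers(y, x) for y in candidates if y != x)]

        # Toy Condorcet cycle: a beats b, b beats c, c beats a, so no one is covered.
        profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
        print(uncovered_set(profile, ["a", "b", "c"]))   # -> ['a', 'b', 'c']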

    Making Decisions with Incomplete and Inaccurate Information

    From assigning students to public schools to arriving at divorce settlements, there are many settings where preferences expressed by a set of stakeholders are used to make decisions that affect them. Owing to its numerous applications, and thanks to the range of questions involved, such settings have received considerable attention in fields ranging from philosophy to political science, and particularly from economics and, more recently, computer science. Although there exists a significant body of literature studying such settings, much of the work in this space makes the assumption that stakeholders provide complete and accurate preference information to the decision-making procedure. However, due to, say, the high cognitive burden involved or privacy concerns, this may not always be feasible. The goal of this thesis is to explicitly address these limitations. We do so by building on previous work on decision-making with incomplete information, and by introducing solution concepts and notions that support the design of algorithms and mechanisms that can handle incomplete and inaccurate information in different settings. We present our results in two parts.

    In Part I we look at decision-making in the presence of incomplete information. We focus on two broad themes, both from the perspective of an algorithm or mechanism designer. Informally, the first theme studies the following question: given incomplete preferences, how does one design algorithms that are 'robust', i.e., ones that produce solutions that are "good" with respect to the underlying complete preferences? We look at this question in the context of two well-studied problems, namely, i) (a version of) the two-sided matching problem and ii) (a version of) the facility location problem, and show how one can design approximately robust algorithms in such settings. The second theme considers the following question: given incomplete preferences, how can one ask the agents for a little more information in order to aid the design of 'robust' algorithms? We study this question in the context of the one-sided matching problem and show how even a very small amount of extra information can be used to obtain much better outcomes overall.

    In Part II we turn our attention to decision-making in the presence of inaccurate information and look at the following question: how can one design 'stable' algorithms, i.e., ones that do not produce vastly different outcomes as long as there are only small inaccuracies in a stakeholder's report of their preferences? We study this in the context of fair allocation of indivisible goods among two agents and show how, in contrast to popular fair allocation algorithms, there are alternative algorithms that are both fair and approximately stable.
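
    The stability notion in Part II can be made concrete with a small experiment: run an allocation rule on the true valuations and on a slightly perturbed report, then measure how much the resulting allocation changes. The Python sketch below does this for a deliberately simple, hypothetical greedy rule for two agents; it only illustrates how one might quantify instability under small reporting errors, and is not one of the algorithms studied in the thesis.

        import random

        def greedy_allocation(valuations):
            """Hypothetical rule: each good goes to whichever of the two agents
            reports the higher value for it. Returns (bundle_0, bundle_1)."""
            bundles = (set(), set())
            for g, (v0, v1) in enumerate(zip(*valuations)):
                bundles[0 if v0 >= v1 else 1].add(g)
            return bundles

        def allocation_distance(a, b):
            """Number of goods whose owner differs between allocations a and b."""
            return len(a[0] ^ b[0])

        random.seed(0)
        num_goods = 8
        true_vals = [[random.random() for _ in range(num_goods)] for _ in range(2)]
        # Agent 0's report is slightly inaccurate.
        noisy_vals = [[v + random.uniform(-0.01, 0.01) for v in true_vals[0]],
                      true_vals[1]]

        exact = greedy_allocation(true_vals)
        perturbed = greedy_allocation(noisy_vals)
        print("goods that change hands:", allocation_distance(exact, perturbed))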