Matching under Preferences
Matching theory studies how agents and/or objects from different sets can be matched with each other while taking agents' preferences into account. The theory originated in 1962 with a celebrated paper by David Gale and Lloyd Shapley (1962), in which they proposed the Stable Marriage Algorithm as a solution to the problem of two-sided matching. Since then, this theory has been successfully applied to many real-world problems such as matching students to universities, doctors to hospitals, kidney transplant patients to donors, and tenants to houses. This chapter will focus on algorithmic as well as strategic issues of matching theory.
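The deferred-acceptance procedure mentioned above can be sketched in a few lines. The following is a minimal illustration (not the chapter's own code): proposers make offers in order of their preference lists, and each receiver tentatively holds the best offer seen so far; the function and variable names are our own.

```python
# Minimal sketch of Gale-Shapley deferred acceptance (proposer-optimal
# stable matching). Illustrative only; names and data are assumptions.

def gale_shapley(proposer_prefs, receiver_prefs):
    """Return a stable matching as a dict {proposer: receiver}.

    proposer_prefs: each proposer's ordered list of receivers (best first).
    receiver_prefs: each receiver's ordered list of proposers (best first).
    """
    # rank[r][p] = position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver to try
    engaged_to = {}                               # receiver -> held proposer
    free = list(proposer_prefs)

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged_to:
            engaged_to[r] = p                     # r holds its first offer
        elif rank[r][p] < rank[r][engaged_to[r]]:
            free.append(engaged_to[r])            # r trades up to p
            engaged_to[r] = p
        else:
            free.append(p)                        # r rejects p
    return {p: r for r, p in engaged_to.items()}
```

For example, with proposers `a`, `b` and receivers `X`, `Y` where both proposers rank `X` first and both receivers rank `a` first, `b` is eventually bumped from `X` and ends up with `Y`; the resulting matching is stable because no proposer-receiver pair prefers each other to their assigned partners.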
Many large-scale centralized allocation processes can be modelled by matching problems where agents have preferences over one another. For example, in China, over 10 million students apply for admission to higher education annually through a centralized process. The inputs to the matching scheme include the students\u2019 preferences over universities, and vice versa, and the capacities of each university. The task is to construct a matching that is in some sense optimal with respect to these inputs.
Economists have long understood the problems with decentralized matching markets, which can suffer from such undesirable properties as unravelling, congestion and exploding offers (see Roth and Xing, 1994, for details). For centralized markets, constructing allocations by hand for large problem instances is clearly infeasible. Thus centralized mechanisms are required for automating the allocation process.
Given the large number of agents typically involved, the computational efficiency of a mechanism's underlying algorithm is of paramount importance. Thus we seek polynomial-time algorithms for the underlying matching problems. Equally important are considerations of strategy: an agent (or a coalition of agents) may manipulate their input to the matching scheme (e.g., by misrepresenting their true preferences or underreporting their capacity) in order to try to improve their outcome. A desirable property of a mechanism is strategyproofness, which ensures that it is in the best interests of an agent to behave truthfully.
Manipulating Districts to Win Elections: Fine-Grained Complexity
Gerrymandering is a practice of manipulating district boundaries and
locations in order to achieve a political advantage for a particular party.
Lewenberg, Lev, and Rosenschein [AAMAS 2017] initiated the algorithmic study of
a geographically-based manipulation problem, where voters must vote at the
ballot box closest to them. In this variant of gerrymandering, for a given set
of m possible locations of ballot boxes and known political preferences of n
voters, the task is to identify locations for k boxes out of the m possible
locations to guarantee victory of a certain party in at least l districts.
Here the integers k, l, and m are selected parameters.
It is known that the problem is NP-complete already for 4 political parties,
and prior to our work only heuristic algorithms for this problem were
developed. We initiate the rigorous study of the gerrymandering problem from
the perspectives of parameterized and fine-grained complexity and provide
asymptotically matching lower and upper bounds on its computational complexity.
We prove that the problem is W[1]-hard parameterized by k + n and that it does
not admit an f(k, n) * m^{o(sqrt(k))}-time algorithm for any function f of k
and n only, unless the Exponential Time Hypothesis (ETH) fails. Our lower
bounds hold already for 2 parties. On the other hand, we give an algorithm
that solves the problem for a constant number of parties in time
(m + n)^{O(sqrt(k))}.
Comment: Presented at AAAI-2
Structural Control in Weighted Voting Games
Inspired by the study of control scenarios in elections and complementing manipulation and bribery settings in cooperative games with transferable utility, we introduce the notion of structural control in weighted voting games. We model two types of influence, adding players to and deleting players from a game, with goals such as increasing a given player's Shapley-Shubik or probabilistic Penrose-Banzhaf index in relation to the original game. We study the computational complexity of the problems of whether such structural changes can achieve the desired effect.
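To make the target quantity of this abstract concrete, the probabilistic Penrose-Banzhaf index of a player in a weighted voting game is the fraction of coalitions of the other players for which that player is pivotal (turns a losing coalition into a winning one). A brute-force sketch, feasible only for small games, might look as follows; the function name and example game are our own.

```python
# Brute-force Penrose-Banzhaf index for a weighted voting game.
# Illustrative sketch only: exponential in the number of players.
from itertools import combinations

def banzhaf(weights, quota):
    """Probabilistic Banzhaf index of each player.

    A player i is pivotal for a coalition S (not containing i) when
    w(S) < quota <= w(S) + w_i. The index is the fraction of the
    2^(n-1) coalitions of the other players for which i is pivotal.
    """
    n = len(weights)
    counts = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                w = sum(weights[j] for j in S)
                if w < quota <= w + weights[i]:
                    counts[i] += 1
    return [c / 2 ** (n - 1) for c in counts]
```

For instance, in the game with weights (2, 1, 1) and quota 3, the heavy player is pivotal for three of the four coalitions of the other two players, while each light player is pivotal only alongside the heavy one. Adding or deleting a player, as in the structural control scenarios above, can change these indices, and the abstract's question is how hard it is to decide whether a desired change is achievable.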