Cake Cutting Algorithms for Piecewise Constant and Piecewise Uniform Valuations
Cake cutting is one of the most fundamental settings in fair division and
mechanism design without money. In this paper, we consider different levels of
three fundamental goals in cake cutting: fairness, Pareto optimality, and
strategyproofness. In particular, we present robust versions of envy-freeness
and proportionality that are not only stronger than their standard
counterparts but also have weaker information requirements. We then focus on
cake cutting with piecewise constant valuations and present three desirable
algorithms: CCEA (Controlled Cake Eating Algorithm), MEA (Market Equilibrium
Algorithm) and CSD (Constrained Serial Dictatorship). CCEA is polynomial-time,
robust envy-free, and non-wasteful. It relies on parametric network flows and
recent generalizations of the probabilistic serial algorithm. For the subdomain
of piecewise uniform valuations, we show that it is also group-strategyproof.
Then, we show that there exists an algorithm (MEA) that is polynomial-time,
envy-free, proportional, and Pareto optimal. MEA is based on computing a
market-based equilibrium via a convex program and relies on the results of
Reijnierse and Potters [24] and Devanur et al. [15]. Moreover, we show that MEA
and CCEA are equivalent to mechanism 1 of Chen et al. [12] for piecewise
uniform valuations. We then present an algorithm CSD and a way to implement it
via randomization that satisfies strategyproofness in expectation, robust
proportionality, and unanimity for piecewise constant valuations. For the case
of two agents, it is robust envy-free, robust proportional, strategyproof, and
polynomial-time. Many of our results extend to more general settings in cake
cutting that allow for variable claims and initial endowments. We also show a
few impossibility results to complement our algorithms.
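To make the valuation model concrete: a piecewise constant valuation is a step function over the cake [0, 1], and properties such as proportionality reduce to interval arithmetic. A minimal sketch (the data layout and helper names are illustrative, not the paper's notation):

```python
def piece_value(valuation, a, b):
    """Value an agent with a piecewise constant valuation assigns to [a, b].
    A valuation is a list of (start, end, density) pieces."""
    total = 0.0
    for start, end, density in valuation:
        overlap = max(0.0, min(end, b) - max(start, a))
        total += density * overlap
    return total

def is_proportional(valuations, allocation):
    """allocation[i] is a list of intervals (a, b) given to agent i;
    check each agent gets at least 1/n of her value for the whole cake."""
    n = len(valuations)
    for v, pieces in zip(valuations, allocation):
        whole = piece_value(v, 0.0, 1.0)
        got = sum(piece_value(v, a, b) for a, b in pieces)
        if got + 1e-9 < whole / n:
            return False
    return True

# Agent 1 values the cake uniformly; agent 2 only values the left half.
v1 = [(0.0, 1.0, 1.0)]
v2 = [(0.0, 0.5, 2.0)]
alloc = [[(0.5, 1.0)], [(0.0, 0.5)]]
print(is_proportional([v1, v2], alloc))  # True
```

The robust variants in the paper strengthen this test to hold for every valuation consistent with the reported preference information; the interval arithmetic above stays the same.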
Strategyproof and fair matching mechanism for ratio constraints
We introduce a new type of distributional constraint called a ratio constraint, which explicitly specifies the required balance among schools in two-sided matching. Since ratio constraints do not belong to the well-behaved class of constraints known as M-convex sets, developing a fair and strategyproof mechanism that can handle them is challenging. We develop a novel mechanism called quota reduction deferred acceptance (QRDA), which repeatedly applies the standard DA while sequentially reducing artificially introduced maximum quotas. As well as being fair and strategyproof, QRDA always yields a weakly better matching for students than a baseline mechanism called artificial cap deferred acceptance (ACDA), which uses predetermined artificial maximum quotas. Finally, we experimentally show that, in terms of student welfare and non-wastefulness, QRDA outperforms both ACDA and another fair and strategyproof mechanism called Extended Seat Deferred Acceptance (ESDA), in which ratio constraints are transformed into minimum and maximum quotas.
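Since QRDA's inner loop is the standard student-proposing deferred acceptance with maximum quotas, a compact sketch of that subroutine may help; the data layout is illustrative and not taken from the paper:

```python
def deferred_acceptance(student_prefs, school_prefs, quotas):
    """student_prefs[s]: ordered list of schools; school_prefs[c]: ordered
    list of students; quotas[c]: max seats. Returns {student: school or None}."""
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in school_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}  # next school s will propose to
    tentative = {c: [] for c in school_prefs}
    free = list(student_prefs)
    while free:
        s = free.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue  # s has exhausted the list and stays unmatched
        c = prefs[next_choice[s]]
        next_choice[s] += 1
        tentative[c].append(s)
        tentative[c].sort(key=lambda x: rank[c][x])
        if len(tentative[c]) > quotas[c]:
            free.append(tentative[c].pop())  # worst-ranked applicant is rejected
    match = {s: None for s in student_prefs}
    for c, held in tentative.items():
        for s in held:
            match[s] = c
    return match

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B"]}
schools = {"A": ["s2", "s1"], "B": ["s1", "s2", "s3"]}
match = deferred_acceptance(students, schools, {"A": 1, "B": 2})
print(match)  # {'s1': 'B', 's2': 'A', 's3': 'B'}
```

QRDA, as described above, would call such a routine repeatedly while lowering the artificial maximum quotas until the ratio constraints are met.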
On the Trade-offs between Modeling Power and Algorithmic Complexity
Mathematical modeling is a central component of operations research. Most of the academic research in our field focuses on developing algorithmic tools for solving various mathematical problems arising from our models. However, our procedure for selecting the best model to use in any particular application is ad hoc. This dissertation seeks to rigorously quantify the trade-offs between various design criteria in model construction through a series of case studies. The hope is that a better understanding of the pros and cons of different models (for the same application) can guide and improve the model selection process.
In this dissertation, we focus on two broad types of trade-offs. The first type arises naturally in mechanism or market design, a discipline that focuses on developing optimization models for complex multi-agent systems. Such systems may require satisfying multiple objectives that are potentially in conflict with one another. Hence, finding a solution that simultaneously satisfies several design requirements is challenging. The second type addresses the dynamics between model complexity and computational tractability in the context of approximation algorithms for some discrete optimization problems. The need to study this type of trade-off is motivated by certain industry problems where the goal is to obtain the best solution within a reasonable time frame. Hence, being able to quantify and compare the degree of sub-optimality of the solution obtained under different models is helpful. Chapters 2-5 of the dissertation focus on trade-offs of the first type and Chapters 6-7 on the second.
Computing with strategic agents
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 179-189).
This dissertation studies mechanism design for various combinatorial problems in the presence of strategic agents. A mechanism is an algorithm for allocating a resource among a group of participants, each of which has a privately-known value for any particular allocation. A mechanism is truthful if it is in each participant's best interest to reveal his private information truthfully, regardless of the strategies of the other participants. First, we explore a competitive auction framework for truthful mechanism design in the setting of multi-unit auctions, or auctions which sell multiple identical copies of a good. In this framework, the goal is to design a truthful auction whose revenue approximates that of an omniscient auction for any set of bids. We focus on two natural settings: the limited demand setting, where bidders desire at most a fixed number of copies, and the limited budget setting, where bidders can spend at most a fixed amount of money. In the limited demand setting, all prior auctions employed randomization in the computation of the allocation and prices. Randomization in truthful mechanism design is undesirable because, in arguing the truthfulness of the mechanism, we employ an underlying assumption that the bidders trust the random coin flips of the auctioneer. Despite conjectures to the contrary, we are able to design a technique to derandomize any multi-unit auction in the limited demand case without losing much of the revenue guarantees. We then consider the limited budget case and provide the first competitive auction for this setting, although our auction is randomized. Next, we consider abandoning truthfulness in order to improve the revenue properties of procurement auctions, or auctions that are used to hire a team of agents to complete a task.
We study first-price procurement auctions and their variants and argue that in certain settings the payment is never significantly more than, and sometimes much less than, that of truthful mechanisms. Then we consider the setting of cost-sharing auctions. In a cost-sharing auction, agents bid to receive some service, such as connectivity to the Internet. A subset of agents is then selected for service and charged prices to approximately recover the cost of servicing them. We ask what can be achieved by cost-sharing auctions satisfying a strengthening of truthfulness called group-strategyproofness. Group-strategyproofness requires that even coalitions of agents do not have an incentive to report bids other than their true values in the absence of side-payments. For a particular class of such mechanisms, we develop a novel technique based on the probabilistic method for proving bounds on their revenue, and use this technique to derive tight or nearly-tight bounds for several combinatorial optimization games. Our results are quite pessimistic, suggesting that for many problems group-strategyproofness is incompatible with revenue goals. Finally, we study centralized two-sided markets, or markets that form a matching between participants based on preference lists. We consider mechanisms that output matchings which are stable with respect to the submitted preferences. A matching is stable if no two participants can jointly benefit by breaking away from the assigned matching to form a pair. For such mechanisms, we are able to prove that in a certain probabilistic setting each participant's best strategy is truthfulness with high probability (assuming other participants are truthful as well), even though in such markets in general there are provably no truthful mechanisms. by Nicole Immorlica, Ph.D.
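The stability notion used in the two-sided market results is easy to operationalize: a matching is stable exactly when it admits no blocking pair. A hedged sketch of the check (names and data layout are illustrative):

```python
def blocking_pairs(matching, men_prefs, women_prefs):
    """matching: {man: woman}. Returns every pair (m, w) that would rather
    abandon their assigned partners and match with each other."""
    partner_of = {w: m for m, w in matching.items()}

    def prefers(prefs, a, b):
        # True if a is preferred to b; being unmatched (b is None) is worst
        return b is None or prefs.index(a) < prefs.index(b)

    pairs = []
    for m, m_list in men_prefs.items():
        for w in m_list:
            if matching.get(m) == w:
                break  # m does not block with anyone ranked below his match
            if prefers(women_prefs[w], m, partner_of.get(w)):
                pairs.append((m, w))
    return pairs

# Both men rank w1 first and both women rank m1 first, so the matching
# m1-w2, m2-w1 is blocked by (m1, w1), while m1-w1, m2-w2 is stable.
men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
print(blocking_pairs({"m1": "w2", "m2": "w1"}, men, women))  # [('m1', 'w1')]
print(blocking_pairs({"m1": "w1", "m2": "w2"}, men, women))  # []
```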
LP-based Covering Games with Low Price of Anarchy
We present a new class of vertex cover and set cover games. The price of
anarchy bounds match the best known constant factor approximation guarantees
for the centralized optimization problems for linear and also for submodular
costs -- in contrast to all previously studied covering games, where the price
of anarchy cannot be bounded by a constant (e.g. [6, 7, 11, 5, 2]). In
particular, we describe a vertex cover game with a price of anarchy of 2. The
rules of the games capture the structure of the linear programming relaxations
of the underlying optimization problems, and our bounds are established by
analyzing these relaxations. Furthermore, for linear costs we exhibit
linear-time best-response dynamics that converge to these almost optimal Nash
equilibria. These dynamics mimic the classical greedy approximation algorithm
of Bar-Yehuda and Even [3].
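For reference, the Bar-Yehuda and Even algorithm that these dynamics mimic is a primal-dual (local-ratio) 2-approximation for weighted vertex cover; a minimal sketch under illustrative data types:

```python
def vertex_cover_2approx(edges, weights):
    """edges: list of (u, v) pairs; weights: {vertex: cost}.
    Returns a cover whose total cost is at most twice the optimum."""
    residual = dict(weights)
    for u, v in edges:
        eps = min(residual[u], residual[v])  # local-ratio payment on this edge
        residual[u] -= eps
        residual[v] -= eps
    # every edge now has at least one endpoint whose weight is exhausted,
    # so the zero-residual vertices form a valid cover
    return {v for v, r in residual.items() if r == 0}

# Unit-weight triangle: processing the first edge exhausts two vertices.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
cover = vertex_cover_2approx(edges, {"a": 1, "b": 1, "c": 1})
print(sorted(cover))  # ['a', 'b']
```

The covering-game interpretation in the paper has each player raise its payment along an uncovered edge, which is exactly the `eps` step above performed in a decentralized way.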
Incentives in One-Sided Matching Problems With Ordinal Preferences
One of the core problems in multiagent systems is how to efficiently allocate a set of indivisible resources to a group of self-interested agents that compete over scarce and limited alternatives. In these settings, mechanism design approaches such as matching mechanisms and auctions are often applied to guarantee fairness and efficiency while preventing agents from manipulating the outcomes. In many multiagent resource allocation problems, the use of monetary transfers or explicit markets is forbidden because of ethical or legal issues. One-sided matching mechanisms exploit various randomization and algorithmic techniques to satisfy certain desirable properties, while incentivizing self-interested agents to report their private preferences truthfully.
In the first part of this thesis, we focus on deterministic and randomized matching mechanisms in one-shot settings. We investigate the class of deterministic matching mechanisms when there is a quota to be fulfilled. Building on past results in artificial intelligence and economics, we show that when preferences are lexicographic, serial dictatorship mechanisms (and their sequential dictatorship counterparts) characterize the set of all possible matching mechanisms with desirable economic properties, enabling social planners to remedy the inherent unfairness in deterministic allocation mechanisms by assigning quotas according to some fairness criteria (such as seniority or priority). Extending the quota mechanisms to randomized settings, we show that this class of mechanisms is envy-free, strategyproof, and ex post efficient for any number of agents and objects and any quota system, proving that the well-studied Random Serial Dictatorship (RSD) is also envy-free in this domain.
The next contribution of this thesis is a systematic empirical study of two widely adopted randomized mechanisms, namely Random Serial Dictatorship (RSD) and the Probabilistic Serial Rule (PS). We investigate various properties of these two mechanisms such as efficiency, strategyproofness, and envy-freeness under various preference assumptions (e.g. general ordinal preferences, lexicographic preferences, and risk attitudes). The empirical findings in this thesis complement the theoretical guarantees of matching mechanisms, shedding light on the practical implications of deploying each of the given mechanisms.
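Random Serial Dictatorship itself is simple to state: draw a uniformly random priority order, then let each agent in turn take their favorite remaining object. A hedged sketch (function and variable names are illustrative):

```python
import random

def random_serial_dictatorship(prefs, objects, rng=random):
    """prefs[agent]: ordered list of objects, most preferred first.
    Returns {agent: object or None}."""
    order = list(prefs)
    rng.shuffle(order)  # uniformly random priority order
    remaining = set(objects)
    allocation = {}
    for agent in order:
        # take the highest-ranked object still available, if any
        pick = next((o for o in prefs[agent] if o in remaining), None)
        allocation[agent] = pick
        if pick is not None:
            remaining.discard(pick)
    return allocation

# Two agents with identical preferences: the random order decides who gets 'x'.
alloc = random_serial_dictatorship(
    {"a1": ["x", "y"], "a2": ["x", "y"]}, ["x", "y"], random.Random(0))
print(sorted(alloc.values()))  # ['x', 'y']
```

PS differs in that agents "eat" probability shares of objects simultaneously, yielding a random allocation matrix rather than a single draw; the comparison between the two is the subject of the empirical study above.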
In the second part of this thesis, we address the issues of designing truthful matching mechanisms in dynamic settings.
Many multiagent domains require reasoning over time and are inherently dynamic rather than static. We initiate the study of matching problems where agents' private preferences evolve stochastically over time, and decisions have to be made in each period. To adequately evaluate the quality of outcomes in dynamic settings, we propose a generic stochastic decision process and show that, in contrast to static settings, traditional mechanisms are easily manipulable. We introduce a number of properties that we argue are important for matching mechanisms in dynamic settings and propose a new mechanism that maintains a history of pairwise interactions between agents, and adapts the priority orderings of agents in each period based on this history.
We show that our mechanism is globally strategyproof in certain settings (e.g. when there are 2 agents or when the planning horizon is bounded), and even when the mechanism is manipulable, the manipulative actions taken by an agent will often result in a Pareto improvement. Thus, we make the argument that while manipulative behavior may still be unavoidable, it does not necessarily come at a cost to other agents.
To circumvent the issues of incentive design in dynamic settings, we formulate the dynamic matching problem as a Multiagent MDP where agents have particular underlying utility functions (e.g. linear positional utility functions), and show that the impossibility results still exist in this restricted setting. Nevertheless, we introduce a few classes of problems with restricted preference dynamics for which positive results exist. Finally, we propose an algorithmic solution for agents with single-minded preferences that satisfies strategyproofness, Pareto efficiency, and weak non-bossiness in one-shot settings, and show that even though this mechanism is manipulable in dynamic settings, any unilateral deviation would benefit all participating agents.