Mechanism design with maxmin agents: Theory and an application to bilateral trade
This paper studies mechanism design when agents are maxmin expected utility maximizers. A first result gives a general necessary condition for a social choice rule to be implementable. The condition combines an inequality version of the standard envelope characterization of payoffs in quasilinear environments with an approach for relating agents' maxmin expected utilities to their objective expected utilities under any common prior. The condition is then applied to give an exact characterization of when efficient trade is possible in the bilateral trading problem of Myerson and Satterthwaite (1983), under the assumption that agents know little beyond each other's expected valuation of the good (which is the information structure that emerges when agents are uncertain about each other's ability to acquire information). Whenever efficient trade is possible, it may be implemented by a relatively simple double auction format. Sometimes, an extremely simple reference price rule can also implement efficient trade.
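A reference price rule of the kind mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's exact construction: the function name and the specific price are assumptions, and the rule simply executes trade whenever the buyer's report weakly exceeds the posted price and the seller's report is weakly below it.

```python
def reference_price_rule(buyer_value: float, seller_cost: float, price: float):
    """Posted reference-price rule for bilateral trade (illustrative sketch).

    Trade occurs iff buyer_value >= price >= seller_cost; both sides then
    transact at the fixed reference price, so reports cannot move the price.
    Returns (trade?, payment from buyer to seller).
    """
    if buyer_value >= price >= seller_cost:
        return True, price   # trade at the posted reference price
    return False, 0.0        # no trade, no transfers

# A buyer valuing the good at 7 trades with a seller whose cost is 3
# when the reference price is 5; raising the seller's cost above the
# price blocks the trade.
print(reference_price_rule(7.0, 3.0, 5.0))  # (True, 5.0)
print(reference_price_rule(7.0, 6.0, 5.0))  # (False, 0.0)
```

Because the transacted price is fixed in advance, neither side can gain by misreporting, which is why such rules are attractive when they happen to be efficient.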
Fairness in Federated Learning via Core-Stability
Federated learning provides an effective paradigm to jointly optimize a model
that benefits from rich distributed data while protecting data privacy.
Nonetheless, the heterogeneous nature of distributed data makes it challenging
to define and ensure fairness among local agents. For instance, it is
intuitively "unfair" for agents with data of high quality to sacrifice their
performance due to other agents with low quality data. Currently popular
egalitarian and weighted equity-based fairness measures suffer from the
aforementioned pitfall. In this work, we aim to formally represent this problem
and address these fairness issues using concepts from co-operative game theory
and social choice theory. We model the task of learning a shared predictor in
the federated setting as a fair public decision making problem, and then define
the notion of core-stable fairness: given the agents' utilities, no subset of
agents can benefit significantly by forming a coalition among
themselves. Core-stable predictors are robust to low-quality local data from
some agents, and additionally they satisfy Proportionality and
Pareto-optimality, two well sought-after fairness and efficiency notions within
social choice. We then propose an efficient federated learning protocol CoreFed
to optimize a core stable predictor. CoreFed determines a core-stable predictor
when the loss functions of the agents are convex. CoreFed also determines
approximate core-stable predictors when the loss functions are not convex, like
smooth neural networks. We further show the existence of core-stable predictors
in more general settings using Kakutani's fixed point theorem. Finally, we
empirically validate our analysis on two real-world datasets, and we show that
CoreFed achieves higher core-stability fairness than FedAvg while having
similar accuracy.
Comment: NeurIPS 2022; code:
https://openreview.net/attachment?id=lKULHf7oFDo&name=supplementary_materia
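The core-stability notion in the abstract above can be checked by brute force on small instances. The sketch below is an assumption-laden toy, not the paper's algorithm: it uses a scaled-utility blocking condition, where a coalition S blocks the current predictor via an alternative if every member weakly gains under utilities scaled by |S|/n, with at least one strict gain; the exact condition in the paper may differ in detail.

```python
from itertools import combinations

def is_core_stable(utilities, current, alternatives):
    """Brute-force check of a core-stability-style condition (toy sketch).

    utilities[i][p] = utility of agent i under predictor p.
    current: index of the deployed predictor.
    A coalition S blocks via predictor p if (|S|/n) * u_i(p) >= u_i(current)
    for every i in S, with strict inequality for at least one member.
    """
    n = len(utilities)
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            scale = len(S) / n
            for p in alternatives:
                gains = [scale * utilities[i][p] - utilities[i][current]
                         for i in S]
                if all(g >= 0 for g in gains) and any(g > 0 for g in gains):
                    return False  # coalition S blocks via predictor p
    return True

# Two agents, predictor 0 deployed, predictor 1 as the only alternative.
# Here no coalition gains under the scaled utilities, so 0 is core stable.
print(is_core_stable([[3, 4], [3, 1]], 0, [1]))  # True
# If both agents would quadruple their utility, even a singleton blocks.
print(is_core_stable([[1, 4], [1, 4]], 0, [1]))  # False
```

The exponential coalition enumeration is only for illustration; the point of CoreFed is precisely to avoid such a search by optimizing a predictor that is core stable by construction.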
Algorithm Design for Ordinal Settings
Social choice theory is concerned with aggregating the preferences of agents into a single outcome. While it is natural to assume that agents have cardinal utilities, in many contexts, we can only assume access to the agents’ ordinal preferences, or rankings over the outcomes. As ordinal preferences are not as expressive as cardinal utilities, a loss of efficiency is unavoidable. Procaccia and Rosenschein (2006) introduced the notion of distortion to quantify this worst-case efficiency loss for a given social choice function.
We primarily study distortion in the context of elections, or equivalently clustering problems, where we are given a set of agents and candidates in a metric space; each agent has a preference ranking over the set of candidates, and we wish to elect a committee of k candidates that minimizes the total social cost incurred by the agents.
In the single-winner setting (when k = 1), we give a novel LP-duality based analysis framework that makes it easier to analyze the distortion of existing social choice functions, and extends readily to randomized social choice functions. Using this framework, we show that it is possible to give simpler proofs of known results. We also show how to efficiently compute an optimal randomized social choice function for any given instance. We utilize the latter result to obtain an instance for which any randomized social choice function has distortion at least 2.063164. This disproves the long-standing conjecture that there exists a randomized social choice function that has a worst-case distortion of at most 2.
When k is at least 2, it is not possible to compute an O(1)-distortion committee using purely ordinal information. We develop two O(1)-distortion mechanisms for this problem: one having a polylog(n) (per agent) query complexity, where n is the number of agents; and the other having O(k) query complexity (i.e., no dependence on n). We also study a much more general setting called minimum-norm k-clustering recently proposed in the clustering literature, where the objective is some monotone, symmetric norm of the agents' costs, and we wish to find a committee of k candidates to minimize this objective. When the norm is the sum of the p largest costs, which is called the p-centrum problem in the clustering literature, we give low-distortion mechanisms by adapting our mechanisms for k-median. En route, we give a simple adaptive-sampling algorithm for this problem. Finally, we show how to leverage this adaptive-sampling idea to also obtain a constant-factor bicriteria approximation algorithm for minimum-norm k-clustering (in its full generality).
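The distortion notion above is easy to illustrate numerically. The instance below is hypothetical (positions invented for this sketch, not from the thesis): agents and candidates sit on a line, a rule such as plurality sees only the induced rankings, and the social-cost ratio on any one metric consistent with those rankings is a lower bound on the rule's distortion, since distortion takes the worst case over all consistent metrics.

```python
# Hypothetical 1-D metric instance: agent and candidate positions on a line.
agents = [0.0, 0.0, 0.51, 0.51, 0.51]
candidates = {"A": 0.0, "B": 1.0}

def ranking(a):
    # Each agent ranks candidates by distance; only this ordinal
    # information is visible to the social choice rule.
    return sorted(candidates, key=lambda c: abs(a - candidates[c]))

def social_cost(c):
    # Total distance from all agents to candidate c (the k=1 objective).
    return sum(abs(a - candidates[c]) for a in agents)

# Plurality elects the candidate with the most first-place votes.
first = [ranking(a)[0] for a in agents]
winner = max(candidates, key=first.count)

ratio = social_cost(winner) / min(social_cost(c) for c in candidates)
print(winner, ratio)  # plurality elects "B", whose cost ratio exceeds 2
```

Three agents sit just past the midpoint, so plurality elects B even though A's total cost is much smaller; this single consistent metric already certifies that plurality's distortion is above 2 here.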
Social Welfare in One-sided Matching Markets without Money
We study social welfare in one-sided matching markets where the goal is to
efficiently allocate n items to n agents that each have a complete, private
preference list and a unit demand over the items. Our focus is on allocation
mechanisms that do not involve any monetary payments. We consider two natural
measures of social welfare: the ordinal welfare factor which measures the
number of agents that are at least as happy as in some unknown, arbitrary
benchmark allocation, and the linear welfare factor which assumes an agent's
utility decreases linearly down his preference list, and measures the total
utility relative to that achieved by an optimal allocation. We analyze two matching
mechanisms which have been extensively studied by economists. The first
mechanism is the random serial dictatorship (RSD) where agents are ordered in
accordance with a randomly chosen permutation, and are successively allocated
their best choice among the unallocated items. The second mechanism is the
probabilistic serial (PS) mechanism of Bogomolnaia and Moulin [8], which
computes a fractional allocation that can be expressed as a convex combination
of integral allocations. The welfare factor of a mechanism is the infimum over
all instances. For RSD, we show that the ordinal welfare factor is
asymptotically 1/2, while the linear welfare factor lies in the interval [.526,
2/3]. For PS, we show that the ordinal welfare factor is also 1/2 while the
linear welfare factor is roughly 2/3. To our knowledge, these results are the
first non-trivial performance guarantees for these natural mechanisms.
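Random serial dictatorship, as described above, is simple enough to sketch directly. The instance below is hypothetical (preferences invented for illustration); the mechanism itself is just a random ordering followed by greedy picks.

```python
import random

def random_serial_dictatorship(prefs, rng=random):
    """Random serial dictatorship (RSD): agents, in a uniformly random
    order, successively take their favorite item among those still
    available.

    prefs[i] is agent i's ranking of item indices, best first.
    Returns a dict mapping agent -> allocated item.
    """
    order = list(range(len(prefs)))
    rng.shuffle(order)                     # random permutation of agents
    available = set(range(len(prefs)))     # one item per agent
    allocation = {}
    for i in order:
        item = next(x for x in prefs[i] if x in available)
        allocation[i] = item
        available.remove(item)
    return allocation

# Toy instance: 3 agents, 3 items; agents 0 and 1 both rank item 0 first,
# so whoever is drawn earlier in the permutation gets it.
prefs = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]
print(random_serial_dictatorship(prefs))
```

Every run produces a perfect matching; the welfare guarantees quoted in the abstract concern the expectation over the random permutation, not any single draw.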
Finding a Collective Set of Items: From Proportional Multirepresentation to Group Recommendation
We consider the following problem: There is a set of items (e.g., movies) and
a group of agents (e.g., passengers on a plane); each agent has some intrinsic
utility for each of the items. Our goal is to pick a set of items that
maximizes the total derived utility of all the agents (i.e., in our example we
are to pick movies that we put on the plane's entertainment system).
However, the actual utility that an agent derives from a given item is only a
fraction of its intrinsic one, and this fraction depends on how the agent ranks
the item among the chosen, available ones. We provide a formal specification
of the model and give concrete examples and settings where it is applicable.
We show that the problem is hard in general, but we give a number of
tractability results for its natural special cases.
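The model above, where an agent derives only a rank-dependent fraction of an item's intrinsic utility, can be sketched with a brute-force solver on a tiny instance. The weight vector, utilities, and function names below are assumptions for illustration; here the weights (1, 1/2) mean an agent gets full utility from their favorite chosen item and half from their second favorite.

```python
from itertools import combinations

def total_derived_utility(chosen, utils, weights):
    """Each agent sorts the chosen items by own intrinsic utility; the
    j-th best chosen item contributes weights[j] * utility (items beyond
    the weight vector contribute nothing, since zip truncates)."""
    total = 0.0
    for u in utils:  # u[item] = this agent's intrinsic utility for item
        ranked = sorted((u[i] for i in chosen), reverse=True)
        total += sum(w * x for w, x in zip(weights, ranked))
    return total

def best_set(utils, k, weights):
    """Exhaustively pick the size-k item set maximizing total derived
    utility; exponential in general, fine for toy instances."""
    items = range(len(utils[0]))
    return max(combinations(items, k),
               key=lambda S: total_derived_utility(S, utils, weights))

# Hypothetical instance: 3 agents, 4 items (e.g., movies), pick k = 2.
utils = [[9, 1, 1, 0],
         [0, 8, 3, 0],
         [1, 1, 7, 6]]
print(best_set(utils, 2, (1.0, 0.5)))  # (0, 2)
```

Items 0 and 2 are each some agent's clear favorite, so the rank-weighted objective prefers them over pairing the two items favored by a single agent.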