Adapting a Kidney Exchange Algorithm to Align with Human Values
The efficient and fair allocation of limited resources is a classical problem
in economics and computer science. In kidney exchanges, a central market maker
allocates living kidney donors to patients in need of an organ. Patients and
donors in kidney exchanges are prioritized using ad-hoc weights decided on by
committee and then fed into an allocation algorithm that determines who gets
what--and who does not. In this paper, we provide an end-to-end methodology for
estimating weights of individual participant profiles in a kidney exchange. We
first elicit from human subjects a list of patient attributes they consider
acceptable for the purpose of prioritizing patients (e.g., medical
characteristics, lifestyle choices, and so on). Then, we ask subjects
comparison queries between patient profiles and estimate weights in a
principled way from their responses. We show how to use these weights in kidney
exchange market clearing algorithms. We then evaluate the impact of the weights
in simulations and find that the precise numerical values of the computed
weights matter little beyond the ordering of profiles they imply. However,
compared to not prioritizing patients at all, the effect is significant, with
certain classes of patients being (de)prioritized based on the human-elicited
value judgments.
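A minimal sketch of how elicited patient weights could feed into a market-clearing step, here restricted to 2-way exchanges on a tiny hypothetical compatibility graph (the edge set and weight values are illustrative, not from the paper):

```python
from itertools import combinations

# Hypothetical compatibility graph: edge (i, j) means the donor of
# pair i can give a kidney to the patient of pair j.
edges = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)}

# Elicited priority weight for each pair's patient (illustrative values).
weight = {0: 1.0, 1: 1.5, 2: 0.8, 3: 1.2}

# All feasible 2-way exchanges (cycles of length two).
two_cycles = [(i, j) for (i, j) in edges if i < j and (j, i) in edges]

def best_clearing(cycles):
    """Brute-force: pick vertex-disjoint 2-cycles maximizing total patient weight."""
    best, best_val = [], 0.0
    for r in range(len(cycles) + 1):
        for subset in combinations(cycles, r):
            used = [v for c in subset for v in c]
            if len(used) == len(set(used)):          # cycles must be vertex-disjoint
                val = sum(weight[v] for v in used)   # both patients in a cycle are transplanted
                if val > best_val:
                    best, best_val = list(subset), val
    return best, best_val

matching, total = best_clearing(two_cycles)
```

Real kidney exchange clearing also allows longer cycles and altruist-initiated chains and uses integer programming rather than enumeration; this sketch only shows where the elicited weights enter the objective.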
Game theoretical analysis of Kidney Exchange Programs
The goal of a kidney exchange program (KEP) is to maximize the number of
transplants within a pool of incompatible patient-donor pairs by exchanging
donors. A KEP can be modelled as a maximum matching problem in a graph. A KEP
between incompatible patient-donor pairs from the pools of several hospitals,
regions, or countries has the potential to increase the number of transplants.
Each of these entities aims to maximize the transplant benefit for its own
patients, which can lead to strategic behaviour. Recently, this setting was
formulated as a non-cooperative two-player game, and the game solutions
(equilibria) were characterized when each entity's objective function is the
number of its patients receiving a kidney. In this paper, we generalize these
results to N players and discuss the impact on the game solutions when
transplant information quality is introduced. Furthermore, the game-theoretic
model is analyzed through computational experiments on instances generated
from the Canada Kidney Paired Donation Program. These experiments highlight
the importance of the concept of Nash equilibrium, as well as the need for
further research to support policy makers once measures of transplant quality
become available.
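To illustrate the kind of equilibrium analysis described above, here is a sketch that finds pure Nash equilibria of a small two-hospital game by checking best responses; the actions and payoff matrix are hypothetical, not taken from the paper:

```python
from itertools import product

# Illustrative two-hospital KEP game: each hospital chooses to "share"
# its pairs with the common pool or "withhold" them for internal matching.
# Payoffs (own patients transplanted) are made-up numbers.
actions = ["share", "withhold"]
payoff = {  # (row action, col action) -> (row payoff, col payoff)
    ("share", "share"): (4, 4),
    ("share", "withhold"): (1, 5),
    ("withhold", "share"): (5, 1),
    ("withhold", "withhold"): (2, 2),
}

def pure_nash(actions, payoff):
    """Return action profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        u1, u2 = payoff[(a, b)]
        best1 = all(payoff[(a2, b)][0] <= u1 for a2 in actions)
        best2 = all(payoff[(a, b2)][1] <= u2 for b2 in actions)
        if best1 and best2:
            equilibria.append((a, b))
    return equilibria

eq = pure_nash(actions, payoff)
```

With these payoffs the game is a prisoner's dilemma: mutual withholding is the unique pure equilibrium even though mutual sharing yields more transplants overall, which is exactly the tension such mechanisms must address.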
ALGORITHMS FOR MARKETS: MATCHING AND PRICING
In their most basic form \emph{markets} consist of a collection of resources (goods or services) and a set of agents interested in obtaining them. This thesis is a stepping stone toward answering the most central question in the Econ/CS literature surrounding markets: How should the resources be allocated to the interested parties? The first contribution of this thesis is designing pricing algorithms for modern monetary markets (such as advertising markets) in which resources are sold via auctions. The second contribution is designing matching algorithms for markets in which money often plays little to no role (i.e., matching markets).
Auctions have become the standard method of allocating resources in monetary markets, and when it comes to multi-unit auctions, Vickrey–Clarke–Groves (VCG) with {\em reserve prices} is one of the most well-known and widely used mechanisms. A reserve price is the minimum price at which the auctioneer is willing to sell an item. In this thesis, we consider optimizing {\em personalized reserve prices}, which are crucial for obtaining high revenue. To that end, we take a \emph{data-driven} approach: given the buyers' bids in a set of auctions, the goal is to find a single vector of reserve prices (one for each buyer) that maximizes the total revenue across all these auctions. This problem is known to be NP-hard, and the best-known algorithm for it achieves only a constant fraction of the optimal revenue. We first present an LP-based algorithm with an improved approximation factor for single-item environments. We then show that this approach can be generalized to obtain an approximation guarantee for general multi-unit environments. To achieve these results, we develop novel LP-rounding procedures which may be of independent interest.
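A toy illustration of the data-driven view of reserve pricing: given a few observed bid profiles, exhaustively search over candidate reserve vectors (restricted to observed bid values) for the one maximizing total second-price-with-reserve revenue. All numbers are made up, and the thesis's LP-based algorithms are far more sophisticated than this brute-force search:

```python
from itertools import product

# Hypothetical bid data: each auction records the bids of (buyer 0, buyer 1).
auctions = [(10, 4), (6, 8), (3, 7)]

def revenue(reserves, bids):
    """Single-item second-price auction with personalized reserves."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserves[i]]
    if not eligible:
        return 0
    eligible.sort(reverse=True)
    top, winner = eligible[0]
    runner_up = eligible[1][0] if len(eligible) > 1 else 0
    return max(runner_up, reserves[winner])  # pay max(2nd eligible bid, own reserve)

# Data-driven search: candidate reserves per buyer are the observed bids (plus 0).
candidates = [sorted({0} | {bids[i] for bids in auctions}) for i in range(2)]
best = max(product(*candidates),
           key=lambda r: sum(revenue(r, bids) for bids in auctions))
```

On this toy data the search sets a high reserve for buyer 0 (who bids high when winning) and a moderate one for buyer 1, showing why *personalized* reserves can beat any single anonymous reserve.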
Matching markets have long held a central place in the mechanism design literature. Examples include kidney exchange, labor markets, and dating platforms. When it comes to designing algorithms for these markets, the presence of uncertainty is a common challenge, often due to the stochastic nature of the data or restrictions that limit access to information. In this thesis, we study the {\em stochastic matching} problem, in which the goal is to find a large matching of a graph whose edges are uncertain but can be accessed via queries. In particular, we know only the existence probability of each edge; verifying an edge's existence requires a costly query. Since these queries are costly, our goal is to find a large matching using only a few (a constant number of) queries per vertex. For instance, in labor markets, an edge between a freelancer and an employer represents their compatibility to work with one another, and a query translates to an interview between them, which is often a time-consuming process. While this problem has been studied extensively, before our work the best-known approximation ratios for unweighted and weighted graphs were constants bounded away from 1. In this thesis, we present algorithms that find almost optimal matchings in both weighted and unweighted graphs, despite the uncertainty, by conducting only a constant number of queries per vertex.
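A simple sketch of the query-limited stochastic matching setting: edges are known only by their existence probability, and a simulated costly query reveals whether an edge actually exists. The greedy heuristic below (query high-probability edges first, under a per-vertex query budget) only illustrates the model, not the thesis's near-optimal algorithms:

```python
import random

random.seed(0)

# Hypothetical uncertain graph: each edge has an existence probability.
prob = {("a", "b"): 0.9, ("b", "c"): 0.6, ("c", "d"): 0.8, ("a", "d"): 0.5}

def query(edge):
    """Costly oracle (e.g., an interview): reveals whether the edge exists."""
    return random.random() < prob[edge]

def greedy_stochastic_matching(prob, max_queries_per_vertex=2):
    matched = set()
    queries = {v: 0 for e in prob for v in e}  # per-vertex query counters
    matching = []
    # Query edges in decreasing probability, respecting per-vertex budgets.
    for (u, v) in sorted(prob, key=prob.get, reverse=True):
        if u in matched or v in matched:
            continue
        if queries[u] >= max_queries_per_vertex or queries[v] >= max_queries_per_vertex:
            continue
        queries[u] += 1
        queries[v] += 1
        if query((u, v)):
            matching.append((u, v))
            matched.update((u, v))
    return matching

m = greedy_stochastic_matching(prob)
```

The point of the model is the budget: the algorithm must commit to few queries per vertex yet still return a matching close to the best one in the (unknown) realized graph.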