Penalties and Rewards for Fair Learning in Paired Kidney Exchange Programs
A kidney exchange program, also called a kidney paired donation program, can
be viewed as a repeated, dynamic trading and allocation mechanism. This
suggests that a dynamic algorithm for transplant exchange selection may have
superior performance in comparison to the repeated use of a static algorithm.
We confirm this hypothesis using a full scale simulation of the Canadian Kidney
Paired Donation Program: learning algorithms that attempt to learn optimal
patient-donor weights in advance via dynamic simulations, do lead to improved
outcomes. Specifically, our learning algorithms, designed with the objective of
fairness (that is, equity in terms of transplant accessibility across cPRA
groups), also lead to an increased number of transplants and shorter average
waiting times. Indeed, our highest performing learning algorithm improves
egalitarian fairness by 10% whilst also increasing the number of transplants by
6% and decreasing waiting times by 24%. However, our main result is much more
surprising. We find that the most critical factor in determining the
performance of a kidney exchange program is not the judicious assignment of
positive weights (rewards) to patient-donor pairs. Rather, the key factor in
increasing the number of transplants, decreasing waiting times and improving
group fairness is the judicious assignment of a negative weight (penalty) to
the small number of non-directed donors in the kidney exchange program.
Comment: Shorter version accepted in WINE 202
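The weighting scheme described above can be illustrated with a toy model. The sketch below is a hypothetical illustration, not the Canadian program's actual simulator or objective: each candidate exchange covers some patient-donor pairs and may consume a non-directed donor (NDD), and the program selects disjoint exchanges maximizing total weight, where a negative `ndd_penalty` discourages spending NDDs on low-value chains.

```python
from itertools import combinations

# Toy model (hypothetical names and weights, not the CKPDP model):
# an exchange = (pairs it transplants, NDDs it consumes).
# Its weight = sum of pair weights + ndd_penalty per NDD used.

def exchange_weight(exchange, pair_weight, ndd_penalty):
    pairs, ndds = exchange
    return sum(pair_weight[p] for p in pairs) + ndd_penalty * len(ndds)

def best_selection(exchanges, pair_weight, ndd_penalty):
    """Brute-force max-weight set of disjoint exchanges (fine at toy scale)."""
    best, best_w = [], 0.0
    for r in range(1, len(exchanges) + 1):
        for combo in combinations(exchanges, r):
            used_pairs = [p for ex in combo for p in ex[0]]
            used_ndds = [n for ex in combo for n in ex[1]]
            if len(set(used_pairs)) != len(used_pairs):
                continue  # two exchanges share a pair: infeasible
            if len(set(used_ndds)) != len(used_ndds):
                continue  # two chains share an NDD: infeasible
            w = sum(exchange_weight(ex, pair_weight, ndd_penalty)
                    for ex in combo)
            if w > best_w:
                best, best_w = list(combo), w
    return best, best_w

pair_weight = {"p1": 1.0, "p2": 1.0, "p3": 1.0}
exchanges = [
    (("p1", "p2"), ()),   # 2-cycle: p1 <-> p2
    (("p3",), ("n1",)),   # chain: n1 -> p3
    (("p1",), ("n1",)),   # chain: n1 -> p1 (competes with the cycle)
]
```

With `ndd_penalty=-0.5` the selected solution still uses the NDD chain onto `p3`, but its contribution drops from 1.0 to 0.5, so a less valuable chain would be passed over in favour of cycles.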
A Branch-and-Price Algorithm Enhanced by Decision Diagrams for the Kidney Exchange Problem
Kidney paired donation programs allow patients registered with an
incompatible donor to receive a suitable kidney from another donor, as long as
the latter's co-registered patient, if any, also receives a kidney from a
different donor. The kidney exchange problem (KEP) aims to find an optimal
collection of kidney exchanges taking the form of cycles and chains. Existing
exact solution methods for KEP are either designed for the case where only
cyclic exchanges are considered, or can handle long chains but scale only
when cycles are short. We develop the first decomposition method that is
able to deal with long cycles and long chains for large realistic instances.
More specifically, we propose a branch-and-price framework, in which the
pricing problems are solved (for the first time in packing problems in a
digraph) through multi-valued decision diagrams. Also, we present a new upper
bound on the optimal value of KEP, stronger than the one proposed in the
literature, which is obtained via our master problem. Computational experiments
show superior performance of our method over the state of the art by optimally
solving almost all instances in the PrefLib library for multiple cycle and
chain lengths.
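In a branch-and-price framework of this kind, the master problem selects among columns corresponding to feasible cycles and chains, and the pricing problem searches that space. The sketch below is a minimal illustration of what that column space looks like (plain DFS enumeration on a toy digraph, not the paper's decision-diagram pricing): pairs are nodes, arcs denote compatibility, cycles have bounded length, and chains start at non-directed donors.

```python
# Enumerate the columns of a toy KEP instance: all simple cycles of
# length <= max_cycle and all NDD-started chains with <= max_chain arcs.
# (Illustrative DFS, not the paper's multi-valued decision diagrams.)

def cycles_and_chains(arcs, ndds, max_cycle, max_chain):
    succ = {}
    for u, v in arcs:
        succ.setdefault(u, []).append(v)

    cycles = []
    def dfs_cycle(start, node, path):
        for nxt in succ.get(node, []):
            if nxt == start and len(path) >= 2:
                cycles.append(tuple(path))
            elif nxt > start and nxt not in path and len(path) < max_cycle:
                dfs_cycle(start, nxt, path + [nxt])  # nxt > start dedups rotations
    for v in sorted({u for u, _ in arcs}):
        dfs_cycle(v, v, [v])

    chains = []
    def dfs_chain(node, path):
        if len(path) > 1:
            chains.append(tuple(path))
        if len(path) - 1 < max_chain:
            for nxt in succ.get(node, []):
                if nxt not in path:
                    dfs_chain(nxt, path + [nxt])
    for n in ndds:
        dfs_chain(n, [n])
    return cycles, chains

# Pairs 1-3, non-directed donor 100.
arcs = [(1, 2), (2, 1), (2, 3), (3, 1), (100, 1)]
```

On realistic instances this column set is exponentially large, which is exactly why the pricing problem is solved implicitly (here, via decision diagrams) rather than by enumeration.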
ALGORITHMS FOR MARKETS: MATCHING AND PRICING
In their most basic form \emph{markets} consist of a collection of resources (goods or services) and a set of agents interested in obtaining them. This thesis is a stepping stone toward answering the most central question in the Econ/CS literature surrounding markets: How should the resources be allocated to the interested parties? The first contribution of this thesis is designing pricing algorithms for modern monetary markets (such as advertising markets) in which resources are sold via auctions. The second contribution is designing matching algorithms for markets in which money often plays little to no role (i.e., matching markets).
Auctions have become the standard method of allocating resources in monetary markets, and when it comes to multi-unit auctions, Vickrey–Clarke–Groves (VCG) with {\em reserve prices} is one of the most well-known and widely used mechanisms. A reserve price is the minimum price at which the auctioneer is willing to sell the item. In this thesis, we consider optimizing {\em personalized reserve prices}, which are crucial for obtaining high revenue. To that end, we take a \emph{data-driven} approach: given the buyers' bids in a set of auctions, the goal is to find a single vector of reserve prices (one for each buyer) that maximizes the total revenue across all these auctions. This problem is known to be NP-hard, and the previously best-known algorithm achieves only a fraction of the optimal revenue. We first present an LP-based algorithm with an improved approximation factor for single-item environments. We then show that this approach can be generalized to obtain an approximation guarantee for general multi-unit environments. To achieve these results we develop novel LP-rounding procedures which may be of independent interest.
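The data-driven objective described above can be made concrete with a toy scorer. The sketch below (a brute-force search over candidate reserves drawn from each buyer's own observed bids, not the thesis's LP-rounding algorithm) evaluates a vector of personalized reserves under a second-price-with-reserves rule and picks the revenue-maximizing vector; all bid values are made up for illustration.

```python
from itertools import product

# Toy data-driven reserve optimization (brute force, illustrative only):
# in each past auction, eligible buyers are those bidding at least their
# reserve; the highest eligible bidder wins and pays the larger of their
# reserve and the second-highest eligible bid.

def revenue(reserves, auctions):
    total = 0.0
    for bids in auctions:  # bids[i] = buyer i's bid in this auction
        eligible = [i for i, b in enumerate(bids) if b >= reserves[i]]
        if not eligible:
            continue  # item goes unsold
        winner = max(eligible, key=lambda i: bids[i])
        others = [bids[i] for i in eligible if i != winner]
        total += max(reserves[winner], max(others) if others else 0.0)
    return total

def best_reserves(auctions):
    n = len(auctions[0])
    # A revenue-optimal reserve for each buyer can be found among that
    # buyer's observed bids (or zero), so search only those candidates.
    candidates = [sorted({a[i] for a in auctions} | {0.0}) for i in range(n)]
    return max(product(*candidates), key=lambda r: revenue(r, auctions))

auctions = [(5.0, 1.0), (4.0, 1.0), (2.0, 1.0)]
```

Here a personalized reserve of 4.0 for the strong buyer trades the sale of one item for much higher payments on the other two, which is exactly the tension the NP-hard joint optimization captures.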
Matching markets have long held a central place in the mechanism design literature. Examples include kidney exchange, labor markets, and dating platforms. When it comes to designing algorithms for these markets, the presence of uncertainty is a common challenge. This uncertainty is often due to the stochastic nature of the data or restrictions that result in limited access to information. In this thesis, we study the {\em stochastic matching} problem, in which the goal is to find a large matching of a graph whose edges are uncertain but can be accessed via queries. In particular, we know only the existence probability of each edge; to verify an edge's existence, we must perform a costly query. Since these queries are costly, our goal is to find a large matching using only a few (a constant number of) queries. For instance, in labor markets, the existence of an edge between a freelancer and an employer represents their compatibility to work with one another, and a query translates to an interview between them, which is often a time-consuming process. While this problem has been studied extensively, the best-known approximation ratios before our work were bounded away from optimal for both unweighted and weighted graphs. In this thesis, we present algorithms that find almost optimal matchings despite the uncertainty in the graph (weighted and unweighted) by conducting only a constant number of queries per vertex.
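The query model above can be sketched with a simple policy. The code below is a hedged toy (a greedy highest-probability-first heuristic with a per-vertex query budget, not the thesis's near-optimal algorithm): each edge exists with a known probability, a query reveals whether it actually exists, and confirmed edges are matched greedily; all edge names and probabilities are invented for illustration.

```python
import random

# Toy stochastic matching: p[e] is edge e's existence probability; a query
# (e.g. an interview) reveals whether e truly exists. Each vertex may be
# queried at most `budget` times. Greedy: try high-probability edges first.

def greedy_stochastic_matching(edges, p, budget, rng):
    queries = {}   # per-vertex query counts
    matched = set()
    matching = []
    for (u, v) in sorted(edges, key=lambda e: -p[e]):
        if u in matched or v in matched:
            continue  # endpoint already matched, no query spent
        if queries.get(u, 0) >= budget or queries.get(v, 0) >= budget:
            continue  # query budget exhausted at an endpoint
        queries[u] = queries.get(u, 0) + 1
        queries[v] = queries.get(v, 0) + 1
        if rng.random() < p[(u, v)]:  # the query reveals the edge exists
            matching.append((u, v))
            matched.update((u, v))
    return matching

edges = [("a", "b"), ("a", "c"), ("c", "d"), ("b", "d")]
p = {("a", "b"): 0.9, ("a", "c"): 0.5, ("c", "d"): 0.8, ("b", "d"): 0.3}
```

The interesting regime is exactly the one the thesis targets: how close to the omniscient optimum one can get when the number of queries per vertex is a constant independent of the graph size.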
Big Data Analytics and Information Science for Business and Biomedical Applications
The analysis of Big Data in biomedical as well as business and financial research has drawn much attention from researchers worldwide. This book provides a platform for the deep discussion of state-of-the-art statistical methods developed for the analysis of Big Data in these areas. Both applied and theoretical contributions are showcased.