Scalable Robust Kidney Exchange
In barter exchanges, participants directly trade their endowed goods in a
constrained economic setting without money. Transactions in barter exchanges
are often facilitated via a central clearinghouse that must match participants
even in the face of uncertainty---over participants, existence and quality of
potential trades, and so on. Leveraging robust combinatorial optimization
techniques, we address uncertainty in kidney exchange, a real-world barter
market where patients swap (in)compatible paired donors. We provide two
scalable robust methods to handle two distinct types of uncertainty in kidney
exchange---over the quality and the existence of a potential match. The latter
case directly addresses a weakness in all stochastic-optimization-based methods
for the kidney exchange clearing problem, which necessarily require explicit
estimates of the probability of a transaction existing---a still-unsolved
problem in this nascent market. We also propose a novel, scalable kidney
exchange formulation that eliminates the need for an exponential-time
constraint generation process in competing formulations, maintains provable
optimality, and serves as a subsolver for our robust approach. For each type of
uncertainty we demonstrate the benefits of robustness on real data from a
large, fielded kidney exchange in the United States. We conclude by drawing
parallels between robustness and notions of fairness in the kidney exchange
setting.
Comment: Presented at AAAI1
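The clearing problem at the heart of this abstract can be illustrated with a toy sketch. The compatibility graph, cycle caps, and brute-force search below are illustrative assumptions, not the paper's formulation (which uses integer programming with provable optimality): pairs are nodes, an edge u -> v means pair u's donor can give to pair v's patient, and the clearinghouse selects vertex-disjoint short cycles to match as many pairs as possible.

```python
def find_cycles(edges):
    """Enumerate directed 2- and 3-cycles in a toy compatibility graph.
    edges maps each patient-donor pair (an int) to the set of pairs its
    donor is compatible with."""
    cycles = set()
    for u in edges:
        for v in edges[u]:
            # 2-cycle: a mutual swap between two pairs
            if u in edges.get(v, set()) and u < v:
                cycles.add((u, v))
            # 3-cycle u -> v -> w -> u, deduplicated by rotation
            for w in edges.get(v, set()):
                if w != u and u in edges.get(w, set()):
                    cycles.add(min([(u, v, w), (v, w, u), (w, u, v)]))
    return sorted(cycles)

def clear_exchange(cycles):
    """Pick vertex-disjoint cycles maximizing the number of matched
    pairs, by exhaustive search (real clearinghouses solve an integer
    program instead)."""
    best_count, best_choice = 0, []
    def rec(i, used, chosen, count):
        nonlocal best_count, best_choice
        if count > best_count:
            best_count, best_choice = count, list(chosen)
        if i == len(cycles):
            return
        rec(i + 1, used, chosen, count)       # skip cycle i
        c = cycles[i]
        if not used & set(c):                 # take cycle i if disjoint
            chosen.append(c)
            rec(i + 1, used | set(c), chosen, count + len(c))
            chosen.pop()
    rec(0, frozenset(), [], 0)
    return best_count, best_choice
```

For example, a graph containing one 3-cycle and one mutual swap matches all five pairs. Robust variants would additionally discount or stress-test each cycle's weight under uncertainty about whether its transplants actually occur.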
On the Generalizability and Predictability of Recommender Systems
While other areas of machine learning have seen more and more automation,
designing a high-performing recommender system still requires a high level of
human effort. Furthermore, recent work has shown that modern recommender system
algorithms do not always improve over well-tuned baselines. A natural follow-up
question is, "how do we choose the right algorithm for a new dataset and
performance metric?" In this work, we start by giving the first large-scale
study of recommender system approaches by comparing 18 algorithms and 100 sets
of hyperparameters across 85 datasets and 315 metrics. We find that the best
algorithms and hyperparameters are highly dependent on the dataset and
performance metric; however, there are also strong correlations between the
performance of each algorithm and various meta-features of the datasets.
Motivated by these findings, we create RecZilla, a meta-learning approach to
recommender systems that uses a model to predict the best algorithm and
hyperparameters for new, unseen datasets. By using far more meta-training data
than prior work, RecZilla is able to substantially reduce the level of human
involvement when faced with a new recommender system application. We not only
release our code and pretrained RecZilla models, but also all of our raw
experimental results, so that practitioners can train a RecZilla model for
their desired performance metric: https://github.com/naszilla/reczilla.
Comment: NeurIPS 202
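The meta-learning idea in this abstract can be sketched with a deliberately simple stand-in model. The meta-features, algorithm names, and the 1-nearest-neighbour rule below are all hypothetical illustrations, not RecZilla's actual model: map each meta-training dataset to a meta-feature vector and its best-known algorithm, then predict for an unseen dataset by its nearest neighbour in meta-feature space.

```python
import math

# Hypothetical meta-training table: (meta-feature vector, best algorithm).
# Meta-features here: (log #users, log #items, interaction density);
# both the features and the algorithm names are illustrative.
META_TRAIN = [
    ((4.0, 3.5, 0.050), "item-knn"),
    ((6.0, 5.5, 0.010), "matrix-factorization"),
    ((7.0, 6.5, 0.001), "neural-cf"),
]

def predict_best_algorithm(meta_features, meta_train=META_TRAIN):
    """1-nearest-neighbour meta-learner: return the best-known algorithm
    of the meta-training dataset closest in meta-feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(meta_train, key=lambda row: dist(row[0], meta_features))[1]
```

A new dataset whose meta-features land near a known dataset inherits that dataset's recommended algorithm; RecZilla's contribution is doing this at scale (18 algorithms, 85 datasets, 315 metrics) with a learned model rather than a toy lookup.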
Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making
Given AI's growing role in modeling and improving decision-making, how and
when to present users with feedback is an urgent topic to address. We
empirically examined the effect of feedback from a false AI on moral
decision-making about donor kidney allocation. We found some evidence that
judgments about whether a patient should receive a kidney can be influenced by
feedback about participants' own decision-making perceived to be given by AI,
even if the feedback is entirely random. We also discovered different effects
between assessments presented as being from human experts and assessments
presented as being from AI.
Indecision Modeling
AI systems are often used to make or contribute to important decisions in a
growing range of applications, including criminal justice, hiring, and
medicine. Since these decisions impact human lives, it is important that the AI
systems act in ways which align with human values. Techniques for preference
modeling and social choice help researchers learn and aggregate people's
preferences, which are used to guide AI behavior; thus, it is imperative that
these learned preferences are accurate. These techniques often assume that
people are willing to express strict preferences over alternatives, which is
not true in practice. People are often indecisive, and especially so when their
decision has moral implications. The philosophy and psychology literature shows
that indecision is a measurable and nuanced behavior -- and that there are
several different reasons people are indecisive. This complicates the task of
both learning and aggregating preferences, since most of the relevant
literature makes restrictive assumptions on the meaning of indecision. We begin
to close this gap by formalizing several mathematical \emph{indecision} models
based on theories from philosophy, psychology, and economics; these models can
be used to describe (indecisive) agent decisions, both when they are allowed to
express indecision and when they are not. We test these models using data
collected from an online survey where participants choose how to
(hypothetically) allocate organs to patients waiting for a transplant.
Comment: Accepted at AAAI 202
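One simple family of indecision models of the kind the abstract formalizes can be sketched as a utility-gap threshold rule; this particular form is an illustration under assumed names (the threshold tau, the "indecisive" label), not necessarily one of the paper's models: an agent reports a strict preference only when the utility gap between two alternatives exceeds a threshold, and reports indecision otherwise.

```python
def choose(utility_a, utility_b, tau=0.1):
    """Threshold indecision model (illustrative): express a strict
    preference only when the utility gap exceeds tau; otherwise the
    agent is indecisive between the alternatives."""
    gap = utility_a - utility_b
    if gap > tau:
        return "A"
    if gap < -tau:
        return "B"
    return "indecisive"
```

Models like this complicate aggregation: an "indecisive" response is informative (the alternatives are close in utility) rather than missing data, which is why assuming strict preferences everywhere can distort the learned values.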