Scalable Robust Kidney Exchange
In barter exchanges, participants directly trade their endowed goods in a
constrained economic setting without money. Transactions in barter exchanges
are often facilitated via a central clearinghouse that must match participants
even in the face of uncertainty---over participants, existence and quality of
potential trades, and so on. Leveraging robust combinatorial optimization
techniques, we address uncertainty in kidney exchange, a real-world barter
market where patients swap (in)compatible paired donors. We provide two
scalable robust methods to handle two distinct types of uncertainty in kidney
exchange---over the quality and the existence of a potential match. The latter
case directly addresses a weakness in all stochastic-optimization-based methods
for the kidney exchange clearing problem, which necessarily require explicit
estimates of the probability of a transaction existing---a still-unsolved
problem in this nascent market. We also propose a novel, scalable kidney
exchange formulation that eliminates the need for an exponential-time
constraint generation process in competing formulations, maintains provable
optimality, and serves as a subsolver for our robust approach. For each type of
uncertainty we demonstrate the benefits of robustness on real data from a
large, fielded kidney exchange in the United States. We conclude by drawing
parallels between robustness and notions of fairness in the kidney exchange
setting.
Comment: Presented at AAAI1
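The clearing problem sketched in this abstract is usually posed over cycles of patient-donor pairs: a donor gives to the next pair's patient, and the cycle closes back to the first pair. As a toy illustration only (not the paper's scalable formulation, which avoids exponential constraint generation), the following sketch enumerates 2- and 3-cycles in a small compatibility digraph and brute-forces a vertex-disjoint packing that matches the most pairs; all names and the example graph are invented for illustration.

```python
from itertools import combinations

def find_cycles(compat, max_len=3):
    """Enumerate directed 2- and 3-cycles in a compatibility graph.

    compat: dict mapping a pair's id -> set of pair ids its donor can give to.
    Cycles are returned in a canonical rotation to avoid duplicates.
    """
    cycles = set()
    for a in compat:
        for b in compat[a]:
            if a in compat.get(b, set()) and a < b:
                cycles.add((a, b))                    # 2-cycle: a -> b -> a
            for c in compat.get(b, set()):
                if c != a and a in compat.get(c, set()):
                    cycles.add(min([(a, b, c), (b, c, a), (c, a, b)]))
    return sorted(cycles)

def best_packing(cycles):
    """Brute force: pick vertex-disjoint cycles covering the most pairs."""
    best, best_cov = [], 0
    for r in range(len(cycles) + 1):
        for combo in combinations(cycles, r):
            used = [v for cyc in combo for v in cyc]
            if len(used) == len(set(used)) and len(used) > best_cov:
                best, best_cov = list(combo), len(used)
    return best, best_cov

# Tiny invented instance: pair 1 and 2 can swap; 2 -> 3 -> 4 -> 2 is a 3-cycle.
compat = {1: {2}, 2: {1, 3}, 3: {4}, 4: {2}}
cycles = find_cycles(compat)
chosen, covered = best_packing(cycles)   # the 3-cycle beats the 2-cycle
```

Real clearinghouses solve this as an integer program because the number of cycles (and the packing search above) explodes combinatorially; the abstract's contribution is precisely a formulation that scales where this brute force cannot.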
GP-SUM. Gaussian Processes Filtering of non-Gaussian Beliefs
This work studies the problem of stochastic dynamic filtering and state
propagation with complex beliefs. The main contribution is GP-SUM, a filtering
algorithm tailored to dynamic systems and observation models expressed as
Gaussian Processes (GP), and to states represented as a weighted sum of
Gaussians. The key attribute of GP-SUM is that it does not rely on
linearizations of the dynamic or observation models, or on unimodal Gaussian
approximations of the belief, and hence enables tracking of complex state
distributions. The algorithm can be seen as a combination of a sampling-based
filter with a probabilistic Bayes filter. On the one hand, GP-SUM operates by
sampling the state distribution and propagating each sample through the dynamic
system and observation models. On the other hand, it achieves effective
sampling and accurate probabilistic propagation by relying on the GP form of
the system, and the sum-of-Gaussian form of the belief. We show that GP-SUM
outperforms several GP-Bayes and Particle Filters on a standard benchmark. We
also demonstrate its use in a pushing task, predicting with experimental
accuracy the naturally occurring non-Gaussian distributions.
Comment: WAFR 2018, 16 pages, 7 figures
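The core mechanic the abstract describes (sample the mixture belief, push each sample through the stochastic dynamics, and keep one Gaussian component per sample) can be sketched in a few lines. This is a minimal 1-D illustration, not the paper's GP machinery: the GP predictive distribution is stood in for by a known nonlinear map `f` with a fixed predictive standard deviation `pred_std`, and all function and variable names are invented.

```python
import math
import random

random.seed(0)

def propagate_belief(weights, means, stds, f, pred_std, n_samples=200):
    """Propagate a sum-of-Gaussians belief through a stochastic map.

    Each sample x_i drawn from the current mixture induces one Gaussian
    component N(f(x_i), pred_std**2) in the next-step belief, mirroring
    how a GP's predictive distribution at a sampled input is Gaussian.
    """
    new_means = []
    for _ in range(n_samples):
        k = random.choices(range(len(weights)), weights=weights)[0]
        x = random.gauss(means[k], stds[k])      # sample the current belief
        new_means.append(f(x))                   # mean of the induced component
    new_w = [1.0 / n_samples] * n_samples        # equal weights: prior sampling
    return new_w, new_means, [pred_std] * n_samples

# Bimodal prior belief pushed through a nonlinear dynamic model.
weights, means, stds = [0.5, 0.5], [-1.0, 1.0], [0.1, 0.1]
f = lambda x: math.sin(2.0 * x)
w2, m2, s2 = propagate_belief(weights, means, stds, f, pred_std=0.05)

# Mean of the propagated belief, as a weighted sum over components.
mean_next = sum(w * m for w, m in zip(w2, m2))
```

Because each sample contributes a full Gaussian rather than a point mass, the propagated belief stays a smooth mixture; this is the sense in which GP-SUM sits between a particle filter and a parametric Bayes filter.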
Unsupervised Learning of Visual Structure using Predictive Generative Networks
The ability to predict future states of the environment is a central pillar
of intelligence. At its core, effective prediction requires an internal model
of the world and an understanding of the rules by which the world changes.
Here, we explore the internal models developed by deep neural networks trained
using a loss based on predicting future frames in synthetic video sequences,
using a CNN-LSTM-deCNN framework. We first show that this architecture can
achieve excellent performance in visual sequence prediction tasks, including
state-of-the-art performance on the standard 'bouncing balls' dataset (Sutskever
et al., 2009). Using a weighted mean-squared error and adversarial loss
(Goodfellow et al., 2014), the same architecture successfully extrapolates
out-of-the-plane rotations of computer-generated faces. Furthermore, despite
being trained end-to-end to predict only pixel-level information, our
Predictive Generative Networks learn a representation of the latent structure
of the underlying three-dimensional objects themselves. Importantly, we find
that this representation is naturally tolerant to object transformations, and
generalizes well to new tasks, such as classification of static images. Similar
models trained solely with a reconstruction loss fail to generalize as
effectively. We argue that prediction can serve as a powerful unsupervised loss
for learning rich internal representations of high-level object features.
Comment: under review as a conference paper at ICLR 201
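The training objective the abstract names, a weighted mean-squared error combined with an adversarial term (Goodfellow et al., 2014), can be written down compactly. The sketch below is a scalar toy, not the paper's network code: the mixing coefficient `adv_weight`, the function names, and the example values are all assumptions made for illustration, and the adversarial term uses the standard `-log D(G(z))` generator loss.

```python
import math

def weighted_mse(pred, target, weights):
    """Per-element weighted mean-squared error over a flattened frame."""
    assert len(pred) == len(target) == len(weights)
    total = sum(w * (p - t) ** 2 for p, t, w in zip(pred, target, weights))
    return total / sum(weights)

def generator_loss(pred, target, weights, disc_score, adv_weight=0.1):
    """Reconstruction plus adversarial term; adv_weight is a made-up lambda."""
    adv = -math.log(max(disc_score, 1e-12))      # -log D(G(z)) generator term
    return weighted_mse(pred, target, weights) + adv_weight * adv

# Toy "frame" of three pixels, with the middle pixel weighted more heavily.
pred   = [0.2, 0.8, 0.5]
target = [0.0, 1.0, 0.5]
w      = [1.0, 2.0, 1.0]
loss = generator_loss(pred, target, w, disc_score=0.7)
```

The adversarial term penalizes frames the discriminator finds implausible, which is what pushes predictions away from the blurry averages that pure MSE training tends to produce.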