Auto-Encoding Sequential Monte Carlo
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model
and proposal learning based on maximizing a lower bound on the log marginal
likelihood in a broad family of structured probabilistic models. Our approach
relies on the efficiency of sequential Monte Carlo (SMC) for performing
inference in structured probabilistic models and on the flexibility of deep
neural networks for modeling complex conditional probability distributions. We
develop additional theoretical insights and introduce a new training procedure
that improves both model and proposal learning. We demonstrate that our
approach provides a fast, easy-to-implement, and scalable means for
simultaneous model learning and proposal adaptation in deep generative models.
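Below is a minimal sketch of the SMC evidence estimator at the core of this kind of objective: a bootstrap particle filter for a toy linear-Gaussian state-space model. The model, parameter values, and bootstrap proposal are illustrative assumptions, not the paper's architecture; AESMC itself would replace the proposal with a learned neural network and differentiate the log evidence estimate with respect to model and proposal parameters.

```python
import numpy as np
from scipy.special import logsumexp

def smc_log_evidence(ys, K=100, a=0.9, q=1.0, r=0.5, rng=None):
    """Bootstrap particle filter for the toy linear-Gaussian SSM
        x_t = a * x_{t-1} + N(0, q),    y_t = x_t + N(0, r).
    Returns log Z_hat = sum_t log(mean_k w_t^k). Z_hat is an unbiased
    estimate of p(y_{1:T}), so E[log Z_hat] <= log p(y_{1:T}) by
    Jensen's inequality: the lower bound the abstract refers to."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=K)                     # initial particles
    log_Z = 0.0
    for y in ys:
        x = a * x + rng.normal(0.0, np.sqrt(q), size=K)  # propose from prior
        # log observation likelihood N(y | x, r) per particle
        logw = -0.5 * ((y - x) ** 2 / r + np.log(2 * np.pi * r))
        log_Z += logsumexp(logw) - np.log(K)             # log of mean weight
        probs = np.exp(logw - logsumexp(logw))
        x = x[rng.choice(K, size=K, p=probs)]            # multinomial resampling
    return log_Z

ys = np.cumsum(np.random.default_rng(1).normal(size=20))  # toy observations
print(smc_log_evidence(ys))
```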
Bayesian Optimization for Probabilistic Programs
We present the first general purpose framework for marginal maximum a
posteriori estimation of probabilistic program variables. By using a series of
code transformations, the evidence of any probabilistic program, and therefore
of any graphical model, can be optimized with respect to an arbitrary subset of
its sampled variables. To carry out this optimization, we develop the first
Bayesian optimization package to directly exploit the source code of its
target, leading to innovations in problem-independent hyperpriors, unbounded
optimization, and implicit constraint satisfaction, delivering significant
performance improvements over prominent existing packages. We present
applications of our method to a number of tasks, including engineering design
and parameter optimization.
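As a rough illustration of the outer loop such a system runs, here is a generic Bayesian optimization sketch in Python: a GP surrogate with an expected-improvement acquisition maximizing a hypothetical `estimated_log_evidence(theta)` that stands in for the program's evidence estimate. Everything here (the toy objective, the Matern kernel, the grid-based acquisition maximization) is an assumption for illustration; the package's distinguishing features (source-derived hyperpriors, unbounded optimization, implicit constraint handling) are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def estimated_log_evidence(theta):
    """Hypothetical stand-in: an estimate of the program's log evidence
    with the optimized variable fixed to theta. A smooth toy function
    here; in practice this would be a marginal-likelihood estimator."""
    return -np.sin(3.0 * theta) - theta ** 2 + 0.7 * theta

def expected_improvement(gp, X, best):
    """Expected-improvement acquisition for maximization."""
    mu, sd = gp.predict(X, return_std=True)
    z = (mu - best) / np.maximum(sd, 1e-9)
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(0)
thetas = list(rng.uniform(-2.0, 2.0, size=3))     # initial design points
vals = [estimated_log_evidence(t) for t in thetas]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):                               # BO loop
    gp.fit(np.array(thetas)[:, None], np.array(vals))
    cand = np.linspace(-2.0, 2.0, 401)[:, None]   # candidate grid
    nxt = float(cand[np.argmax(expected_improvement(gp, cand, max(vals))), 0])
    thetas.append(nxt)
    vals.append(estimated_log_evidence(nxt))

# The evaluated theta with the highest estimated evidence is the
# marginal maximum a posteriori (MMAP) estimate.
print("MMAP estimate:", thetas[int(np.argmax(vals))])
```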
Involving your librarian in instruction: or ... how I learned to stop worrying and love my librarian
Presented at the 2015 FaCET Conference at UMKC, January 15, 2015.