Automatic Variational Inference in Stan
Variational inference is a scalable technique for approximate Bayesian
inference. Deriving variational inference algorithms requires tedious
model-specific calculations; this makes it difficult to automate. We propose an
automatic variational inference algorithm, automatic differentiation
variational inference (ADVI). The user only provides a Bayesian model and a
dataset; nothing else. We make no conjugacy assumptions and support a broad
class of models. The algorithm automatically determines an appropriate
variational family and optimizes the variational objective. We implement ADVI
in Stan (code available now), a probabilistic programming framework. We compare
ADVI to MCMC sampling across hierarchical generalized linear models,
nonconjugate matrix factorization, and a mixture model. We train the mixture
model on a quarter million images. With ADVI we can use variational inference
on any model we write in Stan.
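As a usage sketch (ours, not from the abstract): since ADVI ships with Stan, it can be driven from Python through CmdStanPy's variational() method; the toy model and data below are illustrative placeholders.

```python
# Minimal sketch (ours): calling Stan's ADVI from Python via CmdStanPy.
# Assumes cmdstanpy and a CmdStan toolchain are installed; the toy model
# and data are placeholders, not from the paper.
from cmdstanpy import CmdStanModel

stan_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ cauchy(0, 2.5);  // half-Cauchy via the lower bound
  y ~ normal(mu, sigma);
}
"""
with open("toy.stan", "w") as f:
    f.write(stan_code)

model = CmdStanModel(stan_file="toy.stan")
# ADVI with the mean-field Gaussian family; 'fullrank' is also available.
fit = model.variational(data={"N": 3, "y": [0.1, -0.4, 0.7]},
                        algorithm="meanfield")
print(fit.variational_params_dict["mu"])
```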
Automatic Differentiation Variational Inference
Probabilistic modeling is iterative. A scientist posits a simple model, fits
it to her data, refines it according to her analysis, and repeats. However,
fitting complex models to large data is a bottleneck in this process. Deriving
algorithms for new models can be both mathematically and computationally
challenging, which makes it difficult to efficiently cycle through the steps.
To this end, we develop automatic differentiation variational inference (ADVI).
Using our method, the scientist only provides a probabilistic model and a
dataset, nothing else. ADVI automatically derives an efficient variational
inference algorithm, freeing the scientist to refine and explore many models.
ADVI supports a broad class of models; no conjugacy assumptions are required. We
study ADVI across ten different models and apply it to a dataset with millions
of observations. ADVI is integrated into Stan, a probabilistic programming
system; it is available for immediate use.
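In outline, ADVI transforms the constrained latent variables to the unconstrained real coordinate space and maximizes a Gaussian evidence lower bound there by stochastic gradient ascent; a compact statement of the objective (notation ours):

```latex
% ADVI objective in the unconstrained space zeta = T(theta): expected log
% joint under q, plus the log-Jacobian of the inverse transform, plus the
% entropy of the Gaussian approximation.
\mathcal{L}(\mu, \sigma)
  = \mathbb{E}_{q(\zeta;\,\mu,\sigma)}\!\Bigl[
      \log p\bigl(x,\, T^{-1}(\zeta)\bigr)
      + \log \bigl|\det J_{T^{-1}}(\zeta)\bigr|
    \Bigr]
  + \mathbb{H}\bigl[q(\zeta;\,\mu,\sigma)\bigr]
```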
Linear State-Space Model with Time-Varying Dynamics
This paper introduces a linear state-space model with time-varying dynamics.
The time dependency is obtained by forming the state dynamics matrix as a
time-varying linear combination of a set of matrices. The time dependency of
the weights in the linear combination is modelled by another linear Gaussian
dynamical model, allowing the model to learn how the dynamics of the process
change. Previous approaches have used switching models, which maintain a small
set of possible state dynamics matrices and select one of them at each time
step, jumping abruptly between regimes. Our model instead forms the dynamics as
a linear combination, so the changes can be smooth and continuous. The model
is motivated by physical processes which are described by linear partial
differential equations whose parameters vary in time. An example of such a
process could be a temperature field whose evolution is driven by a varying
wind direction. Posterior inference is performed using a variational Bayesian
approximation. The experiments on stochastic advection-diffusion processes and
real-world weather processes show that the model with time-varying dynamics can
outperform previously introduced approaches.
Comment: The final publication is available at Springer via
http://dx.doi.org/10.1007/978-3-662-44851-9_2
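A compact statement of the construction described above (notation ours, a plausible reading of the abstract):

```latex
% State dynamics as a time-varying linear combination of K fixed matrices
% B_1..B_K; the combination weights w_t follow their own linear Gaussian
% dynamics, so the process dynamics can drift smoothly over time.
x_t = \Bigl(\textstyle\sum_{k=1}^{K} w_{k,t}\, B_k\Bigr) x_{t-1} + \epsilon_t,
  \qquad \epsilon_t \sim \mathcal{N}(0, Q)
\\
w_t = C\, w_{t-1} + \eta_t,
  \qquad \eta_t \sim \mathcal{N}(0, R)
```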
Bayesian Deep Net GLM and GLMM
Deep feedforward neural networks (DFNNs) are a powerful tool for function
approximation. We describe flexible versions of generalized linear and
generalized linear mixed models incorporating basis functions formed by a DFNN.
Neural networks with random effects are not widely used in the literature,
perhaps because of the computational challenge of incorporating
subject-specific parameters into already complex models.
Efficient computational methods for high-dimensional Bayesian inference are
developed using Gaussian variational approximation, with a parsimonious but
flexible factor parametrization of the covariance matrix. We implement natural
gradient methods for the optimization, exploiting the factor structure of the
variational covariance matrix in computation of the natural gradient. Our
flexible DFNN models and Bayesian inference approach lead to a regression and
classification method that has a high prediction accuracy, and is able to
quantify the prediction uncertainty in a principled and convenient way. We also
describe how to perform variable selection in our deep learning method. The
proposed methods are illustrated in a wide range of simulated and real-data
examples, and the results compare favourably to a state-of-the-art flexible
regression and classification method in the statistical literature, the
Bayesian additive regression trees (BART) method. User-friendly software
packages in Matlab, R and Python implementing the proposed methods are
available at https://github.com/VBayesLab
Comment: 35 pages, 7 figures, 10 tables
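A minimal NumPy sketch (ours, not the paper's code) of the parsimonious factor parametrization of the variational covariance, Sigma = B B^T + D^2, and the reparameterized sampling it enables:

```python
# Sketch (notation ours) of a factor-parametrized Gaussian variational
# family: Sigma = B B^T + D^2 with B a p x k loading matrix and D diagonal,
# so a reparameterized draw costs O(pk) rather than O(p^2).
import numpy as np

rng = np.random.default_rng(0)
p, k = 1000, 5                          # parameter dimension, factors
mu = np.zeros(p)                        # variational mean
B = 0.01 * rng.standard_normal((p, k))  # factor loadings
d = 0.1 * np.ones(p)                    # diagonal standard deviations

def sample_theta():
    """Reparameterized draw: theta = mu + B @ eps1 + d * eps2."""
    eps1 = rng.standard_normal(k)
    eps2 = rng.standard_normal(p)
    return mu + B @ eps1 + d * eps2

theta = sample_theta()                  # one draw from q(theta)
```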
Lightweight Probabilistic Deep Networks
Even though probabilistic treatments of neural networks have a long history,
they have not found widespread use in practice. Sampling approaches are often
too slow even for simple networks. The size of the inputs and the depth of
typical CNN architectures in computer vision only compound this problem.
Uncertainty in neural networks has thus been largely ignored in practice,
despite the fact that it may provide important information about the
reliability of predictions and the inner workings of the network. In this
paper, we introduce two lightweight approaches to making supervised learning
with probabilistic deep networks practical: First, we suggest probabilistic
output layers for classification and regression that require only minimal
changes to existing networks. Second, we employ assumed density filtering and
show that activation uncertainties can be propagated in a practical fashion
through the entire network, again with minor changes. Both probabilistic
networks retain the predictive power of the deterministic counterpart, but
yield uncertainties that correlate well with the empirical error induced by
their predictions. Moreover, the robustness to adversarial examples is
significantly increased.
Comment: To appear at CVPR 2018
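A small sketch (ours; the paper's exact scheme may differ in detail) of the assumed-density-filtering step: propagate the mean and variance of a Gaussian pre-activation through a ReLU in closed form rather than by sampling.

```python
# Sketch (ours) of one assumed density filtering step: closed-form moments
# of max(0, X) for Gaussian X, propagated elementwise through a ReLU.
import numpy as np
from scipy.stats import norm

def relu_adf(mean, var):
    """Return mean and variance of max(0, X) for X ~ N(mean, var)."""
    s = np.sqrt(var)
    alpha = mean / s
    out_mean = mean * norm.cdf(alpha) + s * norm.pdf(alpha)
    out_sq = (mean**2 + var) * norm.cdf(alpha) + mean * s * norm.pdf(alpha)
    return out_mean, np.maximum(out_sq - out_mean**2, 1e-12)

m, v = relu_adf(np.array([0.5, -1.0]), np.array([1.0, 0.25]))
```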
Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis
Factor analysis aims to determine latent factors, or traits, which summarize
a given data set. Inter-battery factor analysis extends this notion to multiple
views of the data. In this paper we show how a nonlinear, nonparametric version
of these models can be recovered through the Gaussian process latent variable
model. This gives us a flexible formalism for multi-view learning where the
latent variables can be used both for exploratory purposes and for learning
representations that enable efficient inference for ambiguous estimation tasks.
Learning is performed in a Bayesian manner through the formulation of a
variational compression scheme which gives a rigorous lower bound on the log
likelihood. Our Bayesian framework provides strong regularization during
training, allowing the structure of the latent space to be determined
efficiently and automatically. We demonstrate this by producing the first (to
our knowledge) published results of learning from dozens of views, even when
data is scarce. We further show experimental results on several different types
of multi-view data sets and for different kinds of tasks, including exploratory
data analysis, generation, ambiguity modelling through latent priors and
classification.
Comment: 49 pages including appendix
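Schematically (notation ours, a plausible reading of the abstract), the nonlinear inter-battery structure can be written as one GP-LVM mapping per view, with kernel relevance weights partitioning the latent space into shared and view-private factors:

```latex
% Each view v is generated by its own GP mapping of the latent coordinates
% z_n; ARD weights in the kernels k^{(v)} switch latent dimensions on or
% off per view, separating shared from private factors automatically.
y^{(v)}_n = f^{(v)}(z_n) + \epsilon^{(v)}_n,
  \qquad f^{(v)} \sim \mathcal{GP}\bigl(0,\, k^{(v)}\bigr),
  \qquad \epsilon^{(v)}_n \sim \mathcal{N}\bigl(0,\, \sigma_v^2 I\bigr)
```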