Bayesian Dark Knowledge
We consider the problem of Bayesian parameter estimation for deep neural
networks, which is important in problem settings where we may have little data,
and/or where we need accurate posterior predictive densities, e.g., for
applications involving bandits or active learning. One simple approach to this
is to use online Monte Carlo methods, such as SGLD (stochastic gradient
Langevin dynamics). Unfortunately, such methods must store many copies of the
parameters (which wastes memory) and make predictions with many versions of
the model (which wastes time).
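For concreteness, here is a minimal sketch of a single SGLD update, assuming a stochastic estimate of the log-posterior gradient built from a minibatch; the function and argument names (sgld_step, grad_log_prior, grad_log_lik_minibatch) are illustrative, not from the paper. The iterates are approximate posterior samples, and it is the need to keep many of them that causes the memory cost noted above.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_minibatch,
              n_total, n_batch, step_size, rng):
    """One SGLD update: a gradient step on an unbiased estimate of the
    log-posterior gradient, plus Gaussian noise with variance step_size."""
    # Rescaling the minibatch likelihood gradient by N/n gives an
    # unbiased estimate of the full-data likelihood gradient.
    grad = grad_log_prior(theta) + (n_total / n_batch) * grad_log_lik_minibatch(theta)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise
```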
We describe a method for "distilling" a Monte Carlo approximation to the
posterior predictive density into a more compact form, namely a single deep
neural network. We compare to two very recent approaches to Bayesian neural
networks, namely an approach based on expectation propagation [Hernandez-Lobato
and Adams, 2015] and an approach based on variational Bayes [Blundell et al.,
2015]. Our method performs better than both of these, is much simpler to
implement, and uses less computation at test time.
Comment: final version submitted to NIPS 2015
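A minimal sketch of the distillation idea for a classifier: average the softmax predictions of the sampled teacher networks to get a Monte Carlo estimate of the posterior predictive, then fit a single student to those soft targets with a cross-entropy loss. The linear-softmax student and the predict_proba interface below are illustrative stand-ins for the deep networks in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_step(W_student, theta_samples, predict_proba, x_batch, lr):
    """One distillation step: the averaged teacher predictive becomes the
    soft target for the student (here a linear-softmax model for brevity)."""
    # Monte Carlo posterior predictive: average over SGLD parameter samples.
    target = np.mean([predict_proba(t, x_batch) for t in theta_samples], axis=0)
    # Cross-entropy gradient of a softmax student trained on soft targets.
    p = softmax(x_batch @ W_student)
    grad = x_batch.T @ (p - target) / x_batch.shape[0]
    return W_student - lr * grad
```

Once trained, only the student is kept, so test-time prediction needs a single forward pass instead of one per posterior sample.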
Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks
Effective training of deep neural networks suffers from two main issues. The
first is that the parameter spaces of these models exhibit pathological
curvature. Recent methods address this problem by using adaptive
preconditioning for Stochastic Gradient Descent (SGD). These methods improve
convergence by adapting to the local geometry of parameter space. A second
issue is overfitting, which is typically addressed by early stopping. However,
recent work has demonstrated that Bayesian model averaging mitigates this
problem. The posterior can be sampled by using Stochastic Gradient Langevin
Dynamics (SGLD). However, the rapidly changing curvature renders default SGLD
methods inefficient. Here, we propose combining adaptive preconditioners with
SGLD. In support of this idea, we establish theoretical results on asymptotic
convergence and predictive risk. We also provide empirical results for Logistic
Regression, Feedforward Neural Nets, and Convolutional Neural Nets,
demonstrating that our preconditioned SGLD method gives state-of-the-art
performance on these models.
Comment: AAAI 2016
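A minimal sketch of one preconditioned SGLD update with an RMSprop-style diagonal preconditioner, in the spirit of the method described above; the names (psgld_step, grad_logpost_est) and default constants are illustrative assumptions, and the small curvature-correction term is dropped, as is common in practice.

```python
import numpy as np

def psgld_step(theta, v, grad_logpost_est, step_size,
               alpha=0.99, lam=1e-5, rng=None):
    """One pSGLD update: an RMSprop-style accumulator v adapts a diagonal
    preconditioner to the local geometry, and the injected Langevin noise
    is rescaled by the same preconditioner."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_logpost_est(theta)  # stochastic log-posterior gradient estimate
    v = alpha * v + (1.0 - alpha) * g * g
    precond = 1.0 / (lam + np.sqrt(v))  # diagonal preconditioner G(theta)
    # Noise covariance step_size * G, matching the preconditioned drift.
    noise = rng.normal(size=theta.shape) * np.sqrt(step_size * precond)
    theta = theta + 0.5 * step_size * precond * g + noise
    return theta, v
```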