278 research outputs found
A study of high output two-stroke diesel engines - scavenging, supercharging and compounding
SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:DX170889, United Kingdom.
Self-Adversarially Learned Bayesian Sampling
Scalable Bayesian sampling plays an important role in modern machine
learning, especially in rapidly developing unsupervised deep-learning models.
While tremendous progress has been made with scalable Bayesian samplers
such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient
descent (SVGD), the generated samples are typically highly correlated.
Moreover, their sample-generation processes are often criticized as
inefficient. In this paper, we propose a novel self-adversarial learning
framework that automatically learns a conditional generator to mimic the
behavior of a Markov kernel (transition kernel). High-quality samples can be
efficiently generated by direct forward passes through a learned generator. Most
importantly, the learning process adopts a self-learning paradigm, requiring no
information on existing Markov kernels, e.g., knowledge of how to draw samples
from them. Specifically, our framework learns to use current samples, either
from the generator or from provided training data, to update the generator so
that the generated samples progressively approach the target distribution;
hence the term self-learning. Experiments on both synthetic and real datasets
verify the advantages of our framework, which outperforms related methods in
both sampling efficiency and sample quality.
Comment: AAAI 201
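As an illustration of the amortization idea described above, the sketch below trains a small generator to mimic one step of a Langevin (Markov) kernel applied to its own samples, so that repeated updates push generated samples toward the target. This is only a minimal sketch under assumptions: a toy Gaussian log_prob stands in for the posterior, an L2 regression objective replaces the paper's adversarial objective, and the network and step sizes are arbitrary.

```python
# Minimal sketch (not the paper's exact objective): amortize a Markov kernel by
# training a generator to reproduce the effect of one Langevin transition on its
# own samples. log_prob, the network sizes, and step sizes are assumptions.
import torch
import torch.nn as nn

def log_prob(x):
    # Toy target: standard 2-D Gaussian (stand-in for an unnormalized posterior).
    return -0.5 * (x ** 2).sum(dim=1)

gen = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
eps = 1e-2  # Langevin step size

for it in range(2000):
    z = torch.randn(128, 2)
    x = gen(z)                                   # samples via a direct forward pass
    # One Langevin (Markov-kernel) step applied to the generator's own samples.
    x_cur = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_prob(x_cur).sum(), x_cur)[0]
    x_next = (x_cur + 0.5 * eps * grad
              + torch.randn_like(x_cur) * eps ** 0.5).detach()
    # Self-learning update: pull the generator toward the improved samples, so
    # generated samples progressively approach the target distribution.
    loss = ((x - x_next) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, new samples are obtained cheaply by drawing z and taking a single forward pass through the generator, with no further Markov-chain simulation.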
High-Order Stochastic Gradient Thermostats for Bayesian Learning of Deep Models
Learning in deep models using Bayesian methods has attracted significant
attention recently, largely because modern Bayesian methods yield scalable
learning and inference while maintaining a
measure of uncertainty in the model parameters. Stochastic gradient MCMC
algorithms (SG-MCMC) are a family of diffusion-based sampling methods for
large-scale Bayesian learning. In SG-MCMC, multivariate stochastic gradient
Nosé-Hoover thermostats (mSGNHT) augment each parameter of interest with a
momentum and a thermostat variable to maintain stationary distributions as the target posterior
distributions. As the number of variables in a continuous-time diffusion
increases, its numerical approximation error becomes a practical bottleneck, so
a more accurate numerical integrator is desirable. To this end, we propose the
use of an efficient symmetric splitting integrator in mSGNHT in place of the
traditional Euler integrator. We demonstrate that the proposed scheme is more
accurate and robust, and that it converges faster. These properties are particularly
desirable in Bayesian deep learning. Extensive experiments on two canonical
models and their deep extensions demonstrate that the proposed scheme improves
general Bayesian posterior sampling, particularly for deep models.
Comment: AAAI 201
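To make the integrator comparison concrete, here is a hedged sketch of one mSGNHT update under the standard Euler discretization and under an ABOBA-style symmetric splitting. The names grad_log_post, the step size h, and the diffusion constant A are assumptions, and the exact sub-step ordering in the paper may differ from this common choice.

```python
# Sketch of one (m)SGNHT step: plain Euler versus a symmetric ABOBA-style splitting.
# theta, p, xi are NumPy arrays of the same shape (parameters, momenta, thermostats).
import numpy as np

def euler_step(theta, p, xi, grad_log_post, h, A):
    # All variables advanced with a single first-order Euler step.
    p = p + h * (grad_log_post(theta) - xi * p) \
        + np.sqrt(2 * A * h) * np.random.randn(*p.shape)
    theta = theta + h * p
    xi = xi + h * (p * p - 1.0)
    return theta, p, xi

def symmetric_splitting_step(theta, p, xi, grad_log_post, h, A):
    # A: half-step for theta and the thermostat variable xi
    theta = theta + 0.5 * h * p
    xi = xi + 0.5 * h * (p * p - 1.0)
    # B: half-step of the friction term, solved exactly
    p = np.exp(-0.5 * h * xi) * p
    # O: full step with the stochastic gradient and injected noise
    p = p + h * grad_log_post(theta) + np.sqrt(2 * A * h) * np.random.randn(*p.shape)
    # B: second half-step of friction
    p = np.exp(-0.5 * h * xi) * p
    # A: second half-step for theta and xi
    theta = theta + 0.5 * h * p
    xi = xi + 0.5 * h * (p * p - 1.0)
    return theta, p, xi
```

The symmetric arrangement gives a higher-order local error than the one-sided Euler update at essentially the same cost per step, which is the source of the accuracy and robustness gains the abstract reports.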
Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks
Effective training of deep neural networks suffers from two main issues. The
first is that the parameter spaces of these models exhibit pathological
curvature. Recent methods address this problem by using adaptive
preconditioning for Stochastic Gradient Descent (SGD). These methods improve
convergence by adapting to the local geometry of parameter space. A second
issue is overfitting, which is typically addressed by early stopping. However,
recent work has demonstrated that Bayesian model averaging mitigates this
problem. The posterior can be sampled by using Stochastic Gradient Langevin
Dynamics (SGLD). However, the rapidly changing curvature renders default SGLD
methods inefficient. Here, we propose combining adaptive preconditioners with
SGLD. In support of this idea, we give theoretical properties on asymptotic
convergence and predictive risk. We also provide empirical results for Logistic
Regression, Feedforward Neural Nets, and Convolutional Neural Nets,
demonstrating that our preconditioned SGLD method gives state-of-the-art
performance on these models.
Comment: AAAI 201
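A minimal sketch of the preconditioned SGLD update described above, using an RMSprop-style diagonal preconditioner. The stochastic-gradient function and hyperparameters are assumptions, and the small curvature-correction term Gamma(theta) from the full algorithm is omitted, as is common in practice when the preconditioner changes slowly.

```python
# Minimal pSGLD sketch: adapt a diagonal preconditioner from the stochastic
# gradient, then take a Langevin step whose drift and noise are both scaled by it.
import numpy as np

def psgld_step(theta, v, grad_log_post, eps=1e-3, alpha=0.99, lam=1e-5):
    g = grad_log_post(theta)                   # stochastic gradient of the log posterior
    v = alpha * v + (1.0 - alpha) * g * g      # RMSprop-style second-moment estimate
    precond = 1.0 / (lam + np.sqrt(v))         # diagonal preconditioner G(theta)
    noise = np.sqrt(eps * precond) * np.random.randn(*theta.shape)
    theta = theta + 0.5 * eps * precond * g + noise
    return theta, v
```

Because the preconditioner flattens directions of pathological curvature, the same step size can be used across parameters, which is what lets the sampler remain efficient where default SGLD stalls.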
- …