Bayesian Dropout
Dropout has recently emerged as a powerful and simple method for training
neural networks, preventing co-adaptation by stochastically omitting neurons.
Dropout is currently not grounded in explicit modelling assumptions, which has
so far precluded its adoption in Bayesian modelling. Using Bayesian entropic
reasoning we show that dropout can be interpreted as optimal inference under
constraints. We demonstrate this on an analytically tractable regression model
providing a Bayesian interpretation of its mechanism for regularizing and
preventing co-adaptation as well as its connection to other Bayesian
techniques. We also discuss two approximate techniques for applying Bayesian
dropout to general models, one based on an analytical approximation and the
other on stochastic variational techniques. These techniques are then applied
to a Bayesian logistic regression problem and are shown to improve performance
as the model becomes more misspecified. Our framework establishes dropout as a
theoretically justified and practical tool for statistical modelling, allowing
Bayesians to tap into the benefits of dropout training.
Comment: 21 pages, 3 figures. Manuscript prepared 2014 and awaiting submission.
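To make the mechanism concrete, here is a minimal NumPy sketch of dropout training, with units stochastically omitted and a Monte Carlo average over masks standing in for the posterior predictive averaging of the Bayesian reading; the layer shape, the rate p, and the helper name dropout_forward are illustrative, not from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_forward(x, W, p=0.5, train=True):
        # One dense ReLU layer with inverted dropout: during training each
        # unit is omitted with probability p and the survivors are rescaled
        # so the expected activation is unchanged.
        h = np.maximum(x @ W, 0.0)
        if train:
            mask = rng.binomial(1, 1.0 - p, size=h.shape)
            return h * mask / (1.0 - p)
        return h

    # Averaging over many dropout masks plays the role of posterior
    # predictive averaging in the Bayesian interpretation.
    x = rng.normal(size=(1, 8))
    W = rng.normal(size=(8, 4))
    print(np.mean([dropout_forward(x, W) for _ in range(100)], axis=0))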
Gradient descent learning in and out of equilibrium
Relations between the out-of-equilibrium dynamical process of on-line
learning and thermally equilibrated off-line learning are studied for
potential-based gradient descent learning. Opper's approach to studying
on-line Bayesian algorithms is extended to potential-based or maximum-likelihood
learning. We look at the on-line learning algorithm that best approximates the
off-line algorithm in the sense of least Kullback-Leibler information loss. It
works by updating the weights along the gradient of an effective potential
different from the parent off-line potential. The interpretation of this
off-equilibrium dynamics bears some similarity to the cavity approach of
Griniasty. We are able to analyze networks with non-smooth transfer functions
and transfer the smoothness requirement to the potential.
Comment: 8 pages, submitted to the Journal of Physics
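As a sketch of the baseline dynamics the abstract studies, the toy example below runs plain on-line potential-gradient learning in a teacher-student setting: the weights are updated along the gradient of a per-example potential (squared error here). The setup and all names are illustrative; the paper's effective potential, which differs from the parent off-line potential, is not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    # Teacher-student regression with per-example potential
    # V(w; x, t) = (w . x - t)^2 / 2.
    w_teacher = np.array([1.0, -2.0])
    X = rng.normal(size=(500, 2))
    y = X @ w_teacher + 0.1 * rng.normal(size=500)

    def potential_grad(w, x, t):
        # Gradient of the per-example potential with respect to w.
        return (x @ w - t) * x

    w = np.zeros(2)
    eta = 0.05
    for x, t in zip(X, y):  # a single pass: out-of-equilibrium dynamics
        w -= eta * potential_grad(w, x, t)
    print(w)  # drifts toward the teacher weights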
A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition
This study introduces PV-RNN, a novel variational RNN inspired by
predictive-coding ideas. The model learns to extract the probabilistic
structures hidden in fluctuating temporal patterns by dynamically changing the
stochasticity of its latent states. Its architecture attempts to address two
major concerns of variational Bayes RNNs: how can latent variables learn
meaningful representations and how can the inference model transfer future
observations to the latent variables. PV-RNN does both by introducing adaptive
vectors mirroring the training data, whose values can then be adapted
differently during evaluation. Moreover, prediction errors during
backpropagation, rather than external inputs during the forward computation,
are used to convey information to the network about the external data. For
testing, we introduce error regression, a procedure inspired by predictive
coding that leverages those mechanisms to predict unseen sequences. The model
introduces a weighting parameter, the meta-prior, to balance the optimization
pressure placed on two terms of a lower bound on the marginal likelihood of the
sequential data. We test the model on two datasets with probabilistic
structures and show that with high values of the meta-prior the network
develops deterministic chaos through which the data's randomness is imitated.
For low values, the model behaves as a random process. The network performs
best on intermediate values, and is able to capture the latent probabilistic
structure with good generalization. Analyzing the meta-prior's impact on the
network allows us to study precisely the theoretical value and practical benefits
of incorporating stochastic dynamics in our model. We demonstrate better
prediction performance on a robot imitation task with our model using error
regression compared to a standard variational Bayes model lacking such a
procedure.
Comment: The paper is accepted in Neural Computation.
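The meta-prior's role can be made concrete: it weights the Kullback-Leibler term of the lower bound against the reconstruction term. A minimal NumPy sketch under a diagonal-Gaussian assumption follows; the function names and signatures are illustrative, not the paper's code.

    import numpy as np

    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        # KL(q || p) between diagonal Gaussians, summed over latent dimensions.
        return 0.5 * np.sum(
            logvar_p - logvar_q
            + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
            - 1.0
        )

    def weighted_neg_elbo(recon_error, mu_q, logvar_q, mu_p, logvar_p, meta_prior):
        # Negative lower bound with the meta-prior weighting the KL term:
        # high values push the posterior onto the prior (deterministic-like
        # dynamics), low values let the latents absorb the data's randomness.
        return recon_error + meta_prior * gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)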
VIME: Variational Information Maximizing Exploration
Scalable and effective exploration remains a key challenge in reinforcement
learning (RL). While there are methods with optimality guarantees in the
setting of discrete state and action spaces, these methods cannot be applied in
high-dimensional deep RL scenarios. As such, most contemporary RL relies on
simple heuristics such as epsilon-greedy exploration or adding Gaussian noise
to the controls. This paper introduces Variational Information Maximizing
Exploration (VIME), an exploration strategy based on maximization of
information gain about the agent's belief of environment dynamics. We propose a
practical implementation using variational inference in Bayesian neural
networks, which efficiently handles continuous state and action spaces. VIME
modifies the MDP reward function, and can be applied with several different
underlying RL algorithms. We demonstrate that VIME achieves significantly
better performance compared to heuristic exploration methods across a variety
of continuous control tasks and algorithms, including tasks with very sparse
rewards.
Comment: Published in Advances in Neural Information Processing Systems 29 (NIPS), pages 1109-111
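A minimal sketch of the reward modification: the external reward is augmented with the information gain about the dynamics model, here taken as the KL divergence between successive diagonal-Gaussian variational posteriors over the Bayesian neural network's weights. The names and the scale eta are illustrative assumptions.

    import numpy as np

    def diag_gaussian_kl(mu_new, var_new, mu_old, var_old):
        # KL(q_new || q_old) between diagonal Gaussian posteriors over the
        # weights of the Bayesian dynamics model.
        return 0.5 * np.sum(
            np.log(var_old / var_new)
            + (var_new + (mu_new - mu_old) ** 2) / var_old
            - 1.0
        )

    def vime_reward(r_ext, mu_new, var_new, mu_old, var_old, eta=1e-3):
        # Augment the external reward with the per-step information gain:
        # how much the observed transition changed the agent's belief
        # about the environment dynamics.
        return r_ext + eta * diag_gaussian_kl(mu_new, var_new, mu_old, var_old)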
Emergence of Invariance and Disentanglement in Deep Representations
Using established principles from Statistics and Information Theory, we show
that invariance to nuisance factors in a deep neural network is equivalent to
information minimality of the learned representation, and that stacking layers
and injecting noise during training naturally bias the network towards learning
invariant representations. We then decompose the cross-entropy loss used during
training and highlight the presence of an inherent overfitting term. We propose
regularizing the loss by bounding such a term in two equivalent ways: one with
a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other
using the information in the weights as a measure of complexity of a learned
model, yielding a novel Information Bottleneck for the weights. Finally, we
show that invariance and independence of the components of the representation
learned by the network are bounded above and below by the information in the
weights, and therefore are implicitly optimized during training. The theory
enables us to quantify and predict sharp phase transitions between underfitting
and overfitting of random labels when using our regularized loss, which we
verify in experiments, and sheds light on the relation between the geometry of
the loss function, invariance properties of the learned representation, and
generalization error.
Comment: Deep learning, neural network, representation, flat minima,
information bottleneck, overfitting, generalization, sufficiency, minimality,
sensitivity, information complexity, stochastic gradient descent,
regularization, total correlation, PAC-Bayes
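The weight-based regularizer can be sketched as adding to the cross-entropy a term that bounds the information in the weights, e.g. the KL divergence between a diagonal-Gaussian posterior over weights and a zero-mean Gaussian prior. A minimal NumPy sketch follows; the names, the Gaussian forms, and beta are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def weight_information(mu_w, logvar_w, prior_var=1.0):
        # Information in the weights: KL between a Gaussian posterior
        # q(w) = N(mu_w, exp(logvar_w)) and a zero-mean Gaussian prior.
        var_w = np.exp(logvar_w)
        return 0.5 * np.sum(
            np.log(prior_var / var_w) + (var_w + mu_w ** 2) / prior_var - 1.0
        )

    def regularized_loss(cross_entropy, mu_w, logvar_w, beta=1e-3):
        # Cross-entropy plus beta times the information in the weights,
        # bounding the overfitting term identified in the decomposition.
        return cross_entropy + beta * weight_information(mu_w, logvar_w)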