Kernel Exponential Family Estimation via Doubly Dual Embedding
We investigate penalized maximum log-likelihood estimation for exponential
family distributions whose natural parameter resides in a reproducing kernel
Hilbert space. Key to our approach is a novel technique, doubly dual embedding,
that avoids computation of the partition function. This technique also allows
the development of a flexible sampling strategy that amortizes the cost of
Monte-Carlo sampling in the inference stage. The resulting estimator can be
easily generalized to kernel conditional exponential families. We establish a
connection between kernel exponential family estimation and MMD-GANs, revealing
a new perspective for understanding GANs. Compared to score-matching-based
estimators, the proposed method improves both memory and time efficiency while
enjoying stronger statistical properties: it fully captures smoothness in its
statistical convergence rate, whereas the score matching estimator appears to
saturate. Finally, we show that the proposed estimator empirically outperforms
the state of the art.
Comment: 22 pages, 20 figures; AISTATS 2019
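For reference, the model class in question admits a compact statement; the following is a standard formulation of penalized kernel-exponential-family maximum likelihood (the base density q_0 and the squared RKHS-norm penalty are assumed here, not quoted from the abstract):

\[
p_f(x) = q_0(x)\,\exp\bigl(f(x) - A(f)\bigr),
\qquad
A(f) = \log \int q_0(x)\,\exp\bigl(f(x)\bigr)\,dx,
\qquad
f \in \mathcal{H}_k,
\]
\[
\hat{f} = \arg\max_{f \in \mathcal{H}_k}\ \frac{1}{n}\sum_{i=1}^{n} f(x_i) \;-\; A(f) \;-\; \frac{\lambda}{2}\,\|f\|_{\mathcal{H}_k}^{2},
\]

where \mathcal{H}_k is the RKHS with kernel k. The log-partition function A(f) is the intractable quantity whose direct computation the doubly dual embedding technique is designed to avoid.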
Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server
This paper makes two contributions to Bayesian machine learning algorithms.
Firstly, we propose stochastic natural gradient expectation propagation (SNEP),
a novel alternative to expectation propagation (EP), a popular variational
inference algorithm. SNEP is a black box variational algorithm, in that it does
not require any simplifying assumptions on the distribution of interest, beyond
the existence of some Monte Carlo sampler for estimating the moments of the EP
tilted distributions. Further, as opposed to EP which has no guarantee of
convergence, SNEP can be shown to be convergent, even when using Monte Carlo
moment estimates. Secondly, we propose a novel architecture for distributed
Bayesian learning which we call the posterior server. The posterior server
allows scalable and robust Bayesian learning in cases where a data set is
stored in a distributed manner across a cluster, with each compute node
containing a disjoint subset of the data. An independent Monte Carlo sampler runs
on each compute node, with direct access only to the local data subset, yet it
targets an approximation to the global posterior distribution given all the data
across the whole cluster. This is achieved by using a distributed, asynchronous
implementation of SNEP to pass messages across the cluster. We
demonstrate SNEP and the posterior server on distributed Bayesian learning of
logistic regression and neural networks.
Keywords: Distributed Learning, Large Scale Learning, Deep Learning, Bayesian
Learning, Variational Inference, Expectation Propagation, Stochastic
Approximation, Natural Gradient, Markov chain Monte Carlo, Parameter Server,
Posterior Server.
Comment: 37 pages, 7 figures
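In standard EP notation (a minimal sketch; the symbols are shorthand and the exact SNEP update is given in the paper), with disjoint data shards \mathcal{D}_1, \ldots, \mathcal{D}_K held on K compute nodes, the global posterior factorizes as

\[
p(\theta \mid \mathcal{D}) \;\propto\; p_0(\theta) \prod_{i=1}^{K} p(\mathcal{D}_i \mid \theta).
\]

Each node i maintains an exponential-family site approximation \lambda_i(\theta) to its likelihood factor, the posterior server holds the global approximation q(\theta) \propto p_0(\theta) \prod_{i} \lambda_i(\theta), and the sampler on node i targets the tilted distribution

\[
\tilde{p}_i(\theta) \;\propto\; \frac{q(\theta)}{\lambda_i(\theta)}\; p(\mathcal{D}_i \mid \theta),
\]

whose Monte Carlo moment estimates drive the natural-gradient update of \lambda_i.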
Invariance of Weight Distributions in Rectified MLPs
An interesting approach to analyzing neural networks that has received
renewed attention is to examine the equivalent kernel of the neural network.
This is based on the fact that a fully connected feedforward network with one
hidden layer, a certain weight distribution, an activation function, and an
infinite number of neurons can be viewed as a mapping into a Hilbert space. We
derive the equivalent kernels of MLPs with ReLU or Leaky ReLU activations for
all rotationally-invariant weight distributions, generalizing a previous result
that required Gaussian weight distributions. Additionally, the Central Limit
Theorem is used to show that for certain activation functions, kernels
corresponding to layers with weight distributions having zero mean and finite
absolute third moment are asymptotically universal, and are well approximated
by the kernel corresponding to layers with spherical Gaussian weights. In deep
networks, as depth increases the equivalent kernel approaches a pathological
fixed point, which can be used to argue why training randomly initialized
networks can be difficult. Our results also have implications for weight
initialization.
Comment: ICML 2018
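As a concrete illustration (not taken from the paper; the function names below are hypothetical), the equivalent kernel of a single ReLU layer with iid standard Gaussian weights is, up to normalization, the degree-1 arc-cosine kernel, and a finite random layer approximates it increasingly well as its width grows:

import numpy as np

def arccos_kernel_relu(x, y):
    """Closed-form equivalent kernel E_w[relu(w.x) * relu(w.y)] for w ~ N(0, I):
    the degree-1 arc-cosine kernel of Cho & Saul, up to a factor of 2."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def mc_kernel_relu(x, y, n_hidden=200_000, seed=0):
    """Monte Carlo estimate of the same kernel from one finite random ReLU layer."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_hidden, x.shape[0]))  # iid standard Gaussian weights
    return np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

x = np.array([1.0, 0.5, -0.3])
y = np.array([0.2, -1.0, 0.7])
print(arccos_kernel_relu(x, y), mc_kernel_relu(x, y))  # the two values should roughly agree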