Exponential convergence of testing error for stochastic gradient methods
We consider binary classification problems with positive definite kernels and
square loss, and study the convergence rates of stochastic gradient methods. We
show that while the excess testing loss (squared loss) converges slowly to zero
as the number of observations (and thus iterations) goes to infinity, the
testing error (classification error) converges exponentially fast if low-noise
conditions are assumed.
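The phenomenon above can be illustrated in a toy linear special case (a linear kernel, noiseless "low-noise" labels). This is only a minimal sketch of averaged SGD on the squared loss, not the paper's kernel experiments; all names and constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-noise binary classification: labels in {-1, +1} are determined
# exactly by the sign of a linear score (a linear-kernel special case).
n, d = 2000, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

# Averaged SGD on the squared loss (y - <w, x>)^2 / 2, one pass over the data.
w = np.zeros(d)
w_avg = np.zeros(d)
step = 0.05
for t in range(n):
    x_t, y_t = X[t], y[t]
    grad = (x_t @ w - y_t) * x_t          # gradient of the squared loss
    w -= step * grad
    w_avg += (w - w_avg) / (t + 1)        # Polyak-Ruppert averaging

# The squared loss stays bounded away from zero (the best linear fit to
# sign labels has nonzero residual), while the 0-1 classification error
# is already essentially zero under this low-noise condition.
sq_loss = np.mean((X @ w_avg - y) ** 2)
err = np.mean(np.sign(X @ w_avg) != y)
print(sq_loss, err)
```

The gap between the two quantities is the point: the predictor's sign stabilizes long before its squared-loss value does.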
A continuous-time analysis of distributed stochastic gradient
We analyze the effect of synchronization on distributed stochastic gradient
algorithms. By exploiting an analogy with dynamical models of biological quorum
sensing -- where synchronization between agents is induced through
communication with a common signal -- we quantify how synchronization can
significantly reduce the magnitude of the noise felt by the individual
distributed agents and by their spatial mean. This noise reduction is in turn
associated with a reduction in the smoothing of the loss function imposed by
the stochastic gradient approximation. Through simulations on model non-convex
objectives, we demonstrate that coupling can stabilize higher noise levels and
improve convergence. We provide a convergence analysis for strongly convex
functions by deriving a bound on the expected deviation of the spatial mean of
the agents from the global minimizer for an algorithm based on quorum sensing,
the same algorithm with momentum, and the Elastic Averaging SGD (EASGD)
algorithm. We discuss extensions to new algorithms which allow each agent to
broadcast its current measure of success and shape the collective computation
accordingly. We supplement our theoretical analysis with numerical experiments
on convolutional neural networks trained on the CIFAR-10 dataset, where we note
a surprising regularizing property of EASGD even when applied to the
non-distributed case. This observation suggests alternative second-order
in-time algorithms for non-distributed optimization that are competitive with
momentum methods.

Comment: 9/14/19: Final version, accepted for publication in Neural
Computation. 4/7/19: Significant edits: addition of simulations, deep
network results, and revisions throughout. 12/28/18: Initial submission.
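The elastic coupling discussed in this abstract can be sketched on a toy strongly convex quadratic with additive gradient noise. This is an illustrative EASGD-style update, not the paper's experiments; the step size, coupling strength, and noise level are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal EASGD-style sketch on f(x) = ||x||^2 / 2, with Gaussian noise
# standing in for stochastic gradients. Each agent is elastically pulled
# toward a shared center variable, which in turn tracks the agents.
p, d = 8, 3                        # number of agents, dimension
eta, rho, sigma = 0.1, 0.5, 1.0    # step size, coupling, noise level
x = rng.normal(size=(p, d))        # agent iterates
center = np.zeros(d)               # shared "elastic" variable

for _ in range(500):
    noise = sigma * rng.normal(size=(p, d))
    grad = x + noise                     # noisy gradient of ||x||^2 / 2
    elastic = rho * (x - center)         # attraction toward the center
    x = x - eta * (grad + elastic)
    center = center + eta * rho * (x - center).sum(axis=0)

# The spatial mean of the agents feels reduced noise and hovers near the
# global minimizer (the origin).
mean_dist = np.linalg.norm(x.mean(axis=0))
print(mean_dist)
```

Averaging over coupled agents attenuates the per-agent noise, which is the mechanism the quorum-sensing analogy makes precise.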
Kernel Exponential Family Estimation via Doubly Dual Embedding
We investigate penalized maximum log-likelihood estimation for exponential
family distributions whose natural parameter resides in a reproducing kernel
Hilbert space. Key to our approach is a novel technique, doubly dual embedding,
that avoids computation of the partition function. This technique also allows
the development of a flexible sampling strategy that amortizes the cost of
Monte-Carlo sampling in the inference stage. The resulting estimator can be
easily generalized to kernel conditional exponential families. We establish a
connection between kernel exponential family estimation and MMD-GANs, revealing
a new perspective for understanding GANs. Compared to score-matching-based
estimators, the proposed method improves both memory and time efficiency while
enjoying stronger statistical properties, such as fully capturing smoothness in
its statistical convergence rate while the score matching estimator appears to
saturate. Finally, we show that the proposed estimator empirically outperforms
the state of the art.

Comment: 22 pages, 20 figures; AISTATS 201
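The model class in this abstract can be sketched directly: a density p(x) proportional to q0(x) exp(f(x)) with f in an RKHS. The snippet below is only an illustration of the model and of the partition-function cost (estimated here by naive Monte Carlo from the base measure, precisely the computation the doubly dual embedding is designed to avoid); all names and coefficients are mine:

```python
import numpy as np

rng = np.random.default_rng(2)

# Kernel exponential family sketch in 1D: p(x) ∝ q0(x) exp(f(x)), where
# f(x) = sum_i alpha_i k(x, c_i) lives in the RKHS of an RBF kernel k.

def rbf(x, centers, bw=1.0):
    # Gaussian (RBF) kernel evaluations k(x, c) for each center c.
    return np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2 / bw ** 2)

centers = np.array([-1.0, 0.0, 1.0])
alpha = np.array([0.2, 0.5, 0.2])      # RKHS coefficients of f

def f(x):
    return rbf(x, centers) @ alpha

# Base measure q0 = standard normal. The partition function
# Z = E_{q0}[exp(f(X))] is intractable in general; here we estimate it
# by plain Monte Carlo, the expensive step the paper avoids.
samples = rng.normal(size=20000)
Z = np.mean(np.exp(f(samples)))

def log_density(x):
    return f(x) - np.log(Z) - 0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

print(Z)
```

Since f is nonnegative and bounded here, Z lies strictly above 1; in higher dimensions this Monte Carlo estimate degrades quickly, which motivates avoiding it altogether.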
High-Order Stochastic Gradient Thermostats for Bayesian Learning of Deep Models
Learning in deep models using Bayesian methods has generated significant
attention recently. This is largely because of the feasibility of modern
Bayesian methods to yield scalable learning and inference, while maintaining a
measure of uncertainty in the model parameters. Stochastic gradient MCMC
algorithms (SG-MCMC) are a family of diffusion-based sampling methods for
large-scale Bayesian learning. In SG-MCMC, multivariate stochastic gradient
thermostats (mSGNHT) augment each parameter of interest with a momentum and a
thermostat variable, so that the stationary distribution matches the target
posterior. As the number of variables in a continuous-time diffusion
increases, its numerical approximation error becomes a practical bottleneck, so
better use of a numerical integrator is desirable. To this end, we propose use
of an efficient symmetric splitting integrator in mSGNHT, instead of the
traditional Euler integrator. We demonstrate that the proposed scheme is more
accurate, robust, and converges faster. These properties are demonstrated to be
desirable in Bayesian deep learning. Extensive experiments on two canonical
models and their deep extensions demonstrate that the proposed scheme improves
general Bayesian posterior sampling, particularly for deep models.

Comment: AAAI 201
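A symmetric splitting step of the kind this abstract advocates can be sketched on a one-dimensional Gaussian target. This is an illustrative ABOBA-style splitting of the SGNHT dynamics (position/thermostat, friction, then gradient-plus-noise, mirrored), not necessarily the paper's exact integrator; step size and potential are my choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# SGNHT with a symmetric splitting, sampling from U(theta) = theta^2 / 2
# (target: standard normal). The thermostat variable xi adapts the
# friction so that the momentum stays at unit temperature.

def grad_U(theta):
    return theta                          # gradient of theta^2 / 2

h = 0.05                                  # step size
theta, p, xi = 0.0, 0.0, 1.0              # position, momentum, thermostat
samples = []
for _ in range(50000):
    # A half-step: move position, nudge thermostat toward unit temperature
    theta += 0.5 * h * p
    xi += 0.5 * h * (p * p - 1.0)
    # B half-step: thermostat friction on the momentum
    p *= np.exp(-0.5 * h * xi)
    # O full step: (stochastic) gradient plus injected noise
    p += -h * grad_U(theta) + np.sqrt(2.0 * h) * rng.normal()
    # Mirrored B and A half-steps close the symmetric composition
    p *= np.exp(-0.5 * h * xi)
    xi += 0.5 * h * (p * p - 1.0)
    theta += 0.5 * h * p
    samples.append(theta)

samples = np.array(samples[5000:])        # discard burn-in
print(samples.mean(), samples.var())
```

The symmetry of the composition is what buys the higher-order accuracy relative to a plain Euler discretization of the same diffusion.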
Constant Step Size Stochastic Gradient Descent for Probabilistic Modeling
Stochastic gradient methods enable learning probabilistic models from large
amounts of data. While large step sizes (learning rates) have been shown to be
best for least-squares regression (e.g., with Gaussian noise) when combined
with parameter averaging, they do not lead to convergent algorithms in general.
In this
paper, we consider generalized linear models, that is, conditional models based
on exponential families. We propose averaging moment parameters instead of
natural parameters for constant-step-size stochastic gradient descent. For
finite-dimensional models, we show that this can sometimes (and surprisingly)
lead to better predictions than the best linear model. For infinite-dimensional
models, we show that it always converges to optimal predictions, while
averaging natural parameters never does. We illustrate our findings with
simulations on synthetic data and classical benchmarks with many observations.

Comment: Published in Proc. UAI 2018, accepted as oral presentation.
Camera-ready.
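The distinction this abstract draws can be sketched for logistic regression (a conditional exponential family): with a constant step size, one can average the iterates theta_t (natural parameters) or average the predicted probabilities sigmoid(x . theta_t) (moment parameters). A minimal illustration, with my own synthetic data and constants, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Well-specified logistic model: y | x ~ Bernoulli(sigmoid(x . theta_star)).
d, n = 4, 20000
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (rng.random(n) < sigmoid(X @ theta_star)).astype(float)

X_test = rng.normal(size=(500, d))
p_true = sigmoid(X_test @ theta_star)     # true conditional probabilities

step = 0.2                                # constant step size
theta = np.zeros(d)
theta_avg = np.zeros(d)                   # running average of natural params
mu_avg = np.zeros(len(X_test))            # running average of moment params
for t in range(n):
    g = (sigmoid(X[t] @ theta) - y[t]) * X[t]   # logistic-loss gradient
    theta -= step * g
    theta_avg += (theta - theta_avg) / (t + 1)
    mu_avg += (sigmoid(X_test @ theta) - mu_avg) / (t + 1)

err_natural = np.abs(sigmoid(X_test @ theta_avg) - p_true).mean()
err_moment = np.abs(mu_avg - p_true).mean()
print(err_natural, err_moment)
```

Averaging the moments commutes with the nonlinear link, which is the mechanism behind the paper's claim that moment averaging remains consistent where natural-parameter averaging does not.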