Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases
In big-data analysis, the high computational cost of Bayesian methods often
limits their application in practice. In recent years, there have been many
attempts to improve the computational efficiency of Bayesian inference. Here we
propose an efficient and scalable computational technique for a
state-of-the-art Markov chain Monte Carlo (MCMC) method, namely Hamiltonian
Monte Carlo (HMC). The key idea is to explore and exploit the structure and
regularity in parameter space for the underlying probabilistic model to
construct an effective approximation of its geometric properties. To this end,
we build a surrogate function to approximate the target distribution using
properly chosen random bases and an efficient optimization process. The
resulting method provides a flexible, scalable, and efficient sampling
algorithm, which converges to the correct target distribution. We show that by
choosing the basis functions and optimization process differently, our method
can be related to other approaches for the construction of surrogate functions
such as generalized additive models or Gaussian process models. Experiments
based on simulated and real data show that our approach leads to substantially
more efficient sampling algorithms compared to existing state-of-the-art
methods.
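The surrogate idea above can be sketched in a few lines: fit a linear combination of random cosine bases to an expensive log-density by least squares, then use the cheap surrogate (and its gradients) inside HMC's leapfrog integrator. The Gaussian target, basis count, and frequency scale below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Stand-in for an expensive posterior log-density.
    return -0.5 * x**2

# Random cosine bases (random Fourier features): phi_j(x) = cos(w_j x + b_j).
n_bases = 50
w = rng.normal(scale=1.0, size=n_bases)      # random frequencies
b = rng.uniform(0, 2 * np.pi, size=n_bases)  # random phases

def features(x):
    return np.cos(np.outer(np.atleast_1d(x), w) + b)

# Fit surrogate coefficients by least squares on a batch of (expensive)
# target evaluations.
x_train = np.linspace(-3, 3, 200)
coef, *_ = np.linalg.lstsq(features(x_train), log_target(x_train), rcond=None)

def log_surrogate(x):
    # Cheap to evaluate and differentiate; this is what the sampler uses.
    return features(x) @ coef

x_test = np.linspace(-2.5, 2.5, 17)
err = np.max(np.abs(log_surrogate(x_test) - log_target(x_test)))
```

In a full implementation the leapfrog steps would use the surrogate's gradient, with a Metropolis correction against the true target preserving the correct stationary distribution.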
An Equivalence Between Adaptive Dynamic Programming With a Critic and Backpropagation Through Time
We consider the adaptive dynamic programming technique called Dual Heuristic Programming (DHP), which learns a critic function when using learned model functions of the environment. DHP is designed for optimizing control problems in large and continuous state spaces. We extend DHP into a new algorithm that we call Value-Gradient Learning, VGL(λ), and prove the equivalence of an instance of the new algorithm to Backpropagation Through Time for Control with a greedy policy. Not only does this equivalence provide a link between these two different approaches, but it also enables our variant of DHP to achieve guaranteed convergence, under certain smoothness conditions and a greedy policy, when using a general smooth nonlinear function approximator for the critic. We consider several experimental scenarios, including some that prove divergence of DHP under a greedy policy, in contrast to our proven-convergent algorithm.
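Backpropagation Through Time for Control, the method the equivalence is proved against, can be sketched on a toy problem: roll a scalar linear system forward under a parameterized policy, then accumulate the gradient of the total cost through time in a backward pass. The system, quadratic cost, and linear policy below are illustrative assumptions.

```python
import numpy as np

a, b = 0.9, 0.5          # dynamics: x_{t+1} = a*x_t + b*u_t
T, x0 = 10, 1.0
theta = 0.3              # linear policy: u_t = -theta * x_t

def rollout_cost(theta):
    x, cost = x0, 0.0
    for _ in range(T):
        u = -theta * x
        cost += x**2 + u**2          # quadratic stage cost
        x = a * x + b * u
    return cost

def bptt_gradient(theta):
    # Forward pass: store the state trajectory.
    xs = [x0]
    for _ in range(T):
        u = -theta * xs[-1]
        xs.append(a * xs[-1] + b * u)
    # Backward pass: lam carries dC/dx_{t+1}, grad accumulates dC/dtheta.
    lam, grad = 0.0, 0.0
    for t in reversed(range(T)):
        x = xs[t]
        u = -theta * x
        dc_dx, dc_du = 2 * x, 2 * u        # stage-cost partials
        du_dtheta, du_dx = -x, -theta      # policy partials
        grad += (dc_du + lam * b) * du_dtheta
        lam = dc_dx + dc_du * du_dx + lam * (a + b * du_dx)
    return grad
```

The backward recursion propagates exactly the value-gradient quantity that a DHP/VGL critic is trained to approximate, which is the intuition behind the proven equivalence.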
From Continuous Dynamics to Graph Neural Networks: Neural Diffusion and Beyond
Graph neural networks (GNNs) have demonstrated significant promise in
modelling relational data and have been widely applied in various fields of
interest. The key mechanism behind GNNs is the so-called message passing where
information is being iteratively aggregated to central nodes from their
neighbourhood. Such a scheme has been found to be intrinsically linked to a
physical process known as heat diffusion, where the propagation of GNNs
naturally corresponds to the evolution of heat density. Analogizing message
passing to heat dynamics makes it possible to understand the power and
pitfalls of GNNs at a fundamental level and consequently informs better model
design. Recently, a plethora of works has emerged proposing GNNs inspired by
the continuous-dynamics formulation, in an attempt to mitigate the known
limitations of GNNs, such as oversmoothing and oversquashing. In this survey,
we provide the first systematic and comprehensive review of studies that
leverage the continuous perspective of GNNs. To this end, we introduce
foundational ingredients for adapting continuous dynamics to GNNs, along with a
general framework for the design of graph neural dynamics. We then review and
categorize existing works based on their driving mechanisms and underlying
dynamics. We also summarize how the limitations of classic GNNs can be
addressed under the continuous framework. We conclude by identifying multiple
open research directions.
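The message-passing-as-heat-diffusion link can be made concrete in a few lines: one explicit Euler step of the heat equation dx/dt = -Lx on a graph (L the graph Laplacian) is exactly a weighted neighbourhood-averaging update. The 4-node path graph below is an illustrative assumption.

```python
import numpy as np

# Adjacency of a 4-node path graph: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                      # combinatorial graph Laplacian

def diffuse(x, steps, tau=0.1):
    # Explicit Euler discretization of the heat equation dx/dt = -L x.
    for _ in range(steps):
        x = x - tau * (L @ x)
    return x

x0 = np.array([1.0, 0.0, 0.0, 0.0])   # heat concentrated on node 0
x_long = diffuse(x0, 500)
```

Running many steps conserves the total heat but drives all node features toward the uniform state, which mirrors the oversmoothing phenomenon in deep GNNs that the surveyed continuous-dynamics models aim to mitigate.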
Implicit regularization and momentum algorithms in nonlinear adaptive control and prediction
Stable concurrent learning and control of dynamical systems is the subject of
adaptive control. Despite being an established field with many practical
applications and a rich theory, much of the development in adaptive control for
nonlinear systems revolves around a few key algorithms. By exploiting strong
connections between classical adaptive nonlinear control techniques and recent
progress in optimization and machine learning, we show that there exists
considerable untapped potential in algorithm development for both adaptive
nonlinear control and adaptive dynamics prediction. We first introduce
first-order adaptation laws inspired by natural gradient descent and mirror
descent. We prove that when there are multiple dynamics consistent with the
data, these non-Euclidean adaptation laws implicitly regularize the learned
model. Local geometry imposed during learning thus may be used to select
parameter vectors - out of the many that will achieve perfect tracking or
prediction - for desired properties such as sparsity. We apply this result to
regularized dynamics predictor and observer design, and as concrete examples
consider Hamiltonian systems, Lagrangian systems, and recurrent neural
networks. We subsequently develop a variational formalism based on the Bregman
Lagrangian to define adaptation laws with momentum applicable to linearly
parameterized systems and to nonlinearly parameterized systems satisfying
monotonicity or convexity requirements. We show that the Euler-Lagrange
equations for the Bregman Lagrangian lead to natural gradient and mirror
descent-like adaptation laws with momentum, and we recover their first-order
analogues in the infinite friction limit. We illustrate our analyses with
simulations demonstrating our theoretical results. Accepted for publication in
Neural Computation.
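The implicit-regularization claim can be illustrated in its simplest (Euclidean) special case: for an underdetermined linear model with many parameter vectors consistent with the data, gradient descent started from zero converges to the minimum-l2-norm interpolant; the paper's non-Euclidean (mirror-descent) adaptation laws generalize this by swapping in other Bregman potentials. The data dimensions and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 8))        # 3 measurements of 8 parameters: underdetermined
y = rng.normal(size=3)

theta, lr = np.zeros(8), 0.02
for _ in range(100_000):
    # Plain gradient step on the squared residual 0.5 * ||A @ theta - y||^2.
    theta -= lr * A.T @ (A @ theta - y)

# Closed-form minimum-l2-norm interpolant, for comparison.
theta_min_norm = A.T @ np.linalg.solve(A @ A.T, y)
```

Because every update lies in the row space of A, the iterate can only converge to the interpolant of smallest Euclidean norm; choosing a different local geometry (e.g. an entropic mirror map) selects a different, e.g. sparser, interpolant.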
A continuous-time analysis of distributed stochastic gradient
We analyze the effect of synchronization on distributed stochastic gradient
algorithms. By exploiting an analogy with dynamical models of biological quorum
sensing -- where synchronization between agents is induced through
communication with a common signal -- we quantify how synchronization can
significantly reduce the magnitude of the noise felt by the individual
distributed agents and by their spatial mean. This noise reduction is in turn
associated with a reduction in the smoothing of the loss function imposed by
the stochastic gradient approximation. Through simulations on model non-convex
objectives, we demonstrate that coupling can stabilize higher noise levels and
improve convergence. We provide a convergence analysis for strongly convex
functions by deriving a bound on the expected deviation of the spatial mean of
the agents from the global minimizer for an algorithm based on quorum sensing,
the same algorithm with momentum, and the Elastic Averaging SGD (EASGD)
algorithm. We discuss extensions to new algorithms which allow each agent to
broadcast its current measure of success and shape the collective computation
accordingly. We supplement our theoretical analysis with numerical experiments
on convolutional neural networks trained on the CIFAR-10 dataset, where we note
a surprising regularizing property of EASGD even when applied to the
non-distributed case. This observation suggests alternative second-order
in-time algorithms for non-distributed optimization that are competitive with
momentum methods. Accepted for publication in Neural Computation.
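The quorum-sensing coupling can be sketched as follows: each agent takes a noisy gradient step on a shared loss plus an attraction toward the agents' spatial mean (the common "quorum" signal). The quadratic loss, noise level, and gains below are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, steps, lr, k = 16, 2000, 0.05, 0.5
x = rng.normal(size=n_agents)              # agents minimizing f(x) = 0.5 * x^2

for _ in range(steps):
    noise = rng.normal(scale=1.0, size=n_agents)
    grad = x + noise                       # independent stochastic gradients
    # SGD step plus coupling toward the spatial mean (quorum signal).
    x = x - lr * grad + lr * k * (x.mean() - x)
```

The coupling term cancels in the dynamics of the spatial mean, which therefore averages the agents' independent noise; its fluctuation around the minimizer shrinks roughly like 1/sqrt(n_agents), the noise-reduction effect quantified in the analysis.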
Differentiable Game Mechanics
Deep learning is built on the foundational guarantee that gradient descent on
an objective function converges to local minima. Unfortunately, this guarantee
fails in settings, such as generative adversarial nets, that exhibit multiple
interacting losses. The behavior of gradient-based methods in games is not well
understood -- and is becoming increasingly important as adversarial and
multi-objective architectures proliferate. In this paper, we develop new tools
to understand and control the dynamics in n-player differentiable games.
The key result is to decompose the game Jacobian into two components. The
first, symmetric component, is related to potential games, which reduce to
gradient descent on an implicit function. The second, antisymmetric component,
relates to Hamiltonian games, a new class of games that obey a conservation law
akin to conservation laws in classical mechanical systems. The decomposition
motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding
stable fixed points in differentiable games. Basic experiments show SGA is
competitive with recently proposed algorithms for finding stable fixed points
in GANs -- while at the same time being applicable to, and having guarantees
in, much more general cases. Published in JMLR 2019; journal version of arXiv:1802.0564.
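Symplectic Gradient Adjustment can be sketched on the classic bilinear game f1(x, y) = x*y, f2(x, y) = -x*y, whose simultaneous gradient field is xi(x, y) = (y, -x): the game Jacobian is purely antisymmetric, plain simultaneous descent spirals outward, and adding the adjustment term A^T xi makes the fixed point attracting. Step size and adjustment weight are illustrative choices.

```python
import numpy as np

def xi(z):
    # Players' simultaneous gradients for f1 = x*y, f2 = -x*y.
    x, y = z
    return np.array([y, -x])

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # Jacobian of xi (constant for this game)
A = 0.5 * (J - J.T)                    # antisymmetric ("Hamiltonian") component

def step(z, lr=0.1, lam=1.0, adjust=True):
    v = xi(z)
    if adjust:
        v = v + lam * (A.T @ v)        # SGA: adjust the field by A^T xi
    return z - lr * v

z_plain = np.array([1.0, 1.0])
z_sga = np.array([1.0, 1.0])
for _ in range(200):
    z_plain = step(z_plain, adjust=False)   # plain descent: spirals outward
    z_sga = step(z_sga, adjust=True)        # SGA: contracts to the fixed point
```

Here the adjustment effectively follows the gradient of the conserved Hamiltonian H = 0.5 * ||xi||^2, which is what guides the dynamics to the stable fixed point at the origin.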
Steepest descent as Linear Quadratic Regulation
Machine learning entails fitting a model to given observations, and recent advances in the field, particularly in deep learning, have made it omnipresent in our lives. Fitting a model usually requires minimizing a given objective. In deep learning, first-order methods such as gradient descent have become the default optimization tool, whereas second-order methods have never seen widespread use. Yet they hold many promises and remain a very active field of research. An important perspective on both is steepest descent, which encompasses first- and second-order approaches within the same framework.
In this thesis, we establish an explicit connection between steepest descent and optimal control, a field that optimizes sequential decision-making processes. Core to it is the family of problems known as Linear Quadratic Regulation, which have been well studied and for which optimal solutions are known. More specifically, we show that performing one iteration of steepest descent is equivalent to solving a Linear Quadratic Regulator (LQR).
This perspective gives us a convenient and unified framework for deploying a wide range of steepest-descent algorithms, such as gradient descent and natural gradient descent, though not limited to these. The framework also extends to problems with an infinite horizon, such as deep equilibrium models. Doing so reveals that retrieving the gradient via implicit differentiation is equivalent to recovering it via Riccati's solution to the LQR associated with gradient descent. Finally, incorporating curvature information into steepest descent usually requires a matrix inversion. However, casting a steepest-descent step as an LQR also hints at a trick that sidesteps this inversion by leveraging a Neumann-series approximation. Empirical observations provide evidence that this approximation also helps to stabilize the training process, by acting as an adaptive damping parameter.
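The Neumann-series trick for sidestepping the inversion can be sketched directly: for a positive-definite curvature matrix H with spectral radius of (I - a*H) below one, H^{-1} = a * sum_k (I - a*H)^k, so a truncated sum approximates the curvature-corrected direction H^{-1} g without ever forming an inverse. The matrix, gradient, scaling, and truncation depth below are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
H = M @ M.T + 5.0 * np.eye(5)          # well-conditioned curvature matrix
g = rng.normal(size=5)

a = 1.0 / np.linalg.norm(H, 2)         # scaling ensuring the series converges
d, term = np.zeros(5), a * g
for _ in range(300):                   # truncated Neumann series for H^{-1} g
    d += term
    term = (np.eye(5) - a * H) @ term  # next term: (I - a*H) applied again

exact = np.linalg.solve(H, g)          # direct solve, for comparison
```

Each series term costs only a matrix-vector product, and truncating early both saves computation and, as the empirical observations suggest, acts like an adaptive damping of the curvature correction.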