Model Reduction and Neural Networks for Parametric PDEs
We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically, we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
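As a rough illustration of the reduce-then-learn pattern the abstract describes, the sketch below compresses synthetic input and output fields with PCA and trains a small network between the coefficient vectors, so the learned map's size is independent of the grid resolution. All dimensions, component counts, the synthetic forward map, and the network architecture are illustrative assumptions, not the paper's actual construction.

```python
# A minimal sketch of the PCA-plus-network idea: reduce both the input and
# output fields to a few principal components, then learn the map between
# coefficient vectors. The data is synthetic; sizes are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_grid = 500, 256           # assumed sample count and grid size

# Synthetic stand-in for (input field, PDE solution) pairs on a fixed grid.
X = rng.standard_normal((n_samples, n_grid))
Y = np.cumsum(X, axis=1) / n_grid      # placeholder smooth "solution" map

# Reduce inputs and outputs independently; the network sees only coefficient
# vectors, so its size does not depend on the grid resolution.
pca_in, pca_out = PCA(n_components=20), PCA(n_components=20)
Xc = pca_in.fit_transform(X)
Yc = pca_out.fit_transform(Y)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(Xc, Yc)

# Predict on a new input: encode, map in coefficient space, decode to the grid.
x_new = rng.standard_normal((1, n_grid))
y_pred = pca_out.inverse_transform(net.predict(pca_in.transform(x_new)))
```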
Analysis Of Momentum Methods
Gradient descent-based optimization methods underpin the parameter training that produces the impressive results now seen when testing neural networks. Introducing stochasticity is key to their success in practical problems, and there is some understanding of the role of stochastic gradient descent in this context. Momentum modifications of gradient descent, such as Polyak's heavy ball method (HB) and Nesterov's method of accelerated gradients (NAG), are widely adopted. In this work, our focus is on understanding the role of momentum in the training of neural networks, concentrating on the common situation in which the momentum contribution is fixed at each step of the algorithm; to expose the ideas simply, we work in the deterministic setting. We show that, contrary to popular belief, standard implementations of fixed momentum methods do no more than act to rescale the learning rate. We achieve this by showing that the momentum method converges to a gradient flow, with a momentum-dependent time rescaling, using the method of modified equations from numerical analysis. Further, we show that the momentum method admits an exponentially attractive invariant manifold on which the dynamics reduce to a gradient flow with respect to a modified loss function, equal to the original one plus a small perturbation.
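The rescaling claim can be checked numerically on a toy problem. The sketch below runs heavy-ball iterations with fixed momentum beta on a simple quadratic loss and compares them with plain gradient descent run at the rescaled rate h/(1 - beta); the quadratic, the step size h, and beta are illustrative choices, and the observed gap shrinks as h decreases, consistent with the modified-equation limit the abstract describes.

```python
# Heavy ball with fixed momentum beta versus plain gradient descent with the
# learning rate rescaled by 1/(1 - beta), on f(x) = 0.5 * x.T @ A @ x.
# A, h, beta, and the iteration count are illustrative assumptions.
import numpy as np

A = np.diag([1.0, 10.0])               # simple quadratic loss

def grad(x):
    return A @ x

h, beta, steps = 1e-3, 0.5, 5000
x_hb = np.array([1.0, 1.0])            # heavy-ball iterate
v = np.zeros(2)                        # heavy-ball velocity
x_gd = np.array([1.0, 1.0])            # rescaled gradient-descent iterate
gap = 0.0

for _ in range(steps):
    # Heavy ball: v_{k+1} = beta * v_k - h * grad(x_k); x_{k+1} = x_k + v_{k+1}
    v = beta * v - h * grad(x_hb)
    x_hb = x_hb + v
    # Plain gradient descent with the momentum-rescaled rate h / (1 - beta).
    x_gd = x_gd - (h / (1 - beta)) * grad(x_gd)
    gap = max(gap, np.linalg.norm(x_hb - x_gd))

# The maximum gap along the trajectories is small and shrinks further with h.
print("max gap along trajectories:", gap)
```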
Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks
The standard probabilistic perspective on machine learning gives rise to empirical risk-minimization tasks that are frequently solved by stochastic gradient descent (SGD) and variants thereof. We present a formulation of these tasks as classical inverse or filtering problems and, furthermore, we propose an efficient, gradient-free algorithm for finding a solution to these problems using ensemble Kalman inversion (EKI). Applications of our approach include offline and online supervised learning with deep neural networks, as well as graph-based semi-supervised learning. The essence of the EKI procedure is an ensemble-based approximate gradient descent in which derivatives are replaced by differences from within the ensemble. We suggest several modifications to the basic method, derived from empirically successful heuristics developed in the context of SGD. Numerical results demonstrate wide applicability and robustness of the proposed algorithm.
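A minimal sketch of the derivative-free update the abstract describes: each EKI step moves every ensemble member toward the data using a Kalman-style gain built from ensemble covariances, so no derivatives of the forward map are needed. The toy forward map, ensemble size, noise level, and iteration count below are all illustrative assumptions, not the paper's settings.

```python
# Ensemble Kalman inversion on a toy least-squares problem: derivatives of the
# forward map G are replaced by ensemble cross-covariances.
import numpy as np

rng = np.random.default_rng(0)

def G(u):                        # toy nonlinear forward map R^2 -> R^3
    return np.array([u[0], u[1] ** 2, u[0] * u[1]])

u_true = np.array([1.0, 2.0])
gamma = 0.01 * np.eye(3)                       # observation noise covariance
y = G(u_true) + rng.multivariate_normal(np.zeros(3), gamma)

J = 50                                         # ensemble size (assumed)
U = rng.standard_normal((J, 2))                # initial ensemble, Gaussian prior

for _ in range(30):
    Gu = np.array([G(u) for u in U])
    du, dg = U - U.mean(0), Gu - Gu.mean(0)
    C_ug = du.T @ dg / J                       # cross-covariance cov(u, G(u))
    C_gg = dg.T @ dg / J                       # covariance cov(G(u), G(u))
    K = C_ug @ np.linalg.inv(C_gg + gamma)     # Kalman-style gain
    # Move each particle toward (perturbed) data; no gradients of G appear.
    Y = y + rng.multivariate_normal(np.zeros(3), gamma, size=J)
    U = U + (Y - Gu) @ K.T

print("ensemble mean:", U.mean(0), "vs true:", u_true)
```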
The magic carpet: an arbitrary spectrum wave maker for internal waves
We present a novel apparatus for generating internal waves of arbitrary size and shape, including both phase-locked and propagating waves. It is an actively driven, flexible “magic carpet” in the base of a tank. Our wave maker is computer-controlled to enable easy configuration. The actuation of a smooth, flexible surface produces clean waveforms with a predictable spectrum, for which we derive a theoretical model. We demonstrate the versatility of our wave maker through an experimental study of linear and nonlinear, isolated, and combined internal waves, including some that are sufficiently nonlinear to break remote from their source.