Mini-Workshop: Adaptive Methods for Control Problems Constrained by Time-Dependent PDEs
Optimization problems constrained by time-dependent PDEs (partial differential equations) are computationally challenging: even in the simplest case, one needs to solve a system of PDEs coupled globally in time and space for the unknown solutions (the state, the costate and the control of the system). Typical and practically relevant examples are the control of nonlinear heat equations, as they appear in laser hardening, or the thermal control of flow problems (Boussinesq equations). Specifically for PDEs with a long time horizon, conventional time-stepping methods require enormous storage of the respective other variables. In contrast, adaptive methods aim at distributing the available degrees of freedom in an a posteriori fashion to capture singularities and are therefore most promising.
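The global coupling of state, costate and control can already be seen in a scalar toy problem: minimizing a terminal misfit plus a control penalty subject to an ODE (a stand-in for a controlled heat equation) gives a state equation marching forward in time, a costate equation marching backward, and a control update that needs both sweeps over the whole horizon. A minimal sketch, with all names and values being illustrative choices:

```python
import numpy as np

# Toy optimality system: minimize
#   J(q) = 0.5*(u_N - u_d)^2 + 0.5*alpha*dt*sum(q_n^2)
# subject to the explicit-Euler scheme u_{n+1} = u_n + dt*(-u_n + q_n),
# a scalar stand-in for a controlled heat equation.
T, N = 1.0, 200
dt = T / N
u_d, alpha = 1.0, 1e-3

def state(q):                      # state: forward-in-time sweep
    u = np.zeros(N + 1)
    for n in range(N):
        u[n + 1] = u[n] + dt * (-u[n] + q[n])
    return u

def costate(u):                    # costate: backward-in-time sweep
    p = np.zeros(N + 1)
    p[N] = u[N] - u_d              # terminal condition from the misfit
    for n in range(N, 0, -1):
        p[n - 1] = (1.0 - dt) * p[n]
    return p

q = np.zeros(N)
for _ in range(500):               # steepest descent on the reduced cost
    u = state(q)
    p = costate(u)
    q -= 1.0 * (alpha * q + p[1:])  # discrete reduced gradient (scaled)

misfit = abs(state(q)[-1] - u_d)
print(misfit)                      # terminal state approaches the target
```

Every descent step requires the full forward state trajectory and the full backward costate trajectory, which is exactly the storage burden that sequential time-stepping incurs and that adaptive space-time methods aim to mitigate.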
Learning Coarse Propagators in Parareal Algorithm
The parareal algorithm represents an important class of parallel-in-time
algorithms for solving evolution equations and has been widely applied in
practice. To achieve effective speedup, the choice of the coarse propagator in
the algorithm is vital. In this work, we investigate the use of learned coarse
propagators. Building upon the error estimation framework, we present a
systematic procedure for constructing coarse propagators that enjoy desirable
stability and consistent order. Additionally, we provide preliminary
mathematical guarantees for the resulting parareal algorithm. Numerical
experiments on a variety of settings, e.g., linear diffusion model, Allen-Cahn
model, and viscous Burgers model, show that learning can significantly improve
parallel efficiency when compared with the more ad hoc choice of some
conventional and widely used coarse propagators.
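The parareal correction iteration itself is compact: with a coarse propagator G and a fine propagator F, the update is U[n+1] = G(U_new[n]) + F(U_old[n]) - G(U_old[n]). A minimal sketch for a scalar linear test equation, with hand-picked backward-Euler propagators standing in for the learned coarse propagators of the paper (all names and parameters are illustrative):

```python
import numpy as np

# Parareal for the scalar test problem u' = lam * u on [0, T].
lam = -1.0
T, n_coarse = 1.0, 10
dT = T / n_coarse

def G(u, dt):
    # coarse propagator: one backward-Euler step
    return u / (1.0 - lam * dt)

def F(u, dt, substeps=100):
    # fine propagator: many backward-Euler substeps
    h = dt / substeps
    for _ in range(substeps):
        u = u / (1.0 - lam * h)
    return u

def parareal(u0, n_iter=5):
    U = np.empty(n_coarse + 1)
    U[0] = u0
    for n in range(n_coarse):          # initial guess: coarse sweep
        U[n + 1] = G(U[n], dT)
    for _ in range(n_iter):
        # the fine solves are independent -> parallel in time in practice
        Fu = np.array([F(U[n], dT) for n in range(n_coarse)])
        Gu_old = np.array([G(U[n], dT) for n in range(n_coarse)])
        U_new = np.empty_like(U)
        U_new[0] = u0
        for n in range(n_coarse):      # sequential coarse correction
            U_new[n + 1] = G(U_new[n], dT) + Fu[n] - Gu_old[n]
        U = U_new
    return U

U = parareal(1.0)
exact = np.exp(lam * T)
err = abs(U[-1] - exact)
print(err)   # approaches the fine solver's accuracy
```

The cost structure is visible here: the expensive fine solves can run concurrently across the time intervals, while only the cheap coarse sweep remains sequential, which is why the quality of G is decisive for speedup.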
Geophysical Fluid Dynamics
The workshop “Geophysical Fluid Dynamics” addressed recent advances in analytical, stochastic, modeling and computational studies of geophysical fluid models. Of central interest were the reduced geophysical models, which are derived by means of asymptotic and scaling techniques, and their investigation by methods from the above disciplines. In particular, contributions concerning the viscous and inviscid geostrophic models, the primitive equations of oceanic and atmospheric dynamics, tropical atmospheric models and their coupling to the nonlinear dynamics of moisture and phase changes, thermodynamical effects, stratification effects, as well as boundary layers were presented and discussed.
Geometric Numerical Integration
The subject of this workshop was numerical methods that preserve geometric properties of the flow of an ordinary or partial differential equation. This was complemented by the question as to how structure preservation affects the long-time behaviour of numerical methods.
Parallel-in-space-time, adaptive finite element framework for non-linear parabolic equations
We present an adaptive methodology for the solution of (linear and) non-linear time-dependent problems that is especially tailored for massively parallel computations. The basic concept is to solve for large blocks of space-time unknowns instead of marching sequentially in time. The methodology combines a computationally efficient implementation of a parallel-in-space-time finite element solver with a posteriori space-time error estimates and a parallel mesh generator. While we focus on spatial adaptivity in this work, the methodology enables simultaneous adaptivity in both the space and time domains. We explore this basic concept in the context of a variety of time-steppers, including Θ-schemes and backward difference formulas. We specifically illustrate this framework with applications involving time-dependent linear, quasi-linear and semi-linear diffusion equations. We focus on investigating how the coupled space-time refinement indicators for this class of problems affect spatial adaptivity. Finally, we show good scaling behavior up to 150,000 processors on the NCSA Blue Waters machine. This conceptually simple methodology enables scaling on next-generation multi-core machines by simultaneously solving for a large number of time-steps, and reduces computational overhead by locally refining spatial blocks that can track localized features. This methodology also opens up the possibility of efficiently incorporating adjoint equations for error estimators and inverse problems.
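The Θ-schemes mentioned in the abstract interpolate between explicit Euler (Θ = 0), Crank–Nicolson (Θ = 1/2) and implicit Euler (Θ = 1). A minimal one-dimensional sketch for the heat equation, using finite differences in place of the paper's finite element discretization and illustrative grid parameters:

```python
import numpy as np

# Θ-scheme for u_t = u_xx on (0, 1) with homogeneous Dirichlet data.
# One step solves (I - Θ r A) u^{n+1} = (I + (1-Θ) r A) u^n, r = dt/dx².
def theta_step(u, dt, dx, theta):
    n = u.size
    r = dt / dx**2
    # tridiagonal 1D Laplacian (interior points only)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1))
    lhs = np.eye(n) - theta * r * A
    rhs = (np.eye(n) + (1.0 - theta) * r * A) @ u
    return np.linalg.solve(lhs, rhs)

# smooth initial condition; its exact solution decays like exp(-pi^2 t)
nx = 49
x = np.linspace(0.0, 1.0, nx + 2)[1:-1]   # interior grid points
u = np.sin(np.pi * x)
dt, dx = 1e-3, x[1] - x[0]
for _ in range(100):                       # integrate to t = 0.1
    u = theta_step(u, dt, dx, theta=0.5)   # Crank–Nicolson

err = np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)))
print(err)
```

Note that r = dt/dx² exceeds the explicit stability limit here; the step remains stable because Θ = 1/2 is unconditionally stable, which is one reason implicit members of the family are preferred for diffusion problems.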
Nonlinear Evolution Equations: Analysis and Numerics
The qualitative theory of nonlinear evolution equations is an
important tool for studying the dynamical behavior of systems in
science and technology. A thorough understanding of the complex
behavior of such systems requires detailed analytical and numerical
investigations of the underlying partial differential equations.
Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory
This book aims to provide an introduction to the topic of deep learning
algorithms. We review essential components of deep learning algorithms in full
mathematical detail including different artificial neural network (ANN)
architectures (such as fully-connected feedforward ANNs, convolutional ANNs,
recurrent ANNs, residual ANNs, and ANNs with batch normalization) and different
optimization algorithms (such as the basic stochastic gradient descent (SGD)
method, accelerated methods, and adaptive methods). We also cover several
theoretical aspects of deep learning algorithms such as approximation
capacities of ANNs (including a calculus for ANNs), optimization theory
(including Kurdyka-{\L}ojasiewicz inequalities), and generalization errors. In
the last part of the book some deep learning approximation methods for PDEs are
reviewed including physics-informed neural networks (PINNs) and deep Galerkin
methods. We hope that this book will be useful for students and scientists who
do not yet have any background in deep learning at all and would like to gain a
solid foundation as well as for practitioners who would like to obtain a firmer
mathematical understanding of the objects and methods considered in deep
learning.
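As a small taste of the book's scope, the basic SGD method can be applied to a one-hidden-layer fully-connected ANN on a toy regression problem. The architecture, learning rate and target function below are illustrative choices, not taken from the book:

```python
import numpy as np

# Plain SGD for a one-hidden-layer tanh network fitting y = sin(x).
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(16, 1)); b1 = np.zeros((16, 1))
W2 = rng.normal(scale=0.5, size=(1, 16)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(W1 @ x + b1)           # hidden-layer activations
    return W2 @ h + b2, h

lr = 0.05
for step in range(5000):
    x = rng.uniform(-np.pi, np.pi, size=(1, 32))   # fresh mini-batch
    y = np.sin(x)
    yhat, h = forward(x)
    # backpropagation for the mean squared loss (1/m) * sum (yhat - y)^2
    g = 2.0 * (yhat - y) / x.shape[1]
    gW2 = g @ h.T; gb2 = g.sum(axis=1, keepdims=True)
    gh = W2.T @ g
    gz = gh * (1.0 - h**2)             # tanh derivative
    gW1 = gz @ x.T; gb1 = gz.sum(axis=1, keepdims=True)
    # SGD update: step against the stochastic gradient
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

xs = np.linspace(-np.pi, np.pi, 100).reshape(1, -1)
mse = np.mean((forward(xs)[0] - np.sin(xs))**2)
print(mse)   # small after training
```

The accelerated and adaptive optimizers surveyed in the book (momentum, Adam-type methods and the like) modify only the final update lines of this loop, which is why the plain SGD iteration is the natural starting point.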