Approximation of Rough Functions
For given p ∈ [1, ∞] and g ∈ L^p(ℝ), we establish the existence and uniqueness of solutions f ∈ L^p(ℝ) to the equation f(x) - a f(bx) = g(x), where a ∈ ℝ, b ∈ ℝ \ {0}, and |a| ≠ |b|^{1/p}. Solutions include well-known nowhere differentiable functions such as those of Bolzano, Weierstrass, Hardy, and many others. Connections and consequences in the theory of fractal interpolation, approximation theory, and Fourier analysis are established.
M. F. Barnsley, B. Harding, A. Vince, P. Viswanatha
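A minimal numerical sketch of the solution formula implicit in this result, assuming |a| < 1 and bounded g, in which case the series f(x) = sum_{n>=0} a^n g(b^n x) converges uniformly and solves f(x) - a f(bx) = g(x); with g = cos this produces a Weierstrass-type rough function. The parameter values and truncation length below are illustrative.

```python
# Hedged sketch: truncate f(x) = sum_{n>=0} a**n * g(b**n * x), which solves
# f(x) - a*f(b*x) = g(x) when |a| < 1 and g is bounded. Values are illustrative.
import numpy as np

def approximate_solution(g, a, b, x, n_terms=60):
    """Partial sum of the series f(x) = sum_n a**n * g(b**n * x)."""
    return sum(a ** n * g(b ** n * x) for n in range(n_terms))

x = np.linspace(0.0, 1.0, 2001)
f = approximate_solution(np.cos, a=0.5, b=3.0, x=x)   # Weierstrass-type function

# check the functional equation f(x) - a*f(b*x) ~ g(x)
residual = (approximate_solution(np.cos, 0.5, 3.0, x)
            - 0.5 * approximate_solution(np.cos, 0.5, 3.0, 3.0 * x)
            - np.cos(x))
print(np.max(np.abs(residual)))
```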
GPT-PINN: Generative Pre-Trained Physics-Informed Neural Networks toward non-intrusive Meta-learning of parametric PDEs
Physics-Informed Neural Network (PINN) has proven itself a powerful tool to
obtain the numerical solutions of nonlinear partial differential equations
(PDEs) leveraging the expressivity of deep neural networks and the computing
power of modern heterogeneous hardware. However, its training is still
time-consuming, especially in the multi-query and real-time simulation
settings, and its parameterization is often excessive. In this paper, we
propose the Generative Pre-Trained PINN (GPT-PINN) to mitigate both challenges
in the setting of parametric PDEs. GPT-PINN represents a brand-new
meta-learning paradigm for parametric systems. As a network of networks, its
outer-/meta-network is hyper-reduced with only one hidden layer having
a significantly reduced number of neurons. Moreover, its activation function at
each hidden neuron is a (full) PINN pre-trained at a judiciously selected
system configuration. The meta-network adaptively "learns" the parametric
dependence of the system and "grows" this hidden layer one neuron at a time.
In the end, encompassing only a very small number of networks trained at this set
of adaptively selected parameter values, the meta-network is capable of
generating surrogate solutions for the parametric system across the entire
parameter domain accurately and efficiently.
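A hedged sketch of the meta-network idea described in this abstract: a single hidden layer whose neurons are full PINNs pre-trained at selected parameter values, with only the combination coefficients trained for a new parameter. The class names, the frozen-linear-combination simplification, and the omission of the PDE-residual training loop and of the adaptive neuron-growing step are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PretrainedPINN(nn.Module):
    """Stand-in for a full PINN already trained at one parameter value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                                 nn.Linear(32, 32), nn.Tanh(),
                                 nn.Linear(32, 1))
    def forward(self, xt):
        return self.net(xt)

class GPTPINNSketch(nn.Module):
    """Meta-network: one hidden layer whose activations are frozen PINNs."""
    def __init__(self, pinns):
        super().__init__()
        self.pinns = nn.ModuleList(pinns)
        for p in self.pinns.parameters():
            p.requires_grad_(False)          # pre-trained neurons stay fixed
        self.coeff = nn.Parameter(torch.ones(len(pinns)) / len(pinns))
    def forward(self, xt):
        outputs = torch.cat([p(xt) for p in self.pinns], dim=1)  # (N, n_neurons)
        return outputs @ self.coeff          # learned combination of PINN outputs

pinns = [PretrainedPINN() for _ in range(4)]   # in practice: PINNs trained offline
model = GPTPINNSketch(pinns)
# For a new parameter value, only `coeff` would be trained (against that
# parameter's PDE residual), which is what makes the online stage cheap.
```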
Error analysis for deep neural network approximations of parametric hyperbolic conservation laws
We derive rigorous bounds on the error resulting from the approximation of
the solution of parametric hyperbolic scalar conservation laws with ReLU neural
networks. We show that the approximation error can be made as small as desired
with ReLU neural networks that overcome the curse of dimensionality. In
addition, we provide an explicit upper bound on the generalization error in
terms of the training error, number of training samples and the neural network
size. The theoretical results are illustrated by numerical experiments
A Multi-level procedure for enhancing accuracy of machine learning algorithms
We propose a multi-level method to increase the accuracy of machine learning
algorithms for approximating observables in scientific computing, particularly
those that arise in systems modeled by differential equations. The algorithm
relies on judiciously combining a large number of computationally cheap
training data on coarse resolutions with a few expensive training samples on
fine grid resolutions. Theoretical arguments for lowering the generalization
error, based on reducing the variance of the underlying maps, are provided and
numerical evidence, indicating significant gains over the underlying single-level
machine learning algorithms, is presented. Moreover, we apply the
multi-level algorithm in the context of forward uncertainty quantification and
observe a considerable speed-up over competing algorithms
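A minimal two-level sketch of this idea, with plain polynomial regression standing in for the machine learning algorithms: many cheap coarse-resolution samples fit a model of the coarse observable, a few expensive fine-resolution samples fit the lower-variance coarse-to-fine correction, and the two predictions are summed. The observables, sample sizes, and polynomial degrees are illustrative placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def observable_coarse(y):   # cheap, biased surrogate of the observable
    return np.sin(np.pi * y) + 0.1 * y

def observable_fine(y):     # expensive, accurate observable
    return np.sin(np.pi * y) + 0.1 * y + 0.05 * np.cos(3 * np.pi * y)

y_coarse = rng.uniform(0, 1, 400)    # many cheap coarse-resolution samples
y_fine = rng.uniform(0, 1, 20)       # few expensive fine-resolution samples

# level-0 model: fit the coarse observable on the large data set
c0 = np.polyfit(y_coarse, observable_coarse(y_coarse), deg=8)
# level-1 model: fit the (low-variance) fine-minus-coarse correction on few samples
c1 = np.polyfit(y_fine, observable_fine(y_fine) - observable_coarse(y_fine), deg=3)

def multilevel_predict(y):
    return np.polyval(c0, y) + np.polyval(c1, y)

y_test = rng.uniform(0, 1, 1000)
print(np.mean((multilevel_predict(y_test) - observable_fine(y_test)) ** 2))
```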
Fault Tolerant Computation of Hyperbolic Partial Differential Equations with the Sparse Grid Combination Technique
As the computing power of supercomputers continues to increase
exponentially, the mean time between failures (MTBF) is
decreasing. Checkpoint-restart has historically been the method
of choice for recovering from failures. However, such methods
become increasingly inefficient as the time required to complete
a checkpoint-restart cycle approaches the MTBF. There is
therefore a need to explore different ways of making computations
fault tolerant. This thesis studies generalisations of the sparse
grid combination technique with the goal of developing and
analysing a holistic approach to the fault tolerant computation
of partial differential equations (PDEs). Sparse grids allow one
to reduce the computational complexity of high dimensional
problems with only a small loss of accuracy. A drawback is the need
to perform computations with a hierarchical basis rather than a
traditional nodal basis. We survey classical error estimates for
sparse grid interpolation and extend results to functions which
are non-zero on the boundary. The combination technique
approximates sparse grid solutions via a sum of many coarse
approximations which need not be computed with a hierarchical
basis. Study of the combination technique often assumes that
approximations satisfy an error splitting formula. We adapt
classical error splitting results to our slightly different
convention of combination level.
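A minimal sketch of the classical two-dimensional combination technique referred to above, with point samples of a known function standing in for PDE solves: component approximations on anisotropic grids of level (i, j) are combined with coefficients +1 on i + j = n and -1 on i + j = n - 1. The restriction to levels of at least 1 in each direction and the use of bilinear interpolation are illustrative simplifications, not the thesis' conventions.

```python
import numpy as np

def component_solution(f, level):
    """Sample f on an anisotropic grid with (2**li + 1) x (2**lj + 1) points."""
    li, lj = level
    x = np.linspace(0.0, 1.0, 2 ** li + 1)
    y = np.linspace(0.0, 1.0, 2 ** lj + 1)
    return x, y, f(x[:, None], y[None, :])

def bilinear_eval(x, y, u, px, py):
    """Evaluate the bilinear interpolant of u (given on grid x, y) at (px, py)."""
    i = np.clip(np.searchsorted(x, px) - 1, 0, len(x) - 2)
    j = np.clip(np.searchsorted(y, py) - 1, 0, len(y) - 2)
    tx = (px - x[i]) / (x[i + 1] - x[i])
    ty = (py - y[j]) / (y[j + 1] - y[j])
    return ((1 - tx) * (1 - ty) * u[i, j] + tx * (1 - ty) * u[i + 1, j]
            + (1 - tx) * ty * u[i, j + 1] + tx * ty * u[i + 1, j + 1])

def combination(f, n, px, py):
    """Classical combination: sum over i+j = n minus sum over i+j = n-1."""
    total = 0.0
    for s, coeff in ((n, +1.0), (n - 1, -1.0)):
        for i in range(1, s):                 # keep level >= 1 in each direction
            x, y, u = component_solution(f, (i, s - i))
            total += coeff * bilinear_eval(x, y, u, px, py)
    return total

f = lambda x, y: np.exp(-x) * np.sin(np.pi * y)
print(combination(f, 6, 0.3, 0.7), f(0.3, 0.7))
```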
Literature on the application of the combination technique to
hyperbolic PDEs is scarce, particularly when solved with explicit
finite difference methods. We show that a particular family of finite
difference discretisations for the advection equation, solved via
the method of lines, has solutions which satisfy an error
splitting formula. As a consequence, classical error splitting
based estimates are readily applied to finite difference
solutions of many hyperbolic PDEs. Our analysis also reveals how
repeated combinations throughout the computation lead to a
reduction in approximation error.
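As a concrete illustration of the kind of component solve discussed above (a hedged sketch, not the thesis' discretisation): a first-order upwind finite difference scheme for one-dimensional advection u_t + a u_x = 0, advanced with explicit Euler steps in a method-of-lines fashion on a single coarse grid. The grid size, CFL factor, and initial condition are illustrative.

```python
import numpy as np

def upwind_advection(level, a=1.0, t_end=0.5):
    n = 2 ** level
    x = np.linspace(0.0, 1.0, n, endpoint=False)   # periodic grid
    dx = 1.0 / n
    u = np.sin(2 * np.pi * x)                       # initial condition
    dt = 0.5 * dx / abs(a)                          # CFL-limited explicit step
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        u = u - a * step / dx * (u - np.roll(u, 1)) # upwind difference (a > 0)
        t += step
    return x, u

x, u = upwind_advection(level=7)
exact = np.sin(2 * np.pi * (x - 0.5))
print(np.max(np.abs(u - exact)))                    # first-order accurate
```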
Generalisations of the combination technique are studied and
developed in depth. The truncated combination technique is a
modification of the classical method used in practical
applications and we provide analogues of classical error
estimates. Adaptive sparse grids are then studied via a lattice
framework. A detailed examination reveals many results regarding
combination coefficients and extensions of classical error
estimates. The framework is also applied to the study of
extrapolation formulae. These extensions of the combination
technique provide the foundations for the development of the
general coefficient problem. Solutions to this problem allow one
to combine any collection of coarse approximations on nested
grids. Lastly, we show how the combination technique is made
fault tolerant via application of the general coefficient
problem. Rather than recompute coarse solutions which fail, we
instead find new coefficients to combine the remaining solutions.
This significantly reduces computational overheads in the
presence of faults with only a small loss of accuracy. The latter
is established with a careful study of the expected error for
some select cases. We perform numerical experiments by computing
combination solutions of the scalar advection equation in a
parallel environment with simulated faults. The results support
the preceding analysis and show that the overheads are indeed
small and represent a significant improvement over traditional
checkpoint-restart methods
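A simplified sketch of the recovery idea just described: when some component solutions are lost, restrict to a downward-closed subset of the surviving grid levels and recompute combination coefficients by inclusion-exclusion instead of recomputing the lost solutions. This is a stand-in for, not a statement of, the general coefficient problem; the helper names and the example index set are illustrative.

```python
import itertools

def downward_closed(levels):
    """Largest subset of `levels` that is closed under taking smaller indices."""
    levels = set(levels)
    changed = True
    while changed:
        changed = False
        for l in list(levels):
            below = itertools.product(*(range(c + 1) for c in l))
            if any(b not in levels for b in below):
                levels.discard(l)
                changed = True
    return levels

def combination_coefficients(levels):
    """Inclusion-exclusion coefficients c_l for a downward-closed index set."""
    d = len(next(iter(levels)))
    coeffs = {}
    for l in levels:
        c = 0
        for z in itertools.product((0, 1), repeat=d):
            if tuple(a + b for a, b in zip(l, z)) in levels:
                c += (-1) ** sum(z)
        if c != 0:
            coeffs[l] = c
    return coeffs

# classical 2D level-4 index set; grid (2, 1) is assumed lost to a fault
full = {(i, j) for i in range(5) for j in range(5) if i + j <= 4}
surviving = downward_closed(full - {(2, 1)})
print(combination_coefficients(surviving))   # new coefficients, no recomputation
```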
On the approximation of rough functions with deep neural networks
The essentially non-oscillatory (ENO) procedure and its variant, the ENO-SR procedure, are very efficient algorithms for interpolating (reconstructing) rough functions. We prove that the ENO (and ENO-SR) procedures are equivalent to deep ReLU neural networks. This demonstrates the ability of deep ReLU neural networks to approximate rough functions to high order of accuracy. Numerical tests for the resulting trained neural networks show excellent performance for interpolating functions, approximating solutions of nonlinear conservation laws, and compressing data.
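A hedged sketch of the classical ENO interpolation procedure itself, not of the ReLU network construction in the paper: the stencil containing the interval [x_i, x_{i+1}] is grown one point at a time toward the side with the smaller Newton divided difference, which keeps the stencil away from discontinuities. The function names, the order p = 4, and the test data are illustrative.

```python
import numpy as np

def divided_difference(x, f):
    """Newton divided difference over the full point set (x, f)."""
    if len(x) == 1:
        return f[0]
    return (divided_difference(x[1:], f[1:])
            - divided_difference(x[:-1], f[:-1])) / (x[-1] - x[0])

def eno_stencil(x, f, i, p=4):
    """Indices of the p-point ENO stencil containing [x_i, x_{i+1}]."""
    lo, hi = i, i + 1
    while hi - lo + 1 < p:
        left = divided_difference(x[lo - 1:hi + 1], f[lo - 1:hi + 1]) if lo > 0 else np.inf
        right = divided_difference(x[lo:hi + 2], f[lo:hi + 2]) if hi < len(x) - 1 else np.inf
        if abs(left) < abs(right):
            lo -= 1                      # extend toward the smoother side
        else:
            hi += 1
    return np.arange(lo, hi + 1)

def eno_interpolate(x, f, i, xq, p=4):
    """Evaluate the ENO interpolant for the cell [x_i, x_{i+1}] at xq."""
    s = eno_stencil(x, f, i, p)
    coeffs = np.polyfit(x[s], f[s], p - 1)   # degree p-1 polynomial on the stencil
    return np.polyval(coeffs, xq)

x = np.linspace(0, 1, 33)
f = np.where(x < 0.5, np.sin(2 * np.pi * x), 1.0 + x)   # rough (discontinuous) data
print(eno_interpolate(x, f, 10, 0.33))
```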
On the Approximation of Rough Functions with Artificial Neural Networks
Deep neural networks and the ENO procedure are both efficient frameworks for approximating rough functions. We prove that at any order, the stencil shifts of the ENO and ENO-SR interpolation procedures can be exactly obtained using a deep ReLU neural network. In addition, we construct and provide error bounds for ReLU neural networks that directly approximate the output of the ENO and ENO- SR interpolation procedures. This surprising fact enables the transfer of several desirable properties of the ENO procedure to deep neural networks, including its high-order accuracy at approximating Lipschitz functions. Numerical tests for the resulting neural networks show excellent performance for interpolating rough functions, data compression and approximating solutions of nonlinear conservation laws