Smolyak's algorithm: A powerful black box for the acceleration of scientific computations
We provide a general discussion of Smolyak's algorithm for the acceleration
of scientific computations. The algorithm first appeared in Smolyak's work on
multidimensional integration and interpolation. Since then, it has been
generalized in multiple directions and has been associated with the keywords:
sparse grids, hyperbolic cross approximation, combination technique, and
multilevel methods. Variants of Smolyak's algorithm have been employed in the
computation of high-dimensional integrals in finance, chemistry, and physics,
in the numerical solution of partial and stochastic differential equations, and
in uncertainty quantification. Motivated by this broad and ever-increasing
range of applications, we describe a general framework that summarizes
fundamental results and assumptions in a concise, application-independent
manner.
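To make the combination-technique formulation concrete, here is a minimal sketch of Smolyak quadrature in two dimensions, assuming composite trapezoidal rules with 2^l + 1 points as the one-dimensional building blocks; the level-to-points mapping and all function names are illustrative choices, not prescribed by the algorithm itself.

```python
import numpy as np
from math import comb

def quad_1d(level):
    """Composite trapezoidal rule on [0, 1] with 2**level + 1 points."""
    n = 2**level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_2d(f, L):
    """Smolyak quadrature at total level L via the combination technique:
    for d = 2, only multi-indices with |l| in {L-1, L} contribute."""
    total = 0.0
    for l1 in range(L + 1):
        for l2 in range(L + 1 - l1):
            s = l1 + l2
            if s < L - 1:
                continue
            coeff = (-1)**(L - s) * comb(1, L - s)
            x1, w1 = quad_1d(l1)
            x2, w2 = quad_1d(l2)
            vals = f(x1[:, None], x2[None, :])  # tensor grid, but a small one
            total += coeff * (w1 @ vals @ w2)
    return total

# Sanity check: integrate exp(x + y) over the unit square; exact value (e - 1)^2
f = lambda x, y: np.exp(x + y)
print(smolyak_2d(f, 8), (np.e - 1)**2)
```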
Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity
We analyze the recent Multi-index Stochastic Collocation (MISC) method for
computing statistics of the solution of a partial differential equation (PDE)
with random data, where the random coefficient is parametrized by means of a
countable sequence of terms in a suitable expansion. MISC is a combination
technique based on mixed differences of spatial approximations and quadratures
over the space of random data and, naturally, the error analysis uses the joint
regularity of the solution with respect to both the variables in the physical
domain and parametric variables. In MISC, the number of problem solutions
performed at each discretization level is not determined by balancing the
spatial and stochastic components of the error, but rather by suitably
extending the knapsack-problem approach employed in the construction of the
quasi-optimal sparse-grid and Multi-index Monte Carlo methods. We use a greedy
optimization procedure to select the most effective mixed differences to
include in the MISC estimator. We apply our theoretical estimates to a linear
elliptic PDE in which the log-diffusion coefficient is modeled as a random
field, with a covariance similar to a Matérn model, whose realizations have
spatial regularity determined by a scalar parameter. We conduct a complexity
analysis based on a summability argument showing algebraic rates of convergence
with respect to the overall computational work. The rate of convergence depends
on the smoothness parameter, the physical dimensionality, and the efficiency of
the linear solver. Numerical experiments show the effectiveness of MISC in this
infinite-dimensional setting compared with the Multi-index Monte Carlo method
and compare the observed convergence rates against those predicted by our
theoretical analysis.
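The knapsack-style greedy selection the abstract alludes to can be sketched in a few lines: mixed differences are ranked by a "profit" (error contribution per unit work), and the multi-index set grows along admissible forward neighbours. The exponential error and work models below are placeholder assumptions; in MISC proper, the profit would come from the parametric-regularity estimates derived in the paper.

```python
import heapq
from math import exp, prod

def d_error(idx, rates=(1.0, 0.7, 0.5)):
    """Assumed (illustrative) error contribution of the mixed difference at idx."""
    return exp(-sum(r * i for r, i in zip(rates, idx)))

def d_work(idx):
    """Assumed (illustrative) work of evaluating the mixed difference at idx."""
    return float(prod(2**i for i in idx))

def greedy_index_set(dim, budget):
    """Grow a downward-closed multi-index set by repeatedly picking the
    candidate with the largest profit = error contribution / work."""
    root = (0,) * dim
    selected, seen = set(), {root}
    frontier = [(-d_error(root) / d_work(root), root)]  # max-heap via negation
    work = 0.0
    while frontier and work < budget:
        _, idx = heapq.heappop(frontier)
        selected.add(idx)
        work += d_work(idx)
        for k in range(dim):  # forward neighbours that keep the set admissible
            nb = tuple(idx[j] + (j == k) for j in range(dim))
            admissible = all(
                tuple(nb[j] - (j == m) for j in range(dim)) in selected
                for m in range(dim) if nb[m] > 0)
            if nb not in seen and admissible:
                seen.add(nb)
                heapq.heappush(frontier, (-d_error(nb) / d_work(nb), nb))
    return selected

print(sorted(greedy_index_set(dim=3, budget=100.0)))
```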
A Metalearning Approach for Physics-Informed Neural Networks (PINNs): Application to Parameterized PDEs
Physics-informed neural networks (PINNs) as a means of discretizing partial
differential equations (PDEs) are garnering much attention in the Computational
Science and Engineering (CS&E) world. At least two challenges exist for PINNs
at present: an understanding of accuracy and convergence characteristics with
respect to tunable parameters and identification of optimization strategies
that make PINNs as efficient as other computational science tools. The cost of
training PINNs remains a major challenge of Physics-informed Machine Learning
(PiML) and, in fact, of machine learning (ML) in general. This paper takes a
step toward addressing the latter through the study of PINNs on new tasks, for
which parameterized PDEs provide a good testbed application, as tasks can be
easily defined in this context. Following the ML world, we introduce
metalearning of PINNs with application to parameterized PDEs. By introducing
metalearning and transfer learning concepts, we can greatly accelerate the
PINNs optimization process. We present a survey of model-agnostic metalearning,
and then discuss our model-aware metalearning applied to PINNs as well as
implementation considerations and algorithmic complexity. We then test our
approach on various canonical forward parameterized PDEs that have been
presented in the emerging PINNs literature.
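As a deliberately simplified illustration of the transfer-learning ingredient, the PyTorch sketch below trains a small PINN on a one-dimensional parameterized Poisson problem and then warm-starts training for a nearby parameter value from the learned weights. The problem, architecture, and hyperparameters are our own assumptions for the sake of a runnable example, not those of the paper.

```python
import math
import torch

torch.manual_seed(0)

def make_pinn():
    """A small fully connected network u_theta(x)."""
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

def pinn_loss(net, mu, n_col=64):
    """Residual of u''(x) = -mu * pi^2 * sin(pi x) on (0, 1), u(0) = u(1) = 0,
    whose exact solution is u(x) = mu * sin(pi x)."""
    x = torch.rand(n_col, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + mu * math.pi**2 * torch.sin(math.pi * x)
    bc = net(torch.tensor([[0.0], [1.0]]))  # boundary values should vanish
    return (residual**2).mean() + (bc**2).mean()

def train(net, mu, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = pinn_loss(net, mu)
        loss.backward()
        opt.step()
    return loss.item()

net = make_pinn()
print("task mu=1.0, cold start:", train(net, mu=1.0))
# Transfer: reuse the trained weights as the initialization for a nearby task,
# typically reaching a comparable loss in far fewer optimizer steps.
print("task mu=1.2, warm start:", train(net, mu=1.2, steps=100))
```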
Error analysis of regularized and unregularized least-squares regression on discretized function spaces
In this thesis, we analyze a variant of the least-squares regression method which operates on subsets of finite-dimensional vector spaces. In the first part, we focus on a regression problem which is constrained to a ball of finite radius in the search space. We derive an upper bound on the overall error by coupling the ball radius to the resolution of the search space. In the second part, the corresponding penalized Lagrangian dual problem is considered to establish probabilistic results on the well-posedness of the underlying minimization problem. Furthermore, we examine the limit case, in which the penalty term vanishes, and improve on our error estimates from the first part for the special case of noiseless function reconstruction. Subsequently, our theoretical foundation is used to obtain novel convergence results for regression algorithms based on sparse grids with linear splines and on Fourier polynomial spaces on hyperbolic crosses. We conclude the thesis by giving several numerical examples and comparing the observed error behavior to our theoretical results.
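A minimal numerical illustration of the penalized setting, assuming a one-dimensional cosine basis as the discretized search space; the basis, noise level, and penalty weight are illustrative choices only, not those analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def design(x, m):
    """Evaluate the first m cosine basis functions at the sample points x."""
    return np.cos(np.pi * np.outer(x, np.arange(m)))

# Noisy samples of a target function f
f = lambda x: np.exp(x) * np.sin(2 * np.pi * x)
n, m, lam = 200, 15, 1e-3
x = rng.uniform(0.0, 1.0, n)
y = f(x) + 0.05 * rng.standard_normal(n)

# Penalized normal equations: (A^T A / n + lam * I) c = A^T y / n
A = design(x, m)
c = np.linalg.solve(A.T @ A / n + lam * np.eye(m), A.T @ y / n)

# Root-mean-square error of the reconstruction on a fine grid
xs = np.linspace(0.0, 1.0, 1000)
err = np.sqrt(np.mean((design(xs, m) @ c - f(xs))**2))
print(f"RMSE of the penalized least-squares fit: {err:.4f}")
```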