Multiscale and High-Dimensional Problems
High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples come from big data analysis, including regression and classification, which typically involve high-dimensional input and/or output data. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even serve to heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting constitute challenging tasks from both the theoretical and the numerical perspective.
The last decade has seen the emergence of several new computational methodologies which address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse-grid and hyperbolic wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution method, which prioritizes variables and separates solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes.
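The scale of the obstruction, and of the gain afforded by sparsity, can be made concrete by a simple point count (a sketch assuming nested one-dimensional grids; the counts are purely illustrative):

```python
# Compare the number of points of a full tensor-product grid at
# refinement level L, which grows like (2^L + 1)^d, with a Smolyak-type
# sparse grid built from the same nested 1-d grids, which grows only
# like 2^L * L^(d-1).

def new_points(l: int) -> int:
    """New points added on 1-d level l for nested grids with
    1, 3, 5, 9, ... points on levels 0, 1, 2, 3, ..."""
    return 1 if l == 0 else (2 if l == 1 else 2 ** (l - 1))

def full_grid_points(d: int, L: int) -> int:
    """Tensor-product grid: (2^L + 1)^d points."""
    return (2 ** L + 1) ** d

def sparse_grid_points(d: int, L: int) -> int:
    """Sparse grid: points of all mixed levels with |l|_1 <= L."""
    def count(dim: int, budget: int) -> int:
        # sum of prod new_points(l_i) over multi-indices with |l|_1 = budget
        if dim == 1:
            return new_points(budget)
        return sum(new_points(l) * count(dim - 1, budget - l)
                   for l in range(budget + 1))
    return sum(count(d, k) for k in range(L + 1))

for d in (2, 5, 10):
    print(f"d={d:2d}: full grid {full_grid_points(d, 5):.3e} points, "
          f"sparse grid {sparse_grid_points(d, 5)} points")
```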
This workshop proposed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.
Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
We analyze the convergence of compressive sensing based sampling techniques
for the efficient evaluation of functionals of solutions for a class of
high-dimensional, affine-parametric, linear operator equations which depend on
possibly infinitely many parameters. The proposed algorithms are based on
so-called "non-intrusive" sampling of the high-dimensional parameter space,
reminiscent of Monte-Carlo sampling. In contrast to Monte-Carlo, however, a
functional of the parametric solution is then computed via compressive sensing
methods from samples of functionals of the solution. A key ingredient in our
analysis of independent interest consists in a generalization of recent results
on the approximate sparsity of generalized polynomial chaos representations
(gpc) of the parametric solution families, in terms of the gpc series with
respect to tensorized Chebyshev polynomials. In particular, we establish
sufficient conditions on the parametric inputs to the parametric operator
equation such that the Chebyshev coefficients of the gpc expansion are
contained in certain weighted $\ell_p$-spaces for $0 < p \le 1$. Based on this we
show that reconstructions of the parametric solutions computed from the sampled
problems converge, with high probability, at the $L^2$-, resp. $L^\infty$-convergence rates afforded by best $s$-term approximations of the parametric
solution up to logarithmic factors.
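The sampling-and-recovery idea can be pictured in a single parameter dimension. The following sketch replaces the weighted $\ell_1$ minimization analyzed in the paper by plain basis pursuit and uses a smooth stand-in for a functional of the parametric solution; it is an illustration, not the paper's algorithm:

```python
# Recover an approximately sparse vector of Chebyshev coefficients of a
# one-parameter quantity of interest from m random samples (m << N)
# via l1 minimization (basis pursuit), solved as a linear program.

import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N = 64                               # Chebyshev basis functions retained
m = 24                               # number of random samples
f = lambda y: 1.0 / (2.0 + y)        # smooth stand-in quantity of interest

# Sample points drawn from the Chebyshev (arcsine) measure on [-1, 1],
# the standard choice for Chebyshev-based compressive sensing.
y = np.cos(np.pi * rng.random(m))
A = C.chebvander(y, N - 1)           # measurement matrix A[i, j] = T_j(y_i)
b = f(y)

# Basis pursuit min ||c||_1 s.t. A c = b, as an LP in the split
# variables c = c_plus - c_minus, both nonnegative.
cost = np.ones(2 * N)
res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * N))
assert res.success
c = res.x[:N] - res.x[N:]

# Compare the recovered expansion with f on a fine test grid.
t = np.linspace(-1, 1, 1001)
err = np.max(np.abs(C.chebval(t, c) - f(t)))
print(f"max error of recovered Chebyshev expansion: {err:.2e}")
```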
Computation and Learning in High Dimensions (hybrid meeting)
The most challenging problems in science often involve the learning and
accurate computation of high-dimensional functions.
High-dimensionality is a typical feature of a multitude of problems
in various areas of science.
The so-called curse of dimensionality typically negates the use of
traditional numerical techniques for the solution of
high-dimensional problems. Instead, novel theoretical and
computational approaches need to be developed to make them tractable
and to capture fine resolutions and relevant features. Paradoxically,
increasing computational power may even serve to heighten this demand,
since the wealth of new computational data itself becomes a major
obstruction. Extracting essential information from complex
problem-inherent structures and developing rigorous models to quantify
the quality of information in a high-dimensional setting pose
challenging tasks from both a theoretical and a numerical perspective.
This has led to the emergence of several new computational methodologies,
which account for the fact that well-understood methods drawing on
spatial localization and mesh refinement are, in their original form, no longer viable.
Common to these approaches is the nonlinearity of the solution method.
For certain problem classes, these methods have
drastically advanced the frontiers of computability.
The most visible of these new methods is deep learning. Although the use of deep neural
networks has been extremely successful in certain
application areas, their mathematical understanding is far from complete.
This workshop proposed to deepen the understanding of
the underlying mathematical concepts that drive this new evolution of
computational methods and to promote the exchange of ideas emerging in various
disciplines about how to treat multiscale and high-dimensional problems.
Numerical Methods for PDE Constrained Optimization with Uncertain Data
Optimization problems governed by partial differential equations (PDEs) arise in many applications in the form of optimal control, optimal design, or parameter identification problems. In most applications, parameters in the governing PDEs are not deterministic, but rather have to be modeled as random variables or, more generally, as random fields. It is crucial to capture and quantify the uncertainty in such problems rather than to simply replace the uncertain coefficients with their mean values. However, treating the uncertainty adequately and in a computationally tractable manner poses many mathematical challenges. The numerical solution of optimization problems governed by stochastic PDEs builds on mathematical subareas, which so far have been largely investigated in separate communities: Stochastic Programming, Numerical Solution of Stochastic PDEs, and PDE Constrained Optimization.
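A toy computation (not from the text) makes the first point concrete: for $-a\,u''(x) = 1$ on $(0,1)$ with $u(0) = u(1) = 0$ and a constant random diffusion coefficient $a$, the quantity of interest $u(1/2) = 1/(8a)$ is nonlinear in $a$, so replacing $a$ by its mean does not reproduce the mean of the quantity of interest:

```python
# Monte Carlo comparison of E[QoI] with the mean-value surrogate
# QoI(E[a]) for the 1-d model problem above.

import numpy as np

rng = np.random.default_rng(1)
a = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # random coefficient

qoi = 1.0 / (8.0 * a)            # exact QoI for each realization
print("mean-value surrogate :", 1.0 / (8.0 * a.mean()))
print("Monte Carlo E[QoI]   :", qoi.mean())
print("std of QoI           :", qoi.std())
```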
The workshop provided an impulse toward the cross-fertilization of these disciplines, which was also the subject of several scientific discussions. It is to be expected that future exchanges of ideas between these areas will give rise to new insights and powerful new numerical methods.
Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity
We analyze the recent Multi-index Stochastic Collocation (MISC) method for
computing statistics of the solution of a partial differential equation (PDE)
with random data, where the random coefficient is parametrized by means of a
countable sequence of terms in a suitable expansion. MISC is a combination
technique based on mixed differences of spatial approximations and quadratures
over the space of random data and, naturally, the error analysis uses the joint
regularity of the solution with respect to both the variables in the physical
domain and parametric variables. In MISC, the number of problem solutions
performed at each discretization level is not determined by balancing the
spatial and stochastic components of the error, but rather by suitably
extending the knapsack-problem approach employed in the construction of the
quasi-optimal sparse-grid and Multi-index Monte Carlo methods. We use a greedy
optimization procedure to select the most effective mixed differences to
include in the MISC estimator. We apply our theoretical estimates to a linear
elliptic PDE in which the log-diffusion coefficient is modeled as a random
field, with a covariance similar to a Matérn model, whose realizations have
spatial regularity determined by a scalar parameter. We conduct a complexity
analysis based on a summability argument showing algebraic rates of convergence
with respect to the overall computational work. The rate of convergence depends
on the smoothness parameter, the physical dimensionality and the efficiency of
the linear solver. Numerical experiments show the effectiveness of MISC in this
infinite-dimensional setting compared with the Multi-index Monte Carlo method
and compare the convergence rate against the rates predicted in our theoretical
analysis.
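The knapsack-style selection can be sketched with a toy profit model (the error and work models below are assumptions chosen for illustration, not the estimates derived in the paper):

```python
# Grow a downward-closed set of multi-indices by repeatedly adding the
# admissible mixed difference with the largest profit error/work,
# until a total work budget is exhausted.

import math

d = 3                            # e.g. spatial plus parametric directions
rates = [1.5, 1.0, 0.7]          # assumed per-direction error decay rates
budget = 200.0                   # total work budget

def error(alpha):                # modeled error contribution
    return math.exp(-sum(r * a for r, a in zip(rates, alpha)))

def work(alpha):                 # modeled cost of the mixed difference
    return math.prod(2.0 ** a for a in alpha)

def profit(alpha):
    return error(alpha) / work(alpha)

def backward_neighbours(alpha):
    for i in range(d):
        if alpha[i] > 0:
            yield tuple(a - (j == i) for j, a in enumerate(alpha))

chosen, spent = set(), 0.0
candidates = {(0,) * d}

while candidates:
    # admissible: all backward neighbours already chosen, cost fits budget
    admissible = [a for a in candidates
                  if all(b in chosen for b in backward_neighbours(a))
                  and spent + work(a) <= budget]
    if not admissible:
        break
    best = max(admissible, key=profit)
    candidates.remove(best)
    chosen.add(best)
    spent += work(best)
    for i in range(d):           # forward neighbours become candidates
        candidates.add(tuple(a + (j == i) for j, a in enumerate(best)))

print(f"selected {len(chosen)} mixed differences, work = {spent:.0f}")
```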
Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks
The past decade has seen increasing interest in applying Deep Learning (DL)
to Computational Science and Engineering (CSE). Driven by impressive results in
applications such as computer vision, Uncertainty Quantification (UQ),
genetics, simulations and image processing, DL is increasingly supplanting
classical algorithms, and seems poised to revolutionize scientific computing.
However, DL is not yet well-understood from the standpoint of numerical
analysis. Little is known about the efficiency and reliability of DL from the
perspectives of stability, robustness, accuracy, and sample complexity. In
particular, approximating solutions to parametric PDEs is an objective of UQ
for CSE. Training data for such problems is often scarce and corrupted by
errors. Moreover, the target function is a possibly infinite-dimensional smooth
function taking values in the PDE solution space, generally an
infinite-dimensional Banach space. This paper provides arguments for Deep
Neural Network (DNN) approximation of such functions, with both known and
unknown parametric dependence, that overcome the curse of dimensionality. We
establish practical existence theorems that describe classes of DNNs with
dimension-independent architecture size and training procedures based on
minimizing the (regularized) $\ell^2$-loss which achieve near-optimal algebraic
rates of convergence. These results involve key extensions of compressed
sensing for Banach-valued recovery and polynomial emulation with DNNs. When
approximating solutions of parametric PDEs, our results account for all sources
of error, i.e., sampling, optimization, approximation and physical
discretization, and allow for training high-fidelity DNN approximations from
coarse-grained sample data. Our theoretical results fall into the category of
non-intrusive methods, providing a theoretical alternative to classical methods
for high-dimensional approximation.
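A minimal training sketch in the spirit of the procedure described above (the paper proves existence theorems rather than prescribing an architecture; the scalar-valued toy target and all hyperparameters below are assumptions):

```python
# Fit a small fully-connected ReLU network to scarce samples of a
# smooth parametric map by minimizing a regularized l2-loss.

import torch

torch.manual_seed(0)

d = 8                                      # number of parameters
def target(y):                             # toy parametric quantity of interest
    return 1.0 / (2.0 + 0.5 * y.mean(dim=1, keepdim=True))

m = 200                                    # scarce training data
Y = 2 * torch.rand(m, d) - 1               # uniform samples in [-1, 1]^d
U = target(Y)

model = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-6                                 # regularization weight (assumed)

for step in range(2000):
    opt.zero_grad()
    pred = model(Y)
    reg = sum((p ** 2).sum() for p in model.parameters())
    loss = torch.mean((pred - U) ** 2) + lam * reg   # regularized l2-loss
    loss.backward()
    opt.step()

with torch.no_grad():                      # generalization check
    Ytest = 2 * torch.rand(1000, d) - 1
    err = ((model(Ytest) - target(Ytest)) ** 2).mean().sqrt()
print(f"test RMSE: {err.item():.2e}")
```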