A mixed regularization approach for sparse simultaneous approximation of parameterized PDEs
We present and analyze a novel sparse polynomial technique for the
simultaneous approximation of parameterized partial differential equations
(PDEs) with deterministic and stochastic inputs. Our approach treats the
numerical solution as a jointly sparse reconstruction problem through the
reformulation of the standard basis pursuit denoising, where the set of jointly
sparse vectors is infinite. To achieve global reconstruction of sparse
solutions to parameterized elliptic PDEs over both physical and parametric
domains, we combine the standard measurement scheme developed for compressed
sensing in the context of bounded orthonormal systems with a novel mixed-norm
based regularization method that exploits both energy and sparsity. In
addition, we are able to prove that, with minimal sample complexity, error
estimates comparable to the best $s$-term and quasi-optimal approximations are
achievable, while requiring only a priori bounds on polynomial truncation error
with respect to the energy norm. Finally, we perform extensive numerical
experiments on several high-dimensional parameterized elliptic PDE models to
demonstrate the superior recovery properties of the proposed approach.
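To make the joint-sparsity mechanism concrete, the following is a minimal sketch (not the paper's actual formulation): an ISTA-style proximal-gradient solver for the $\ell_{2,1}$-regularized least-squares problem $\min_X \frac{1}{2}\|AX - B\|_F^2 + \lambda \sum_j \|X_{j,:}\|_2$, a standard convex surrogate for jointly sparse basis pursuit denoising. The Gaussian measurement matrix, data, and regularization weight below are illustrative placeholders.

```python
import numpy as np

def row_soft_threshold(X, tau):
    """Proximal map of tau * sum_j ||X[j, :]||_2: shrinks whole rows toward zero."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * X

def joint_sparse_recover(A, B, lam, n_iter=500):
    """ISTA for min_X 0.5 * ||A X - B||_F^2 + lam * sum_j ||X[j, :]||_2."""
    X = np.zeros((A.shape[1], B.shape[1]))
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ X - B)               # gradient of the data-fit term
        X = row_soft_threshold(X - grad / L, lam / L)
    return X

# Toy demo: the same few rows are active across all right-hand sides.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
X_true = np.zeros((200, 8))
X_true[rng.choice(200, size=5, replace=False), :] = rng.standard_normal((5, 8))
B = A @ X_true + 0.01 * rng.standard_normal((60, 8))
X_hat = joint_sparse_recover(A, B, lam=0.02)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

The row-wise soft-thresholding is what couples the sparsity pattern across all columns, mirroring the "jointly sparse vectors" viewpoint of the abstract.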
Analytic Regularity and GPC Approximation for Control Problems Constrained by Linear Parametric Elliptic and Parabolic PDEs
This paper deals with linear-quadratic optimal control problems constrained by a parametric or stochastic elliptic or parabolic PDE. We address the (difficult) case that the state equation depends on a countable number of parameters, i.e., on $\sigma_j$ with $j \in \mathbb{N}$, and that the PDE operator may depend non-affinely on the parameters. We consider tracking-type functionals and distributed as well as boundary controls. Building on recent results in [CDS1, CDS2], we show that the state and the control are analytic as functions depending on these parameters $\sigma_j$. We
establish sparsity of generalized polynomial chaos (gpc) expansions of both state and control in terms of the stochastic coordinate sequence of the random inputs, and prove convergence rates of best $N$-term truncations of these expansions. Such truncations are the key for subsequent computations since they do {\em not} assume that the stochastic input data has a finite expansion. In the follow-up paper [KS2], we explain two methods by which such best $N$-term truncations can practically be computed: by greedy-type algorithms
as in [SG, Gi1], or by multilevel Monte-Carlo methods as in
[KSS]. The sparsity result allows, in conjunction with adaptive wavelet Galerkin schemes, for sparse, adaptive tensor discretizations of control problems constrained by linear elliptic and parabolic PDEs developed in [DK, GK, K]; see [KS2].
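For orientation, the mechanism behind such best $N$-term rates is the standard Stechkin-type estimate below (a generic statement under an assumed $\ell^p$-summability of the gpc coefficients, not a result quoted from the paper):

```latex
% gpc expansion of a parametric quantity q(y) (state or control) in
% tensorized polynomials P_nu over a countable multi-index set F.
\[
  q(y) \;=\; \sum_{\nu \in \mathcal{F}} q_\nu \, P_\nu(y), \qquad
  \Lambda_N \;=\; \text{indices of the $N$ largest norms } \|q_\nu\|_X .
\]
% Stechkin's lemma: lp-summability of the coefficient norms gives an
% algebraic best N-term truncation rate.
\[
  \Big( \sum_{\nu \notin \Lambda_N} \|q_\nu\|_X^2 \Big)^{1/2}
  \;\le\; N^{-(1/p - 1/2)}\,
  \big\| (\|q_\nu\|_X)_{\nu \in \mathcal{F}} \big\|_{\ell^p},
  \qquad 0 < p < 2 .
\]
```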
Kernel Methods are Competitive for Operator Learning
We present a general kernel-based framework for learning operators between
Banach spaces along with a priori error analysis and comprehensive numerical
comparisons with popular neural net (NN) approaches such as Deep Operator Net
(DeepONet) [Lu et al.] and Fourier Neural Operator (FNO) [Li et al.]. We
consider the setting where the input/output spaces of the target operator $\mathcal{G}^\dagger : \mathcal{U} \to \mathcal{V}$ are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations $\phi(u_i)$, $\varphi(v_i)$ of input/output functions $v_i = \mathcal{G}^\dagger(u_i)$ ($i = 1, \ldots, N$), and the measurement operators $\phi : \mathcal{U} \to \mathbb{R}^n$ and $\varphi : \mathcal{V} \to \mathbb{R}^m$ are linear. Writing $\psi : \mathbb{R}^n \to \mathcal{U}$ and $\chi : \mathbb{R}^m \to \mathcal{V}$ for the optimal recovery maps associated with $\phi$ and $\varphi$, we approximate $\mathcal{G}^\dagger$ with $\bar{\mathcal{G}} = \chi \circ \bar{f} \circ \phi$, where $\bar{f}$ is an optimal recovery approximation of $f^\dagger := \varphi \circ \mathcal{G}^\dagger \circ \psi : \mathbb{R}^n \to \mathbb{R}^m$. We show that, even when using vanilla
kernels (e.g., linear or Mat\'{e}rn), our approach is competitive in terms of
cost-accuracy trade-off and either matches or beats the performance of NN
methods on a majority of benchmarks. Additionally, our framework offers several
advantages inherited from kernel methods: simplicity, interpretability,
convergence guarantees, a priori error estimates, and Bayesian uncertainty
quantification. As such, it can serve as a natural benchmark for operator
learning.
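The encode-regress-decode structure $\bar{\mathcal{G}} = \chi \circ \bar{f} \circ \phi$ is easy to sketch. In the toy example below, point values on a fixed grid play the role of the linear measurements $\phi$ and $\varphi$, and plain kernel ridge regression with a Gaussian kernel stands in for the optimal recovery map $\bar{f}$; the antiderivative operator, grids, kernel, and hyperparameters are illustrative assumptions rather than the paper's benchmarks.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 l^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale**2))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 32)          # grid defining the measurements phi, varphi

def G(u_vals):
    """Toy target operator: u -> antiderivative of u on the grid."""
    return np.cumsum(u_vals) * (x[1] - x[0])

# Training data: random three-mode sine inputs u_i and outputs v_i = G(u_i).
N = 200
amps = rng.standard_normal((N, 3))
freqs = rng.uniform(1.0, 4.0, size=(N, 3))
U = np.stack([(a[:, None] * np.sin(np.pi * f[:, None] * x)).sum(0)
              for a, f in zip(amps, freqs)])   # phi(u_i): point values, (N, 32)
V = np.stack([G(u) for u in U])                # varphi(v_i): point values, (N, 32)

# Kernel ridge regression f_bar: R^n -> R^m between the measurement vectors.
lam, ell = 1e-4, 8.0
coef = np.linalg.solve(rbf_kernel(U, U, ell) + lam * np.eye(N), V)

def G_bar(u_vals):
    """Approximate operator; the decoding map chi is trivial here because the
    output measurements are point values on the same grid we predict on."""
    return (rbf_kernel(u_vals[None, :], U, ell) @ coef)[0]

u_test = np.sin(2.0 * np.pi * x)
err = np.linalg.norm(G_bar(u_test) - G(u_test)) / np.linalg.norm(G(u_test))
print("relative test error:", err)
```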
Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
We analyze the convergence of compressive sensing based sampling techniques
for the efficient evaluation of functionals of solutions for a class of
high-dimensional, affine-parametric, linear operator equations which depend on
possibly infinitely many parameters. The proposed algorithms are based on
so-called "non-intrusive" sampling of the high-dimensional parameter space,
reminiscent of Monte-Carlo sampling. In contrast to Monte-Carlo, however, a
functional of the parametric solution is then computed via compressive sensing
methods from samples of functionals of the solution. A key ingredient in our
analysis of independent interest consists in a generalization of recent results
on the approximate sparsity of generalized polynomial chaos representations
(gpc) of the parametric solution families, in terms of the gpc series with
respect to tensorized Chebyshev polynomials. In particular, we establish
sufficient conditions on the parametric inputs to the parametric operator
equation such that the Chebyshev coefficients of the gpc expansion are
contained in certain weighted $\ell_p$-spaces for $0 < p \leq 1$. Based on this, we
show that reconstructions of the parametric solutions computed from the sampled
problems converge, with high probability, at the $L^2$, resp. $L^\infty$, convergence rates afforded by best $s$-term approximations of the parametric
solution up to logarithmic factors.
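A toy version of the non-intrusive pipeline can be written in a few lines: draw parameter samples from the product Chebyshev (arcsine) measure, assemble the bounded-orthonormal-system measurement matrix from tensorized Chebyshev polynomials, and recover gpc coefficients by $\ell^1$ minimization via ISTA. The synthetic sparse coefficient vector below stands in for the functional of the PDE solution, and the unweighted $\ell^1$ penalty is a simplification of the weighted $\ell^1$ analysis in the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
d, deg, m = 4, 3, 120                 # parameter dimension, max degree, sample count

# Tensor-product Chebyshev index set: all nu with max_j nu_j <= deg.
indices = list(product(range(deg + 1), repeat=d))
N = len(indices)                      # here 4**4 = 256 basis functions

# Samples from the Chebyshev (arcsine) measure on [-1, 1]^d: y = cos(pi * uniform).
Y = np.cos(np.pi * rng.uniform(size=(m, d)))

def cheb_entry(y, nu):
    """Orthonormalized tensorized Chebyshev polynomial prod_j T_{nu_j}(y_j)."""
    nu = np.asarray(nu)
    vals = np.cos(nu * np.arccos(y))                       # T_{nu_j}(y_j)
    return (np.where(nu > 0, np.sqrt(2.0), 1.0) * vals).prod()

A = np.array([[cheb_entry(y, nu) for nu in indices] for y in Y]) / np.sqrt(m)

# Synthetic sparse gpc coefficients standing in for the solution functional.
c_true = np.zeros(N)
c_true[rng.choice(N, size=8, replace=False)] = rng.standard_normal(8)
b = A @ c_true                                             # "samples" of the functional

# l1-regularized recovery by ISTA (plain l1 instead of the paper's weighted l1).
lam, L = 1e-4, np.linalg.norm(A, 2) ** 2
c = np.zeros(N)
for _ in range(2000):
    z = c - A.T @ (A @ c - b) / L                          # gradient step
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
print("relative coefficient error:",
      np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
```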
Multiscale and High-Dimensional Problems
High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples are big data analyses such as regression and classification, which typically encounter high-dimensional data as input and/or output. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even serve to heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting constitute challenging tasks from both theoretical and numerical perspectives.
The last decade has seen the emergence of several new computational methodologies that address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse grid and hyperbolic wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution method, which prioritizes variables and separates solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes.
This workshop proposed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods, and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.