
    A mixed $\ell_1$ regularization approach for sparse simultaneous approximation of parameterized PDEs

    We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem through a reformulation of standard basis pursuit denoising in which the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm based $\ell_1$ regularization method that exploits both energy and sparsity. In addition, we prove that, with minimal sample complexity, error estimates comparable to the best $s$-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on the polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach. Comment: 23 pages, 4 figures
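
    The joint-sparsity idea can be illustrated with a tiny numerical sketch. The snippet below solves a mixed-norm ($\ell_{2,1}$) regularized least-squares problem by proximal gradient descent (ISTA), so that all columns of the recovered coefficient matrix share one support; the measurement matrix, regularization weight, and iteration count are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of jointly sparse recovery via a mixed l_{2,1} penalty (illustrative only).
import numpy as np

def row_soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_{2,1}: shrink each row of X toward zero."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def joint_sparse_recovery(A, B, lam=0.1, n_iter=500):
    """ISTA for min_X 0.5*||A X - B||_F^2 + lam*||X||_{2,1}.
    The columns of B are measurements whose coefficient vectors share a common support."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        grad = A.T @ (A @ X - B)             # gradient of the data-fit term
        X = row_soft_threshold(X - grad / L, lam / L)
    return X

# Toy usage: 3 jointly sparse signals observed through the same random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
X_true = np.zeros((100, 3))
X_true[rng.choice(100, 5, replace=False)] = rng.standard_normal((5, 3))
B = A @ X_true
X_hat = joint_sparse_recovery(A, B, lam=0.01)   # rows of X_hat share one support
```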

    Analytic Regularity and GPC Approximation for Control Problems Constrained by Linear Parametric Elliptic and Parabolic PDEs

    This paper deals with linear-quadratic optimal control problems constrained by a parametric or stochastic elliptic or parabolic PDE. We address the (difficult) case that the state equation depends on a countable number of parameters, i.e., on $\sigma_j$ with $j\in\mathbb{N}$, and that the PDE operator may depend non-affinely on the parameters. We consider tracking-type functionals and distributed as well as boundary controls. Building on recent results in [CDS1, CDS2], we show that the state and the control are analytic as functions of these parameters $\sigma_j$. We establish sparsity of generalized polynomial chaos (gpc) expansions of both state and control in terms of the stochastic coordinate sequence $\sigma = (\sigma_j)_{j\ge 1}$ of the random inputs, and prove convergence rates of best $N$-term truncations of these expansions. Such truncations are the key to subsequent computations, since they do not assume that the stochastic input data have a finite expansion. The follow-up paper [KS2] explains two methods by which such best $N$-term truncations can be computed in practice: greedy-type algorithms as in [SG, Gi1], or multilevel Monte Carlo methods as in [KSS]. The sparsity result, in conjunction with the adaptive wavelet Galerkin schemes developed in [DK, GK, K], allows for sparse, adaptive tensor discretizations of control problems constrained by linear elliptic and parabolic PDEs; see [KS2].
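
    For readers unfamiliar with the notion, a best $N$-term truncation simply retains the $N$ gpc coefficients of largest norm and drops the rest. The sketch below is a minimal illustration of that operation on a synthetic coefficient dictionary; it is not one of the practical algorithms of [KS2].

```python
# Minimal sketch of a best N-term truncation of a gpc expansion (illustrative only).
import numpy as np

def best_N_term(gpc_coeffs, N):
    """gpc_coeffs: dict mapping a multi-index (tuple) to its coefficient vector.
    Returns the sub-dictionary of the N coefficients with largest Euclidean norm."""
    ranked = sorted(gpc_coeffs.items(),
                    key=lambda kv: np.linalg.norm(kv[1]), reverse=True)
    return dict(ranked[:N])

# Example: a synthetic expansion with algebraically decaying coefficients.
coeffs = {(j,): np.array([(j + 1) ** -2.0]) for j in range(50)}
truncated = best_N_term(coeffs, N=10)   # keeps the 10 dominant terms
```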

    Kernel Methods are Competitive for Operator Learning

    We present a general kernel-based framework for learning operators between Banach spaces, along with an a priori error analysis and comprehensive numerical comparisons with popular neural network (NN) approaches such as the Deep Operator Network (DeepONet) [Lu et al.] and the Fourier Neural Operator (FNO) [Li et al.]. We consider the setting where the input/output spaces of the target operator $\mathcal{G}^\dagger : \mathcal{U}\to \mathcal{V}$ are reproducing kernel Hilbert spaces (RKHS), the data come in the form of partial observations $\phi(u_i), \varphi(v_i)$ of input/output functions $v_i=\mathcal{G}^\dagger(u_i)$ ($i=1,\ldots,N$), and the measurement operators $\phi : \mathcal{U}\to \mathbb{R}^n$ and $\varphi : \mathcal{V} \to \mathbb{R}^m$ are linear. Writing $\psi : \mathbb{R}^n \to \mathcal{U}$ and $\chi : \mathbb{R}^m \to \mathcal{V}$ for the optimal recovery maps associated with $\phi$ and $\varphi$, we approximate $\mathcal{G}^\dagger$ by $\bar{\mathcal{G}}=\chi \circ \bar{f} \circ \phi$, where $\bar{f}$ is an optimal recovery approximation of $f^\dagger:=\varphi \circ \mathcal{G}^\dagger \circ \psi : \mathbb{R}^n \to \mathbb{R}^m$. We show that, even when using vanilla kernels (e.g., linear or Matérn), our approach is competitive in terms of cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning. Comment: 35 pages, 10 figures
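
    The composition $\bar{\mathcal{G}}=\chi \circ \bar{f} \circ \phi$ can be sketched in a few lines. Below, pointwise sampling stands in for the measurement maps $\phi$, $\varphi$, and kernel ridge regression with an RBF kernel stands in for the optimal recovery map $\bar{f}$; these choices are illustrative assumptions rather than the paper's exact construction.

```python
# Minimal sketch of kernel-based operator learning between sampled function spaces (illustrative only).
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

class KernelOperator:
    """Fits a map f_bar: R^n -> R^m from data (phi(u_i), varphi(v_i)) by kernel ridge regression."""
    def __init__(self, gamma=1.0, reg=1e-8):
        self.gamma, self.reg = gamma, reg

    def fit(self, Phi_U, Phi_V):
        K = rbf_kernel(Phi_U, Phi_U, self.gamma)
        self.alpha = np.linalg.solve(K + self.reg * np.eye(len(K)), Phi_V)
        self.Phi_U = Phi_U
        return self

    def predict(self, Phi_U_new):
        return rbf_kernel(Phi_U_new, self.Phi_U, self.gamma) @ self.alpha

# Toy usage: learn the nonlinear operator u -> u^2 from pointwise samples on [0, 1].
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 32)                        # measurement points (phi = varphi = point evaluation)
modes = np.sin(np.outer(np.arange(1, 5), np.pi * x))
U = rng.standard_normal((200, 4)) @ modes        # sampled random input functions phi(u_i)
V = U**2                                         # sampled outputs varphi(G(u_i))
model = KernelOperator(gamma=0.5).fit(U, V)
V_pred = model.predict(U[:5])                    # approximates varphi(G(u)) for new inputs
```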

    Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations

    We analyze the convergence of compressive sensing based sampling techniques for the efficient evaluation of functionals of solutions for a class of high-dimensional, affine-parametric, linear operator equations which depend on possibly infinitely many parameters. The proposed algorithms are based on so-called "non-intrusive" sampling of the high-dimensional parameter space, reminiscent of Monte Carlo sampling. In contrast to Monte Carlo, however, a functional of the parametric solution is then computed via compressive sensing methods from samples of functionals of the solution. A key ingredient in our analysis, of independent interest, consists in a generalization of recent results on the approximate sparsity of generalized polynomial chaos (gpc) representations of the parametric solution families, in terms of the gpc series with respect to tensorized Chebyshev polynomials. In particular, we establish sufficient conditions on the parametric inputs to the parametric operator equation such that the Chebyshev coefficients of the gpc expansion are contained in certain weighted $\ell_p$-spaces for $0<p\leq 1$. Based on this, we show that reconstructions of the parametric solutions computed from the sampled problems converge, with high probability, at the $L_2$, resp. $L_\infty$, convergence rates afforded by best $s$-term approximations of the parametric solution, up to logarithmic factors. Comment: revised version, 27 pages
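
    A stripped-down, one-parameter version of the non-intrusive sampling-and-recovery loop looks as follows: draw random parameter samples, evaluate a functional of the (here synthetic) solution, and recover sparse Chebyshev coefficients by $\ell_1$ minimization. The target functional, problem dimensions, and plain ISTA solver are illustrative assumptions, not the algorithms analyzed in the paper.

```python
# Minimal sketch of compressive-sensing recovery of Chebyshev gpc coefficients (illustrative only).
import numpy as np
from numpy.polynomial.chebyshev import chebval

def chebyshev_matrix(Y, degree):
    """One-dimensional illustration: column k holds T_k evaluated at the sample points Y in [-1, 1]."""
    return np.column_stack([chebval(Y, np.eye(degree + 1)[k]) for k in range(degree + 1)])

def ista_l1(A, b, lam=1e-4, n_iter=5000):
    """ISTA for min_c 0.5*||A c - b||_2^2 + lam*||c||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ c - b)
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
    return c

# Toy usage: recover a 2-sparse Chebyshev expansion of a parameter-to-functional map
# from fewer samples than unknown coefficients.
rng = np.random.default_rng(2)
Y = rng.uniform(-1, 1, 60)                                       # Monte-Carlo-style parameter samples
b = 0.8 * chebval(Y, [0, 0, 1]) + 0.3 * chebval(Y, [0]*7 + [1])  # "functional of the solution"
A = chebyshev_matrix(Y, degree=80)                               # 81 candidate coefficients
c_hat = ista_l1(A, b)                      # approximately recovers the two active coefficients
```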

    Operator compression with deep neural networks
