We present a general kernel-based framework for learning operators between
Banach spaces along with a priori error analysis and comprehensive numerical
comparisons with popular neural net (NN) approaches such as Deep Operator Net
(DeepONet) [Lu et al.] and Fourier Neural Operator (FNO) [Li et al.]. We
consider the setting where the input/output spaces of the target operator
$\mathcal{G}^\dagger : \mathcal{U} \to \mathcal{V}$ are reproducing kernel
Hilbert spaces (RKHS), the data comes in the form of partial observations
$\varphi(u_i), \phi(v_i)$ of input/output functions
$v_i = \mathcal{G}^\dagger(u_i)$ ($i = 1, \ldots, N$), and the measurement
operators $\varphi : \mathcal{U} \to \mathbb{R}^n$ and
$\phi : \mathcal{V} \to \mathbb{R}^m$ are linear. Writing
$\psi : \mathbb{R}^n \to \mathcal{U}$ and $\chi : \mathbb{R}^m \to \mathcal{V}$
for the optimal recovery maps associated with $\varphi$ and $\phi$, we
approximate $\mathcal{G}^\dagger$ with
$\bar{\mathcal{G}} = \chi \circ \bar{f} \circ \varphi$, where $\bar{f}$ is an
optimal recovery approximation of
$f^\dagger := \phi \circ \mathcal{G}^\dagger \circ \psi : \mathbb{R}^n \to \mathbb{R}^m$. We show that, even when using vanilla
kernels (e.g., linear or Mat\'{e}rn), our approach is competitive in terms of
cost-accuracy trade-off and either matches or beats the performance of NN
methods on a majority of benchmarks. Additionally, our framework offers several
advantages inherited from kernel methods: simplicity, interpretability,
convergence guarantees, a priori error estimates, and Bayesian uncertainty
quantification. As such, it can serve as a natural benchmark for operator
learning.

Comment: 35 pages, 10 figures
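The recipe in the abstract (observe inputs/outputs through linear measurement maps, learn the finite-dimensional map $f^\dagger$ by kernel-based optimal recovery, then compose) can be sketched in a few lines. This is a minimal toy illustration under our own assumptions: the target operator (an antiderivative), the pointwise-evaluation measurements, the RBF kernel, and all parameter values are hypothetical choices for exposition, not the paper's benchmarks or exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid = 32                        # measurement points (phi, varphi = pointwise evaluation)
x = np.linspace(0.0, 1.0, n_grid)

def sample_input(rng):
    """Random smooth input u(x): a few sine modes with Gaussian coefficients."""
    coeffs = rng.normal(size=3)
    return sum(c * np.sin((k + 1) * np.pi * x) for k, c in enumerate(coeffs))

def G(u):
    """Toy target operator G: antiderivative of u via the trapezoid rule."""
    return np.concatenate([[0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(x))])

# Training data: partial observations (varphi(u_i), phi(v_i)), v_i = G(u_i)
N = 200
U = np.stack([sample_input(rng) for _ in range(N)])   # (N, n)
V = np.stack([G(u) for u in U])                       # (N, m), here m = n

def rbf(A, B, ell=8.0):
    """Vanilla RBF kernel between rows of A and B (toy lengthscale choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

# Kernel regression estimate of f_dagger : R^n -> R^m
# (small nugget added for numerical conditioning)
K = rbf(U, U) + 1e-6 * np.eye(N)
alpha = np.linalg.solve(K, V)          # (N, m) coefficients

def f_bar(u_obs):
    """Predict the observations of G(u) from the observations of u."""
    return rbf(u_obs[None, :], U) @ alpha

# Evaluate on a fresh input
u_test = sample_input(rng)
v_pred = f_bar(u_test)[0]
v_true = G(u_test)
rel_err = np.linalg.norm(v_pred - v_true) / np.linalg.norm(v_true)
```

Because the measurements here are point evaluations, composing `f_bar` with any interpolation of the predicted values back to a function plays the role of the recovery map $\chi$; the sketch stops at the finite-dimensional map, which is where the learning happens.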