
    Bootstrap Multigrid for the Laplace-Beltrami Eigenvalue Problem

    This paper introduces bootstrap two-grid and multigrid finite element approximations to the Laplace-Beltrami (surface Laplacian) eigen-problem on a closed surface. The proposed multigrid method is suitable for recovering eigenvalues of large multiplicity, computing interior eigenvalues, and approximating the shifted indefinite eigen-problem. Convergence analysis is carried out for a simplified two-grid algorithm, and numerical experiments are presented to illustrate the basic components and ideas behind the overall bootstrap multigrid approach.
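    The two-grid idea behind this approach can be illustrated with a minimal 1D sketch (not the paper's surface finite element method): solve the eigenproblem cheaply on a coarse grid, interpolate the eigenvector to the fine grid, and refine it there with a few inverse iterations. All function names and parameters here are illustrative.

    ```python
    import numpy as np

    def laplacian_1d(n):
        # Standard 3-point finite difference Laplacian on (0,1), Dirichlet BCs.
        h = 1.0 / (n + 1)
        return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    def prolongate(vc, nf):
        # Linear interpolation of a coarse-grid vector onto the fine grid.
        xc = np.linspace(0, 1, len(vc) + 2)[1:-1]
        xf = np.linspace(0, 1, nf + 2)[1:-1]
        return np.interp(xf, xc, vc)

    def two_grid_smallest_eig(nf=63, nc=31, refine_steps=5):
        Af, Ac = laplacian_1d(nf), laplacian_1d(nc)
        # Coarse solve: a cheap approximation of the lowest eigenpair.
        _, Vc = np.linalg.eigh(Ac)
        v = prolongate(Vc[:, 0], nf)
        v /= np.linalg.norm(v)
        # Fine-level refinement: inverse iteration sharpens the coarse guess.
        for _ in range(refine_steps):
            v = np.linalg.solve(Af, v)
            v /= np.linalg.norm(v)
        return v @ Af @ v  # Rayleigh quotient approximates the smallest eigenvalue
    ```

    The coarse eigenvector supplies a good starting guess, so only a handful of fine-grid solves are needed; the bootstrap multigrid method of the paper extends this principle hierarchically and to clustered and interior eigenvalues.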

    A numerical domain decomposition method for solving elliptic equations on manifolds

    A new numerical domain decomposition method is proposed for solving elliptic equations on compact Riemannian manifolds. The advantage of this method is that it avoids global triangulations or grids on the manifold. The method is numerically tested on several 4-dimensional manifolds, such as the unit sphere $S^4$, the complex projective space $\mathbb{CP}^2$, and the product manifold $S^2 \times S^2$. Comment: Final version. To appear in SIAM Journal on Scientific Computing.
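    The chart-by-chart flavor of domain decomposition can be sketched, under simplifying assumptions, with a classical alternating Schwarz iteration on an interval: each "chart" is an overlapping subdomain with its own local solve, and the pieces are glued through the overlap. This is a generic 1D illustration, not the manifold algorithm of the paper.

    ```python
    import numpy as np

    def schwarz_poisson_1d(n=101, n_sweeps=30):
        # Solve -u'' = 1 on (0,1), u(0) = u(1) = 0 by alternating Schwarz on
        # two overlapping subdomains, analogous to local solves on charts.
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        f = np.ones(n)
        u = np.zeros(n)
        # Overlapping index sets covering [0, 0.6] and [0.4, 1].
        i1 = np.where(x <= 0.6)[0]
        i2 = np.where(x >= 0.4)[0]
        for _ in range(n_sweeps):
            for idx in (i1, i2):
                m = len(idx) - 2  # interior unknowns of this subdomain
                A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
                rhs = f[idx[1:-1]].copy()
                # Dirichlet data at the subdomain ends from the current iterate.
                rhs[0] += u[idx[0]] / h**2
                rhs[-1] += u[idx[-1]] / h**2
                u[idx[1:-1]] = np.linalg.solve(A, rhs)
        return x, u
    ```

    Because the subdomains overlap, each local solve transports boundary information from the other piece, and the iteration converges geometrically to the global discrete solution (here, the exact nodal values of $u = x(1-x)/2$).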

    Transformer Meets Boundary Value Inverse Problems

    A Transformer-based deep direct sampling method is proposed for a class of boundary value inverse problems. Real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and the reconstructed images. The work offers a concrete case study for a fundamental question: whether and how one can exploit the theoretical structure of a mathematical problem to develop task-oriented and structure-conforming deep neural networks. Specifically, inspired by direct sampling methods for inverse problems, the 1D boundary data at different frequencies are preprocessed by a partial differential equation-based feature map to yield 2D harmonic extensions as different input channels. Then, by introducing learnable non-local kernels, the direct sampling is recast as a modified attention mechanism. The proposed method is applied to electrical impedance tomography, a well-known severely ill-posed nonlinear inverse problem. The new method achieves superior accuracy over its predecessors and contemporary operator learners, and shows robustness with respect to noise. This research strengthens the insight that the attention mechanism, despite being invented for natural language processing tasks, offers great flexibility to be modified in conformity with a priori mathematical knowledge, ultimately leading to the design of more physics-compatible neural architectures.
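    The "direct sampling as attention" idea can be sketched schematically: flatten the harmonic-extension channels into tokens, and let learnable kernels (playing the role of query/key/value maps) form the pairwise non-local interactions that classical direct sampling obtains from fixed probing functions. The layout and names below are hypothetical, not the paper's architecture.

    ```python
    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def nonlocal_attention(U, Wq, Wk, Wv):
        # U: (n_tokens, c) -- flattened harmonic-extension channels (assumed layout).
        # Wq, Wk, Wv: learnable kernels replacing fixed probing functions.
        Q, K, V = U @ Wq, U @ Wk, U @ Wv
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)   # pairwise non-local interactions
        return softmax(scores, axis=-1) @ V

    rng = np.random.default_rng(0)
    n, c, d = 16, 4, 8
    U = rng.standard_normal((n, c))
    Wq, Wk, Wv = (rng.standard_normal((c, d)) for _ in range(3))
    out = nonlocal_attention(U, Wq, Wk, Wv)
    ```

    In this reading, the softmax-normalized score matrix is a data-dependent non-local kernel, which is what makes the attention mechanism a natural host for the structure of direct sampling methods.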