    Domain Decomposition preconditioning for high-frequency Helmholtz problems with absorption

    In this paper we give new results on domain decomposition preconditioners for GMRES when computing piecewise-linear finite-element approximations of the Helmholtz equation $-\Delta u - (k^2 + {\rm i}\varepsilon)u = f$, with absorption parameter $\varepsilon \in \mathbb{R}$. Multigrid approximations of this equation with $\varepsilon \neq 0$ are commonly used as preconditioners for the pure Helmholtz case ($\varepsilon = 0$). However, a rigorous theory for such (so-called "shifted Laplace") preconditioners, either for the pure Helmholtz equation or even for the absorptive equation ($\varepsilon \neq 0$), is still missing. We present a new theory for the absorptive equation that provides rates of convergence for (left- or right-) preconditioned GMRES, via estimates of the norm and field of values of the preconditioned matrix. This theory uses a $k$- and $\varepsilon$-explicit coercivity result for the underlying sesquilinear form and shows, for example, that if $|\varepsilon| \sim k^2$, then classical overlapping additive Schwarz will perform optimally for the absorptive problem, provided the subdomain and coarse-mesh diameters are carefully chosen. Extensive numerical experiments are given that support the theoretical results. The theory for the absorptive case gives insight into how its domain decomposition approximations perform as preconditioners for the pure Helmholtz case $\varepsilon = 0$. At the end of the paper we propose a (scalable) multilevel preconditioner for the pure Helmholtz problem that has an empirical computation time complexity of about $\mathcal{O}(n^{4/3})$ for solving finite-element systems of size $n = \mathcal{O}(k^3)$, where we have chosen the mesh diameter $h \sim k^{-3/2}$ to avoid the pollution effect. Experiments on problems with $h \sim k^{-1}$, i.e. a fixed number of grid points per wavelength, are also given.
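
    The abstract's central idea, using the absorptive problem as a "shifted Laplace" preconditioner for the pure Helmholtz problem, can be sketched in a few lines. The following is a minimal 1D finite-difference illustration, assuming Dirichlet boundaries and an exact sparse solve of the shifted operator standing in for the paper's domain decomposition approximation; the grid, wavenumber, and source are illustrative choices, not the paper's setup.

    # Minimal sketch: GMRES on the pure Helmholtz system (eps = 0),
    # preconditioned by a direct solve of the absorptive system with
    # |eps| ~ k^2. A sparse LU stands in for the DD/multigrid approximation.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    k = 40.0                                      # wavenumber (illustrative)
    n = 2000                                      # interior grid points
    h = 1.0 / (n + 1)
    lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2

    A = (lap - k**2 * sp.eye(n)).tocsc()          # pure Helmholtz: -Delta u - k^2 u
    eps = k**2                                    # absorption with |eps| ~ k^2
    A_eps = (lap - (k**2 + 1j * eps) * sp.eye(n)).tocsc()

    lu = spla.splu(A_eps)                         # exact solve of the shifted problem
    M = spla.LinearOperator(A.shape, matvec=lu.solve, dtype=complex)

    f = np.zeros(n, dtype=complex)
    f[n // 2] = 1.0 / h                           # point source
    u, info = spla.gmres(A, f, M=M, restart=50)
    print("converged" if info == 0 else f"GMRES info = {info}")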

    The M\"obius Domain Wall Fermion Algorithm

    We present a review of the properties of generalized domain wall fermions, based on a (real) Möbius transformation of the Wilson overlap kernel, discussing their algorithmic efficiency, the degree of explicit chiral violation measured by the residual mass ($m_{res}$), and the Ward-Takahashi identities. The Möbius class interpolates between Shamir's domain wall operator and Boriçi's domain wall implementation of Neuberger's overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter ($\alpha$) reduces chiral violations at finite fifth dimension ($L_s$) but yields exactly the same overlap action in the limit $L_s \rightarrow \infty$. Through the use of 4d red/black preconditioning and optimal tuning of the scaling $\alpha(L_s)$, we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed $L_s$. At large $L_s$ we argue that the observed scaling $m_{res} = O(1/L_s)$ for Shamir is replaced by $m_{res} = O(1/L_s^2)$ for the properly tuned Möbius algorithm with $\alpha = O(L_s)$.
    Comment: 59 pages, 11 figures
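
    As a toy illustration of why the scaling parameter $\alpha$ helps, consider the scalar polar (tanh-like) approximation to the sign function that a finite-$L_s$ domain wall construction induces on its kernel: rescaling the kernel by $\alpha$ stretches the argument and improves the approximation over a fixed spectral window. The sketch below is a scalar model under that assumption, not a lattice implementation; the spectral window and $\alpha$ values are arbitrary.

    # Scalar toy model: polar approximation to sign(x) at finite Ls,
    # with a Mobius-style rescaling by alpha. The deviation from 1 over
    # the kernel's spectral window is a rough proxy for chiral violation.
    import numpy as np

    def eps_polar(x, Ls, alpha=1.0):
        t = alpha * x
        return ((1 + t)**Ls - (1 - t)**Ls) / ((1 + t)**Ls + (1 - t)**Ls)

    lam = np.linspace(0.05, 1.0, 400)   # model spectral window (assumption)
    for Ls in (8, 16, 32):
        for alpha in (1.0, 2.0):        # alpha = 1 ~ Shamir; alpha > 1 ~ Mobius
            delta = np.abs(1.0 - eps_polar(lam, Ls, alpha)).max()
            print(f"Ls={Ls:2d} alpha={alpha:.1f}  max|1 - eps| = {delta:.2e}")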

    Variational Data Assimilation via Sparse Regularization

    This paper studies the role of sparse regularization in a properly chosen basis for variational data assimilation (VDA) problems. Specifically, it focuses on the assimilation of noisy and down-sampled observations while the state variable of interest exhibits sparsity in the real or a transformed domain. We show that in the presence of sparsity, $\ell_1$-norm regularization produces more accurate and stable solutions than classic data assimilation methods. To motivate further development of the proposed methodology, assimilation experiments are conducted in the wavelet and spectral domains using the linear advection-diffusion equation.
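
    A compact way to see the $\ell_1$ mechanism at work is a proximal-gradient (ISTA) solve of a toy VDA cost. The sketch below assumes an identity background-error covariance, a down-sampling observation operator H, and sparsity in the physical domain (the paper also works in wavelet and spectral bases); all dimensions and weights are illustrative.

    # Toy l1-regularized VDA via ISTA:
    #   J(x) = 0.5||x - xb||^2 + 0.5||Hx - y||^2 / r + lam * ||x||_1
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 200, 50
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)   # sparse truth
    H = np.zeros((m, n))
    H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0         # down-sampling
    r = 0.01                                                       # obs. noise variance
    y = H @ x_true + np.sqrt(r) * rng.normal(size=m)
    xb = x_true + 0.3 * rng.normal(size=n)                         # noisy background

    def soft(z, t):                        # proximal operator of t*||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    lam = 0.1
    L = 1.0 + np.linalg.norm(H, 2)**2 / r  # Lipschitz constant of the smooth part
    x = xb.copy()
    for _ in range(500):
        grad = (x - xb) + H.T @ (H @ x - y) / r
        x = soft(x - grad / L, lam / L)
    print("nonzeros:", np.count_nonzero(np.abs(x) > 1e-6),
          " error:", np.linalg.norm(x - x_true))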

    Preconditioning Kernel Matrices

    The computational and storage complexity of kernel machines presents the primary barrier to their scaling to large, modern datasets. A common way to tackle the scalability issue is to use the conjugate gradient algorithm, which relieves the constraints on both storage (the kernel matrix need not be stored) and computation (both stochastic gradients and parallelization can be used). Even so, conjugate gradients are not without their own issues: the conditioning of kernel matrices is often such that conjugate gradients will have poor convergence in practice. Preconditioning is a common approach to alleviating this issue. Here we propose preconditioned conjugate gradients for kernel machines and develop a broad range of preconditioners particularly useful for kernel matrices. We describe a scalable approach to both solving kernel machines and learning their hyperparameters. We show this approach is exact in the limit of iterations and outperforms state-of-the-art approximations for a given computational budget.
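
    To make the idea concrete, here is a hedged sketch (not necessarily one of the paper's preconditioners): a Nyström low-rank approximation of an RBF kernel, applied through the Woodbury identity as a preconditioner for SciPy's conjugate gradient. The kernel choice, inducing-set size m, and noise level are assumptions for illustration.

    # Preconditioned CG for (K + sigma^2 I) x = y with a Nystrom preconditioner.
    import numpy as np
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 3))

    def rbf(A, B, ell=1.0):                # squared-exponential kernel
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    sigma2 = 1e-2
    K = rbf(X, X) + sigma2 * np.eye(len(X))
    y = rng.normal(size=len(X))

    m = 100                                # inducing subset size (assumption)
    idx = rng.choice(len(X), m, replace=False)
    Knm = rbf(X, X[idx])
    Kmm = rbf(X[idx], X[idx]) + 1e-8 * np.eye(m)
    inner = Kmm + Knm.T @ Knm / sigma2     # Woodbury inner matrix

    def apply_Pinv(v):                     # (Knm Kmm^{-1} Knm^T + sigma2 I)^{-1} v
        w = np.linalg.solve(inner, Knm.T @ v)
        return (v - Knm @ w / sigma2) / sigma2

    M = spla.LinearOperator(K.shape, matvec=apply_Pinv)
    x, info = spla.cg(K, y, M=M)
    print("converged" if info == 0 else f"CG info = {info}")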