Domain Decomposition preconditioning for high-frequency Helmholtz problems with absorption
In this paper we give new results on domain decomposition preconditioners for
GMRES when computing piecewise-linear finite-element approximations of the
Helmholtz equation $-\Delta u - (k^2 + \mathrm{i}\varepsilon)u = f$, with
absorption parameter $\varepsilon \in \mathbb{R}$. Multigrid approximations of
this equation with $\varepsilon \neq 0$ are commonly used as preconditioners
for the pure Helmholtz case ($\varepsilon = 0$). However, a rigorous theory for
such (so-called "shifted Laplace") preconditioners, either for the pure
Helmholtz equation, or even the absorptive equation ($\varepsilon \neq 0$), is
still missing. We present a new theory for the absorptive equation that
provides rates of convergence for (left- or right-) preconditioned GMRES, via
estimates of the norm and field of values of the preconditioned matrix. This
theory uses a $k$- and $\varepsilon$-explicit coercivity result for the
underlying sesquilinear form and shows, for example, that if
$|\varepsilon| \sim k^2$, then classical overlapping additive Schwarz will perform optimally for
the absorptive problem, provided the subdomain and coarse mesh diameters are
carefully chosen. Extensive numerical experiments are given that support the
theoretical results. The theory for the absorptive case gives insight into how
its domain decomposition approximations perform as preconditioners for the pure
Helmholtz case ($\varepsilon = 0$). At the end of the paper we propose a
(scalable) multilevel preconditioner for the pure Helmholtz problem that has an
empirical computation time complexity of about $\mathcal{O}(n^{4/3})$ for
solving finite element systems of size $n = \mathcal{O}(k^3)$, where we have
chosen the mesh diameter $h \sim k^{-3/2}$ to avoid the pollution effect.
Experiments on problems with $h \sim k^{-1}$, i.e. a fixed number of grid points
per wavelength, are also given.
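A toy numerical sketch of the shifted-Laplace idea described above: a 1D finite-difference Helmholtz model (not the paper's 2D/3D finite-element setting), where GMRES for the pure Helmholtz matrix ($\varepsilon = 0$) is preconditioned by an exact factorization of the absorptive matrix with $\varepsilon = k^2$. The wavenumber, grid size, and use of an exact LU solve for the preconditioner are illustrative assumptions, not the paper's domain decomposition method.

```python
# Toy 1D model: -u'' - (k^2 + i*eps) u = f on (0,1), Dirichlet BCs,
# centered finite differences. The absorptive matrix (eps = k^2) is
# factorized once and used as a preconditioner for GMRES applied to
# the pure Helmholtz matrix (eps = 0).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

k, n = 20.0, 400
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2  # discrete -d2/dx2

A = (lap - k**2 * sp.identity(n)).astype(complex).tocsc()   # pure Helmholtz, eps = 0
P = (lap - (k**2 + 1j * k**2) * sp.identity(n)).tocsc()     # absorptive, eps = k^2

f = np.ones(n, dtype=complex)
Plu = spla.splu(P)                                          # exact sparse LU of P
M = spla.LinearOperator((n, n), matvec=Plu.solve, dtype=complex)

iters = [0]
u, info = spla.gmres(A, f, M=M, restart=150,
                     callback=lambda pr_norm: iters.__setitem__(0, iters[0] + 1))
print(info, iters[0])   # info == 0 means GMRES converged
```

With $\varepsilon \sim k^2$ the preconditioned spectrum in this toy example stays bounded away from the origin; the paper studies the harder question of how well domain decomposition methods approximate the absorptive solve itself.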
The M\"obius Domain Wall Fermion Algorithm
We present a review of the properties of generalized domain wall Fermions,
based on a (real) M\"obius transformation on the Wilson overlap kernel,
discussing their algorithmic efficiency, the degree of explicit chiral
violations measured by the residual mass ($m_{\mathrm{res}}$) and the Ward-Takahashi
identities. The M\"obius class interpolates between Shamir's domain wall
operator and Bori\c{c}i's domain wall implementation of Neuberger's overlap
operator without increasing the number of Dirac applications per conjugate
gradient iteration. A new scaling parameter ($\alpha$) reduces chiral
violations at finite fifth dimension ($L_s$) but yields exactly the same
overlap action in the limit $L_s \to \infty$. Through the use of 4d
Red/Black preconditioning and optimal tuning for the scaling $\alpha$, we
show that chiral symmetry violations are typically reduced by an order of
magnitude at fixed $L_s$. At large $L_s$ we argue that the observed scaling
$m_{\mathrm{res}} = O(1/L_s)$ for Shamir is replaced by $m_{\mathrm{res}} = O(1/L_s^2)$ for the
properly tuned M\"obius algorithm with $\alpha = O(L_s)$.
Comment: 59 pages, 11 figures
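The role of the scaling parameter can already be seen on scalars: at finite $L_s$ the domain wall construction approximates $\mathrm{sign}(x)$ by the polar function $\varepsilon_{L_s}(x) = ((1+x)^{L_s} - (1-x)^{L_s})/((1+x)^{L_s} + (1-x)^{L_s}) = \tanh(L_s \tanh^{-1} x)$, and the M\"obius scaling replaces $x$ by $\alpha x$. The sketch below (the spectral window $[0.05, 1]$ and the particular values of $L_s$ and $\alpha$ are illustrative assumptions, not from the paper) shows the worst-case approximation error, a proxy for chiral symmetry violation, dropping once $\alpha > 1$:

```python
import numpy as np

def polar_sign(x, Ls, alpha=1.0):
    """Finite-Ls polar approximation to sign(x), with Moebius-style scaling alpha."""
    y = alpha * x
    p, m = (1.0 + y) ** Ls, (1.0 - y) ** Ls
    return (p - m) / (p + m)

Ls = 16
x = np.linspace(0.05, 1.0, 400)           # toy spectral window of the kernel
err = lambda a: np.max(np.abs(polar_sign(x, Ls, a) - 1.0))
print(err(1.0), err(2.0))                 # error shrinks under the rescaling
```

For a fixed window the worst error sits at the smallest eigenvalue; rescaling pushes that eigenvalue deeper into the saturated region of $\tanh$, which is the mechanism by which tuning $\alpha$ reduces $m_{\mathrm{res}}$ at fixed $L_s$.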
Variational Data Assimilation via Sparse Regularization
This paper studies the role of sparse regularization in a properly chosen
basis for variational data assimilation (VDA) problems. Specifically, it
focuses on data assimilation of noisy and down-sampled observations while the
state variable of interest exhibits sparsity in the real or transformed domain.
We show that in the presence of sparsity, the $\ell_1$-norm regularization
produces more accurate and stable solutions than the classic data assimilation
methods. To motivate further developments of the proposed methodology,
assimilation experiments are conducted in the wavelet and spectral domain using
the linear advection-diffusion equation.
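A minimal sketch of the $\ell_1$-regularized estimate in the simplest possible setting: the state is sparse in the identity basis (rather than the wavelet or spectral bases used in the paper's experiments), the observation operator is a random down-sampling-like matrix, and the solver is plain iterative soft thresholding (ISTA). All sizes, the noise level, and the regularization weight are illustrative assumptions.

```python
# Solve  min_x 0.5*||H x - y||^2 + lam*||x||_1  by ISTA for a sparse toy state.
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 80, 8                       # state size, observations, nonzeros
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.normal(size=s)

H = rng.normal(size=(m, n)) / np.sqrt(m)   # toy observation operator
y = H @ x_true + 0.01 * rng.normal(size=m) # noisy, down-sampled observations

lam = 0.05
L = np.linalg.norm(H, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):
    x = x - (H.T @ (H @ x - y)) / L                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

Since $m < n$, the unregularized least-squares problem is underdetermined; it is the $\ell_1$ term that makes the reconstruction stable and accurate here.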
Preconditioning Kernel Matrices
The computational and storage complexity of kernel machines presents the
primary barrier to scaling them to large, modern datasets. A common way to
tackle the scalability issue is to use the conjugate gradient algorithm, which
relieves the constraints on both storage (the kernel matrix need not be stored)
and computation (both stochastic gradients and parallelization can be used).
Even so, conjugate gradient is not without its own issues: the conditioning of
kernel matrices is often such that conjugate gradients will have poor
convergence in practice. Preconditioning is a common approach to alleviating
this issue. Here we propose preconditioned conjugate gradients for kernel
machines, and develop a broad range of preconditioners particularly useful for
kernel matrices. We describe a scalable approach to both solving kernel
machines and learning their hyperparameters. We show this approach is exact in
the limit of iterations and outperforms state-of-the-art approximations for a
given computational budget.
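A minimal sketch of preconditioned conjugate gradients for a kernel system $(K + \sigma^2 I)\alpha = y$, using a Nystr\"om-type low-rank preconditioner built from a random column subset, one simple member of the family of preconditioners the paper develops; the RBF kernel, all sizes, and the noise level are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n, m, sigma2 = 400, 80, 0.01
X = rng.uniform(0.0, 1.0, size=(n, 1))

def rbf(A, B, ell=0.1):
    """Squared-exponential kernel on 1D inputs."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

K = rbf(X, X)
A = K + sigma2 * np.eye(n)
y = rng.normal(size=n)

# Nystrom preconditioner P = K_nm K_mm^{-1} K_mn + sigma2*I, applied via the
# Woodbury identity so each application costs O(n*m) instead of O(n^2).
idx = rng.choice(n, size=m, replace=False)
U = K[:, idx]                                   # K_nm
Kmm = K[np.ix_(idx, idx)] + 1e-8 * np.eye(m)    # small jitter for stability
S = Kmm + U.T @ U / sigma2                      # Woodbury inner matrix

def pinv(v):                                    # v -> P^{-1} v
    return (v - U @ np.linalg.solve(S, U.T @ v / sigma2)) / sigma2

M = LinearOperator((n, n), matvec=pinv)

def solve(Mop):
    it = [0]
    x, info = cg(A, y, M=Mop, callback=lambda xk: it.__setitem__(0, it[0] + 1))
    return x, info, it[0]

x_pcg, info_p, it_p = solve(M)
x_cg, info_c, it_c = solve(None)
print(it_p, it_c)   # preconditioned CG needs fewer iterations
```

The low-rank term captures the large leading eigenvalues of the kernel matrix, so the preconditioned spectrum clusters near one and CG converges in far fewer iterations, which is exactly the conditioning problem described above.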