Localized bases for kernel spaces on the unit sphere
Approximation/interpolation from spaces of positive definite or conditionally
positive definite kernels is an increasingly popular tool for the analysis and
synthesis of scattered data, and is central to many meshless methods. For a set
of scattered sites, the standard basis for such a space utilizes
\emph{globally} supported kernels; computing with it is prohibitively expensive
when the number of data sites is large. Easily computable, well-localized bases with "small-footprint"
basis elements, i.e., elements using only a small number of kernels, have
been unavailable. Working on the sphere, with focus on the restricted surface
spline kernels (e.g. the thin-plate splines restricted to the sphere), we
construct easily computable, spatially well-localized, small-footprint, robust
bases for the associated kernel spaces. Our theory predicts that each element
of the local basis can be constructed using a combination of only a few nearby
kernels, which makes the construction computationally
cheap. We prove that the new basis is stable and satisfies polynomial
decay estimates that are stationary with respect to the density of the data
sites, and we present a quasi-interpolation scheme that provides optimal
approximation orders. Although our focus is on the sphere, much of the
theory applies to other manifolds: the rotation group, and so
on. Finally, we construct algorithms to implement these schemes and use them to
conduct numerical experiments, which validate our theory for interpolation
problems on the sphere involving over one hundred fifty thousand data sites.
Comment: This article supersedes arXiv:1111.1013 "Better bases for kernel
spaces," which proved existence of better bases for various kernel spaces.
This article treats a smaller class of kernels, but presents an algorithm for
constructing better bases and demonstrates its effectiveness with more
elaborate examples. A quasi-interpolation scheme is introduced that provides
optimal linear convergence rates.
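The small-footprint idea above can be sketched in a few lines: a local Lagrange-type basis function is obtained by solving one small kernel interpolation system, using only the handful of sites near the target site rather than all of them. This is a minimal illustration, not the paper's construction: it works on the unit circle rather than the sphere, and substitutes a Gaussian of geodesic distance for the restricted surface spline kernel.

```python
import math

def geodesic(a, b):
    """Geodesic distance between two angles on the unit circle."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def kernel(a, b):
    """Stand-in positive definite kernel (Gaussian of geodesic distance);
    the paper uses restricted surface splines instead."""
    return math.exp(-geodesic(a, b) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def local_lagrange(centers, i):
    """Coefficients of the function equal to 1 at centers[i] and 0 at the
    other centers, built from kernels placed only at those few centers."""
    A = [[kernel(a, b) for b in centers] for a in centers]
    e = [1.0 if j == i else 0.0 for j in range(len(centers))]
    return solve(A, e)

# Footprint: a handful of sites near the target site, not the full data set.
centers = [0.0, 0.3, 0.6, 0.9, 1.2]
c = local_lagrange(centers, 0)
chi = lambda x: sum(cj * kernel(x, xj) for cj, xj in zip(c, centers))
```

Because each basis element requires only one small linear solve, assembling the whole basis stays cheap even for very large site sets; the paper's analysis is about proving that such locally built elements still decay fast enough to form a stable basis.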
Direct and Inverse Results on Bounded Domains for Meshless Methods via Localized Bases on Manifolds
This article develops direct and inverse estimates for certain finite
dimensional spaces arising in kernel approximation. Both the direct and inverse
estimates are based on approximation spaces spanned by local Lagrange functions
which are spatially highly localized. The construction of such functions is
computationally efficient and generalizes the construction given by the authors
for restricted surface splines on the sphere. The kernels for which the
theory applies include the Sobolev-Mat\'ern kernels for closed, compact,
connected Riemannian manifolds.
Comment: 29 pages. To appear in Festschrift for the 80th Birthday of Ian Sloan
Variable Moving Average Transform Stitching Waves
A moving average transform in the plane with a variable size and shape window depending on the position and the 'time' is studied. The main objective is to select the window parameters in such a way that the new transform converges smoothly to the identity transform at the boundary of a prescribed bounded plane region. A new approximation of solitary waves arising from the Korteweg-de Vries equation is obtained based on the results in the paper. Numerical implementation and examples are included.
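The boundary-matching mechanism described above can be illustrated with a one-dimensional toy version (the paper works in the plane with variable window shape as well as size): the averaging window shrinks to zero width as the evaluation point approaches the boundary, so the transform reduces to the identity there while still smoothing the interior. The function and parameter names here are illustrative, not from the paper.

```python
def variable_moving_average(f, hmax=5):
    """Moving average of the samples f whose window half-width shrinks to
    zero at the ends of the sample range, so the transform is exactly the
    identity at the boundary (1-D sketch of the variable-window idea)."""
    n = len(f)
    out = []
    for i in range(n):
        h = min(i, n - 1 - i, hmax)      # half-width: 0 at the boundary
        window = f[i - h : i + h + 1]    # symmetric window around sample i
        out.append(sum(window) / len(window))
    return out

# Example: a convex profile. Averaging lifts interior values slightly,
# while the boundary samples are left untouched.
n = 101
f = [(i / (n - 1)) ** 2 for i in range(n)]
g = variable_moving_average(f)
```

At the endpoints the window contains a single sample, so `g[0] == f[0]` and `g[-1] == f[-1]` hold exactly; choosing how fast the half-width grows away from the boundary is the 1-D analogue of the window-parameter selection studied in the paper.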
Preconditioning for radial basis function partition of unity methods
Meshfree radial basis function (RBF) methods are of interest for solving partial differential equations due to attractive convergence properties, flexibility with respect to geometry, and ease of implementation. For global RBF methods, the computational cost grows rapidly with dimension and problem size, so localised approaches, such as partition of unity or stencil-based RBF methods, are currently being developed. An RBF partition of unity method (RBF-PUM) approximates functions through a combination of local RBF approximations. The linear systems that arise are locally unstructured, but have a global structure due to the partitioning of the domain. Due to the sparsity of the matrices, for large-scale problems, iterative solution methods are needed both for computational reasons and to reduce memory requirements. In this paper we implement and test different algebraic preconditioning strategies based on the structure of the matrix in combination with incomplete factorisations. We compare their performance for different orderings and problem settings and find that a no-fill incomplete factorisation of the central band of the original discretisation matrix provides a robust and efficient preconditioner.
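The "no-fill" rule behind such a preconditioner can be sketched compactly: Gaussian elimination is performed, but any update that would create a nonzero outside the original sparsity pattern is simply discarded (ILU(0)). The sketch below is a generic illustration in pure Python on a tridiagonal model band, not the paper's implementation; on a tridiagonal matrix exact LU produces no fill, so here the incomplete factorisation happens to be exact.

```python
def ilu0(A):
    """No-fill incomplete LU: eliminate only where A already has a nonzero,
    discarding any fill-in outside the original sparsity pattern."""
    n = len(A)
    M = [row[:] for row in A]
    nz = {(i, j) for i in range(n) for j in range(n) if A[i][j] != 0.0}
    for i in range(1, n):
        for k in range(i):
            if (i, k) not in nz:
                continue
            M[i][k] /= M[k][k]               # multiplier, stored in L part
            for j in range(k + 1, n):
                if (i, j) in nz:             # the no-fill rule
                    M[i][j] -= M[i][k] * M[k][j]
    return M  # unit-lower L and U stored together in one matrix

def apply_lu(M, n):
    """Multiply the stored L and U factors back together (for checking)."""
    L = [[M[i][j] if j < i else (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    U = [[M[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
    return [[sum(L[i][k] * U[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Tridiagonal "central band" of a 1-D model problem.
n = 6
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
M = ilu0(A)
LU = apply_lu(M, n)
```

In an iterative solver the factors would be applied via two triangular solves per iteration; keeping only the central band of the discretisation matrix, as the paper proposes, keeps both the factorisation and those solves cheap while still capturing the dominant coupling.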