Extending the range of error estimates for radial approximation in Euclidean space and on spheres
We adapt Schaback's error doubling trick [R. Schaback. Improved error bounds
for scattered data interpolation by radial basis functions. Math. Comp.,
68(225):201--216, 1999.] to give error estimates for radial interpolation of
functions with smoothness lying (in some sense) between that of the usual
native space and the subspace with double the smoothness. We do this for both
bounded subsets of R^d and spheres. As a step on the way to our ultimate goal
we also show convergence of pseudoderivatives of the interpolation error.
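The radial interpolation being analysed above can be sketched concretely: given scattered data sites, one solves a linear system whose matrix is built from a positive definite radial kernel, and the resulting interpolant matches the data exactly. The following minimal sketch (the Matérn kernel, site locations, and target function are illustrative choices, not taken from the paper) uses a Matérn kernel of smoothness 3/2, whose native space is a Sobolev space on R^d:

```python
import numpy as np

def rbf_interpolant(centers, values, kernel):
    """Solve A c = f with A_ij = kernel(||x_i - x_j||), then return
    the interpolant s(x) = sum_j c_j kernel(||x - x_j||)."""
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    coeffs = np.linalg.solve(kernel(dists), values)

    def s(points):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        return kernel(d) @ coeffs

    return s

# Matern (nu = 3/2) kernel: positive definite on R^d for every d
matern = lambda r: (1.0 + r) * np.exp(-r)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))             # scattered sites in R^2
target = lambda p: np.sin(p[:, 0]) * np.cos(p[:, 1])
s = rbf_interpolant(X, target(X), matern)

print(np.max(np.abs(s(X) - target(X))))          # ~0: exact at the data sites
```

The error estimates in the paper then bound how fast s converges to the target between the sites as the fill distance of the sites shrinks.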
Interpolation and Best Approximation for Spherical Radial Basis Function Networks
Within the conventional framework of a native space structure, a smooth kernel generates a
small native space, and radial basis functions stemming from the smooth kernel are intended to
approximate only functions from this small native space. In this paper, we embed the smooth
radial basis functions in a larger native space generated by a less smooth kernel and use them
to interpolate the samples. Our result shows that there exists a linear combination of spherical
radial basis functions that can both exactly interpolate samples generated by functions in the
larger native space and near-best approximate the target function.
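On the sphere, the same interpolation mechanism uses a zonal kernel, i.e. one depending only on the inner product of the two points. A minimal sketch (the kernel exp(x·y), the point set, and the target are illustrative assumptions; the paper works with general kernel pairs of differing smoothness):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # scattered points on S^2

# zonal kernel k(x, y) = exp(x . y): its Legendre coefficients are all
# positive, so it is strictly positive definite on the sphere
K = np.exp(X @ X.T)

f = X[:, 0] * X[:, 2]            # sample target (a degree-2 harmonic)
c = np.linalg.solve(K, f)

# interpolant s(y) = sum_i c_i exp(x_i . y) reproduces the data
s = lambda Y: np.exp(Y @ X.T) @ c
print(np.max(np.abs(s(X) - f)))  # residual at the interpolation nodes
```

The result quoted above says that such a combination of smooth spherical basis functions can simultaneously interpolate the samples and come within a constant factor of the best approximation from the larger, rougher native space.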
Analytically divergence-free discretization methods for Darcy's problem
Radial basis functions are well known for their applications in scattered data approximation and interpolation. They can also be applied in collocation methods to solve partial differential equations. We develop and analyse a mesh-free discretization method for Darcy's problem. Our approximation scheme is based upon optimal recovery, which leads to a collocation scheme using divergence-free positive definite kernels. Besides producing analytically incompressible flow fields, our method can be of arbitrary order, works in arbitrary space dimension and for arbitrary geometries. First, we state Darcy's problem. To introduce the scheme we review and study divergence-free and curl-free matrix-valued kernels and their reproducing kernel Hilbert spaces. After developing the scheme, we find the approximation error for smooth target functions and the optimal approximation orders. Furthermore, we develop Sobolev-type error estimates for target functions rougher than the approximating function and show that the approximation properties extend to those functions. To find these error estimates, we apply band-limited approximation. Finally, we illustrate the method with numerical examples.
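A standard way to obtain the divergence-free matrix-valued kernels mentioned above is to apply the differential operator −ΔI + ∇∇ᵀ to a smooth scalar kernel φ; every column of the resulting matrix field is then analytically divergence-free. A minimal 2D sketch, with a Gaussian φ chosen purely for illustration (the paper does not specify this particular φ), checking the divergence-free property by finite differences:

```python
import numpy as np

def Phi(x):
    """Matrix-valued kernel Phi = (-Laplacian * I + grad grad^T) phi
    in 2D, with phi(x) = exp(-|x|^2); worked out in closed form."""
    x = np.asarray(x, dtype=float)
    r2 = x @ x
    return ((2.0 - 4.0 * r2) * np.eye(2) + 4.0 * np.outer(x, x)) * np.exp(-r2)

def div_col(p, j, h=1e-5):
    """Central-difference divergence of the j-th column of Phi at p."""
    total = 0.0
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        total += (Phi(p + e)[i, j] - Phi(p - e)[i, j]) / (2.0 * h)
    return total

p = np.array([0.3, -0.7])
print(div_col(p, 0), div_col(p, 1))  # both ~0: columns are divergence-free
```

Collocating with such kernels makes the approximate velocity field incompressible exactly, not just up to discretization error, which is the point of the scheme described in the abstract.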
Approximation in rough native spaces by shifts of smooth kernels on spheres
Within the conventional framework of a native space structure, a smooth kernel generates a small native space, and “radial basis functions” stemming from the smooth kernel are intended to approximate only functions from this small native space. Therefore their approximation power is quite limited. Recently, Narcowich et al. and Narcowich and Ward, respectively, have studied two approaches that extend the approximation power of smooth radial basis functions to a larger native space. In the approach of [NW], the radial basis function interpolates the target function at some scattered (prescribed) points. In both approaches, approximation power of the smooth radial basis functions is achieved by utilizing spherical polynomials of a (possibly) large degree to form an intermediate approximation between the radial basis approximation and the target function. In this paper, we take a new approach. We embed the smooth radial basis functions in a larger native space generated by a less smooth kernel, and use them to approximate functions from the larger native space. Among other results, we characterize the best approximant with respect to the metric of the larger native space to be the radial basis function that interpolates the target function on a set of finite scattered points after the action of a certain multiplier operator. We also establish the error bounds between the best approximant and the target function.