268 research outputs found

    How fast do radial basis function interpolants of analytic functions converge?

    The question in the title is answered using tools of potential theory. Convergence and divergence rates of interpolants of analytic functions on the unit interval are analyzed. The starting point is a complex-variable contour integral formula for the remainder in RBF interpolation. We study a generalized Runge phenomenon and explore how the location of the centers affects convergence. Special attention is given to Gaussian and inverse quadratic radial functions, but some of the results can be extended to other smooth basis functions. Among other things, we prove that, under mild conditions, inverse quadratic RBF interpolants of functions that are analytic inside the strip |Im(z)| < 1/(2ε), where ε is the shape parameter, converge exponentially.

    The Effect of Quadrature Errors in the Computation of L^2 Piecewise Polynomial Approximations

    In this paper we investigate the L^2 piecewise polynomial approximation problem. L^2 bounds for the derivatives of the error in approximating sufficiently smooth functions by polynomial splines follow immediately from the analogous results for polynomial spline interpolation. We derive L^2 bounds for the errors introduced by the use of two types of quadrature rules in the numerical computation of L^2 piecewise polynomial approximations. These bounds enable us to present some asymptotic results and to examine the consistent convergence of appropriately chosen sequences of such approximations. Some numerical results are also included.
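    The quadrature-based computation discussed above can be sketched as a continuous L^2 fit by piecewise linear splines on a uniform partition, with all inner products evaluated by composite Gauss-Legendre quadrature. This is an illustrative instance, not the paper's exact scheme; the basis, partition, and quadrature order are assumptions.

```python
import numpy as np

def l2_pw_linear_fit(f, n, quad_pts=2):
    """L^2 (least-squares) fit of f on [0, 1] by continuous piecewise
    linear splines on n uniform intervals, with the Gram matrix and
    load vector computed by composite Gauss-Legendre quadrature."""
    knots = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    xg, wg = np.polynomial.legendre.leggauss(quad_pts)
    xg = 0.5 * (xg + 1.0)
    wg = 0.5 * wg

    m = n + 1                                  # one hat function per knot
    G = np.zeros((m, m))
    b = np.zeros(m)
    for k in range(n):                         # assemble element by element
        x = knots[k] + h * xg                  # quadrature nodes in element k
        w = h * wg
        N = np.vstack([1.0 - xg, xg])          # local linear shape functions
        for a in range(2):
            b[k + a] += np.sum(w * f(x) * N[a])
            for c in range(2):
                G[k + a, k + c] += np.sum(w * N[a] * N[c])
    return knots, np.linalg.solve(G, b)

knots, coeffs = l2_pw_linear_fit(np.sin, 16)
# for piecewise linears the coefficients are nodal values, and the L^2
# projection deviates from f at the knots by O(h^2)
err = np.max(np.abs(coeffs - np.sin(knots)))
print(f"max nodal deviation: {err:.2e}")
```

Two-point Gauss quadrature integrates the products of linear shape functions exactly, so here only the load vector carries a quadrature error, which is the kind of perturbation the paper's bounds quantify.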

    Parallel Magnetic Resonance Imaging as Approximation in a Reproducing Kernel Hilbert Space

    In Magnetic Resonance Imaging (MRI), data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a Reproducing Kernel Hilbert Space (RKHS) with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional g-factor noise analysis to both noise amplification and approximation errors. This is demonstrated with numerical examples. (28 pages, 7 figures)
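    As a toy analogue of the RKHS formulation above (scalar-valued, rather than the matrix-valued coil-sensitivity kernel used in the paper), the sketch below reconstructs a function from samples at arbitrary locations by kernel ridge regression. The Gaussian kernel, sample count, and regularization level are illustrative choices, not the paper's.

```python
import numpy as np

# Reconstruct a function from nonuniformly placed samples by solving
# a regularized linear system in a scalar RKHS with a Gaussian kernel.
rng = np.random.default_rng(0)
k = lambda x, y: np.exp(-((x[:, None] - y[None, :]) ** 2) / 0.02)

f = lambda x: np.sin(2 * np.pi * x)
xs = np.sort(rng.uniform(0.0, 1.0, 40))       # arbitrary sample locations
ys = f(xs)

lam = 1e-6                                    # small ridge regularization
alpha = np.linalg.solve(k(xs, xs) + lam * np.eye(len(xs)), ys)

xe = np.linspace(0.0, 1.0, 200)
fit = k(xe, xs) @ alpha                       # RKHS reconstruction
max_err = np.max(np.abs(fit - f(xe)))
print(f"max reconstruction error: {max_err:.2e}")
```

In the parallel-imaging setting the same structure appears with vector-valued data (one channel per coil) and a matrix-valued kernel, and the conditioning of the kernel system is where sample-selection effects such as noise amplification show up.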

    Discrete Sparse Fourier Hermite Approximations in High Dimensions

    In this dissertation, the discrete sparse Fourier Hermite approximation of a function in a specified Hilbert space of arbitrary dimension is defined, and theoretical error bounds for the numerically computed approximation are proven. Computing the Fourier Hermite approximation in high dimensions suffers from the well-known curse of dimensionality: as the ambient dimension increases, the complexity of the problem grows until it is impossible to compute a solution numerically. To circumvent this difficulty, a sparse, hyperbolic-cross-shaped set that takes advantage of the natural decay of the Fourier Hermite coefficients is used as the index set for the approximation. The Fourier Hermite coefficients must be estimated numerically, since they are nearly impossible to compute exactly except in trivial cases; care must be taken in this computation, since the integrals involve oscillatory terms. To closely approximate the integrals that appear in the estimated Fourier Hermite coefficients, a multiscale quadrature method is used. This quadrature method is implemented through an algorithm that takes advantage of the natural properties of the Hermite polynomials for fast results. The definitions of the sparse index set and of the quadrature method each introduce many interdependent parameters. These parameters give a user many degrees of freedom to tailor the numerical procedure to meet desired speed and accuracy goals. Default guidelines for choosing these parameters for a general function f are presented that significantly reduce the computational cost over naive methods without sacrificing accuracy. Additionally, many numerical examples are included to support the complexity and accuracy claims of the proposed algorithm.
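    One common definition of a hyperbolic cross keeps the multi-indices k with prod_j (k_j + 1) <= n. The sketch below uses that definition (an assumption; the dissertation parameterizes its index set differently) to show how much smaller the sparse set is than the full tensor grid as the dimension grows.

```python
from itertools import product
from math import prod

def hyperbolic_cross(d, n):
    """Multi-indices k in N_0^d with prod_j (k_j + 1) <= n.
    (One common definition of the hyperbolic cross; the dissertation's
    own parameterization may differ.)"""
    return [k for k in product(range(n), repeat=d)
            if prod(kj + 1 for kj in k) <= n]

counts = {}
for d in (2, 3, 4):
    counts[d] = len(hyperbolic_cross(d, 16))
    print(f"d={d}: hyperbolic cross has {counts[d]} indices "
          f"vs {16**d} on the full tensor grid")
```

The cardinality grows only polylogarithmically faster than in one dimension, which is precisely what makes the sparse index set a viable weapon against the curse of dimensionality.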

    The linear algebra of interpolation with finite applications giving computational methods for multivariate polynomials

    Thesis (Ph.D.) University of Alaska Fairbanks, 1988. Linear representation and the duality of the biorthonormality relationship express the linear algebra of interpolation by way of the evaluation mapping. In the finite case the standard bases relate the maps to Gramian matrices. Five equivalent conditions on these objects are found which characterize the solution of the interpolation problem. This algebra succinctly describes the solution space of ordinary linear initial value problems. Multivariate polynomial spaces and multidimensional node sets are described by multi-index sets. Geometric considerations of normalization and dimensionality lead to cardinal bases for Lagrange interpolation on regular node sets. More general Hermite functional sets can also be solved by generalized Newton methods using geometry and multi-indices. Extended to countably infinite spaces, the method calls upon theorems of modern analysis.
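    The Gramian/evaluation-mapping view above admits a small finite example: build the Gramian of a total-degree monomial basis at a principal lattice of nodes, and verify that solving with it reproduces a polynomial in the space exactly. Names and node choices here are illustrative, not the thesis's notation.

```python
import numpy as np
from itertools import product

deg = 2
# total-degree multi-index set for bivariate polynomials of degree <= 2
idx = [(a, b) for (a, b) in product(range(deg + 1), repeat=2)
       if a + b <= deg]
# matching node set: principal lattice of the unit triangle (poised
# for total-degree Lagrange interpolation)
nodes = [(i / deg, j / deg) for (i, j) in product(range(deg + 1), repeat=2)
         if i + j <= deg]

# Gramian of the evaluation mapping: G[m][n] = (n-th monomial)(m-th node);
# its invertibility is one of the equivalent solvability conditions
G = np.array([[x**a * y**b for (a, b) in idx] for (x, y) in nodes])

f = lambda x, y: 1 + 2*x + 3*y + 4*x*y        # lies in the polynomial space
coeffs = np.linalg.solve(G, np.array([f(x, y) for (x, y) in nodes]))

xt, yt = 0.3, 0.2                              # evaluate away from the nodes
p = sum(c * xt**a * yt**b for c, (a, b) in zip(coeffs, idx))
residual = abs(p - f(xt, yt))
print(f"interpolation residual: {residual:.2e}")
```

The columns of the inverse Gramian give the coefficients of the cardinal basis, whose biorthonormality with the evaluation functionals (cardinal_i(node_j) = delta_ij) is the duality the abstract refers to.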