6,705 research outputs found

    On the Schoenberg Transformations in Data Analysis: Theory and Illustrations

    The class of Schoenberg transformations, embedding Euclidean distances into higher dimensional Euclidean spaces, is presented, and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed, and visualized on artificial data sets by classical multidimensional scaling. A simple distance-based discriminant algorithm illustrates the theory, which is intimately connected to the Gaussian kernels of machine learning.
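
    As a rough illustration of the idea, the sketch below applies a Gaussian-type Schoenberg transformation, phi(d) = (1 - exp(-lambda d))/lambda, elementwise to a matrix of squared Euclidean distances and then visualizes the embedded configuration by classical multidimensional scaling. The function names, the particular transformation, and the parameter lambda are illustrative assumptions, not the paper's own code.

```python
import numpy as np

def schoenberg_gaussian(D2, lam=1.0):
    """Gaussian-type Schoenberg transformation of squared distances:
    phi(d) = (1 - exp(-lam * d)) / lam, applied elementwise (assumption for illustration)."""
    return (1.0 - np.exp(-lam * D2)) / lam

def classical_mds(D2, k=2):
    """Classical multidimensional scaling from a squared-distance matrix."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ D2 @ J                      # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # keep the k largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Example: transform pairwise squared distances of random points, then embed in 2-D.
X = np.random.default_rng(0).normal(size=(50, 3))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = classical_mds(schoenberg_gaussian(D2, lam=0.5), k=2)
```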

    Including parameter dependence in the data and covariance for cosmological inference

    The final step of most large-scale structure analyses involves the comparison of power spectra or correlation functions to theoretical models. It is clear that the theoretical models have parameter dependence, but frequently the measurements and the covariance matrix depend upon some of the parameters as well. We show that a very simple interpolation scheme on an unstructured mesh provides an efficient way to include this parameter dependence self-consistently in the analysis at modest computational expense. We describe two schemes for covariance matrices. The scheme that uses the geometric structure of such matrices performs roughly twice as well as the simplest scheme, though both perform very well. (Comment: 17 pages, 4 figures, matches version published in JCA)
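
    A minimal sketch of the general approach, assuming a piecewise-linear interpolation of the data vector and covariance matrix over scattered parameter nodes (via scipy's LinearNDInterpolator), with both folded into a Gaussian likelihood. The toy parameter grid, toy covariances, and helper names are assumptions for illustration; the paper's two covariance schemes are not reproduced here.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Illustrative setup: data vectors and covariances tabulated at scattered
# parameter points (an "unstructured mesh" in parameter space).
rng = np.random.default_rng(1)
theta_nodes = rng.uniform(size=(30, 2))                 # toy 2-parameter samples
spectra = rng.normal(size=(30, 10))                     # data vector at each node
covs = np.stack([np.eye(10) * (1 + t.sum()) for t in theta_nodes])  # toy covariances

# Piecewise-linear interpolation over the scattered nodes, applied componentwise,
# gives the data vector and covariance at any trial parameter point.
interp_data = LinearNDInterpolator(theta_nodes, spectra)
interp_cov = LinearNDInterpolator(theta_nodes, covs.reshape(30, -1))

def log_likelihood(theta, observed):
    theta = np.asarray(theta).reshape(1, -1)
    mu = interp_data(theta)[0]
    C = interp_cov(theta)[0].reshape(10, 10)
    r = observed - mu
    # The log-determinant term matters when the covariance itself varies with parameters.
    return -0.5 * (r @ np.linalg.solve(C, r) + np.linalg.slogdet(C)[1])

# Example evaluation at the centroid of the nodes (guaranteed inside the mesh).
ll = log_likelihood(theta_nodes.mean(axis=0), rng.normal(size=10))
```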

    Rapid evaluation of radial basis functions

    Over the past decade, the radial basis function method has been shown to produce high quality solutions to the multivariate scattered data interpolation problem. However, this method has been associated with very high computational cost compared to alternative methods such as finite element or multivariate spline interpolation. For example, the direct evaluation at M locations of a radial basis function interpolant with N centres requires O(MN) floating-point operations. In this paper we introduce a fast evaluation method based on the Fast Gauss Transform and suitable quadrature rules. This method has been applied to the Hardy multiquadric, the inverse multiquadric and the thin-plate spline to reduce the computational complexity of the interpolant evaluation to O(M + N) floating-point operations. By using certain localisation properties of conditionally negative definite functions, this method has several performance advantages over traditional hierarchical rapid summation methods, which we discuss in detail.
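
    For context, here is a minimal 1-D sketch of the direct O(MN) evaluation that such fast methods improve upon, using the Hardy multiquadric. The fast O(M + N) scheme itself is not reproduced; the function names and parameters are illustrative.

```python
import numpy as np

def multiquadric(r, c=1.0):
    """Hardy multiquadric basis function."""
    return np.sqrt(r**2 + c**2)

def fit_rbf(centres, values, c=1.0):
    """Solve the N x N interpolation system for the RBF coefficients."""
    r = np.abs(centres[:, None] - centres[None, :])   # pairwise distances (1-D data)
    return np.linalg.solve(multiquadric(r, c), values)

def evaluate_rbf(coeffs, centres, x, c=1.0):
    """Direct evaluation at M points: O(M*N) operations.
    Fast summation schemes, such as the Fast Gauss Transform approach
    described above, reduce this step to O(M + N)."""
    r = np.abs(x[:, None] - centres[None, :])
    return multiquadric(r, c) @ coeffs

# Example: interpolate sin(x) from N = 40 scattered centres, evaluate at M = 500 points.
rng = np.random.default_rng(2)
centres = np.sort(rng.uniform(0, 2 * np.pi, 40))
coeffs = fit_rbf(centres, np.sin(centres))
x_eval = np.linspace(0, 2 * np.pi, 500)
y_eval = evaluate_rbf(coeffs, centres, x_eval)
```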