
    Fast Taylor polynomial evaluation for the computation of the matrix cosine

    [EN] In this work we introduce a new method to compute the matrix cosine. It is based on recent matrix polynomial evaluation methods for the Taylor approximation and on a mixed forward and backward error analysis. These evaluation methods make it possible to evaluate the Taylor polynomial approximation of the matrix cosine more efficiently than the Paterson–Stockmeyer method. A sequential Matlab implementation of the new algorithm is provided, giving better efficiency and accuracy than state-of-the-art algorithms. Moreover, we provide a Matlab implementation that can use NVIDIA GPUs easily and efficiently. (C) 2018 Elsevier B.V. All rights reserved. This work has been partially supported by Spanish Ministerio de Economía y Competitividad and European Regional Development Fund (ERDF) grants TIN2014-59294-P and TIN2017-89314-P. Sastre, J.; Ibáñez González, JJ.; Alonso-Jordá, P.; Peinado Pinilla, J.; Defez Candel, E. (2019). Fast Taylor polynomial evaluation for the computation of the matrix cosine. Journal of Computational and Applied Mathematics. 354:641-650. https://doi.org/10.1016/j.cam.2018.12.041
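    For context on the baseline the abstract mentions, here is a minimal sketch of Paterson–Stockmeyer evaluation of a matrix polynomial p(A) = Σₖ cₖ Aᵏ, which costs O(√deg) matrix products instead of deg. The function name and blocking parameter are illustrative, not from the paper; the paper's contribution is a still cheaper evaluation scheme.

    ```python
    import numpy as np

    def ps_matrix_poly(c, A):
        """Paterson-Stockmeyer evaluation of p(A) = sum_k c[k] * A**k.

        Splits the coefficients into blocks of size s ~ sqrt(deg) and
        applies Horner's rule in A**s, so only O(sqrt(deg)) matrix
        multiplications are needed.
        """
        n = A.shape[0]
        m = len(c) - 1                       # polynomial degree
        s = max(1, int(round(np.sqrt(m + 1))))
        # precompute A^0 .. A^s
        P = [np.eye(n)]
        for _ in range(s):
            P.append(P[-1] @ A)
        As = P[s]
        # Horner over coefficient blocks B_j = sum_{i<s} c[j*s+i] * A^i
        result = np.zeros((n, n))
        for j in reversed(range(m // s + 1)):
            B = np.zeros((n, n))
            for i in range(s):
                k = j * s + i
                if k <= m:
                    B += c[k] * P[i]
            result = result @ As + B
        return result
    ```

    For a Taylor cosine approximation one would apply this with B = A² and coefficients (−1)ᵏ/(2k)!, since cos(A) = Σₖ (−1)ᵏ A²ᵏ/(2k)!.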

    Fast Computation of Sums of Gaussians in High Dimensions

    Evaluating sums of multivariate Gaussian kernels is a key computational task in many problems in computational statistics and machine learning. The cost of direct evaluation of such sums scales as the product of the number of kernel functions and the number of evaluation points. The fast Gauss transform proposed by Greengard and Strain (1991) is an ε-exact approximation algorithm that reduces the computational complexity of evaluating the sum of N Gaussians at M points in d dimensions from O(MN) to O(M+N). However, the constant factor in O(M+N) grows exponentially with increasing dimensionality d, which makes the algorithm impractical for dimensions greater than three. In this paper we present a new algorithm in which the constant factor is reduced to asymptotically polynomial order. The reduction is based on a new multivariate Taylor series expansion scheme (which can act both as a local and as a far-field expansion) combined with efficient space subdivision using the k-center algorithm. The proposed method differs from the original fast Gauss transform in its factorization, its space subdivision, and its use of point-wise error bounds. Algorithm details, error bounds, a procedure to choose the parameters, and numerical experiments are presented. As an example we show how the proposed method can be used for very fast ε-exact multivariate kernel density estimation.
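    To make the O(MN) baseline concrete, here is a minimal sketch of the direct Gaussian sum the fast Gauss transform approximates; the function name, the bandwidth parameter h, and the weight vector q are illustrative conventions, not taken from the paper.

    ```python
    import numpy as np

    def gauss_sum_direct(X, Y, q, h):
        """Direct O(M*N) evaluation of G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2).

        X: (N, d) source points, Y: (M, d) evaluation points, q: (N,) weights,
        h: Gaussian bandwidth. Returns an (M,) array of kernel sums.
        """
        # (M, N) matrix of pairwise squared distances
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / h**2) @ q
    ```

    Both time and memory here scale with M·N, which is exactly the product cost the abstract describes; the fast Gauss transform trades this for an ε-controlled series expansion whose cost is O(M+N).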

    Fast OPED algorithm for reconstruction of images from Radon data

    A fast implementation of the OPED algorithm, a recently introduced reconstruction algorithm for Radon data, is proposed and tested. The new implementation uses the FFT for a discrete sine transform and an interpolation step. The convergence of the fast implementation is proved under the condition that the function is mildly smooth. The numerical tests show that the accuracy of the OPED algorithm changes little when the fast implementation is used. Comment: 13 pages
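    The FFT-based sine-transform step mentioned above can be illustrated with the standard trick of computing a DST-I through an FFT of the odd extension of the input; this is a generic sketch of that idea, not the OPED implementation itself.

    ```python
    import numpy as np

    def dst1_via_fft(x):
        """DST-I via a length-2(n+1) FFT of the odd extension of x.

        Computes X_k = sum_{j=1}^{n} x_j * sin(pi*j*k/(n+1)) for k = 1..n.
        The odd extension [0, x, 0, -reversed(x)] makes the FFT purely
        imaginary at the frequencies of interest.
        """
        x = np.asarray(x, dtype=float)
        n = len(x)
        y = np.zeros(2 * (n + 1))
        y[1:n + 1] = x
        y[n + 2:] = -x[::-1]
        Y = np.fft.fft(y)
        # Y_k = -2i * X_k for the odd extension, so recover X_k from Im(Y_k)
        return -Y[1:n + 1].imag / 2
    ```

    This reduces the transform from O(n²) for the naive sine sum to the O(n log n) cost of one FFT, which is the kind of saving the fast OPED implementation exploits.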

    New Acceleration of Nearly Optimal Univariate Polynomial Root-Finders

    Univariate polynomial root-finding has been studied for four millennia and is still the subject of intensive research. Hundreds of efficient algorithms for this task have been proposed. Two of them are nearly optimal. The first, proposed in 1995, relies on recursive factorization of a polynomial, is quite involved, and has never been implemented. The second, proposed in 2016, relies on subdivision iterations, was implemented in 2018, and promises to be practically competitive, although users' current choice for univariate polynomial root-finding is the package MPSolve, proposed in 2000, revised in 2014, and based on Ehrlich's functional iterations. By proposing and incorporating several novel techniques, we significantly accelerate both subdivision and Ehrlich's iterations. Moreover, our acceleration of the known subdivision root-finders is dramatic in the case of sparse input polynomials. Our techniques can be of independent interest for the design and analysis of polynomial root-finders. Comment: 89 pages, 5 figures, 2 tables
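    For reference, Ehrlich's functional iteration underlying MPSolve (often called the Ehrlich–Aberth iteration) can be sketched as follows; the initial-guess circle, iteration cap, and tolerance are illustrative choices, not the paper's or MPSolve's settings.

    ```python
    import numpy as np

    def aberth(coeffs, iters=100, tol=1e-12):
        """Ehrlich-Aberth simultaneous iteration for all roots of a polynomial.

        coeffs follows the numpy.polyval convention:
        p(z) = coeffs[0]*z^n + ... + coeffs[n].
        """
        n = len(coeffs) - 1
        dcoeffs = np.polyder(coeffs)
        # initial guesses on a slightly offset circle (breaks symmetry)
        z = 0.5 * np.exp(2j * np.pi * np.arange(n) / n) + (0.1 + 0.1j)
        for _ in range(iters):
            w = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)  # Newton step
            # pairwise repulsion term s_i = sum_{j != i} 1/(z_i - z_j)
            diff = z[:, None] - z[None, :]
            np.fill_diagonal(diff, 1.0)
            inv = 1.0 / diff
            np.fill_diagonal(inv, 0.0)
            s = inv.sum(axis=1)
            dz = w / (1.0 - w * s)  # Ehrlich-Aberth correction
            z -= dz
            if np.max(np.abs(dz)) < tol:
                break
        return z
    ```

    Unlike plain Newton iteration from a single start point, each approximation is repelled by the others, so the method converges to all n roots simultaneously and is, in practice, remarkably robust, which is part of why MPSolve builds on it.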