
    Efficient Evaluation of Matrix Polynomials beyond the Paterson-Stockmeyer Method

    Recently, two general methods for evaluating matrix polynomials that require one matrix product fewer than the Paterson-Stockmeyer method were proposed, where the cost of evaluating a matrix polynomial is measured asymptotically by the total number of matrix products. The stability of those methods was analysed, and the methods were applied to Taylor-based implementations for computing the matrix exponential, cosine and hyperbolic tangent. Moreover, a particular example was given that evaluates the degree-15 Taylor approximation of the matrix exponential with four matrix products, whereas the maximum polynomial degree attainable with four matrix products using the Paterson-Stockmeyer method is 9. Based on this example, a new family of methods for evaluating matrix polynomials more efficiently than the Paterson-Stockmeyer method was proposed, with the potential for much higher efficiency, i.e., requiring fewer matrix products to evaluate a matrix polynomial of a given degree, or reaching a higher degree for the same cost. The difficulty of this family of methods, however, lies in computing the coefficients involved in the evaluation of general matrix polynomials and approximations. In this paper, we provide a general method for evaluating matrix polynomials that requires two matrix products fewer than the Paterson-Stockmeyer method for degrees greater than 30. Moreover, we provide general methods for evaluating matrix polynomial approximations of degrees 15 and 21 with four and five matrix products, respectively, whereas the maximum degrees available for the same cost with the Paterson-Stockmeyer method are 9 and 12. Finally, practical examples are given in which Taylor approximations of the matrix cosine and the matrix logarithm are evaluated accurately and efficiently with these new methods.
    This research was partially funded by the European Regional Development Fund (ERDF) and the Spanish Ministerio de Economía y Competitividad, grant TIN2017-89314-P, and by the Programa de Apoyo a la Investigación y Desarrollo 2018 of the Universitat Politècnica de València, grant PAID-06-18-SP20180016.
    Sastre, J.; Ibáñez González, J. J. (2021). Efficient Evaluation of Matrix Polynomials beyond the Paterson-Stockmeyer Method. Mathematics, 9(14), 1-23. https://doi.org/10.3390/math9141600
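    For context, the baseline Paterson-Stockmeyer scheme groups the coefficients into blocks of roughly sqrt(m) terms, so that a degree-m polynomial costs about (s - 1) + ceil(m/s) - 1 matrix products. The NumPy sketch below illustrates that baseline only; it is not the new family of methods proposed in the paper, and the function name and the choice s ~ sqrt(m) are ours.

    ```python
    import numpy as np

    def paterson_stockmeyer(b, A):
        """Evaluate p(A) = b[0]*I + b[1]*A + ... + b[m]*A^m (m >= 1).

        Matrix products used: (s - 1) to form A^2, ..., A^s, plus
        ceil(m/s) - 1 Horner steps in A^s, i.e. about 2*sqrt(m) in total.
        """
        m = len(b) - 1
        s = max(1, int(round(np.sqrt(m))))   # block size ~ sqrt(m)
        q = -(-m // s) - 1                   # number of Horner steps

        # Powers I, A, A^2, ..., A^s: s - 1 matrix products.
        pw = [np.eye(A.shape[0]), A]
        for _ in range(s - 1):
            pw.append(pw[-1] @ A)

        def block(lo, hi):                   # b[lo]*I + ... + b[hi]*A^(hi-lo)
            return sum(b[k] * pw[k - lo] for k in range(lo, hi + 1))

        P = block(q * s, m)                  # leading block, degree m - q*s <= s
        for i in range(q - 1, -1, -1):       # one matrix product per iteration
            P = P @ pw[s] + block(i * s, i * s + s - 1)
        return P
    ```

    With this counting, a degree-15 polynomial costs six products in the baseline (s = 4: three powers plus three Horner steps), which is the gap closed by the paper's four-product evaluation of degree-15 approximations.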

    Verified compilation and optimization of floating-point kernels

    When verifying safety-critical code on the level of source code, we trust the compiler to produce machine code that preserves the behavior of the source code. Trusting a verified compiler is easy: a rigorous machine-checked proof shows that the compiler correctly translates source code into machine code. Modern verified compilers (e.g. CompCert and CakeML) have rich input languages, but only rudimentary support for floating-point arithmetic. In fact, state-of-the-art verified compilers only implement and verify an inflexible one-to-one translation from floating-point source code to machine code. This translation completely ignores that floating-point arithmetic is actually a discrete representation of the continuous real numbers. This thesis presents two extensions improving floating-point arithmetic in CakeML. First, the thesis demonstrates verified compilation of elementary functions to floating-point code with Dandelion, an automatic verifier for polynomial approximations of elementary functions, and libmGen, a proof-producing compiler relating floating-point machine code to the implemented real-numbered elementary function. Second, the thesis demonstrates verified optimization of floating-point code with Icing, a floating-point language extending standard floating-point arithmetic with optimizations similar to those used by unverified compilers such as GCC and LLVM, and RealCake, an extension of CakeML with Icing that forms the first fully verified optimizing compiler for floating-point arithmetic.
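    As a concrete illustration of the kind of rewrite at stake, fast-math style optimizations in unverified compilers may re-associate floating-point expressions, which is exact over the reals but changes results in IEEE-754 arithmetic. The snippet below is our own example, not code taken from Icing or RealCake:

    ```python
    # Reassociation (a + b) + c -> a + (b + c) is exact over the reals, but in
    # IEEE-754 double precision the large terms cancel in one ordering and
    # absorb the small term in the other.
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0
    ```

    Behaviour-changing rewrites of this kind are precisely what a one-to-one translation avoids and what a verified optimizing compiler has to justify explicitly.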

    Generation and evaluation of orthogonal polynomials in discrete Sobolev spaces II: numerical stability

    In this paper, we concern ourselves with the determination and evaluation of polynomials that are orthogonal with respect to a general discrete Sobolev inner product, that is, an ordinary inner product on the real line plus a finite sum of atomic inner products involving a finite number of derivatives. In a previous paper we provided a complete set of formulas for computing the coefficients of the recurrence relation satisfied by these polynomials. Here, we study the numerical stability of the resulting algorithms for generating and evaluating a finite series of Sobolev orthogonal polynomials. In addition, we propose several techniques for reducing and controlling the rounding errors via theoretical running error bounds and a carefully chosen recurrence.
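    To make the running-error-bound idea concrete, the sketch below is our own simplified illustration, not the recurrence or the bounds from the paper: it evaluates a series via the classical three-term recurrence of ordinary orthogonal polynomials (rather than the Sobolev recurrence treated in the paper) while carrying a crude first-order estimate of the accumulated rounding error alongside the computation.

    ```python
    import numpy as np

    def eval_series_with_running_bound(c, alpha, beta, gamma, x):
        """Evaluate S(x) = sum_k c[k] * p_k(x) for polynomials satisfying
            p_0 = 1,  p_1 = alpha[0]*x + beta[0],
            p_{k+1} = (alpha[k]*x + beta[k]) * p_k - gamma[k] * p_{k-1},
        carrying a first-order running error estimate: unit roundoff times
        the magnitudes of the intermediate quantities produced along the way.
        Requires len(c) >= 2.
        """
        u = np.finfo(float).eps / 2              # unit roundoff
        p_prev, p_curr = 1.0, alpha[0] * x + beta[0]
        S = c[0] * p_prev + c[1] * p_curr
        run = abs(p_curr) + abs(S)               # magnitudes seen so far
        for k in range(1, len(c) - 1):
            p_next = (alpha[k] * x + beta[k]) * p_curr - gamma[k] * p_prev
            S = S + c[k + 1] * p_next
            run += abs(p_next) + abs(S)          # grow the running estimate
            p_prev, p_curr = p_curr, p_next
        return S, u * run                        # value and error estimate
    ```

    For instance, with Chebyshev polynomials of the first kind one would take alpha[0] = 1, alpha[k] = 2 for k >= 1, beta[k] = 0 and gamma[k] = 1; the returned estimate grows when the intermediate quantities are large relative to the final sum, i.e. when cancellation degrades the accuracy of the evaluation.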