Compensated Horner algorithm in K times the working precision
We introduce an algorithm to evaluate a polynomial with floating-point coefficients as accurately as the Horner scheme performed in K times the working precision, for K an arbitrary integer. The principle is to iterate the error-free transformation of the compensated Horner algorithm and to accurately sum the final decomposition. We prove this accuracy property with an a priori error analysis. We illustrate the algorithm's practical efficiency with numerical experiments in significant computing environments using IEEE-754 arithmetic. Compared with existing alternatives, we conclude that this K-times compensated algorithm is competitive for K up to 4, i.e., up to 212 mantissa bits.
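To make the underlying principle concrete, here is a minimal C sketch of the base case K = 2: one error-free transformation of the Horner scheme whose local error terms are then summed. The names two_sum, two_prod, and comp_horner are ours, and this is a reconstruction of the well-known compensated Horner scheme, not the paper's exact K-times algorithm.

    #include <math.h>  /* for fma() */

    /* Error-free transformation of a sum (Knuth): s + e == a + b exactly. */
    static void two_sum(double a, double b, double *s, double *e) {
        *s = a + b;
        double z = *s - a;
        *e = (a - (*s - z)) + (b - z);
    }

    /* Error-free transformation of a product: p + e == a * b exactly,
       using a fused multiply-add to recover the rounding error. */
    static void two_prod(double a, double b, double *p, double *e) {
        *p = a * b;
        *e = fma(a, b, -*p);
    }

    /* Compensated Horner scheme: evaluates p(x) = a[n]*x^n + ... + a[0]
       about as accurately as the classic Horner scheme run in twice the
       working precision (the K = 2 case). */
    double comp_horner(const double *a, int n, double x) {
        double s = a[n], c = 0.0;  /* Horner value and running correction */
        for (int i = n - 1; i >= 0; --i) {
            double p, pi, sigma;
            two_prod(s, x, &p, &pi);       /* p + pi    == s * x    exactly */
            two_sum(p, a[i], &s, &sigma);  /* s + sigma == p + a[i] exactly */
            c = c * x + (pi + sigma);      /* plain Horner on the errors */
        }
        return s + c;  /* compensated result */
    }

Iterating the error-free transformation on the error terms themselves, instead of evaluating them with a plain Horner recursion as above, is what yields the K-times variants the abstract describes.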
The Howe-Moore property for real and p-adic groups
We consider in this paper a relative version of the Howe-Moore property, about vanishing at infinity of coefficients of unitary representations. We characterize this property in terms of ergodic measure-preserving actions. We also characterize, for linear Lie groups or p-adic Lie groups, the pairs with the relative Howe-Moore property with respect to a closed, normal subgroup. This involves, in one direction, structural results on locally compact groups all of whose proper closed characteristic subgroups are compact, and, in the other direction, some results about the vanishing at infinity of oscillatory integrals.
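For context, the standard (absolute) version of the property can be stated as follows; the relative version studied in the paper is formulated for a pair consisting of a group and a closed subgroup. A locally compact group G has the Howe-Moore property when, for every unitary representation \pi of G without nonzero invariant vectors,

    \forall \xi, \eta \in \mathcal{H}_\pi : \qquad \langle \pi(g)\xi,\, \eta \rangle \to 0 \quad \text{as } g \to \infty \text{ in } G.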
Compensated Horner Scheme
Using error-free transformations, we improve the classic Horner scheme (HS) for evaluating (univariate) polynomials in floating-point arithmetic. We prove that this compensated Horner scheme (CHS) is as accurate as HS performed with twice the working precision. Theoretical analysis and experiments exhibit a reasonable running-time overhead that also compares favorably with double-double implementations. We introduce a dynamic and validated error bound for the value computed by CHS. The talk presents these results together with a survey of error-free transformations and the related hypotheses.
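As a hedged illustration of the accuracy gain, the comp_horner sketch given earlier can be exercised on an expanded binomial power, which is severely ill conditioned near its root; the test polynomial and evaluation point below are our own choices, not those of the talk.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Coefficients of (1 - x)^8, expanded; evaluating near the root
           x = 1 is severely ill conditioned. */
        double a[9] = { 1, -8, 28, -56, 70, -56, 28, -8, 1 };
        double x = 1.0 + 1.0 / 64;  /* exactly representable in binary64 */
        printf("compensated: %.17e\n", comp_horner(a, 8, x));
        printf("reference  : %.17e\n", pow(1.0 - x, 8));  /* exactly 2^-48 */
        return 0;
    }

At this point the classic scheme loses essentially all significant digits to rounding noise, while the compensated value should agree with 2^{-48} to roughly a dozen digits, consistent with the "twice the working precision" claim.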
Algorithms for Accurate, Validated and Fast Polynomial Evaluation
We survey a class of algorithms to evaluate polynomials with floating-point coefficients, for computations performed in IEEE-754 floating-point arithmetic. The principle is to apply, once or recursively, an error-free transformation of the polynomial evaluation with the Horner algorithm and to accurately sum the final decomposition. These compensated algorithms are as accurate as the Horner algorithm performed in K times the working precision, for K an arbitrary integer. We prove this accuracy property with an a priori error analysis. We also provide validated dynamic bounds and apply these results to compute a faithfully rounded evaluation. These compensated algorithms are fast. We illustrate their practical efficiency with numerical experiments in significant computing environments. Compared with existing alternatives, these K-times compensated algorithms are competitive for K up to 4, i.e., up to 212 mantissa bits.
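For the K = 2 case, the accuracy property referred to here is usually written as the following a priori bound (standard in the compensated-Horner literature, stated as a reminder rather than quoted from the paper), where u is the unit roundoff, n the degree, and \gamma_k = ku/(1 - ku):

    \frac{|\mathrm{CompHorner}(p, x) - p(x)|}{|p(x)|} \le u + \gamma_{2n}^2 \, \mathrm{cond}(p, x),
    \qquad \mathrm{cond}(p, x) = \frac{\sum_{i=0}^{n} |a_i|\,|x|^i}{|p(x)|}.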
Compensated algorithms in floating-point arithmetic: accuracy, validation, performance
Rounding error may totally corrupt the result of a floating-point computation. How can we improve and validate the accuracy of a floating-point computation without large computing-time overheads? We consider two case studies: polynomial evaluation and linear triangular system solving. In both cases we use compensation of the rounding errors to improve the accuracy of the computed result. The contributions of this work are divided into three levels.
1) Improving the accuracy. We propose a compensated Horner scheme that computes a polynomial evaluation with the same accuracy as the classic Horner algorithm performed in twice the working precision. Generalizing this algorithm, we present another compensated version of the Horner scheme simulating K times the working precision (K > 1). We also show how to compensate the rounding errors generated by the substitution algorithm for triangular system solving.
2) Validating the computed result. We show how to validate the quality of the compensated polynomial evaluation. We propose a method to compute an a posteriori error bound together with the compensated result. This error bound is computed using only basic floating-point operations, which ensures the portability and efficiency of the method.
3) Performances of compensated algorithms. Our computing-time measurements show the interest of compensated algorithms compared with other software solutions that provide the same output accuracy. We also justify the good practical performance of compensated algorithms through a detailed study of the instruction-level parallelism they contain.
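The triangular-solver case study lends itself to the same compensation idea. The following C sketch is our illustrative reconstruction of a compensated forward substitution built on the two_sum/two_prod transformations shown earlier; it is not the thesis's exact algorithm, which in particular also accounts for the division error.

    /* Solve the lower-triangular system T x = b by forward substitution,
       compensating the rounding errors of the inner dot products.
       Illustrative sketch only; assumes two_sum and two_prod as above. */
    void comp_substitution(int n, const double T[n][n],
                           const double *b, double *x) {
        for (int i = 0; i < n; ++i) {
            double s = b[i], c = 0.0;  /* running residual and correction */
            for (int j = 0; j < i; ++j) {
                double p, ep, es;
                two_prod(T[i][j], x[j], &p, &ep);  /* p + ep == T[i][j]*x[j] */
                two_sum(s, -p, &s, &es);           /* exact subtraction step */
                c += es - ep;                      /* accumulate local errors */
            }
            x[i] = (s + c) / T[i][i];              /* compensated pivot step */
        }
    }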
A Library for Symbolic Floating-Point Arithmetic
To analyze a priori the accuracy of an algorithm in floating-point arithmetic, one usually derives a uniform error bound on the output, valid for most inputs and parametrized by the precision p. To show further that this bound is sharp, a common way is to build an input example for which the error committed by the algorithm comes close to that bound, or even attains it. Such inputs may be given as floating-point numbers in one of the IEEE standard formats (say, for p = 53) or, more generally, as expressions parametrized by p, which can be viewed as symbolic floating-point numbers. With such inputs, a sharpness result can thus be established for virtually all reasonable formats instead of just one of them. This, however, requires the ability to run the algorithm on those inputs and, in particular, to compute the correctly rounded sum, product, or ratio of two symbolic floating-point numbers. The goal of this paper is to show how these basic arithmetic operations can be performed automatically. We introduce a way to model symbolic floating-point data, and present algorithms for round-to-nearest addition, multiplication, fused multiply-add, and division. An implementation as a Maple library is also described, and experiments using examples from the literature are provided to illustrate its interest in practice.
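As an example of what such a symbolic input can look like (our illustration, not taken from the paper), a precision-p floating-point number may be written with an integral significand and an exponent that are both expressions in p, such as

    x = (2^{p-1} + 1) \cdot 2^{1-p},

which for p = 53 instantiates to the binary64 number 1 + 2^{-52}, the successor of 1.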
Sharp error bounds for complex floating-point inversion
We study the accuracy of the classic algorithm for inverting a complex number given by its real and imaginary parts as floating-point numbers. Our analyses are done in binary floating-point arithmetic, with an unbounded exponent range and precision p; we also assume that the basic arithmetic operations (+, −, ×, ÷) are rounded to nearest, so that the roundoff unit is u = 2^{−p}. We bound the largest relative error in the computed inverse either in the componentwise or in the normwise sense. We prove the componentwise relative error bound 3u for the complex inversion algorithm (assuming p ≄ 4), and we show that this bound is asymptotically optimal (as p → ∞) when p is even, and sharp when using one of the basic IEEE 754 binary formats with an odd precision (p ∈ {53, 113}). This componentwise bound obviously leads to the same bound for the normwise relative error. However, we prove that the smaller bound 2.707131u holds (assuming p ≄ 24) for the normwise relative error, and we illustrate the sharpness of this bound for the basic IEEE 754 binary formats (p ∈ {24, 53, 113}) using numerical examples.
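The classic algorithm in question evaluates, for z = a + ib, the textbook formula

    \frac{1}{a + ib} = \frac{a}{a^2 + b^2} - i\,\frac{b}{a^2 + b^2},

with each basic operation rounded to nearest; the bounds above quantify how far the few roundings involved can take the computed pair from the exact inverse.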
Computing Correctly Rounded Integer Powers in Floating-Point Arithmetic
We introduce several algorithms for accurately evaluating integer powers x^n, with n a positive integer, in floating-point arithmetic, assuming a fused multiply-add (fma) instruction is available. We aim at always obtaining correctly rounded results in round-to-nearest mode, that is, our algorithms return the floating-point number that is nearest to the exact value.
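To give a flavor of the fma-based approach, here is a minimal sketch under our own assumptions, not the authors' algorithms: the product is carried in a double-double accumulator whose low part is fed by the exact fma remainders.

    #include <math.h>  /* for fma() */

    typedef struct { double hi, lo; } dd;  /* unevaluated sum hi + lo */

    /* Multiply a double-double by a double, renormalizing the result. */
    static dd dd_mul(dd a, double b) {
        double p  = a.hi * b;
        double e  = fma(a.hi, b, -p);  /* exact rounding error of a.hi * b */
        double lo = a.lo * b + e;      /* low-order contributions */
        double hi = p + lo;
        return (dd){ hi, lo - (hi - p) };
    }

    /* x^n by repeated double-double multiplication (n >= 1). */
    double dd_power(double x, unsigned n) {
        dd r = { x, 0.0 };
        for (unsigned i = 1; i < n; ++i)
            r = dd_mul(r, x);
        return r.hi + r.lo;
    }

Correct rounding, as targeted in the paper, additionally requires checking that the computed double-double value is far enough from a rounding boundary, which this sketch does not do.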
- 
