67 research outputs found

    Exact and mid-point rounding cases of power(x,y)

    Get PDF
    Research Report N° RR2006-46. Correct rounding of the power function is currently based on an iterative process that computes more and more accurate intermediate approximations to x^y until rounding correctly becomes possible. This process terminates iff the value of the function is not exactly a floating-point number or a midpoint of two consecutive floating-point numbers of the format. For other elementary functions such as exp(x), arguments for which f(x) is such an exact case are few; they can be filtered out by simple tests. The power function has at least 2^35 such arguments, so simple tests on x and y do not suffice. This article presents an algorithm for performing such an exact-case test, combined with an approach that allows fast rejection of cases that are neither exact nor midpoint. Its correctness is completely proven. It makes no use of costly operations such as divisions, remainders or square roots, as previous approaches do. The algorithm yields a speed-up of 1.8 on average in comparison to another implementation for the same final target format. It also reduces the average share of time spent in the exactness test from 38% at each call to 31% under unlikely conditions. The algorithm is given for double precision but adapts and scales to higher precisions such as double-extended and quad precision. Its complexity is linear in the precision.
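    What it means for x^y to be an "exact case" can be illustrated with exact rational arithmetic (a brute-force toy check for integer exponents only, not the paper's algorithm, which avoids any multiprecision work):

```python
from fractions import Fraction

def is_exact_double(f: Fraction) -> bool:
    # A rational is exactly a binary64 number iff rounding it to
    # binary64 and converting back yields the same rational.
    try:
        return Fraction(float(f)) == f
    except OverflowError:
        return False

def pow_is_exact(x: float, y: int) -> bool:
    # Toy exact-case test for integer y: is x**y itself a double?
    return is_exact_double(Fraction(x) ** y)

print(pow_is_exact(3.0, 5))    # True: 243 is representable
print(pow_is_exact(10.0, 23))  # False: 5**23 needs 54 significand bits
```

    The real difficulty, which the paper addresses, is doing such a test quickly for arbitrary double-precision x and y without exact arithmetic.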

    Certifying floating-point implementations using Gappa

    Full text link
    High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
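    The interval-based error propagation that Gappa automates can be pictured with a minimal interval type (a toy sketch, unrelated to Gappa's actual input syntax, and omitting the outward rounding a validated tool must apply):

```python
class Interval:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

x = Interval(0.0, 1.0)
y = x * (Interval(1.0, 1.0) - x)   # encloses x*(1-x) for x in [0, 1]
print(y.lo, y.hi)                  # [0, 1]: valid but coarse; the true
# range is [0, 0.25]. Naive interval arithmetic ignores the correlation
# between the two occurrences of x, which is why Gappa needs hints and
# rewriting rules to tighten such bounds.
```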

    An efficient rounding boundary test for pow(x,y) in double precision

    Get PDF
    18 pages. The correct rounding of the function pow: (x,y) -> x^y is currently based on Ziv's iterative approximation process. In order to ensure its termination, cases when x^y falls on a rounding boundary must be filtered out. Such rounding boundaries are floating-point numbers and midpoints between two consecutive floating-point numbers. Detecting rounding boundaries for pow is a difficult problem. Previous approaches use repeated square root extraction followed by repeated square-and-multiply. This article presents a new rounding boundary test for pow in double precision which reduces to a few comparisons with pre-computed constants. These constants are deduced from worst cases for the Table Maker's Dilemma, searched over a small subset of the input domain. This is a novel use of such worst-case bounds. The resulting algorithm has been designed for a fast-on-average correctly rounded implementation of pow, considering the scarcity of rounding boundary cases. It does not stall average computations for rounding boundary detection. The article includes its correctness proof and experimental results.
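    What a rounding boundary is can be made concrete with exact rational arithmetic (a brute-force illustration only; the paper's test instead reduces to comparisons against pre-computed constants):

```python
import math
from fractions import Fraction

def is_midpoint(m: Fraction) -> bool:
    # True iff m lies exactly halfway between two consecutive binary64 numbers.
    r = float(m)                 # round to nearest
    if Fraction(r) == m:
        return False             # m is itself a float, not a midpoint
    # neighbor of r on the side where m lies
    other = math.nextafter(r, math.inf if m > r else -math.inf)
    return Fraction(r) + Fraction(other) == 2 * m

# midpoint between 1.0 and its successor 1 + 2**-52:
print(is_midpoint(Fraction(2**53 + 1, 2**53)))   # True
print(is_midpoint(Fraction(1, 3)))               # False
```

    Ziv's iteration cannot terminate on such midpoints (or on exactly representable values), which is why they must be detected up front.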

    A certified infinite norm for the implementation of elementary functions

    Get PDF
    The version available on HAL is slightly different from the published version because it contains full proofs. The high-quality floating-point implementation of useful functions f : R -> R, such as exp, sin or erf, requires bounding the error eps = (p-f)/f of an approximation p with regard to the function f. This involves bounding the infinite norm ||eps|| of the error function. Its value must not be underestimated when implementations must be safe. Previous approaches for computing infinite norms are shown to be either unsafe, not sufficiently tight or too tedious in manual work. We present a safe and self-validating algorithm for automatically upper- and lower-bounding infinite norms of error functions. The algorithm is based on enhanced interval arithmetic. It can overcome high cancellation and high condition numbers around points where the error function is defined only by continuous extension. The given algorithm is implemented in a software tool. It can generate a proof of correctness for each instance on which it is run.
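    The interval-based bounding idea can be sketched on a toy example, with g(x) = x^3 - x standing in for an error function (the outward rounding needed for a truly certified bound, and the enhancements for removable singularities, are omitted):

```python
def g_interval(lo: float, hi: float):
    # Interval extension of g(x) = x**3 - x on [lo, hi] with 0 <= lo <= hi:
    # x**3 is increasing there, so its range is [lo**3, hi**3]; subtracting
    # the interval [lo, hi] gives a valid (if pessimistic) enclosure.
    return lo**3 - hi, hi**3 - lo

def sup_upper(lo: float, hi: float, depth: int) -> float:
    # Upper-bound max |g| on [lo, hi] by recursive bisection:
    # subdividing shrinks the overestimation of the interval extension.
    glo, ghi = g_interval(lo, hi)
    bound = max(abs(glo), abs(ghi))
    if depth == 0:
        return bound
    mid = 0.5 * (lo + hi)
    return max(sup_upper(lo, mid, depth - 1),
               sup_upper(mid, hi, depth - 1))

ub = sup_upper(0.0, 1.0, 12)
# true infinite norm on [0,1] is 2/(3*sqrt(3)) ~ 0.38490; ub is slightly above
print(ub)
```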

    Certified and fast computation of supremum norms of approximation errors

    Get PDF
    The version available on HAL corresponds to the version initially submitted to the conference and slightly differs from the published version, since it does not account for remarks made by the referees. In many numerical programs there is a need for a high-quality floating-point approximation of useful functions f, such as exp, sin or erf. In the actual implementation, the function is replaced by a polynomial p, leading to an approximation error (absolute or relative) epsilon = p-f or epsilon = p/f-1. The tight yet certain bounding of this error is an important step towards safe implementations. The main difficulty of this problem is that the approximation error is very small and the difference p-f is subject to severe cancellation. In consequence, previous approaches for computing the supremum norm in this degenerate case have proven to be either unsafe, not sufficiently tight or too tedious in manual work. We present a safe and fast algorithm that computes tight lower and upper bounds for the supremum norms of approximation errors. The algorithm is based on a combination of several techniques, including enhanced interval arithmetic, automatic differentiation and isolation of the roots of a polynomial. We have implemented our algorithm, and timings on several examples are given.
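    The cancellation issue is easy to reproduce: once the approximation error drops below one ulp of f, evaluating p - f directly in binary64 returns mostly rounding noise (a toy Taylor polynomial stands in for a real minimax approximant):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50

# Degree-8 Taylor polynomial of exp, coefficients rounded to binary64.
coeffs = [1.0 / math.factorial(k) for k in range(9)]

def horner(cs, x):
    r = cs[0] * 0        # zero of the right type (float or Decimal)
    for c in reversed(cs):
        r = r * x + c
    return r

x = 0.05
naive = horner(coeffs, x) - math.exp(x)      # all in binary64

# The same polynomial (same rounded coefficients), evaluated accurately
# in 50-digit decimal arithmetic against a 50-digit exp:
xd = Decimal(x)
true_err = horner([Decimal(c) for c in coeffs], xd) - xd.exp()

print(naive)      # 0.0 or a multiple of 2**-52: pure rounding noise
print(true_err)   # ~ -5e-18, the actual approximation error
```

    The naive double-precision difference can only be 0 or a whole number of ulps of exp(x), two orders of magnitude larger than the true error it is supposed to measure.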

    Optimizing polynomials for floating-point implementation

    Get PDF
    The floating-point implementation of a function on an interval often reduces to polynomial approximation, the polynomial being typically provided by the Remez algorithm. However, the floating-point evaluation of a Remez polynomial sometimes leads to catastrophic cancellations. This happens when some of the polynomial coefficients are very small in magnitude with respect to others. In this case, it is better to force these coefficients to zero, which also reduces the operation count. This technique, classically used for odd or even functions, may be generalized to a much larger class of functions. An algorithm is presented that forces the smaller coefficients of the initial polynomial to zero, thanks to a modified Remez algorithm targeting an incomplete monomial basis. One advantage of this technique is that it is purely numerical, the function being used as a numerical black box. This algorithm is implemented within a larger polynomial implementation tool that is demonstrated on a range of examples, resulting in polynomials with fewer coefficients than those obtained the usual way.
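    For an odd function such as sin, forcing the even coefficients to zero roughly halves the multiplication count, since the polynomial can be evaluated as x·q(x²). The coefficients below are truncated Taylor terms, purely for illustration; a real tool would produce Remez-optimized ones over the incomplete basis:

```python
import math

# Odd approximation of sin near 0: only odd-degree coefficients kept.
c = [1.0, -1.0 / 6, 1.0 / 120, -1.0 / 5040]   # x, x^3, x^5, x^7 terms

def sin_approx(x: float) -> float:
    z = x * x
    q = c[3]
    for k in (2, 1, 0):      # Horner in z = x**2: 3 mults, then one by x
        q = q * z + c[k]
    return x * q

err = abs(sin_approx(0.3) - math.sin(0.3))
print(err)    # ~5e-11: dominated by the first omitted term x**9/9!
```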

    Emulating round-to-nearest-ties-to-zero "augmented" floating-point operations using round-to-nearest-ties-to-even arithmetic

    Get PDF
    The 2019 version of the IEEE 754 Standard for Floating-Point Arithmetic recommends that new “augmented” operations should be provided for the binary formats. These operations use a new “rounding direction”: round to nearest ties-to-zero. We show how they can be implemented using the currently available round-to-nearest ties-to-even operations, together with a partial formal proof of correctness.
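    The flavor of such an emulation can be sketched with Knuth's TwoSum, which returns the round-to-nearest-even sum together with its exact error term; only the tie cases then need fixing up. The tie detection below uses exact rational arithmetic for clarity, whereas the paper does everything with floating-point operations only; special values and binade-boundary corner cases are ignored in this sketch:

```python
from fractions import Fraction

def two_sum(a: float, b: float):
    # Knuth's TwoSum: s = RN-even(a + b), and s + t == a + b exactly.
    s = a + b
    ap = s - b
    bp = s - ap
    return s, (a - ap) + (b - bp)

def augmented_add_tz(a: float, b: float):
    # Sketch: round-to-nearest ties-to-zero addition with exact error term.
    s, t = two_sum(a, b)
    if t != 0.0:
        cand = s + 2.0 * t   # the other rounding candidate if a+b is a tie
        exact = Fraction(a) + Fraction(b)
        if Fraction(s) + Fraction(cand) == 2 * exact and abs(cand) < abs(s):
            return cand, -t  # tie was broken away from zero: pull it back
    return s, t

# 1 + 3*2**-53 is a tie; ties-to-even rounds up, ties-to-zero must round down:
print(augmented_add_tz(1.0, 3 * 2**-53))   # (1 + 2**-52, 2**-53)
```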

    Basic building blocks for a triple-double intermediate format

    Get PDF
    The implementation of correctly rounded elementary functions needs high intermediate accuracy before final rounding. This accuracy can be provided by (pseudo-)expansions of size three, i.e. a triple-double format. The report presents all basic operators for such a format. Triple-double numbers can be redundant, so a renormalization procedure is presented and proven. Elementary function implementations need addition and multiplication sequences whose operands may be in double, double-double or triple-double format, with results accordingly in one of these formats. Several such procedures are presented, with proofs of their accuracy bounds. Intermediate triple-double results must finally be correctly rounded to double precision. Two efficient final rounding sequences are presented, one for round-to-nearest mode and one for the directed rounding modes. Their complete proofs constitute half of the report.
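    The style of such basic operators can be illustrated with Dekker's FastTwoSum and a naive renormalization of three doubles. This is a hedged sketch of the idea only: it assumes the magnitudes are already ordered and sufficiently decreasing, whereas the report's procedures and their precondition analysis are far more careful:

```python
def fast_two_sum(a: float, b: float):
    # Dekker's FastTwoSum: assumes |a| >= |b| (or a == 0);
    # then s + e == a + b exactly, with |e| <= ulp(s)/2.
    s = a + b
    e = b - (s - a)
    return s, e

def renormalize(a: float, b: float, c: float):
    # Naive renormalization of a triple with |a| >= |b| >= |c| and
    # limited overlap into components (hi, mid, lo); each FastTwoSum
    # is an error-free transformation, so hi + mid + lo == a + b + c.
    s, t = fast_two_sum(b, c)
    hi, u = fast_two_sum(a, s)
    mid, lo = fast_two_sum(u, t)
    return hi, mid, lo

print(renormalize(1.0, 2**-60, 2**-120))   # already non-overlapping: unchanged
```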

    Comparison between binary and decimal floating-point numbers

    Get PDF
    We introduce an algorithm to compare a binary floating-point (FP) number and a decimal FP number, assuming the "binary encoding" of the decimal formats is used, and with a special emphasis on the basic interchange formats specified by the IEEE 754-2008 standard for FP arithmetic. It is a two-step algorithm: a first pass, based on the exponents only, quickly eliminates most cases; then, when the first pass does not suffice, a more accurate second pass is performed. We provide an implementation of several variants of our algorithm and compare them.
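    The two-pass structure can be mimicked in a few lines (positive finite operands only; the exponent margin below is a crude stand-in for the paper's careful analysis, and the second pass uses exact rational arithmetic rather than the paper's tailored fixed-point computation):

```python
import math
from decimal import Decimal
from fractions import Fraction

LOG2_10 = math.log2(10)          # ~3.3219, slightly rounded

def compare_bin_dec(b: float, d: Decimal) -> int:
    # Return -1, 0 or 1 as b <, ==, > d; assumes b and d finite and positive.
    eb = math.frexp(b)[1]        # 2**(eb-1) <= b < 2**eb
    ed = d.adjusted()            # 10**ed   <= d < 10**(ed+1)
    # First pass, exponents only; the +1 margin absorbs the rounding of LOG2_10.
    if eb - 1 >= (ed + 1) * LOG2_10 + 1:
        return 1                 # b >= 2**(eb-1) >= 10**(ed+1) > d
    if ed * LOG2_10 >= eb + 1:
        return -1                # d >= 10**ed >= 2**eb > b
    # Second pass: exact comparison.
    fb, fd = Fraction(b), Fraction(d)
    return (fb > fd) - (fb < fd)

print(compare_bin_dec(2.0, Decimal("2")))      # 0
print(compare_bin_dec(0.1, Decimal("0.1")))    # 1: binary 0.1 is slightly above
print(compare_bin_dec(1.0, Decimal("1e9")))    # -1, decided by the first pass
```

    The second example is the classic trap the exact pass catches: the binary64 number closest to 0.1 is strictly greater than the decimal value 0.1, even though both print as "0.1".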

    Efficient and accurate computation of upper bounds of approximation errors

    Get PDF
    For purposes of actual evaluation, mathematical functions f are commonly replaced by approximation polynomials p. Examples include floating-point implementations of elementary functions, quadrature, or more theoretical proof work involving transcendental functions. Replacing f by p induces a relative error epsilon = p/f - 1. In order to ensure the validity of the use of p instead of f, the maximum error, i.e. the supremum norm of epsilon, must be safely bounded above. Numerical algorithms for supremum norms are efficient but cannot offer the required safety. Previous validated approaches often require tedious manual intervention. If they are automated, they have several drawbacks, such as the lack of quality guarantees. In this article a novel, automated supremum norm algorithm with a priori quality guarantees is proposed. It focuses on the validation step and paves the way for formally certified supremum norms. Key elements are the use of intermediate approximation polynomials with bounded approximation error and a non-negativity test based on a sum-of-squares expression of polynomials. The new algorithm was implemented in the Sollya tool. The article includes experimental results on real-life examples.
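    The sum-of-squares ingredient can be illustrated at toy scale: finding an SOS decomposition requires numerical optimization, but checking one is exact polynomial arithmetic, which is what makes the certificate trustworthy (example polynomial chosen by hand):

```python
from fractions import Fraction

def poly_mul(p, q):
    # p, q: coefficient lists, constant term first
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# Claim: x**2 - 2x + 2 is nonnegative on all of R.
# Certificate: it equals (x - 1)**2 + 1**2, a sum of squares.
target = [Fraction(2), Fraction(-2), Fraction(1)]
squares = [[Fraction(-1), Fraction(1)], [Fraction(1)]]   # x - 1, and 1

sos = [Fraction(0)]
for s in squares:
    sos = poly_add(sos, poly_mul(s, s))

print(sos == target)    # True: the certificate checks out exactly
```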