131 research outputs found

    Certifying floating-point implementations using Gappa

    Full text link
    High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. In addition, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and is thus accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
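
    To make the setting concrete, the following C fragment (ours, not taken from the article) is the kind of code such a tool certifies: a short, rounded polynomial step of an elementary function, annotated with the range and error facts a Gappa-style proof would establish. All constants and bounds in the comment are illustrative placeholders.

        /* Illustrative only: a rounded polynomial step of the kind the
           article certifies. One would ask a tool like Gappa to prove,
           for instance, that for x in [-2^-5, 2^-5] the result r stays
           in a known range and differs from the exact polynomial
           1 + x^2/2 + x^4/24 by less than some 2^-k. The bounds quoted
           here are placeholders, not results from the article. */
        static double poly_step(double x)
        {
            double x2 = x * x;                          /* rounded square */
            double r  = 1.0 + x2 * (0.5 + x2 * (1.0 / 24.0));
            return r;                                   /* rounded Horner */
        }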

    Computing Correctly Rounded Integer Powers in Floating-Point Arithmetic

    Get PDF
    We introduce several algorithms for accurately evaluating powers with a positive integer exponent in floating-point arithmetic, assuming a fused multiply-add (fma) instruction is available. We aim at always obtaining correctly rounded results in round-to-nearest mode, that is, our algorithms return the floating-point number that is nearest to the exact value.
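
    A key building block for such fma-based algorithms is the error-free product: an fma recovers exactly the rounding error of a multiplication. The C sketch below shows this transformation and a naive compensated power evaluation built on it; it illustrates the general technique, not necessarily the article's algorithms, and the function names are ours.

        #include <math.h>

        /* Error-free transformation of a product: with an fma, a*b == p + e
           exactly (barring overflow/underflow), so the rounding error of the
           multiplication is recovered at the cost of one extra instruction. */
        static double two_prod(double a, double b, double *e)
        {
            double p = a * b;
            *e = fma(a, b, -p);
            return p;
        }

        /* Naive compensated evaluation of x^n: the high part is the rounded
           product, the low part accumulates the recovered errors. This is a
           generic sketch, not the paper's (faster, correctly rounded) scheme. */
        static double pow_uint(double x, unsigned n)
        {
            double hi = 1.0, lo = 0.0, e;
            while (n-- > 0) {
                hi = two_prod(hi, x, &e);   /* exact: old_hi * x = hi + e */
                lo = fma(lo, x, e);         /* propagate accumulated error */
            }
            return hi + lo;
        }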

    A certified infinite norm for the implementation of elementary functions

    Get PDF
    The version available on HAL is slightly different from the published version because it contains full proofs. The high-quality floating-point implementation of useful functions f : R -> R, such as exp, sin, or erf, requires bounding the error eps = (p-f)/f of an approximation p with regard to the function f. This involves bounding the infinite norm ||eps|| of the error function. Its value must not be underestimated when implementations must be safe. Previous approaches to computing the infinite norm are shown to be either unsafe, not sufficiently tight, or too tedious in terms of manual work. We present a safe and self-validating algorithm for automatically upper- and lower-bounding infinite norms of error functions. The algorithm is based on enhanced interval arithmetic. It can overcome high cancellation and high condition numbers around points where the error function is defined only by continuous extension. The given algorithm is implemented in a software tool. It can generate a proof of correctness for each instance on which it is run.
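
    A minimal sketch of the underlying idea, well short of the certified algorithm in the article: enclose the error function with plain interval arithmetic and bisect until the enclosures are tight enough, taking the largest upper bound as a bound on the infinite norm. For brevity this version bounds the absolute error p - exp (relative error would add an interval division), does not round interval bounds outward, and trusts the libm enclosure of exp, all of which a safe implementation would have to address.

        #include <stdio.h>
        #include <math.h>

        typedef struct { double lo, hi; } itv;

        static itv itv_add(itv x, itv y) { return (itv){ x.lo + y.lo, x.hi + y.hi }; }
        static itv itv_sub(itv x, itv y) { return (itv){ x.lo - y.hi, x.hi - y.lo }; }

        static itv itv_mul(itv x, itv y)
        {
            double p[4] = { x.lo*y.lo, x.lo*y.hi, x.hi*y.lo, x.hi*y.hi };
            itv r = { p[0], p[0] };
            for (int i = 1; i < 4; i++) {
                if (p[i] < r.lo) r.lo = p[i];
                if (p[i] > r.hi) r.hi = p[i];
            }
            return r;
        }

        /* Interval Horner evaluation of a polynomial with point coefficients. */
        static itv itv_poly(const double *c, int deg, itv x)
        {
            itv r = { c[deg], c[deg] };
            for (int i = deg - 1; i >= 0; i--)
                r = itv_add(itv_mul(r, x), (itv){ c[i], c[i] });
            return r;
        }

        /* Enclosure of p(x) - exp(x); exp is monotone, so its range on
           [lo,hi] is [exp(lo), exp(hi)] (up to the rounding of exp itself). */
        static itv err_enclosure(const double *c, int deg, itv x)
        {
            itv e = { exp(x.lo), exp(x.hi) };
            return itv_sub(itv_poly(c, deg, x), e);
        }

        /* Branch and bound: an upper bound of max |p(x) - exp(x)| over x. */
        static double supnorm_ub(const double *c, int deg, itv x, double tol)
        {
            itv e = err_enclosure(c, deg, x);
            double ub = fmax(fabs(e.lo), fabs(e.hi));
            if (e.hi - e.lo <= tol || x.hi - x.lo < 1e-12)
                return ub;
            double m = 0.5 * (x.lo + x.hi);
            double left  = supnorm_ub(c, deg, (itv){ x.lo, m }, tol);
            double right = supnorm_ub(c, deg, (itv){ m, x.hi }, tol);
            return fmax(left, right);
        }

        int main(void)
        {
            /* Degree-4 Taylor polynomial of exp as an example approximation. */
            double c[5] = { 1.0, 1.0, 0.5, 1.0/6.0, 1.0/24.0 };
            double b = supnorm_ub(c, 4, (itv){ -0.5, 0.5 }, 1e-6);
            printf("upper bound on ||p - exp||_inf over [-0.5, 0.5]: %g\n", b);
            return 0;
        }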

    Implementation and Synthesis of Math Library Functions

    Full text link
    Achieving speed and accuracy for math library functions like exp, sin, and log is difficult. This is because low-level implementation languages like C do not help math library developers catch mathematical errors, build implementations incrementally, or separate high-level and low-level decision making. This ultimately puts development of such functions out of reach for all but the most experienced experts. To address this, we introduce MegaLibm, a domain-specific language for implementing, testing, and tuning math library implementations. MegaLibm is safe, modular, and tunable. Implementations in MegaLibm can automatically detect mathematical mistakes like sign flips via semantic well-formedness checks, and components like range reductions can be implemented in a modular, composable way, simplifying implementations. Once the high-level algorithm is done, working precisions and evaluation schemes can be adjusted through orthogonal tuning parameters to achieve the desired speed and accuracy. MegaLibm also enables math library developers to work interactively, compiling, testing, and tuning their implementations and invoking tools like Sollya and type-directed synthesis to complete components and synthesize entire implementations. MegaLibm can express 8 state-of-the-art math library implementations with comparable speed and accuracy to the original C code, and can synthesize 5 variations and 3 from-scratch implementations with minimal guidance.
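
    The high-level/low-level split described here mirrors the classical structure of a libm routine: a range reduction, a polynomial core, and a reconstruction step, each tunable on its own. The C sketch below (our illustration, not MegaLibm code) shows that decomposition for exp; the Taylor coefficients and the single-constant reduction are deliberately simplistic, whereas a production implementation would use a minimax polynomial (e.g. from Sollya) and a more careful, higher-precision reduction.

        #include <math.h>

        /* 1. Range reduction: x = k*ln2 + r with |r| <= ln2/2. A real
              implementation splits ln2 into several parts to keep r accurate. */
        static double reduce(double x, int *k)
        {
            const double ln2 = 0.6931471805599453;
            *k = (int) nearbyint(x / ln2);
            return x - (double) *k * ln2;
        }

        /* 2. Polynomial core: truncated Taylor series of exp on the reduced
              interval, evaluated by Horner's rule. A tuned implementation
              would pick a minimax polynomial and perhaps another scheme. */
        static double poly(double r)
        {
            const double c[] = { 1.0, 1.0, 1.0/2, 1.0/6, 1.0/24, 1.0/120,
                                 1.0/720, 1.0/5040, 1.0/40320 };
            double s = c[8];
            for (int i = 7; i >= 0; i--)
                s = s * r + c[i];
            return s;
        }

        /* 3. Reconstruction: exp(x) = 2^k * exp(r). Accuracy of this sketch
              is nowhere near libm quality. */
        double exp_sketch(double x)
        {
            int k;
            double r = reduce(x, &k);
            return ldexp(poly(r), k);
        }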

    Exact and mid-point rounding cases of power(x,y)

    Get PDF
    Research Report N° RR2006-46. Correct rounding of the power function is currently based on an iterative process computing more and more accurate intermediate approximations to x^y until rounding correctly becomes possible. It terminates if and only if the value of the function is not exactly a floating-point number or a midpoint of two floating-point numbers of the format. For other elementary functions such as exp(x), the arguments for which f(x) is such an exact case are few. They can be filtered out by simple tests. The power function has at least 2^35 such arguments. Simple tests on x and y do not suffice here. This article presents an algorithm for performing such an exact-case test. It is combined with an approach that allows for fast rejection of cases that are neither exact nor midpoints. Its correctness is completely proven. Unlike previous approaches, it makes no use of costly operations such as divisions, remainders, or square roots. The algorithm yields a speed-up of 1.8 on average in comparison to another implementation for the same final target format. It also reduces the share of the average execution time spent on the exactness test from 38%, paid at each call, to 31%, paid only under unlikely conditions. The algorithm is given for double precision but adapts and scales perfectly to higher precisions such as double-extended and quadruple precision. Its complexity is linear in the precision.
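
    For context, a toy sketch of the iterative rounding step this process relies on is given below; it is our illustration, not the article's algorithm. It uses the platform's powl as the higher-precision approximation with a placeholder error bound, and it shows why rounding-boundary cases need a separate test: whenever x^y falls on a boundary for the rounding direction in use, the two bounds below can never agree, so the iteration alone would not terminate.

        #include <math.h>
        #include <stdio.h>

        /* Toy version of one iteration of the rounding process. ERR is a
           placeholder, not a proven error bound for powl. If the lower and
           upper bounds round to the same double, the rounding is decided;
           otherwise a real implementation raises the working precision or,
           as in the article, runs the exact/midpoint test, since on such
           boundary cases this check can never succeed. */
        static int try_round(double x, double y, double *result)
        {
            const long double ERR = 0x1p-60L;         /* placeholder bound */
            long double a = powl((long double) x, (long double) y);
            double lo = (double)(a * (1.0L - ERR));
            double hi = (double)(a * (1.0L + ERR));
            if (lo == hi) { *result = lo; return 1; } /* rounding decided  */
            return 0;                                 /* hard case         */
        }

        int main(void)
        {
            double r;
            if (try_round(1.25, 3.5, &r))
                printf("pow(1.25, 3.5) rounds to %.17g\n", r);
            else
                printf("cannot decide at this precision (boundary case?)\n");
            return 0;
        }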

    An efficient rounding boundary test for pow(x,y) in double precision

    Get PDF
    The correct rounding of the function pow: (x,y) -> x^y is currently based on Ziv's iterative approximation process. In order to ensure its termination, cases when x^y falls on a rounding boundary must be filtered out. Such rounding boundaries are floating-point numbers and midpoints between two consecutive floating-point numbers. Detecting rounding boundaries for pow is a difficult problem. Previous approaches use repeated square root extraction followed by repeated square-and-multiply. This article presents a new rounding boundary test for pow in double precision which reduces to a few comparisons with pre-computed constants. These constants are deduced from worst cases for the Table Maker's Dilemma, searched over a small subset of the input domain. This is a novel use of such worst-case bounds. The resulting algorithm has been designed for a fast-on-average correctly rounded implementation of pow, taking advantage of the scarcity of rounding boundary cases: it does not slow down average-case computations with rounding boundary detection. The article includes a correctness proof and experimental results.
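
    As a point of reference for what a rounding boundary is, the sketch below (our illustration, unrelated to the article's comparison-based test) decides whether a value, taken here in extended precision, is a double or the exact midpoint of two consecutive doubles. It assumes long double is strictly wider than double (x87 or binary128); the article's contribution is precisely to avoid having to materialize x^y to such accuracy.

        #include <math.h>
        #include <stdbool.h>

        /* Rounding boundaries in double precision are the doubles themselves
           and the midpoints of consecutive doubles. This helper classifies a
           value z that is assumed to be exact and to fit in long double; it
           only makes the notion concrete, it does not test x^y efficiently. */
        static bool is_rounding_boundary(long double z)
        {
            double lo = (double) z;              /* round to nearest double */
            if ((long double) lo == z)
                return true;                     /* z is itself a double    */
            double hi;
            if ((long double) lo < z) {
                hi = nextafter(lo, INFINITY);    /* neighbours enclosing z  */
            } else {
                hi = lo;
                lo = nextafter(hi, -INFINITY);
            }
            /* Midpoint iff z is exactly halfway between its two neighbours;
               the sum of two consecutive doubles is exact in a wider format. */
            return z == ((long double) lo + (long double) hi) / 2.0L;
        }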