Certifying floating-point implementations using Gappa
High confidence in floating-point programs requires proving numerical
properties of final and intermediate values. One may need to guarantee that a
value stays within some range, or that the error relative to some ideal value
is well bounded. Such work may require several lines of proof for each line of
code, and will usually be broken by the smallest change to the code (e.g., for
maintenance or optimization purposes). Certifying these programs by hand is
therefore very tedious and error-prone. This article discusses the use of the
Gappa proof assistant in this context. Gappa has two main advantages over
previous approaches: Its input format is very close to the actual C code to
validate, and it automates error evaluation and propagation using interval
arithmetic. In addition, it can be used to incrementally prove complex mathematical
properties pertaining to the C code. Yet it does not require any specific
knowledge about automatic theorem proving, and thus is accessible to a wide
community. Moreover, Gappa may generate a formal proof of the results that can
be checked independently by a lower-level proof assistant like Coq, hence
providing an even higher confidence in the certification of the numerical code.
The article demonstrates the use of this tool on a real-size example, an
elementary function with correctly rounded output.
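The kind of reasoning Gappa automates can be sketched with naive interval arithmetic in Python. This is an illustration of the idea only, not Gappa's input language or its actual method; in particular, a real tool also rounds the interval endpoints outward, which this toy version omits.

```python
# A naive interval type, sketching how a bound on a rounding error is
# propagated. Gappa does this rigorously, with outward-rounded endpoints;
# this toy version ignores the rounding of the endpoint computations.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))

U = 2.0 ** -53            # unit roundoff of binary64
x = Interval(1.0, 2.0)    # hypothesis on the input: x in [1, 2]
err = Interval(-U, U)     # one rounded multiplication: result = exact*(1 + e), |e| <= U
xx = x * x                # the exact product lies in [1, 4]
xx_rounded = xx + xx * err
print(xx_rounded.lo, xx_rounded.hi)
```

Gappa mechanizes exactly this style of range-and-error bookkeeping across every intermediate value of the C code, with a machine-checkable trace.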
Computing Correctly Rounded Integer Powers in Floating-Point Arithmetic
We introduce several algorithms for accurately evaluating powers of a floating-point number raised to a positive integer exponent, assuming a fused multiply-add (fma) instruction is available. We aim at always obtaining correctly rounded results in round-to-nearest mode, that is, our algorithms return the floating-point number that is nearest to the exact value.
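The flavour of such algorithms can be illustrated with a compensated integer power in Python. This is not the paper's fma-based routine: Python exposes no portable fma, so the rounding error of each product is instead captured exactly with Dekker's error-free transformation, then folded back at the end. Exactness of the transformation can be checked against rational arithmetic.

```python
from fractions import Fraction

def split(a):
    # Dekker's splitting of a binary64 value into two 26-bit halves.
    c = 134217729.0 * a          # 134217729 = 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    # Error-free transformation: p + e == a*b exactly (barring overflow).
    # With an fma instruction, e would simply be fma(a, b, -p).
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def comp_power(x, n):
    # Compensated x**n by repeated multiplication: the rounding error of
    # each product is captured exactly and propagated to first order.
    p, c = 1.0, 0.0
    for _ in range(n):
        p, e = two_prod(p, x)
        c = c * x + e
    return p + c

print(comp_power(1.1, 10), float(Fraction(1.1) ** 10))
```

The paper's algorithms are more refined (and proven correctly rounded); this sketch only shows how error-free products make an accurate power possible at all.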
A certified infinite norm for the implementation of elementary functions
Note: the version available on HAL is slightly different from the published version because it contains full proofs. The high-quality floating-point implementation of useful functions f : R -> R, such as exp, sin, or erf, requires bounding the error eps = (p-f)/f of an approximation p with respect to the function f. This involves bounding the infinite norm ||eps|| of the error function. Its value must not be underestimated when implementations must be safe. Previous approaches for computing infinite norms are shown to be either unsafe, insufficiently tight, or too tedious in manual work. We present a safe and self-validating algorithm for automatically upper- and lower-bounding infinite norms of error functions. The algorithm is based on enhanced interval arithmetic. It can overcome high cancellation and high condition numbers around points where the error function is defined only by continuous extension. The given algorithm is implemented in a software tool. It can generate a proof of correctness for each instance on which it is run.
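The core idea of a self-validating infinite-norm bound can be sketched in Python (a much-simplified toy, not the paper's algorithm): evaluate the error of a degree-4 Taylor polynomial for exp with outward-rounded interval arithmetic on many subintervals and take the maximum of the upper bounds. Monotonicity of exp gives its interval image, and the sub-ulp error of math.exp itself (assumed below one ulp) is absorbed by the outward widening.

```python
import math

def widen(lo, hi):
    # Outward rounding: absorb the rounding error of the endpoint computations.
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def iadd(x, y):
    return widen(x[0] + y[0], x[1] + y[1])

def isub(x, y):
    return widen(x[0] - y[1], x[1] - y[0])

def imul(x, y):
    ps = (x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1])
    return widen(min(ps), max(ps))

def ipoly(coeffs, x):
    # Interval Horner evaluation, coeffs in increasing degree order.
    acc = (coeffs[-1], coeffs[-1])
    for c in reversed(coeffs[:-1]):
        acc = iadd(imul(acc, x), (c, c))
    return acc

def iexp(x):
    # exp is increasing, so its image of [lo, hi] is [exp(lo), exp(hi)],
    # widened to cover the small error of math.exp itself.
    return widen(math.exp(x[0]), math.exp(x[1]))

# Degree-4 Taylor approximation of exp, used on [0, 0.5].
P = [1.0, 1.0, 0.5, 1.0/6.0, 1.0/24.0]

def sup_error(a, b, pieces=1024):
    # Rigorous (if crude) upper bound of ||p - exp||_inf via subdivision.
    bound = 0.0
    for i in range(pieces):
        x = (a + (b - a)*i/pieces, a + (b - a)*(i + 1)/pieces)
        lo, hi = isub(ipoly(P, x), iexp(x))
        bound = max(bound, abs(lo), abs(hi))
    return bound

print(sup_error(0.0, 0.5))
```

The paper's contribution is precisely to make this kind of bound both tight and safe near points of high cancellation, where naive subdivision like the above over-estimates badly.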
Implementation and Synthesis of Math Library Functions
Achieving speed and accuracy for math library functions like exp, sin, and
log is difficult. This is because low-level implementation languages like C do
not help math library developers catch mathematical errors, build
implementations incrementally, or separate high-level and low-level decision
making. This ultimately puts development of such functions out of reach for all
but the most experienced experts. To address this, we introduce MegaLibm, a
domain-specific language for implementing, testing, and tuning math library
implementations. MegaLibm is safe, modular, and tunable. Implementations in
MegaLibm can automatically detect mathematical mistakes like sign flips via
semantic wellformedness checks, and components like range reductions can be
implemented in a modular, composable way, simplifying implementations. Once the
high-level algorithm is done, low-level choices like working precisions and
evaluation schemes can be adjusted through orthogonal tuning parameters to
achieve the desired speed and accuracy. MegaLibm also enables math library
developers to work interactively, compiling, testing, and tuning their
implementations and invoking tools like Sollya and type-directed synthesis to
complete components and synthesize entire implementations. MegaLibm can express
8 state-of-the-art math library implementations with comparable speed and
accuracy to the original C code, and can synthesize 5 variations and 3
from-scratch implementations with minimal guidance.
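The modular structure the abstract describes, separating range reduction, kernel, and reconstruction into composable pieces, can be sketched in ordinary Python. This is not MegaLibm's syntax or API; the decomposition and names are hypothetical.

```python
import math

def reduce_exp(x):
    # Range reduction: exp(x) = 2**k * exp(r) with r = x - k*ln2, |r| <= ln2/2.
    k = round(x / math.log(2.0))
    r = x - k * math.log(2.0)
    return k, r

def kernel_exp(r):
    # Low-degree polynomial kernel on the reduced interval (plain Taylor here;
    # a real library would use a minimax polynomial, e.g. from Sollya).
    return 1.0 + r + r*r/2.0 + r**3/6.0 + r**4/24.0 + r**5/120.0

def reconstruct_exp(k, y):
    # Reconstruction: scale by 2**k exactly.
    return math.ldexp(y, k)

def my_exp(x):
    k, r = reduce_exp(x)
    return reconstruct_exp(k, kernel_exp(r))

for x in (-3.0, 0.1, 5.0):
    print(x, my_exp(x), math.exp(x))
```

Because the three stages are independent, each can be swapped or retuned (a different reduction, a higher-degree kernel, a changed working precision) without touching the others, which is the modularity claim in miniature.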
Exact and mid-point rounding cases of power(x,y)
Research Report N° RR2006-46. Correct rounding of the power function is currently based on an iterative process that computes more and more accurate intermediate approximations to x^y until rounding correctly becomes possible. It terminates iff the value of the function is neither exactly a floating-point number nor a midpoint of two consecutive floating-point numbers of the format. For other elementary functions such as exp(x), arguments for which f(x) is such an exact case are few. They can be filtered out by simple tests. The power function has at least 2^35 such arguments, and simple tests on x and y do not suffice here. This article presents an algorithm for performing such an exact-case test. It is combined with an approach that allows fast rejection of cases that are neither exact nor midpoints. The correctness is completely proven. It makes no use of costly operations such as divisions, remainders, or square roots, as previous approaches do. The algorithm yields a speed-up of 1.8 on average in comparison to another implementation for the same final target format. It also reduces the average share of time spent on the exactness test from 38% at each call to 31% under unlikely conditions. The algorithm is given for double precision but adapts and scales to higher precisions such as double-extended and quad precision. Its complexity is linear in the precision.
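What "exact" and "midpoint" cases mean can be made concrete with exact rational arithmetic. The following is a brute-force illustration restricted to integer exponents, nothing like the paper's fast, division-free test.

```python
import math
from fractions import Fraction

def fits_binary64(q):
    # True iff the rational q is exactly a binary64 number (assuming no
    # overflow): round to the nearest float and check the round trip.
    return Fraction(float(q)) == q

def classify_power(x, n):
    """Classify x**n for a binary64 x and positive integer n."""
    q = Fraction(x) ** n          # the mathematically exact value
    if fits_binary64(q):
        return "exact"
    f = float(q)                  # nearest binary64 to q
    # Midpoint case: q is exactly halfway between f and one of its neighbours.
    for g in (math.nextafter(f, -math.inf), math.nextafter(f, math.inf)):
        if 2 * q == Fraction(f) + Fraction(g):
            return "midpoint"
    return "inexact"

print(classify_power(2.0, 10))            # 2**10 = 1024 is representable
print(classify_power(1.0 - 2.0**-27, 2))  # this square needs 54 bits: a midpoint
```

The paper's point is that this classification must be decided fast and without exact big-number arithmetic, since it sits on the hot path of every call to pow.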
An efficient rounding boundary test for pow(x,y) in double precision
The correct rounding of the function pow: (x,y) -> x^y is currently based on Ziv's iterative approximation process. In order to ensure its termination, cases when x^y falls on a rounding boundary must be filtered out. Such rounding boundaries are floating-point numbers and midpoints between two consecutive floating-point numbers. Detecting rounding boundaries for pow is a difficult problem. Previous approaches use repeated square-root extraction followed by repeated square-and-multiply. This article presents a new rounding boundary test for pow in double precision which reduces to a few comparisons with pre-computed constants. These constants are deduced from worst cases for the Table Maker's Dilemma, searched over a small subset of the input domain. This is a novel use of such worst-case bounds. The resulting algorithm has been designed for a fast-on-average correctly rounded implementation of pow, exploiting the scarcity of rounding boundary cases: it does not stall average computations for rounding boundary detection. The article includes its correctness proof and experimental results.
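Ziv's strategy, which both this paper and the previous one build on, can be sketched generically in Python using the decimal module for the increasingly accurate approximations. This is a toy version, not the papers' implementation; the precision cap stands in for a real rounding-boundary test.

```python
import math
from decimal import Decimal, getcontext

def ziv_round(exact_fn, x, start_prec=25, max_prec=400):
    """Correctly round exact_fn(x) to binary64 by Ziv's strategy: recompute
    with growing precision until the enclosure [y - err, y + err] rounds to a
    single float. On a rounding-boundary case this would loop forever, which
    is why a boundary test (here: just a precision cap) is needed."""
    prec = start_prec
    while prec <= max_prec:
        getcontext().prec = prec
        y = exact_fn(Decimal(x))
        # decimal's exp/ln are correctly rounded, so one unit in the last
        # place safely over-bounds the approximation error.
        err = Decimal(1).scaleb(y.adjusted() - prec + 1)
        lo, hi = float(y - err), float(y + err)   # float() rounds correctly
        if lo == hi:
            return lo
        prec *= 2
    raise ValueError("possible rounding-boundary case")

print(ziv_round(lambda d: d.exp(), 0.5), math.exp(0.5))
```

The two pow papers supply exactly the missing piece this sketch punts on: a cheap, proven test that no input can keep the loop from terminating.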
Software-based approximate computing for mathematical functions
The four arithmetic floating-point operations (+, −, ×, and ÷) have been precisely specified in IEEE-754 since 1985, but the situation for floating-point mathematical libraries, and even for some hardware operations such as fused multiply-add, is more nuanced: there are varying opinions on which standards should be followed, when it is acceptable to allow some error, and when it is necessary to be correctly rounded. Deterministic, correctly rounded elementary mathematical functions are important in many applications. Others are tolerant to some level of error and would benefit from less accurate, better-performing approximations. We found that, despite IEEE-754 (2008 and 2019 revisions only) specifying that 'recommended functions' such as sin, cos, or log should be correctly rounded, the mathematical libraries available through standard interfaces in popular programming languages provide neither correct rounding nor maximally performing approximations, partly due to the differing accuracy requirements of these functions in the conflicting standards provided for some languages, such as C.

This dissertation seeks to explore the current methods used for the implementation of mathematical functions, show the error present in them, and demonstrate methods to produce both low-cost correctly rounded solutions and better approximations for specific use-cases. This is achieved by first exploring the error within existing mathematical libraries and examining how it impacts existing applications and the development of programming language standards. We then make two contributions which address the accuracy and standard-conformance problems that were found: 1) an approach for a correctly rounded 32-bit implementation of the elementary functions with minimal additional performance cost on modern hardware; and 2) an approach for developing a better-performing incorrectly rounded solution for use when some error is acceptable and conforming with the IEEE-754 standard is not a requirement. For the latter contribution, we introduce a tool for semi-automated generic code sensitivity analysis and approximation.

Next, we target the creation of approximations for the standard activation functions used in neural networks. Identifying that significant time is spent in the computation of the activation functions, we generate approximations with different levels of error and better performance characteristics. These functions are then tested in standard neural networks to determine whether the approximations have any detrimental effect on the output of the network. We show that, for many networks and activation functions, very coarse approximations are suitable replacements that train the networks equally well at a lower overall time cost.

This dissertation makes original contributions to the area of approximate computing. We demonstrate new approaches to safe approximation, and we justify approximate computation generally by showing that existing mathematical libraries already suffer the downsides of approximation and latent error without fully exploiting the optimisation space afforded by the existing tolerance to that error, and by showing that correctly rounded solutions are possible without a significant performance impact for many 32-bit mathematical functions.
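The activation-function theme can be illustrated with a deliberately coarse tanh approximation whose worst error we measure against the library function. The formula is a classic Padé-style rational chosen for illustration, not one produced by the dissertation's tool.

```python
import math

def tanh_approx(x):
    # Coarse rational approximation of tanh, clamped outside [-3, 3].
    # Cheap (a few multiplies and one divide), accurate only to ~2e-2:
    # the kind of trade-off coarse activation approximations make.
    if x > 3.0:
        return 1.0
    if x < -3.0:
        return -1.0
    x2 = x * x
    return x * (27.0 + x2) / (27.0 + 9.0 * x2)

# Worst observed absolute error over a sample grid.
worst = max(abs(tanh_approx(i / 100.0) - math.tanh(i / 100.0))
            for i in range(-500, 501))
print(worst)
```

An error of a couple of percent would be unacceptable in a libm, yet, as the dissertation argues, it can be harmless for training many neural networks, which is what makes the approximation space worth exploiting.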