
    A Verified Certificate Checker for Finite-Precision Error Bounds in Coq and HOL4

    Being able to soundly estimate roundoff errors of finite-precision computations is important for many applications in embedded systems and scientific computing. Due to the discrepancy between continuous reals and discrete finite-precision values, automated static analysis tools are highly valuable for estimating roundoff errors. The results, however, are only as correct as the implementations of the static analysis tools. This paper presents a formally verified and modular tool which fully automatically checks the correctness of finite-precision roundoff error bounds encoded in a certificate. We present implementations of certificate generation and checking for both Coq and HOL4 and evaluate them on a number of examples from the literature. The experiments use both in-logic evaluation in Coq and HOL4 and execution of extracted code outside of the logics: we benchmark unverified OCaml code extracted from Coq and a verified CakeML-generated binary.
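
    To make the idea concrete, here is a toy OCaml sketch of such a checker. It is an illustration only, not the paper's certificate format or verified checker: the datatype, the propagation rules, and check_certificate are all invented for the example. It uses the standard rounding model fl(x op y) = (x op y) * (1 + e) with |e| <= eps (the unit roundoff), ignores higher-order error terms, and accepts a claimed absolute error bound only if a bound re-derived by naive interval propagation is no larger.

    (* toy certificate checker: expression + claimed absolute error bound *)
    type expr =
      | Const of float                    (* assumed exactly representable *)
      | Var of float * float              (* input with a [lo, hi] range *)
      | Add of expr * expr
      | Mul of expr * expr

    let eps = epsilon_float /. 2.0        (* unit roundoff for binary64 *)

    (* naive interval propagation: returns (lo, hi, absolute error bound) *)
    let rec analyze = function
      | Const c -> (c, c, 0.0)
      | Var (lo, hi) -> (lo, hi, 0.0)
      | Add (a, b) ->
          let (la, ha, ea) = analyze a and (lb, hb, eb) = analyze b in
          let lo = la +. lb and hi = ha +. hb in
          let mag = Float.max (Float.abs lo) (Float.abs hi) in
          (lo, hi, ea +. eb +. mag *. eps)      (* rounding of the sum *)
      | Mul (a, b) ->
          let (la, ha, ea) = analyze a and (lb, hb, eb) = analyze b in
          let ps = [la *. lb; la *. hb; ha *. lb; ha *. hb] in
          let lo = List.fold_left Float.min infinity ps in
          let hi = List.fold_left Float.max neg_infinity ps in
          let ma = Float.max (Float.abs la) (Float.abs ha) in
          let mb = Float.max (Float.abs lb) (Float.abs hb) in
          let mag = Float.max (Float.abs lo) (Float.abs hi) in
          (* propagated errors scale with the other operand's magnitude;
             second-order terms are dropped for simplicity *)
          (lo, hi, ea *. mb +. eb *. ma +. mag *. eps)

    let check_certificate e claimed =
      let (_, _, derived) = analyze e in derived <= claimed

    (* e.g. x*x + 2 for x in [0, 10]: a claim of 1e-13 checks, 1e-15 fails *)
    let x = Var (0.0, 10.0)
    let ok = check_certificate (Add (Mul (x, x), Const 2.0)) 1e-13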

    A Mixed Real and Floating-Point Solver

    Reasoning about mixed real and floating-point constraints is essential for developing accurate analysis tools for floating-point programs. This paper presents FPRoCK, a prototype tool for solving mixed real and floating-point formulas. FPRoCK transforms a mixed formula into an equisatisfiable one over the reals. This formula is then solved using an off-the-shelf SMT solver. FPRoCK is also integrated with the PRECiSA static analyzer, which computes a sound estimation of the round-off error of a floating-point program. It is used to detect infeasible computational paths, thereby improving the accuracy of PRECiSA.
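
    The general shape of such a transformation can be sketched as follows: each rounding step fl(t) is replaced by a fresh real variable tied to the real-valued term t by the (1 + e) relative-error model, leaving a formula in pure real arithmetic for the SMT solver. The OCaml sketch below illustrates this idea only; it is not FPRoCK's actual encoding, which must also handle subnormals, overflow, and special values. All variable names and the emitted SMT-LIB are invented for the example.

    (* emit SMT-LIB over the reals for a formula with rounding steps *)
    let eps = "(/ 1 9007199254740992)"    (* 2^-53 as an exact rational *)

    let counter = ref 0

    (* returns a real variable standing for fl(term), plus the SMT-LIB
       declarations and assertions that constrain it *)
    let round term =
      incr counter;
      let r = Printf.sprintf "r%d" !counter in
      let e = Printf.sprintf "e%d" !counter in
      (r, [ Printf.sprintf "(declare-const %s Real)" r;
            Printf.sprintf "(declare-const %s Real)" e;
            Printf.sprintf "(assert (<= (- %s) %s))" eps e;
            Printf.sprintf "(assert (<= %s %s))" e eps;
            (* the (1 + e) model: r = t * (1 + e), |e| <= eps *)
            Printf.sprintf "(assert (= %s (* %s (+ 1 %s))))" r term e ])

    let () =
      (* encode: fl(fl(x * x) + y) > 1  with real-valued x and y *)
      let r1, c1 = round "(* x x)" in
      let r2, c2 = round (Printf.sprintf "(+ %s y)" r1) in
      List.iter print_endline
        ([ "(set-logic QF_NRA)";
           "(declare-const x Real)";
           "(declare-const y Real)" ]
         @ c1 @ c2
         @ [ Printf.sprintf "(assert (> %s 1))" r2; "(check-sat)" ])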

    On Sound Relative Error Bounds for Floating-Point Arithmetic

    State-of-the-art static analysis tools for verifying finite-precision code compute worst-case absolute error bounds on numerical errors. These are, however, often not a good estimate of accuracy, as they do not take into account the magnitude of the computed values. Relative errors, which express errors relative to the magnitude of the computed value, are thus preferable. While today's tools do report relative error bounds, these are merely computed via absolute errors and thus not necessarily tight or more informative. Furthermore, whenever the computed value is close to zero on part of the domain, the tools do not report any relative error estimate at all. Surprisingly, the quality of relative error bounds computed by today's tools has not been systematically studied or reported to date. In this paper, we investigate how state-of-the-art static techniques for computing sound absolute error bounds can be used, extended, and combined for the computation of relative errors. Our experiments on a standard benchmark set show that computing relative errors directly, as opposed to via absolute errors, is often beneficial and can provide error estimates up to six orders of magnitude tighter, i.e. more accurate. We also show that interval subdivision, another commonly used technique for reducing over-approximations, is less beneficial when computing relative errors directly, but it can help to alleviate the inherent difficulty of estimating relative errors close to zero.
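
    A toy calculation makes the gap concrete; the snippet below illustrates the effect and is not the paper's algorithm. For fl(x * x) with x in [1, 1000] under the model fl(t) = t * (1 + e), |e| <= eps, the relative error computed directly is |t * e| / |t| <= eps; deriving it instead from the worst-case absolute bound divided by the smallest result magnitude yields a bound six orders of magnitude looser, and that derivation degenerates completely if the result range touches zero.

    (* compare the two ways of bounding the relative error of fl(x * x) *)
    let eps = epsilon_float /. 2.0        (* unit roundoff, binary64 *)
    let lo, hi = 1.0, 1000.0

    (* worst-case absolute error: eps * max |x*x| (x*x is monotone here) *)
    let abs_bound = eps *. (hi *. hi)

    (* relative bound derived from the absolute one: divide by min |x*x| *)
    let rel_via_abs = abs_bound /. (lo *. lo)

    (* relative bound computed directly: |x*x*e| / |x*x| <= eps *)
    let rel_direct = eps

    let () =
      Printf.printf "via absolute: %.2e\ndirect:       %.2e\n"
        rel_via_abs rel_direct
      (* prints ~1.11e-10 vs ~1.11e-16: six orders of magnitude apart *)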

    A Semantics for Approximate Program Transformations

    An approximate program transformation is a transformation that can change the semantics of a program within a specified empirical error bound. Such transformations have wide applications: they can decrease computation time, power consumption, and memory usage, and can, in some cases, allow implementations of incomputable operations. Correctness proofs of approximate program transformations are by definition quantitative. Unfortunately, unlike with standard program transformations, there is as of yet no modular way to prove correctness of an approximate transformation itself. Error bounds must be proved for each transformed program individually, and must be re-proved each time a program is modified or a different set of approximations is applied. In this paper, we give a semantics that enables quantitative reasoning about a large class of approximate program transformations in a local, composable way. Our semantics is based on a notion of distance between programs that defines what it means for an approximate transformation to be correct up to an error bound. The key insight is that distances between programs cannot in general be formulated in terms of metric spaces and real numbers. Instead, our semantics admits natural notions of distance for each type construct; for example, numbers are used as distances for numerical data, functions are used as distances for functional data, and polymorphic lambda-terms are used as distances for polymorphic data. We then show how our semantics applies to two example approximations: replacing reals with floating-point numbers, and loop perforation.
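
    The type-indexed notion of distance can be sketched concretely. The OCaml fragment below is a toy rendering of the key idea rather than the paper's formal semantics: at a numeric type a distance is a number, while at a function type a distance is itself a function mapping inputs to distances on outputs. It is shown for first-order float programs together with the loop-perforation example from the abstract; all names are invented for the illustration.

    (* distances are typed: a GADT indexed by the type being compared *)
    type _ dist =
      | DNum : float -> float dist                  (* |a - b| *)
      | DFun : ('a -> 'b dist) -> ('a -> 'b) dist   (* pointwise distance *)

    let dnum (a : float) (b : float) : float dist =
      DNum (Float.abs (a -. b))

    let dfun (f : 'a -> 'b) (g : 'a -> 'b)
             (d : 'b -> 'b -> 'b dist) : ('a -> 'b) dist =
      DFun (fun x -> d (f x) (g x))

    (* exact sum vs. a perforated sum that runs every other iteration,
       rescaled to compensate for the skipped elements *)
    let sum xs = Array.fold_left ( +. ) 0.0 xs

    let perforated_sum xs =
      let acc = ref 0.0 in
      Array.iteri (fun i x -> if i mod 2 = 0 then acc := !acc +. 2.0 *. x) xs;
      !acc

    (* the distance between the two programs is a function of the input,
       not a single real number *)
    let d : (float array -> float) dist = dfun sum perforated_sum dnum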