A Verified Certificate Checker for Finite-Precision Error Bounds in Coq and HOL4
Being able to soundly estimate roundoff errors of finite-precision
computations is important for many applications in embedded systems and
scientific computing. Due to the discrepancy between continuous reals and
discrete finite-precision values, automated static analysis tools are highly
valuable to estimate roundoff errors. The results, however, are only as correct
as the implementations of the static analysis tools. This paper presents a
formally verified and modular tool which fully automatically checks the
correctness of finite-precision roundoff error bounds encoded in a certificate.
We present implementations of certificate generation and checking for both Coq
and HOL4 and evaluate them on a number of examples from the literature. The
experiments use both in-logic evaluation in Coq and HOL4 and execution of
extracted code outside the logics: we benchmark unverified OCaml code extracted
from Coq and a verified binary generated by CakeML.
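To make the certificate idea concrete, here is a minimal sketch (an assumption-laden illustration, not the verified Coq/HOL4 checker): under the standard rounding model fl(a + b) = (a + b)(1 + e) with |e| <= eps, a certificate for a single addition pairs the expression's input ranges with a claimed worst-case absolute roundoff bound, and checking it amounts to recomputing the bound and comparing. The function name and certificate shape below are invented for illustration.

```python
# Toy roundoff-bound certificate check (hypothetical, not the paper's tool).
# Standard model: fl(a + b) = (a + b)(1 + e), |e| <= EPS, so the worst-case
# absolute roundoff error of x + y over input ranges is EPS * max |x + y|.

EPS = 2.0 ** -53  # unit roundoff for IEEE 754 double precision

def check_add_certificate(x_range, y_range, claimed_bound):
    # Interval addition gives the range of the exact sum.
    lo = x_range[0] + y_range[0]
    hi = x_range[1] + y_range[1]
    max_abs = max(abs(lo), abs(hi))
    # The certificate is valid if the recomputed bound is at most the claim.
    return EPS * max_abs <= claimed_bound

print(check_add_certificate((0.0, 100.0), (0.0, 100.0), 3e-14))  # True
print(check_add_certificate((0.0, 100.0), (0.0, 100.0), 1e-17))  # False
```

A real checker composes such rules over whole expression trees and is itself proved correct in the logic; the point here is only that checking a bound is much simpler than computing a tight one.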
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not a good estimate of accuracy as they do not take into account
the magnitude of the computed values. Relative errors, which compute errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to
alleviate the effects of the inherent difficulty of estimating relative errors
for values close to zero.
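A small self-contained sketch (not any tool's algorithm) of why absolute error bounds are a poor proxy for accuracy: for f(x) = x * x evaluated in single precision, the absolute roundoff error grows with the magnitude of the result, while the relative error stays near the unit roundoff (about 6e-8 for float32).

```python
import struct

def to_float32(x):
    """Round a Python float (double) to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def f_float32(x):
    # Evaluate x * x entirely in single precision.
    x32 = to_float32(x)
    return to_float32(x32 * x32)

for x in (0.1, 3.14, 1234.567):
    exact = x * x                       # double precision as the reference
    approx = f_float32(x)
    abs_err = abs(approx - exact)
    rel_err = abs_err / abs(exact)
    print(f"x={x:10g}  abs_err={abs_err:.3e}  rel_err={rel_err:.3e}")
```

The absolute errors span many orders of magnitude across the three inputs, while the relative errors are all of comparable size; this is the sense in which a relative bound computed directly can be far tighter than one derived from a single worst-case absolute bound.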
Loop Unrolling Versus Numerical Accuracy
Floating-point computations are used intensively in various domains, notably in
critical embedded systems. In general, the results of these computations are
perturbed by rounding errors. In a critical scenario, these errors can accumulate
and propagate, causing more or less serious human, material, or financial damage.
It is therefore desirable to obtain the most accurate results possible when using
floating-point arithmetic. To address this problem, the Salsa tool [7] improves
the accuracy of computations by partially correcting these rounding errors
through an automatic source-to-source program transformation. The main
contribution of this work is to analyze and study whether optimization by loop
unrolling further improves the numerical accuracy of the computations in the
initial program. In the short term, we aim to define a loop unrolling factor,
that is, to determine when it is worthwhile to unroll the loop in the program.
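A hedged Python illustration of the phenomenon this abstract studies (not Salsa's transformation): unrolling a summation loop with two independent accumulators changes the floating-point evaluation order, and hence the rounding error. On the adversarial input below, the naive order loses every small term to absorption, while the unrolled order keeps them in a separate accumulator.

```python
def sum_naive(xs):
    s = 0.0
    for x in xs:
        s = s + x
    return s

def sum_unrolled2(xs):
    # Loop unrolled by a factor of 2, with two independent accumulators.
    s0 = s1 = 0.0
    for i in range(0, len(xs) - 1, 2):
        s0 += xs[i]
        s1 += xs[i + 1]
    if len(xs) % 2:
        s0 += xs[-1]
    return s0 + s1

# 0.5 is far below the rounding step (ulp = 2) of doubles near 1e16, so in
# the naive order every 0.5 is absorbed and the exact result 1000.0 is lost.
xs = [0.5, 1e16, 0.5, -1e16] * 1000
print(sum_naive(xs), sum_unrolled2(xs))
```

Here unrolling happens to improve accuracy; the abstract's question of when a given unrolling factor pays off is exactly about identifying such cases.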
Improving the Numerical Accuracy of High Performance Computing Programs by Process Specialization
In high-performance computing, nearly all implementations and published
experiments use floating-point arithmetic. However, since floating-point numbers
are finite approximations of real numbers, the accumulated errors may pose
hazards. These round-off errors may cause damage whose severity depends on how
critical the application is. To deal with this issue, we have developed a tool
which improves the numerical accuracy of computations by automatically
transforming programs in a source-to-source manner. Our transformation relies on
static analysis by abstract interpretation and operates on pieces of code with
assignments, conditionals, loops, functions, and arrays. In this article, we
apply our techniques to optimize a parallel program representative of the
high-performance computing domain. Parallelism introduces new numerical accuracy
problems due to the order of operations in such systems. We are also interested
in studying the compromise between execution time and numerical accuracy.
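The accuracy/performance compromise mentioned above can be illustrated with a classic technique that trades extra work for accuracy (a generic sketch, not the paper's tool): compensated (Kahan) summation does roughly four times the floating-point work of a naive sum per element, but recovers most of the accuracy lost to rounding.

```python
def kahan_sum(xs):
    s = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in xs:
        y = x - c            # corrected next term
        t = s + y
        c = (t - s) - y      # recover what the addition just rounded away
        s = t
    return s

# Summing a small value many times: the naive sum drifts away from 0.01,
# while the compensated sum stays within a few ulps of it.
xs = [1e-8] * 10**6
print(sum(xs), kahan_sum(xs))
```

In a parallel reduction the same trade-off appears per thread: each worker can use a compensated local sum at some extra cost before the final combine.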
Detecting Floating-Point Errors via Atomic Conditions
This paper tackles the important, difficult problem of detecting program inputs that trigger large floating-point errors in numerical code. It introduces a novel, principled dynamic analysis that leverages the rigorously analyzed condition numbers of atomic numerical operations, which we call atomic conditions, to effectively guide the search for large floating-point errors. Compared with existing approaches, our work based on atomic conditions has several distinctive benefits: (1) it does not rely on high-precision implementations to act as approximate oracles, which are difficult to obtain in general and computationally costly; and (2) atomic conditions provide accurate, modular search guidance. In combination, these benefits lead to a highly effective approach that detects more significant errors in real-world code (e.g., widely used numerical library functions) and achieves speedups of several orders of magnitude over the state of the art, thus making error analysis significantly more practical. We expect the methodology and principles behind our approach to benefit other floating-point program analysis tasks such as debugging, repair, and synthesis. To facilitate the reproduction of our work, we have made our implementation, evaluation data, and results publicly available on GitHub at https://github.com/FP-Analysis/atomic-condition
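The notion of a condition number for an atomic operation can be made concrete with a standard example (a generic sketch of the underlying mathematics, not the paper's implementation): for the subtraction a - b, the condition with respect to a is |a| / |a - b|, which blows up when a is close to b, i.e., under catastrophic cancellation. A huge atomic condition flags an input as likely to trigger a large error, with no high-precision oracle needed.

```python
import math

def atomic_cond_sub(a, b):
    # Condition number of a - b with respect to a: relative-error amplification.
    return abs(a) / abs(a - b)

def f_naive(x):
    # (1 - cos x) / x^2: the subtraction cancels catastrophically near x = 0.
    return (1.0 - math.cos(x)) / (x * x)

def f_stable(x):
    # Algebraically equal rewrite 2 * sin(x/2)^2 / x^2 avoids the cancellation.
    s = math.sin(x / 2.0)
    return 2.0 * s * s / (x * x)

x = 1e-6
print("atomic condition of 1 - cos(x):", atomic_cond_sub(1.0, math.cos(x)))
print("naive :", f_naive(x))    # visibly off from the true value near 0.5
print("stable:", f_stable(x))
```

The atomic condition of the subtraction is on the order of 10^12 here, correctly predicting that the naive formulation loses many digits, while the rewritten form stays accurate.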