Detecting Floating-Point Errors via Atomic Conditions
This paper tackles the important, difficult problem of detecting program inputs that trigger large floating-point errors in numerical code. It introduces a novel, principled dynamic analysis that leverages the rigorously analyzed condition numbers of atomic numerical operations, which we call atomic conditions, to effectively guide the search for large floating-point errors. Compared with existing approaches, our work based on atomic conditions has several distinctive benefits: (1) it does not rely on high-precision implementations to act as approximate oracles, which are difficult to obtain in general and computationally costly; and (2) atomic conditions provide accurate, modular search guidance. These benefits in combination lead to a highly effective approach that detects more significant errors in real-world code (e.g., widely used numerical library functions) and achieves speedups of several orders of magnitude over the state of the art, thus making error analysis significantly more practical. We expect the methodology and principles behind our approach to benefit other floating-point program analysis tasks such as debugging, repair, and synthesis. To facilitate the reproduction of our work, we have made our implementation, evaluation data, and results publicly available on GitHub at https://github.com/FP-Analysis/atomic-condition.
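A minimal sketch of the atomic-condition idea (our illustration, not the paper's implementation): for a differentiable operation f, the condition number at input x is |x · f'(x) / f(x)|, and a large value flags an operation that amplifies rounding error. The helper names below (ATOMIC_CONDITIONS, flag_unstable) and the threshold are hypothetical.

```python
import math

# Atomic condition numbers |x * f'(x) / f(x)| for a few atomic operations.
# (Illustrative sketch; the paper derives such formulas analytically per operation.)
ATOMIC_CONDITIONS = {
    "sin": lambda x: abs(x * math.cos(x) / math.sin(x)),   # huge near k*pi
    "log": lambda x: abs(1.0 / math.log(x)),               # huge near x = 1
    "exp": lambda x: abs(x),
    # subtraction z = x - y: condition w.r.t. x is |x / (x - y)|
    "sub": lambda x, y: abs(x / (x - y)),                   # cancellation when x ~ y
}

def flag_unstable(op, *args, threshold=1e6):
    """Evaluate the atomic condition of `op` at `args` and flag large values."""
    cond = ATOMIC_CONDITIONS[op](*args)
    if cond > threshold:
        print(f"{op}{args}: atomic condition {cond:.3e} exceeds {threshold:.0e}")
    return cond

flag_unstable("sin", math.pi - 1e-9)    # sin is ill-conditioned near pi
flag_unstable("sub", 1.0 + 1e-12, 1.0)  # catastrophic cancellation
```

Because each atomic condition depends only on the operation's own inputs, such values can be evaluated and composed along a concrete execution, which is what makes the guidance modular and oracle-free.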
SWATI: Synthesizing Wordlengths Automatically Using Testing and Induction
In this paper, we present SWATI (Synthesizing Wordlengths Automatically Using Testing and Induction), an automated technique that combines Nelder-Mead optimization-based testing with induction from examples to automatically synthesize optimal fixed-point implementations of numerical routines. The design of numerical software is commonly done using floating-point arithmetic in design environments such as MATLAB. However, these designs are often implemented using fixed-point arithmetic for speed and efficiency, especially in embedded systems. A fixed-point implementation reduces implementation cost, provides better performance, and reduces power consumption. The conversion from floating-point designs to fixed-point code is subject to two opposing constraints: (i) the word-width of fixed-point types must be minimized, and (ii) the outputs of the fixed-point program must remain accurate. In this paper, we propose a new solution to this problem. Our technique takes the floating-point program, a specified accuracy, and an implementation cost model, and produces a fixed-point program with the specified accuracy and optimal implementation cost. We demonstrate the effectiveness of our approach on a set of examples from the domains of automated control, robotics, and digital signal processing.
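To make the cost/accuracy tradeoff concrete, here is a minimal sketch (our illustration under simplifying assumptions; SWATI's actual search uses Nelder-Mead-driven testing, induction from examples, and a richer cost model). It quantizes each intermediate of a small routine to a given number of fractional bits and finds the smallest wordlength whose worst-case error over a set of test inputs meets the accuracy target; `fixp`, `fixed_point_poly`, and `min_wordlength` are hypothetical names.

```python
def fixp(x, frac_bits):
    """Round x to a fixed-point value with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def fixed_point_poly(x, frac_bits):
    # Fixed-point evaluation of 0.5*x^2 + 0.25*x, rounding every intermediate.
    x = fixp(x, frac_bits)
    t = fixp(0.5 * x * x, frac_bits)
    return fixp(t + fixp(0.25 * x, frac_bits), frac_bits)

def min_wordlength(inputs, accuracy, max_bits=32):
    """Smallest fractional wordlength meeting `accuracy` on the test inputs."""
    reference = [0.5 * x * x + 0.25 * x for x in inputs]  # floating-point design
    for bits in range(1, max_bits + 1):
        worst = max(abs(fixed_point_poly(x, bits) - r)
                    for x, r in zip(inputs, reference))
        if worst <= accuracy:
            return bits
    return None  # no wordlength up to max_bits meets the accuracy target

inputs = [i / 100.0 for i in range(-100, 101)]  # test inputs on [-1, 1]
print(min_wordlength(inputs, accuracy=1e-3))
```

A real synthesizer replaces the exhaustive sweep and fixed test set with optimization: testing searches for inputs that maximize the error, and induction generalizes from those examples, so that different variables can receive different widths under the cost model.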
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code compute worst-case absolute bounds on numerical errors. These are, however, often not a good estimate of accuracy, as they do not take into account the magnitude of the computed values. Relative errors, which measure errors relative to the value's magnitude, are thus preferable. While today's tools do report relative error bounds, these are merely computed via absolute errors and are thus not necessarily tight or more informative. Furthermore, whenever the computed value is close to zero on part of the domain, the tools do not report any relative error estimate at all. Surprisingly, the quality of relative error bounds computed by today's tools has not been systematically studied or reported to date. In this paper, we investigate how state-of-the-art static techniques for computing sound absolute error bounds can be used, extended, and combined for the computation of relative errors. Our experiments on a standard benchmark set show that computing relative errors directly, as opposed to via absolute errors, is often beneficial and can provide error estimates up to six orders of magnitude tighter, i.e., more accurate. We also show that interval subdivision, another commonly used technique for reducing over-approximations, is less beneficial when computing relative errors directly, but that it can help alleviate the inherent difficulty of estimating relative errors when values are close to zero.
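The relative error of an approximation f̃(x) of f(x) is |f̃(x) − f(x)| / |f(x)|. The following small empirical sketch (our illustration, not the sound static analysis the paper studies) shows why deriving a relative bound from a single absolute bound can be loose: dividing the worst-case absolute error by the smallest magnitude of f on the domain pairs the error at one point with the magnitude at another. The function and domain are assumptions chosen for illustration.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # high-precision reference stands in for the exact value

def f_float(x):
    return x * x       # double-precision computation under analysis

def f_exact(x):
    d = Decimal(x)     # Decimal(float) converts the input exactly
    return d * d

xs = [i / 1000.0 for i in range(1, 1001)]  # domain [0.001, 1.0], f never zero
exact = [f_exact(x) for x in xs]
abs_errs = [abs(Decimal(f_float(x)) - e) for x, e in zip(xs, exact)]

# Relative bound "via absolute errors": worst absolute error over the domain
# divided by the smallest magnitude of f -- sound, but badly decoupled.
via_abs = max(abs_errs) / min(abs(e) for e in exact)
# Relative error computed directly, point by point.
direct = max(a / abs(e) for a, e in zip(abs_errs, exact))

print(f"via absolute errors: {via_abs:.3e}")  # ~1e-10 on this domain
print(f"computed directly:   {direct:.3e}")   # ~1e-16: six orders tighter
```

Note that the division is only defined because f never vanishes on the domain; if f crosses zero, the via-absolute bound degenerates entirely, which is the close-to-zero issue the abstract highlights.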