237 research outputs found
A Logical Product Approach to Zonotope Intersection
We define and study a new abstract domain which is a fine-grained combination
of zonotopes with polyhedric domains such as the interval, octagon, linear
templates or polyhedron domain. While abstract transfer functions are still
rather inexpensive and accurate even for interpreting non-linear computations,
we are able to also interpret tests (i.e. intersections) efficiently. This
fixes a known drawback of zonotopic methods, as used for reachability analysis
for hybrid systems as well as for invariant generation in abstract
interpretation: intersections of zonotopes are not always zonotopes, and there
is not even a best zonotopic over-approximation of the intersection. We
describe some examples and an implementation of our method in the APRON
library, and discuss some further interesting combinations of zonotopes with
non-linear or non-convex domains such as quadratic templates and maxplus
polyhedra.
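As a rough illustration of why zonotopic domains are attractive to begin with, here is a minimal one-variable affine-form sketch in Python. All names are mine and this is not the APRON implementation; it only shows the standard representation x = c + Σᵢ aᵢ·εᵢ with εᵢ ∈ [-1, 1] and its interval concretization:

```python
# Minimal affine-form (zonotope projection) sketch; illustrative only.
class AffineForm:
    def __init__(self, center, coeffs=None):
        self.center = center
        self.coeffs = dict(coeffs or {})  # noise symbol id -> coefficient

    def __add__(self, other):
        coeffs = dict(self.coeffs)
        for i, a in other.coeffs.items():
            coeffs[i] = coeffs.get(i, 0.0) + a
        return AffineForm(self.center + other.center, coeffs)

    def to_interval(self):
        # Concretize: the form projects to [c - r, c + r] with r = sum |a_i|.
        r = sum(abs(a) for a in self.coeffs.values())
        return (self.center - r, self.center + r)

x = AffineForm(1.0, {1: 0.5})            # x in [0.5, 1.5]
y = AffineForm(2.0, {1: -0.5, 2: 1.0})   # y shares noise symbol 1 with x
print((x + y).to_interval())             # shared symbols cancel: (2.0, 4.0)
```

Because x and y share the noise symbol 1, their sum concretizes to [2.0, 4.0], whereas plain interval addition of [0.5, 1.5] and [0.5, 3.5] would only give [1.0, 5.0]. The intersection problem the abstract describes is exactly what this representation does not handle natively.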
Provably Correct Floating-Point Implementation of a Point-In-Polygon Algorithm
The problem of determining whether or not a point lies inside a given polygon occurs in many applications. In air traffic management concepts, a correct solution to the point-in-polygon problem is critical to geofencing systems for Unmanned Aerial Vehicles and in weather avoidance applications. Many mathematical methods can be used to solve the point-in-polygon problem. Unfortunately, a straightforward floating-point implementation of these methods can lead to incorrect results due to round-off errors. In particular, these errors may cause the control flow of the program to diverge with respect to the ideal real-number algorithm. This divergence potentially results in an incorrect point-in-polygon determination even when the point is far from the edges of the polygon. This paper presents a provably correct implementation of a point-in-polygon method that is based on the computation of the winding number. This implementation is mechanically generated from a source-to-source transformation of the ideal real-number specification of the algorithm. The correctness of this implementation is formally verified within the Frama-C analyzer, where the proof obligations are discharged using the Prototype Verification System (PVS).
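A standard winding-number computation can be sketched as follows (an idealized textbook version, not the paper's verified implementation; the paper's point is precisely that naively transcribing such code to floating point can take the wrong branch near an edge):

```python
def winding_number(point, polygon):
    """Winding number of `polygon` (list of (x, y) vertices) around `point`.
    A nonzero result means the point is inside. Idealized real-number
    version; sound only with exact arithmetic."""
    px, py = point
    wn = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Signed area test: > 0 iff the point is left of the directed edge.
        is_left = (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)
        if y1 <= py:
            if y2 > py and is_left > 0:   # upward crossing, point to the left
                wn += 1
        elif y2 <= py and is_left < 0:    # downward crossing, point to the right
            wn -= 1
    return wn

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(winding_number((1, 1), square))  # 1 -> inside
print(winding_number((3, 1), square))  # 0 -> outside
```

The `is_left` sign test is where round-off can flip the control flow, which is the divergence the abstract refers to.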
A Verified Certificate Checker for Finite-Precision Error Bounds in Coq and HOL4
Being able to soundly estimate roundoff errors of finite-precision
computations is important for many applications in embedded systems and
scientific computing. Due to the discrepancy between continuous reals and
discrete finite-precision values, automated static analysis tools are highly
valuable to estimate roundoff errors. The results, however, are only as correct
as the implementations of the static analysis tools. This paper presents a
formally verified and modular tool which fully automatically checks the
correctness of finite-precision roundoff error bounds encoded in a certificate.
We present implementations of certificate generation and checking for both Coq
and HOL4 and evaluate it on a number of examples from the literature. The
experiments use both in-logic evaluation of Coq and HOL4, and execution of
extracted code outside of the logics: we benchmark Coq extracted unverified
OCaml code and a CakeML-generated verified binary.
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not a good estimate of accuracy as they do not take into account
the magnitude of the computed values. Relative errors, which compute errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to
alleviate the effects of the inherent issue of relative error estimates close
to zero.
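The near-zero degeneracy the abstract describes can be seen in a toy calculation. This is illustrative only (the function name is mine, and real tools derive these bounds over symbolic domains): if an absolute round-off bound ABS holds on f over a domain where |f(x)| ≥ m > 0, then ABS / m is a sound relative bound, and it degenerates as m approaches zero:

```python
# Relative error bound derived from an absolute one; illustrative sketch.
def relative_from_absolute(abs_bound, min_abs_value):
    if min_abs_value <= 0.0:
        # The range of f contains (or touches) zero: no finite relative bound.
        return float("inf")
    return abs_bound / min_abs_value

print(relative_from_absolute(1e-12, 0.5))  # 2e-12
print(relative_from_absolute(1e-12, 0.0))  # inf
```

Computing relative errors directly, as the paper investigates, avoids going through the worst-case absolute bound and the pessimistic division by the smallest magnitude in the range.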
Quadratic Zonotopes: An extension of Zonotopes to Quadratic Arithmetics
Affine forms are a common way to represent convex sets using a base of error
terms. Quadratic forms are an extension of affine forms enabling the use of
quadratic error terms.
In static analysis, the zonotope domain, a relational abstract domain based
on affine forms, has been used in a wide range of settings, e.g. set-based
simulation for hybrid systems, or floating-point analysis, providing relational
abstraction of functions with a cost linear in the number of error terms.
In this paper, we propose a quadratic version of zonotopes. We also present a
new algorithm based on semi-definite programming to project a quadratic
zonotope, and therefore quadratic forms, to intervals. All presented material
has been implemented and applied to representative examples.
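For intuition, a quadratic form q = c + Σᵢ aᵢεᵢ + Σᵢⱼ bᵢⱼεᵢεⱼ with εᵢ ∈ [-1, 1] always admits a naive interval projection with radius Σ|aᵢ| + Σ|bᵢⱼ|. The sketch below shows only this coarse sound bound; it is not the semi-definite-programming projection the paper proposes, which is tighter:

```python
# Naive interval projection of a quadratic form; sound but coarse.
# lin maps noise symbol i -> a_i, quad maps (i, j) -> b_ij.
def naive_interval(c, lin, quad):
    # |eps_i| <= 1 and |eps_i * eps_j| <= 1, so each term contributes
    # at most its coefficient's magnitude to the radius.
    r = sum(abs(a) for a in lin.values()) + sum(abs(b) for b in quad.values())
    return (c - r, c + r)

print(naive_interval(1.0, {1: 0.5}, {(1, 1): 0.25}))  # (0.25, 1.75)
```

Even here there is slack: the diagonal term ε₁² actually lies in [0, 1], not [-1, 1], so the true range is [1.0 - 0.5, 1.0 + 0.75]; exploiting such structure systematically is what the SDP-based projection is for.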
Abstract Fixpoint Computations with Numerical Acceleration Methods
Static analysis by abstract interpretation aims at automatically proving
properties of computer programs. To do this, an over-approximation of program
semantics, defined as the least fixpoint of a system of semantic equations,
must be computed. To enforce the convergence of this computation, a widening
operator is used, but it may lead to coarse results. We propose a new method to
accelerate the computation of this fixpoint by using standard techniques of
numerical analysis. Our goal is to automatically and dynamically adapt the
widening operator in order to maintain precision
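The coarseness that motivates the paper can be shown with the textbook interval widening (this illustrates the problem, not the paper's numerical acceleration method; all names are mine):

```python
# Kleene iteration with the standard interval widening: any bound that
# grows is jumped to infinity to force termination.
def widen(old, new):
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

# Abstract one-step semantics of: x = 0; while x < 100: x = x + 1
def step(iv):
    lo, hi = iv
    return (min(0, lo + 1), max(0, hi + 1))

iv = (0, 0)
for _ in range(10):
    nxt = step(iv)
    if nxt == iv:
        break
    iv = widen(iv, nxt)
print(iv)  # (0, inf): widening overshoots the concrete invariant [0, 100]
```

After a single widening the upper bound jumps to +inf, losing the bound 100; adapting the widening dynamically, as the abstract proposes, aims to keep such precision.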
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
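The standard rounding model that such analyses build on is fl(x op y) = (x op y)(1 + d) with |d| ≤ u, the unit roundoff. The sketch below shows only this textbook first-order bound for a single multiplication; it is not PRECiSA's denotational semantics or its certificates, and the function name is mine:

```python
import sys

# Unit roundoff for IEEE 754 binary64 (round-to-nearest): eps / 2.
u = sys.float_info.epsilon / 2

def mul_error_bound(x_max, y_max):
    # First-order absolute round-off bound for one multiplication,
    # given magnitude bounds on the operands.
    return u * x_max * y_max

print(mul_error_bound(10.0, 10.0))  # ~1.11e-14
```

A symbolic analysis composes such per-operation bounds over a whole expression, propagating both the real-valued ranges and the accumulated error terms.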