Workshop on Verification and Theorem Proving for Continuous Systems (NetCA Workshop 2005)
Oxford, UK, 26 August 2005
Quadratic Zonotopes: An extension of Zonotopes to Quadratic Arithmetics
Affine forms are a common way to represent convex sets using
a base of error terms. Quadratic forms are an
extension of affine forms enabling the use of quadratic error terms.
In static analysis, the zonotope domain, a relational abstract domain based
on affine forms, has been used in a wide set of settings, e.g. set-based
simulation for hybrid systems, or floating point analysis, providing relational
abstraction of functions with a cost linear in the number of error terms.
In this paper, we propose a quadratic version of zonotopes. We also present a
new algorithm based on semi-definite programming to project a quadratic
zonotope, and therefore quadratic forms, to intervals. All presented material
has been implemented and applied to representative examples.
Comment: 17 pages, 5 figures, 1 table
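As a rough illustration of the affine forms this abstract builds on (a minimal sketch with hypothetical class names, not the paper's implementation): an affine form x0 + Σ xi·εi with εi ∈ [−1, 1] concretizes to the interval [x0 − Σ|xi|, x0 + Σ|xi|], and addition and scaling stay linear in the number of error terms.

```python
class AffineForm:
    """Illustrative affine form x0 + sum_i xi * eps_i, eps_i in [-1, 1]."""

    def __init__(self, center, coeffs=None):
        self.center = center              # x0
        self.coeffs = dict(coeffs or {})  # error-term index -> coefficient xi

    def __add__(self, other):
        # Addition is exact on affine forms: add centers and matching coefficients.
        coeffs = dict(self.coeffs)
        for i, c in other.coeffs.items():
            coeffs[i] = coeffs.get(i, 0.0) + c
        return AffineForm(self.center + other.center, coeffs)

    def scale(self, k):
        # Scalar multiplication scales the center and every coefficient.
        return AffineForm(k * self.center,
                          {i: k * c for i, c in self.coeffs.items()})

    def to_interval(self):
        # Concretization: [x0 - sum |xi|, x0 + sum |xi|].
        r = sum(abs(c) for c in self.coeffs.values())
        return (self.center - r, self.center + r)


# x in [1, 3] encoded as 2 + 1*eps_1; y in [-1, 1] encoded as 0 + 1*eps_2
x = AffineForm(2.0, {1: 1.0})
y = AffineForm(0.0, {2: 1.0})
z = x + y.scale(0.5)
print(z.to_interval())  # (0.5, 3.5)
```

The quadratic forms of the paper extend this representation with εi·εj terms; projecting those to intervals is what the paper's semi-definite programming algorithm addresses.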
A robust error estimator and a residual-free error indicator for reduced basis methods
The Reduced Basis Method (RBM) is a rigorous model reduction approach for
solving parametrized partial differential equations. It identifies a
low-dimensional subspace for approximation of the parametric solution manifold
that is embedded in high-dimensional space. A reduced order model is
subsequently constructed in this subspace. RBM relies on residual-based error
indicators or {\em a posteriori} error bounds to guide construction of the
reduced solution subspace, to serve as a stopping criterion, and to certify the
resulting surrogate solutions. Unfortunately, it is well-known that the
standard algorithm for residual norm computation suffers from premature
stagnation at the level of the square root of machine precision.
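The stagnation can be reproduced in a few lines. The following toy scalar illustration (with made-up data a = b = 1, not the paper's setting) compares the residual norm |b − ax| computed directly against the expanded quadratic form b² − 2abx + a²x² that offline RBM computations typically rely on; the expanded form cancels catastrophically once the true residual drops below roughly the square root of machine precision.

```python
import math

# Toy illustration of residual-norm stagnation (values are made up):
# compare the direct residual |b - a*x| with the expanded quadratic form.
a, b = 1.0, 1.0
for k in range(2, 12):
    x = 1.0 - 10.0 ** (-k)   # increasingly accurate "reduced" solution
    direct = abs(b - a * x)  # direct residual norm, accurate to machine precision
    # Expanded form b^2 - 2abx + a^2 x^2: the three terms are all O(1),
    # so their sum loses accuracy once the true residual nears sqrt(eps).
    expanded = math.sqrt(max(b * b - 2.0 * a * b * x + a * a * x * x, 0.0))
    print(k, direct, expanded)
```

In double precision the direct residual keeps shrinking as 10⁻ᵏ, while the expanded form becomes meaningless once the residual passes below about 10⁻⁸, which is the stagnation the paper's robust strategy is designed to avoid.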
In this paper, we develop two alternatives to the standard offline phase of
reduced basis algorithms. First, we design a robust strategy for computation of
residual error indicators that allows RBM algorithms to enrich the solution
subspace with accuracy beyond root machine precision. Second, we propose a
new error indicator based on the Lebesgue function in interpolation theory.
This error indicator does not require computation of residual norms, and
instead only requires the ability to compute the RBM solution. This
residual-free indicator is rigorous in that it bounds the error committed by
the RBM approximation, but up to an uncomputable multiplicative constant.
Because of this, the residual-free indicator is effective in choosing snapshots
during the offline RBM phase, but cannot currently be used to certify the error
that the approximation commits. However, it circumvents the need for \textit{a
posteriori} analysis of numerical methods, and therefore can be effective on
problems where such a rigorous estimate is hard to derive.
Efficient Solving of Quantified Inequality Constraints over the Real Numbers
Let a quantified inequality constraint over the reals be a formula in the
first-order predicate language over the structure of the real numbers, where
the allowed predicate symbols are $\leq$ and $<$. Solving such constraints is
an undecidable problem when allowing function symbols such as $\sin$ or $\cos$. In
the paper we give an algorithm that terminates with a solution for all inputs
except for very special, pathological ones. We ensure the practical efficiency
of this algorithm by employing constraint programming techniques.
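To give a flavor of the interval-based pruning that such constraint-programming approaches rest on, here is a minimal sketch (an illustration of the general branch-and-prune idea, not the paper's algorithm): to prove a universally quantified inequality "∀x ∈ [lo, hi]: f(x) > 0", evaluate f in interval arithmetic over the box; if the enclosure is inconclusive, bisect and recurse.

```python
def interval_eval(lo, hi):
    # Naive interval enclosure of the example f(x) = x*x - 2*x + 2 on [lo, hi].
    sq_hi = max(lo * lo, hi * hi)
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return (sq_lo - 2.0 * hi + 2.0, sq_hi - 2.0 * lo + 2.0)


def forall_positive(lo, hi, depth=40):
    # Try to prove "forall x in [lo, hi]: f(x) > 0" by interval bisection.
    f_lo, f_hi = interval_eval(lo, hi)
    if f_lo > 0.0:
        return True          # the enclosure proves positivity on the whole box
    if f_hi <= 0.0 or depth == 0:
        return False         # disproved, or gave up (a pathological input)
    mid = 0.5 * (lo + hi)
    return (forall_positive(lo, mid, depth - 1)
            and forall_positive(mid, hi, depth - 1))


print(forall_positive(0.0, 2.0))    # True: f(x) = (x - 1)**2 + 1 >= 1
print(forall_positive(-1.0, 4.0))   # True on the larger box as well
```

Genuinely quantified constraints with alternating quantifiers require pruning on both the universal and the existential variables, which is where the pathological cases mentioned in the abstract arise.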
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not a good estimate of accuracy as they do not take into account
the magnitude of the computed values. Relative errors, which measure errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to
alleviate the effects of the inherent issue of relative error estimates close
to zero.
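A back-of-the-envelope sketch of why bounds computed via absolute errors can be so loose (the domain and function here are made up for illustration, not taken from the paper's benchmarks): rounding f(x) = x·x incurs a pointwise relative error of about the unit roundoff u, but a relative bound derived from the worst-case absolute error must divide by the smallest magnitude of f on the domain, inflating the bound by the dynamic range of f.

```python
# Illustrative numbers only: relative error bound of f(x) = x*x on [lo, hi],
# computed directly vs. derived from a worst-case absolute error bound.
u = 2.0 ** -53              # double-precision unit roundoff
lo, hi = 0.001, 10.0        # hypothetical input domain

max_abs_err = u * hi * hi   # worst-case absolute rounding error of x*x
min_magnitude = lo * lo     # smallest |f(x)| on the domain

via_absolute = max_abs_err / min_magnitude  # relative bound via absolute errors
direct = u                                  # direct pointwise relative bound

print(via_absolute / direct)  # looseness factor, roughly (hi/lo)**2
```

Here the derived bound is about (hi/lo)² ≈ 10⁸ times looser than the direct one, the same order of over-approximation the abstract reports; and as lo approaches zero the derived bound blows up entirely, which is the near-zero issue the abstract describes.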