Quadratic Zonotopes: An extension of Zonotopes to Quadratic Arithmetics
Affine forms are a common way to represent convex sets using
a base of error terms. Quadratic forms are an
extension of affine forms enabling the use of quadratic error terms.
In static analysis, the zonotope domain, a relational abstract domain based
on affine forms, has been used in a wide range of settings, e.g. set-based
simulation for hybrid systems or floating-point analysis, providing a
relational abstraction of functions at a cost linear in the number of error terms.
In this paper, we propose a quadratic version of zonotopes. We also present a
new algorithm based on semi-definite programming to project a quadratic
zonotope, and therefore quadratic forms, to intervals. All presented material
has been implemented and applied on representative examples. Comment: 17 pages, 5 figures, 1 table
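The projection to intervals mentioned above is, for plain affine forms, a one-line computation; the sketch below (hypothetical helper names, not the paper's implementation) shows the baseline that the paper's semi-definite-programming method refines for quadratic terms.

```python
# An affine form c0 + sum(ci * eps_i) with eps_i in [-1, 1] concretizes to
# the interval [c0 - r, c0 + r] where r = sum(|ci|).  Quadratic zonotopes
# add eps_i * eps_j terms, for which a simple sum of absolute values is
# still sound but coarse; the paper tightens it via SDP.

def affine_to_interval(center, coeffs):
    """Project an affine form onto an interval (sound and exact)."""
    radius = sum(abs(c) for c in coeffs)
    return (center - radius, center + radius)

# Example: the form 2 + e1 - 0.5*e2 concretizes to [0.5, 3.5]
lo, hi = affine_to_interval(2.0, [1.0, -0.5])
```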
A Logical Product Approach to Zonotope Intersection
We define and study a new abstract domain which is a fine-grained combination
of zonotopes with polyhedric domains such as the interval, octagon, linear
templates or polyhedron domain. While abstract transfer functions are still
rather inexpensive and accurate even for interpreting non-linear computations,
we are able to also interpret tests (i.e. intersections) efficiently. This
fixes a known drawback of zonotopic methods, as used for reachability analysis
for hybrid systems as well as for invariant generation in abstract
interpretation: intersections of zonotopes are not always zonotopes, and there
is not even a best zonotopic over-approximation of the intersection. We
describe some examples and an implementation of our method in the APRON
library, and discuss some further interesting combinations of zonotopes with
non-linear or non-convex domains such as quadratic templates and maxplus
polyhedra.
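The drawback the paper addresses can be seen in one dimension: intersecting a zonotope with an interval constraint is easy on the interval view but has no best zonotopic answer. The toy sketch below (illustrative names, not the APRON implementation) tightens only the interval component, which is loosely the "logical product" idea of keeping both views side by side.

```python
# A 1-D zonotope as (center, coeffs) over noise symbols eps_i in [-1, 1],
# and a test "x <= bound" interpreted as a meet with (-inf, bound].

def concretize(center, coeffs):
    """Interval concretization of a 1-D zonotope."""
    r = sum(abs(c) for c in coeffs)
    return (center - r, center + r)

def meet_interval(box, other):
    """Sound intersection of two intervals; None means empty."""
    lo, hi = max(box[0], other[0]), min(box[1], other[1])
    return (lo, hi) if lo <= hi else None

box = concretize(1.0, [2.0])                       # x = 1 + 2*e1 -> [-1, 3]
tight = meet_interval(box, (float("-inf"), 0.5))   # after test x <= 0.5
```

Only the interval component is refined here; the combined domain in the paper also propagates the gained precision back into the zonotopic component.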
Functional sets with typed symbols: Framework and mixed Polynotopes for hybrid nonlinear reachability and filtering
Verification and synthesis of Cyber-Physical Systems (CPS) are challenging
and still raise numerous open issues. In this paper, an original framework
with mixed sets defined as function images of symbol type domains is first
proposed. Syntax and semantics are explicitly distinguished. Then, both
continuous (interval) and discrete (signed, boolean) symbol types are used to
model dependencies through linear and polynomial functions, leading to mixed
zonotopic and polynotopic sets. Polynotopes extend sparse polynomial zonotopes
with typed symbols. Polynotopes can both propagate a mixed encoding of
intervals and describe the behavior of logic gates. A functional completeness
result is given, as well as an inclusion method for elementary nonlinear and
switching functions. A Polynotopic Kalman Filter (PKF) is then proposed as a
hybrid nonlinear extension of Zonotopic Kalman Filters (ZKF). Bridges with a
stochastic uncertainty paradigm are outlined. Finally, several discrete,
continuous and hybrid numerical examples including comparisons illustrate the
effectiveness of the theoretical results. Comment: 21 pages, 8 figures
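The sparse polynomial zonotopes that polynotopes generalize admit a simple interval enclosure, sketched below for one dimension (illustrative code, not the paper's inclusion method): a monomial in symbols from [-1, 1] ranges over [0, 1] when all its exponents are even, and over [-1, 1] otherwise.

```python
def pz_interval(c, gens, exps):
    """Interval enclosure of the 1-D sparse polynomial zonotope
    c + sum_i gens[i] * prod_k alpha_k**exps[i][k], alpha_k in [-1, 1]."""
    lo = hi = c
    for g, e in zip(gens, exps):
        if all(x % 2 == 0 for x in e):
            # all-even monomial ranges over [0, 1]
            lo += min(0.0, g)
            hi += max(0.0, g)
        else:
            # otherwise the monomial ranges over [-1, 1]
            lo -= abs(g)
            hi += abs(g)
    return lo, hi

# x = 1 + 2*a + a**2 with a in [-1, 1]: true range [0, 4],
# sound enclosure [-1, 4]
bounds = pz_interval(1.0, [2.0, 1.0], [[1], [2]])
```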
Robustness Verification of Support Vector Machines
We study the problem of formally verifying the robustness to adversarial
examples of support vector machines (SVMs), a major machine learning model for
classification and regression tasks. Following a recent stream of works on
formal robustness verification of (deep) neural networks, our approach relies
on a sound abstract version of a given SVM classifier to be used for checking
its robustness. This methodology is parametric on a given numerical abstraction
of real values and, analogously to the case of neural networks, needs neither
abstract least upper bounds nor widening operators on this abstraction. The
standard interval domain provides a simple instantiation of our abstraction
technique, which is enhanced with the domain of reduced affine forms, which is
an efficient abstraction of the zonotope abstract domain. This robustness
verification technique has been fully implemented and experimentally evaluated
on SVMs based on linear and nonlinear (polynomial and radial basis function)
kernels, which have been trained on the popular MNIST dataset of images and on
the recent and more challenging Fashion-MNIST dataset. The experimental results
of our prototype SVM robustness verifier appear to be encouraging: this
automated verification is fast, scalable and shows significantly high
percentages of provable robustness on the test set of MNIST, in particular
compared to the analogous provable robustness of neural networks.
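For the simplest instantiation mentioned above, the interval domain, robustness of a linear SVM on an L-infinity ball reduces to bounding the decision score, as in this sketch (illustrative weights and helper names, not the paper's verifier):

```python
def linear_svm_robust(w, b, x, eps):
    """True if sign(w . x + b) is constant on the box [x - eps, x + eps],
    i.e. the interval of possible scores does not straddle zero."""
    lo = hi = b
    for wi, xi in zip(w, x):
        lo += min(wi * (xi - eps), wi * (xi + eps))
        hi += max(wi * (xi - eps), wi * (xi + eps))
    return lo > 0 or hi < 0

# Small perturbations keep the classification; large ones may not.
ok = linear_svm_robust([1.0, -2.0], 0.5, [1.0, 0.0], 0.1)
bad = linear_svm_robust([1.0, -2.0], 0.5, [1.0, 0.0], 2.0)
```

Reduced affine forms, used in the paper, tighten these bounds for nonlinear kernels by tracking correlations between the perturbed inputs.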
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
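The kind of symbolic estimate involved can be illustrated by the standard first-order model of floating-point arithmetic (a textbook bound, not PRECiSA's actual semantics): each operation fl(x op y) equals (x op y)(1 + d) with |d| bounded by the unit roundoff u.

```python
# Unit roundoff for IEEE 754 binary64 (double precision)
U = 2.0 ** -53

def add_error_bound(a, b):
    """First-order round-off error bound for a single addition:
    |fl(a + b) - (a + b)| <= |a + b| * U."""
    return abs(a + b) * U

# Bounding the error of computing 1.0 + 2.0 in binary64
bound = add_error_bound(1.0, 2.0)
```

PRECiSA composes such per-operation bounds symbolically across a whole expression and attaches a proof certificate to the result.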
Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes
We present a novel approach to efficiently compute tight non-convex
enclosures of the image through neural networks with ReLU, sigmoid, or
hyperbolic tangent activation functions. In particular, we abstract the
input-output relation of each neuron by a polynomial approximation, which is
evaluated in a set-based manner using polynomial zonotopes. While our approach
can also be beneficial for open-loop neural network verification, our main
application is reachability analysis of neural network controlled systems,
where polynomial zonotopes are able to capture the non-convexity caused by the
neural network as well as the system dynamics. This results in a superior
performance compared to other methods, as we demonstrate on various benchmarks.
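The crude baseline that polynomial zonotopes refine is plain interval propagation through each activation, sketched below (illustrative helper names; the paper instead fits polynomial approximations and tracks dependencies through the whole network):

```python
import math

def relu_interval(lo, hi):
    """Exact interval image of ReLU(x) = max(0, x)."""
    return (max(0.0, lo), max(0.0, hi))

def tanh_interval(lo, hi):
    """Exact interval image of tanh: monotone, so endpoints suffice."""
    return (math.tanh(lo), math.tanh(hi))

# Per-neuron bounds for an input ranging over [-1, 2]
r = relu_interval(-1.0, 2.0)
```

Propagating only intervals discards correlations between neurons, which is precisely the non-convexity information polynomial zonotopes retain.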
Static Analysis of Programs with Imprecise Probabilistic Inputs
Having a precise yet sound abstraction of the inputs of numerical programs is important to analyze their behavior. For many programs, these inputs are probabilistic, but the actual distribution used is only partially known. We present a static analysis framework for reasoning about programs with inputs given as imprecise probabilities: we define a collecting semantics based on the notion of previsions and an abstract semantics based on an extension of Dempster-Shafer structures. We prove the correctness of our approach and show on some realistic examples the kind of invariants we are able to infer.
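A Dempster-Shafer structure can be pictured as a finite list of focal intervals with probability masses; the toy sketch below (illustrative only, not the paper's abstract semantics) pushes such a structure through a monotone increasing function by mapping each interval's endpoints.

```python
def push_forward(ds, f):
    """Image of a Dempster-Shafer structure, given as a list of
    ((lo, hi), mass) focal elements, under a monotone increasing f."""
    return [((f(lo), f(hi)), m) for (lo, hi), m in ds]

# Two focal intervals with mass 0.5 each, pushed through x -> 2x + 1
ds = [((0.0, 1.0), 0.5), ((1.0, 3.0), 0.5)]
out = push_forward(ds, lambda x: 2 * x + 1)
```

Non-monotone operations and dependencies between inputs are what make the full abstract semantics in the paper substantially more involved.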
The Creation of Puffin, the Automatic Uncertainty Compiler
An uncertainty compiler is a tool that automatically translates original computer source code lacking explicit uncertainty analysis into code containing appropriate uncertainty representations and uncertainty propagation algorithms. We have developed a prototype uncertainty compiler along with an associated object-oriented uncertainty language in the form of a stand-alone Python library. It handles the specifications of input uncertainties and inserts calls to intrusive uncertainty quantification algorithms in the library. The uncertainty compiler can apply intrusive uncertainty propagation methods to codes or parts of codes and therefore more comprehensively and flexibly address both epistemic and aleatory uncertainties.
Towards an automatic uncertainty compiler
An uncertainty compiler is a tool that automatically translates original computer source code lacking explicit uncertainty quantification into code containing appropriate uncertainty representations and uncertainty propagation algorithms. It handles the specifications of input uncertainties, and inserts calls to intrusive uncertainty quantification algorithms. In theory, one could create an uncertainty compiler for any scientific programming language. The uncertainty compiler can apply intrusive uncertainty propagation methods to codes or parts of codes and, therefore, more comprehensively and flexibly address epistemic and aleatory uncertainties. This paper explores the concept and the practicalities of creating such a compiler.
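In effect, the rewrite such a compiler performs amounts to replacing plain numeric types with an uncertainty-carrying type so that the original arithmetic propagates uncertainty unchanged. The sketch below (a toy interval type, not Puffin's library) illustrates the idea with operator overloading:

```python
class Interval:
    """Toy uncertainty representation: a closed interval [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # All endpoint products, since signs may flip the ordering
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# The original statement `y = a * a + b` runs unchanged on Interval inputs.
a, b = Interval(-1.0, 2.0), Interval(0.0, 1.0)
y = a * a + b
```

Note that `a * a` here ignores the dependency between the two occurrences of `a` (the true range of the square is [0, 4], not [-2, 4]); handling such repeated-variable dependencies is one reason intrusive methods and richer representations matter.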