Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Deep neural networks have emerged as a widely used and effective means for
tackling complex, real-world problems. However, a major obstacle in applying
them to safety-critical systems is the great difficulty in providing formal
guarantees about their behavior. We present a novel, scalable, and efficient
technique for verifying properties of deep neural networks (or providing
counter-examples). The technique is based on the simplex method, extended to
handle the non-convex Rectified Linear Unit (ReLU) activation function, which
is a crucial ingredient in many modern neural networks. The verification
procedure tackles neural networks as a whole, without making any simplifying
assumptions. We evaluated our technique on a prototype deep neural network
implementation of the next-generation airborne collision avoidance system for
unmanned aircraft (ACAS Xu). Results show that our technique can successfully
prove properties of networks that are an order of magnitude larger than the
largest networks verified using existing methods.
Comment: This is the extended version of a paper with the same title that
appeared at CAV 2017.
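To make the core idea concrete, the sketch below (Python, using scipy's LP solver) checks an output bound on a toy two-neuron ReLU network by exhaustively splitting each ReLU into its active and inactive phases and solving one LP per phase. This is the brute-force case-splitting baseline that Reluplex performs lazily and at scale, not the Reluplex algorithm itself; the network weights, input box, and property bound are illustrative assumptions.

```python
# A minimal sketch, not the Reluplex algorithm itself: brute-force ReLU case
# splitting over LPs, the idea that Reluplex performs lazily and at scale.
# The network weights, input box, and property bound are illustrative.
import itertools
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, 1.0]); b2 = 0.5
x_box = [(-1.0, 1.0), (-1.0, 1.0)]           # input region
y_bound = 4.0                                # property to prove: y <= 4 on the box

# variable order: v = [x1, x2, z1, z2, a1, a2, y], with z = W1 x + b1, a = relu(z)
n = 7
worst = -np.inf
for phase in itertools.product([0, 1], repeat=2):   # 0: ReLU inactive, 1: active
    A_eq, b_eq, A_ub, b_ub = [], [], [], []
    for i in range(2):
        row = np.zeros(n); row[:2] = -W1[i]; row[2 + i] = 1.0  # z_i = W1[i]·x + b1[i]
        A_eq.append(row); b_eq.append(b1[i])
        row = np.zeros(n)
        if phase[i]:                                 # active: a_i = z_i and z_i >= 0
            row[4 + i] = 1.0; row[2 + i] = -1.0
            A_eq.append(row); b_eq.append(0.0)
            g = np.zeros(n); g[2 + i] = -1.0
            A_ub.append(g); b_ub.append(0.0)
        else:                                        # inactive: a_i = 0 and z_i <= 0
            row[4 + i] = 1.0
            A_eq.append(row); b_eq.append(0.0)
            g = np.zeros(n); g[2 + i] = 1.0
            A_ub.append(g); b_ub.append(0.0)
    row = np.zeros(n); row[4:6] = -w2; row[6] = 1.0  # y = w2·a + b2
    A_eq.append(row); b_eq.append(b2)

    obj = np.zeros(n); obj[6] = -1.0                 # maximize y in this phase
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=x_box + [(None, None)] * 5)
    if res.success:
        worst = max(worst, -res.fun)

print("property holds" if worst <= y_bound else "counterexample with y = %.3f" % worst)
```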
Formal Verification of Input-Output Mappings of Tree Ensembles
Recent advances in machine learning and artificial intelligence are now being
considered in safety-critical autonomous systems where software defects may
cause severe harm to humans and the environment. Design organizations in these
domains are currently unable to provide convincing arguments that their systems
are safe to operate when machine learning algorithms are used to implement
their software.
In this paper, we present an efficient method to extract equivalence classes
from decision trees and tree ensembles, and to formally verify that their
input-output mappings comply with requirements. The idea is that, given that
safety requirements can be traced to desirable properties on system
input-output patterns, we can use positive verification outcomes in safety
arguments. This paper presents the implementation of the method in the tool
VoTE (Verifier of Tree Ensembles), and evaluates its scalability on two case
studies presented in current literature.
We demonstrate that our method is practical for tree ensembles trained on
low-dimensional data with up to 25 decision trees and tree depths of up to 20.
Our work also studies the limitations of the method with high-dimensional data
and preliminarily investigates the trade-off between the number of trees and the
time taken for verification.
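As a concrete illustration of the equivalence-class idea (a sketch, not the VoTE tool itself): every root-to-leaf path in an axis-aligned decision tree defines an input box on which the tree's output is constant, so a requirement on the input-output mapping can be checked box by box. The toy tree and the requirement below are made-up assumptions.

```python
# A sketch of the equivalence-class idea, not the VoTE tool itself: every
# root-to-leaf path of an axis-aligned decision tree defines an input box on
# which the output is constant, so a requirement can be checked box by box.
# The toy tree and the requirement below are made up for illustration.

# node: ("leaf", value) or ("split", feature, threshold, left, right)
tree = ("split", 0, 0.5,
        ("leaf", 0.0),
        ("split", 1, 0.3, ("leaf", 0.2), ("leaf", 1.4)))

def leaf_boxes(node, box):
    """Yield (input_box, output_value) pairs, one equivalence class per leaf."""
    if node[0] == "leaf":
        yield box, node[1]
        return
    _, feat, thr, left, right = node
    lo, hi = box[feat]
    if lo < thr:                                  # left branch: x[feat] < thr
        lbox = dict(box); lbox[feat] = (lo, min(hi, thr))
        yield from leaf_boxes(left, lbox)
    if hi >= thr:                                 # right branch: x[feat] >= thr
        rbox = dict(box); rbox[feat] = (max(lo, thr), hi)
        yield from leaf_boxes(right, rbox)

domain = {0: (0.0, 1.0), 1: (0.0, 1.0)}
# example requirement: the output stays below 1.0 everywhere on the domain
violations = [(b, v) for b, v in leaf_boxes(tree, domain) if v >= 1.0]
print("requirement holds" if not violations else "violated on: %s" % violations)
```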
Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems
As neural networks (NNs) become more prevalent in safety-critical
applications such as control of vehicles, there is a growing need to certify
that systems with NN components are safe. This paper presents a set of backward
reachability approaches for safety certification of neural feedback loops
(NFLs), i.e., closed-loop systems with NN control policies. While backward
reachability strategies have been developed for systems without NN components,
the nonlinearities in NN activation functions and general noninvertibility of
NN weight matrices make backward reachability for NFLs a challenging problem.
To avoid the difficulties associated with propagating sets backward through
NNs, we introduce a framework that leverages standard forward NN analysis tools
to efficiently find over-approximations to backprojection (BP) sets, i.e., sets
of states for which an NN policy will lead a system to a given target set. We
present frameworks for calculating BP over-approximations for both linear and
nonlinear systems with control policies represented by feedforward NNs and
propose computationally efficient strategies. We use numerical results from a
variety of models to showcase the proposed algorithms, including a
demonstration of safety certification for a 6D system.
Comment: 17 pages, 15 figures. Journal extension of arXiv:2204.0831
A Review of Formal Methods applied to Machine Learning
We review state-of-the-art formal methods applied to the emerging field of
the verification of machine learning systems. Formal methods can provide
rigorous correctness guarantees on hardware and software systems. Thanks to the
availability of mature tools, their use is well established in industry,
in particular for checking safety-critical applications that undergo a
stringent certification process. As machine learning becomes more popular,
machine-learned components are now considered for inclusion in critical
systems. This raises the question of their safety and their verification. Yet,
established formal methods are limited to classic, i.e., non-machine-learned
software. Applying formal methods to verify systems that include machine
learning has only been considered recently and poses novel challenges in
soundness, precision, and scalability.
We first recall established formal methods and their current use in an
exemplar safety-critical field, avionic software, with a focus on abstract
interpretation based techniques as they provide a high level of scalability.
This provides a gold standard and sets high expectations for machine learning
verification. We then provide a comprehensive and detailed review of the formal
methods developed so far for machine learning, highlighting their strengths and
limitations. The large majority of them verify trained neural networks and
employ either SMT, optimization, or abstract interpretation techniques. We also
discuss methods for support vector machines and decision tree ensembles, as
well as methods targeting training and data preparation, which are critical but
often neglected aspects of machine learning. Finally, we offer perspectives for
future research directions towards the formal verification of machine learning
systems.
Backward Reachability Analysis for Neural Feedback Loops
The increasing prevalence of neural networks (NNs) in safety-critical
applications calls for methods to certify their behavior and guarantee safety.
This paper presents a backward reachability approach for safety verification of
neural feedback loops (NFLs), i.e., closed-loop systems with NN control
policies. While recent works have focused on forward reachability as a strategy
for safety certification of NFLs, backward reachability offers advantages over
the forward strategy, particularly in obstacle avoidance scenarios. Prior works
have developed techniques for backward reachability analysis for systems
without NNs, but the presence of NNs in the feedback loop presents a unique set
of problems due to the nonlinearities in their activation functions and because
NN models are generally not invertible. To overcome these challenges, we use
existing forward NN analysis tools to find affine bounds on the control inputs
and solve a series of linear programs (LPs) to efficiently find an
approximation of the backprojection (BP) set, i.e., the set of states for which
the NN control policy will drive the system to a given target set. We present
an algorithm to iteratively find BP set estimates over a given time horizon and
demonstrate the ability to reduce conservativeness in the BP set estimates by
up to 88% with low additional computational cost. We use numerical results from
a double integrator model to verify the efficacy of these algorithms and
demonstrate the ability to certify safety for a linearized ground robot model
in a collision avoidance scenario where forward reachability fails.
Comment: 8 pages, 5 figures
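A simplified sketch of the LP step described above, under stated assumptions: linear double-integrator dynamics, a box target set, and a constant interval bound on the control input standing in for the affine bounds that a forward NN analysis tool would provide. Each LP bounds one coordinate of the one-step backprojection set; the dynamics, sets, and bounds are illustrative, not taken from the paper.

```python
# A simplified sketch of the LP step, not the paper's full algorithm: linear
# double-integrator dynamics, a box target set, and a constant interval bound
# on the control (standing in for affine bounds from a forward NN analysis
# tool). Each LP bounds one coordinate of the one-step backprojection set.
import numpy as np
from scipy.optimize import linprog

dt = 0.25
A = np.array([[1.0, dt], [0.0, 1.0]])        # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
C = np.vstack([np.eye(2), -np.eye(2)])       # target set: |x1| <= 0.5, |x2| <= 0.5
d = np.full(4, 0.5)
u_lo, u_hi = -1.0, 1.0                       # assumed bounds on the NN policy output

# decision variables v = [x1, x2, u]; feasibility means some admissible u
# drives A x + B u into the target set, i.e. C (A x + B u) <= d
A_ub = np.hstack([C @ A, C @ B])
bounds = [(-10.0, 10.0), (-10.0, 10.0), (u_lo, u_hi)]

bp_box = []
for i in range(2):                           # bound each state coordinate
    lohi = []
    for sign in (+1.0, -1.0):                # minimize, then maximize x_i
        obj = np.zeros(3); obj[i] = sign
        res = linprog(obj, A_ub=A_ub, b_ub=d, bounds=bounds)
        lohi.append(sign * res.fun)
    bp_box.append(tuple(lohi))
print("one-step BP over-approximation (box):", bp_box)
```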
Provable Preimage Under-Approximation for Neural Networks (Full Version)
Neural network verification mainly focuses on local robustness properties,
which can be checked by bounding the image (set of outputs) of a given input
set. However, often it is important to know whether a given property holds
globally for the input domain, and if not then for what proportion of the input
the property is true. To analyze such properties requires computing preimage
abstractions of neural networks. In this work, we propose an efficient anytime
algorithm for generating symbolic under-approximations of the preimage of any
polyhedron output set for neural networks. Our algorithm combines a novel
technique for cheaply computing polytope preimage under-approximations using
linear relaxation, with a carefully-designed refinement procedure that
iteratively partitions the input region into subregions using input and ReLU
splitting in order to improve the approximation. Empirically, we validate the
efficacy of our method across a range of domains, including a high-dimensional
MNIST classification task beyond the reach of existing preimage computation
methods. Finally, as use cases, we showcase the application to quantitative
verification and robustness analysis. We present a sound and complete algorithm
for the former, which exploits our disjoint union of polytopes representation
to provide formal guarantees. For the latter, we find that our method can
provide useful quantitative information even when standard verifiers cannot
verify a robustness property.
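To illustrate the linear-relaxation step only (a sketch, without the paper's anytime refinement via input and ReLU splitting): a CROWN-style affine upper bound on the output of a tiny ReLU network over an input box yields a half-space whose intersection with the box is a guaranteed under-approximation of the preimage of the output set {y <= b_out}. The weights and the output threshold are made up.

```python
# A sketch of the linear-relaxation step only, without the paper's anytime
# refinement via input and ReLU splitting: a CROWN-style affine upper bound on
# the output of a tiny ReLU network over an input box gives a half-space whose
# intersection with the box under-approximates the preimage of {y <= b_out}.
# The weights and the output threshold below are made up.
import numpy as np

W1 = np.array([[1.0, -0.5], [0.5, 1.0]]); b1 = np.array([0.1, -0.2])
W2 = np.array([1.0, -1.0]); b2 = 0.0
x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
b_out = 0.5                                   # output set: y <= 0.5

# 1) interval bounds on the pre-activations z = W1 x + b1 over the box
W1p, W1n = np.maximum(W1, 0), np.minimum(W1, 0)
l = W1p @ x_lo + W1n @ x_hi + b1
u = W1p @ x_hi + W1n @ x_lo + b1

# 2) per-neuron linear relaxation of ReLU: sl*z + tl <= relu(z) <= su*z + tu
su = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, u / (u - l)))
tu = np.where((l < 0) & (u > 0), -su * l, 0.0)
sl = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, (u >= -l).astype(float)))
tl = np.zeros_like(l)

# 3) affine upper bound on y = W2·relu(z) + b2: take the upper relaxation where
#    the coefficient in W2 is non-negative, the lower relaxation otherwise
s = np.where(W2 >= 0, su, sl)
t = np.where(W2 >= 0, tu, tl)
g = (W2 * s) @ W1                             # y <= g·x + h for every x in the box
h = (W2 * s) @ b1 + W2 @ t + b2

# guaranteed preimage under-approximation: {x in the box : g·x <= b_out - h}
print("half-space %s · x <= %.3f, intersected with the input box" % (g, b_out - h))
```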