Forward Invariance in Neural Network Controlled Systems
We present a framework based on interval analysis and monotone systems theory
to certify and search for forward invariant sets in nonlinear systems with
neural network controllers. The framework (i) constructs localized first-order
inclusion functions for the closed-loop system using Jacobian bounds and
existing neural network verification tools; (ii) builds a dynamical embedding
system whose evaluation along a single trajectory corresponds directly to
a nested family of hyper-rectangles provably converging to an attractive set of
the original system; (iii) utilizes linear transformations to build families of
nested paralleletopes with the same properties. The framework is automated in
Python using our interval analysis toolbox, in conjunction with the symbolic
arithmetic toolbox, and is demonstrated on an -dimensional leader-follower
system.
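For a scalar toy system, the embedding-system idea from this abstract can be sketched in a few lines of plain Python. This is illustrative only: the dynamics x' = -x + 0.5·tanh(x) and every name below are our own stand-ins, not the paper's toolbox.

```python
import math

def f(x):
    # Toy closed-loop dynamics: a stable linear plant with a tanh
    # "controller" term standing in for a neural network.
    return -x + 0.5 * math.tanh(x)

def embed_step(lo, hi, dt=0.05):
    """One Euler step of the embedding system for x' = f(x).
    For a scalar ODE trajectories cannot cross, so propagating the two
    endpoints bounds every trajectory started inside [lo, hi]; iterating
    yields a nested family of intervals converging to an attractive set."""
    return lo + dt * f(lo), hi + dt * f(hi)

lo, hi = -2.0, 2.0
for _ in range(200):
    new_lo, new_hi = embed_step(lo, hi)
    assert lo <= new_lo <= new_hi <= hi  # each interval is nested in the last
    lo, hi = new_lo, new_hi
print(lo, hi)  # a small interval around the equilibrium at 0
```

In higher dimensions the endpoints no longer suffice; the paper's contribution is precisely the machinery (Jacobian bounds, inclusion functions from neural network verifiers, mixed-monotone embeddings) that recovers this single-trajectory picture for vector fields with neural network controllers.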
Verisig: verifying safety properties of hybrid systems with neural network controllers
This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers. We focus on sigmoid-based networks and exploit the fact that the sigmoid is the solution to a quadratic differential equation, which allows us to transform the neural network into an equivalent hybrid system. By composing the network's hybrid system with the plant's, we transform the problem into a hybrid system verification problem which can be solved using state-of-the-art reachability tools. We show that reachability is decidable for networks with one hidden layer, and decidable for general networks if Schanuel's conjecture is true. We evaluate the applicability and scalability of Verisig in two case studies, one from reinforcement learning and one in which the neural network is used to approximate a model predictive controller.
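The observation at the heart of Verisig is easy to check numerically: the sigmoid satisfies the quadratic ODE σ'(x) = σ(x)(1 − σ(x)), so a sigmoid layer can be encoded as the flow of a hybrid system. A plain-Python sketch (not Verisig's code) integrating that ODE and comparing against the closed form:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Integrate s' = s * (1 - s) from x = -6 with small Euler steps; the
# trajectory should track the closed-form sigmoid across the interval.
x, s, dx = -6.0, sigmoid(-6.0), 1e-4
while x < 6.0:
    s += dx * s * (1.0 - s)
    x += dx
print(abs(s - sigmoid(6.0)) < 1e-3)
```

Because the right-hand side is polynomial in the state, the composed network-plus-plant system stays within the class that tools for polynomial hybrid systems can analyze.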
ARCH-COMP22 category report: Artificial intelligence and neural network control systems (AINNCS) for continuous and hybrid systems plants
This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, machine learning (ML) components in cyber-physical systems (CPS), such as feedforward neural networks used as feedback controllers in closed-loop systems, are considered, which is a class of systems classically known as intelligent control systems, or in more modern and specific terms, neural network control systems (NNCS). We more broadly refer to this category as AI and NNCS (AINNCS). The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2022. In the fourth edition of this AINNCS category at ARCH-COMP, four tools have been applied to solve 10 different benchmark problems. There are two new participants: CORA and POLAR, and two previous participants: JuliaReach and NNV. The goal of this report is to be a snapshot of the current landscape of tools and the types of benchmarks for which these tools are suited. The results of this iteration significantly outperform those of any previous year, demonstrating the continuous advancement of this community in the past decade.
Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models
Matlab/Simulink is a development and simulation language that is widely used
by the Cyber-Physical System (CPS) industry to model dynamical systems. There
are two mainstream approaches to verify CPS Simulink models: model testing that
attempts to identify failures in models by executing them for a number of
sampled test inputs, and model checking that attempts to exhaustively check the
correctness of models against some given formal properties. In this paper, we
present an industrial Simulink model benchmark, provide a categorization of
different model types in the benchmark, describe the recurring logical patterns
in the model requirements, and discuss the results of applying model checking
and model testing approaches to identify requirements violations in the
benchmarked models. Based on the results, we discuss the strengths and
weaknesses of model testing and model checking. Our results further suggest
that model checking and model testing are complementary and by combining them,
we can significantly enhance the capabilities of each of these approaches
individually. We conclude by providing guidelines as to how the two approaches
can be best applied together.
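The testing-vs-checking contrast described above can be made concrete with a deliberately tiny toy. The "model", its requirement, and the thresholds below are hypothetical, not drawn from the benchmark:

```python
import random

def step(state, inp):
    # Hypothetical Simulink-like update: accumulate the input, saturate at 0.
    return max(0, state + inp)

def violates(state):
    # Hypothetical requirement: the state must never exceed 18.
    return state > 18

def model_testing(trials=200, horizon=20, seed=0):
    # Sample random input sequences and look for a failing execution.
    rng = random.Random(seed)
    for _ in range(trials):
        state = 0
        for _ in range(horizon):
            state = step(state, rng.choice([-1, 0, 1]))
            if violates(state):
                return True  # found a concrete failure
    return False  # no failure among sampled runs -- not a proof

def model_checking(horizon=20):
    # Exhaustively explore every reachable state up to the horizon.
    frontier = {0}
    for _ in range(horizon):
        frontier = {step(s, i) for s in frontier for i in (-1, 0, 1)}
        if any(violates(s) for s in frontier):
            return True  # violation is reachable
    return False  # exhaustive within the horizon: a bounded proof

print(model_testing(), model_checking())
```

Reaching the violation requires 19 consecutive +1 inputs, so random testing almost surely misses it while exhaustive exploration finds it; the complementary strengths and weaknesses the paper measures on industrial models follow the same pattern at scale.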