Formal Verification of Neural Network Controlled Autonomous Systems
In this paper, we consider the problem of formally verifying the safety of an
autonomous robot equipped with a Neural Network (NN) controller that processes
LiDAR images to produce control actions. Given a workspace that is
characterized by a set of polytopic obstacles, our objective is to compute the
set of safe initial conditions such that a robot trajectory starting from these
initial conditions is guaranteed to avoid the obstacles. Our approach is to
construct a finite state abstraction of the system and use standard
reachability analysis over the finite state abstraction to compute the set of
the safe initial states. The first technical problem in computing the finite
state abstraction is to mathematically model the imaging function that maps the
robot position to the LiDAR image. To that end, we introduce the notion of
imaging-adapted sets as partitions of the workspace in which the imaging
function is guaranteed to be affine. We develop a polynomial-time algorithm to
partition the workspace into imaging-adapted sets along with computing the
corresponding affine imaging functions. Given this workspace partitioning, a
discrete-time linear dynamics of the robot, and a pre-trained NN controller
with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge
is to analyze the behavior of the neural network. To that end, we utilize a
Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible
segments of different ReLUs. SMC solvers combine a Boolean satisfiability
solver with a convex programming solver to decompose the problem into smaller
subproblems. To accelerate this process, we develop a pre-processing algorithm
that rapidly prunes the space of feasible ReLU segments. Finally, we
demonstrate the efficiency of the proposed algorithms using numerical
simulations with increasing complexity of the neural network controller.
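As a rough illustration of the enumerate-and-prune idea described above, the sketch below lists the activation patterns of a single ReLU layer over a polytopic input set and discards infeasible patterns with a linear-programming feasibility check. This is a minimal sketch under stated assumptions, not the paper's SMC implementation: the function name `enumerate_feasible_patterns` and the use of scipy's LP solver are illustrative choices.

```python
# Sketch: enumerate ReLU activation patterns over a polytope {x : A x <= c},
# pruning patterns with no feasible input via an LP feasibility check.
# Illustrative only; not the paper's SMC encoding.
import itertools
import numpy as np
from scipy.optimize import linprog

def enumerate_feasible_patterns(W, b, A, c):
    """For one ReLU layer y = relu(W x + b), return activation patterns
    realizable by some x in the polytope {x : A x <= c}."""
    n_neurons, n_inputs = W.shape
    feasible = []
    for pattern in itertools.product([0, 1], repeat=n_neurons):
        # Active neuron i requires W_i x + b_i >= 0; inactive requires
        # W_i x + b_i <= 0. Both are linear, so stack them with A x <= c.
        rows, rhs = [A], [c]
        for i, active in enumerate(pattern):
            sign = -1.0 if active else 1.0       # flip active case to <= form
            rows.append(sign * W[i:i + 1])
            rhs.append(np.array([-sign * b[i]]))
        # Feasibility LP: minimize 0 subject to the stacked constraints.
        res = linprog(c=np.zeros(n_inputs),
                      A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                      bounds=[(None, None)] * n_inputs, method="highs")
        if res.status == 0:                      # status 0 = feasible/optimal
            feasible.append(pattern)
    return feasible

# Tiny example: 2 neurons over the box [-1, 1]^2; all 4 patterns survive.
W = np.eye(2)
b = np.zeros(2)
A = np.vstack([np.eye(2), -np.eye(2)])
c = np.ones(4)
print(enumerate_feasible_patterns(W, b, A, c))
```

For deeper networks the same check applies layer by layer, which is where pruning pays off: every pattern eliminated here removes an entire subtree of downstream patterns.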
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components
Cyber-physical systems (CPS), such as automotive systems, are starting to
include sophisticated machine learning (ML) components. Their correctness,
therefore, depends on properties of the inner ML modules. While learning
algorithms aim to generalize from examples, they are only as good as the
examples provided, and recent efforts have shown that they can produce
inconsistent output under small adversarial perturbations. This raises the
question: can the output from learning components lead to a failure of the
entire CPS? In this work, we address this question by formulating it as a
problem of falsifying signal temporal logic (STL) specifications for CPS with
ML components. We propose a compositional falsification framework where a
temporal logic falsifier and a machine learning analyzer cooperate with the aim
of finding falsifying executions of the considered model. The efficacy of the
proposed technique is shown on an automatic emergency braking system model with
a perception component based on deep neural networks.
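To make the falsification loop concrete, here is a minimal sketch of the temporal-logic-falsifier side: minimizing the robustness of an STL safety specification G (gap > d_safe) over a toy braking model. The `simulate` dynamics, the perception-induced braking delay, and all parameter ranges are hypothetical stand-ins, not the paper's automatic emergency braking model or its compositional framework.

```python
# Sketch: STL falsification by robustness minimization over random samples.
# Spec: G (gap > D_SAFE); negative robustness means the spec is falsified.
import numpy as np

D_SAFE = 2.0  # assumed safety margin in meters (illustrative)

def simulate(v0, gap0, brake_delay, dt=0.1, horizon=100):
    """Toy model: constant speed until perception triggers braking after
    `brake_delay` seconds; returns the trace of the gap to the obstacle."""
    gap, v, trace = gap0, v0, []
    for k in range(horizon):
        a = -6.0 if k * dt >= brake_delay else 0.0   # brake at 6 m/s^2
        v = max(v + a * dt, 0.0)
        gap -= v * dt
        trace.append(gap)
    return np.array(trace)

def robustness(trace):
    """Robustness of G (gap > D_SAFE): worst-case margin over the trace."""
    return float(np.min(trace - D_SAFE))

# Falsifier loop: sample initial conditions, keep the most violating one.
rng = np.random.default_rng(0)
worst_rho, worst_input = np.inf, None
for _ in range(200):
    v0 = rng.uniform(10.0, 30.0)      # initial speed, m/s
    gap0 = rng.uniform(20.0, 60.0)    # initial gap, m
    delay = rng.uniform(0.1, 1.5)     # perception-induced braking delay, s
    rho = robustness(simulate(v0, gap0, delay))
    if rho < worst_rho:
        worst_rho, worst_input = rho, (v0, gap0, delay)
print("worst robustness:", worst_rho, "at", worst_input)
```

In the compositional setting, the ML analyzer would steer the sampled perception behavior (here crudely abstracted as a delay) toward regions where the network is known to misclassify, rather than sampling uniformly.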
Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1
This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance-overview and issues, Resilience and Safety Requirements, Open Systems Perspective, and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.