
    Output Reachable Set Estimation and Verification for Multi-Layer Neural Networks

    In this paper, the output reachable set estimation and safety verification problems for multi-layer perceptron neural networks are addressed. First, a concept called maximum sensitivity is introduced and, for a class of multi-layer perceptrons whose activation functions are monotonic, the maximum sensitivity can be computed by solving convex optimization problems. Then, using a simulation-based method, the output reachable set estimation problem for neural networks is formulated as a chain of optimization problems. Finally, an automated safety verification is developed based on the output reachable set estimation result. An application to the safety verification of a robotic arm model with two joints is presented to show the effectiveness of the proposed approaches.
    Comment: 8 pages, 9 figures, to appear in TNNLS
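    As a rough illustration of the reachability idea described above (not the paper's maximum-sensitivity algorithm; the network, box bounds, and helper names below are hypothetical), the following sketch over-approximates the output set of an MLP with monotonic activations by interval propagation: monotonicity is what lets the interval endpoints bound each layer's image.

```python
# A minimal sketch, assuming monotonic elementwise activations (e.g. tanh).
# Interval arithmetic over-approximates the output reachable set of an MLP
# on an input hyper-rectangle; this is an illustration, not the paper's
# convex-optimization-based maximum-sensitivity computation.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b (interval arithmetic)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def reachable_box(layers, lo, hi, act=np.tanh):
    """Over-approximate the MLP's output set on the input box [lo, hi].
    `layers` is a list of (W, b) pairs; `act` must be elementwise monotonic.
    For simplicity this sketch applies `act` after every layer."""
    for W, b in layers:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = act(lo), act(hi)  # monotonicity: endpoints bound the image
    return lo, hi

# Example: a random 2-2-1 network on the input box [-0.1, 0.1]^2.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(2, 2)), np.zeros(2)),
          (rng.normal(size=(1, 2)), np.zeros(1))]
print(reachable_box(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1])))
```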

    LSTM Neural Networks: Input to State Stability and Probabilistic Safety Verification

    The goal of this paper is to analyze Long Short-Term Memory (LSTM) neural networks from a dynamical-system perspective. The classical recursive equations describing the evolution of an LSTM can be recast in state-space form, resulting in a time-invariant nonlinear dynamical system. A sufficient condition guaranteeing the Input-to-State Stability (ISS) of this class of systems is provided. The ISS property entails the boundedness of the output reachable set of the LSTM. In light of this result, a novel approach to the safety verification of the network, based on the scenario approach, is devised. The proposed method is finally tested on a pH neutralization process.
    Comment: Accepted for Learning for Dynamics & Control (L4DC) 2020
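    As a rough sketch of the two ingredients above (the gate parametrization, input distribution, safe set, and horizon below are assumptions, not the paper's exact formulation), the snippet writes the LSTM recursion in state-space form x+ = f(x, u) and runs a scenario-style probabilistic check by sampling input sequences and testing whether every sampled output lands in the safe set.

```python
# A minimal sketch of scenario-style probabilistic verification. With N i.i.d.
# sampled scenarios and zero observed violations, the standard scenario bound
# says the probability that the true violation probability exceeds eps is at
# most (1 - eps)**N. Setup (weights, input range, safe set) is hypothetical.
import numpy as np

def lstm_step(x, u, p):
    """One LSTM step viewed as a nonlinear state-space system x+ = f(x, u).
    State x = (c, h); p holds the gate weight matrices and biases."""
    c, h = x
    z = np.concatenate([u, h])
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    f = sig(p["Wf"] @ z + p["bf"])   # forget gate
    i = sig(p["Wi"] @ z + p["bi"])   # input gate
    o = sig(p["Wo"] @ z + p["bo"])   # output gate
    c = f * c + i * np.tanh(p["Wc"] @ z + p["bc"])
    return (c, o * np.tanh(c))

def scenario_verify(p, n_state, n_in, T=50, N=1000,
                    safe=lambda h: np.all(np.abs(h) < 2.0)):
    """Sample N input scenarios of length T; report whether all stay safe."""
    rng = np.random.default_rng(1)
    for _ in range(N):
        x = (np.zeros(n_state), np.zeros(n_state))
        for _ in range(T):
            x = lstm_step(x, rng.uniform(-1.0, 1.0, n_in), p)
        if not safe(x[1]):
            return False
    return True

# Usage with small random weights (hypothetical network).
n_in, n_state = 1, 4
rng0 = np.random.default_rng(0)
p = {k: 0.5 * rng0.normal(size=(n_state, n_in + n_state))
     for k in ["Wf", "Wi", "Wo", "Wc"]}
p.update({k: np.zeros(n_state) for k in ["bf", "bi", "bo", "bc"]})
print(scenario_verify(p, n_state, n_in))
```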

    Safety Verification for Neural Networks Based on Set-boundary Analysis

    Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles. However, they are fragile and often ill-behaved, so their behavior should be rigorously guaranteed before deployment in practice. In this paper we propose a set-boundary reachability method that investigates the safety verification problem for NNs from a topological perspective. Given an NN with an input set and a safe set, the safety verification problem is to determine whether all outputs of the NN resulting from the input set fall within the safe set. Our method mainly exploits the homeomorphism property of NNs, which maps the boundary of a set to the boundary of its image. Exploiting this property allows reachability computations to operate on extracted subsets of the input set rather than the entire input set, thus controlling the wrapping effect in reachability analysis and reducing the computational burden of safety verification. The homeomorphism property holds for some widely used NNs, such as invertible NNs; notable representatives are invertible residual networks (i-ResNets) and neural ordinary differential equations (Neural ODEs). For these NNs, our set-boundary reachability method only needs to perform reachability analysis on the boundary of the input set. For NNs that do not feature this property with respect to the input set, we explore subsets of the input set on which the local homeomorphism property can be established, and then exclude these subsets from the reachability computations. Finally, some examples demonstrate the performance of the proposed method.
    Comment: 19 pages, 7 figures
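    As a toy illustration of the boundary-to-boundary idea (a hypothetical two-dimensional example, not the paper's algorithm), the sketch below builds an i-ResNet-style block x + r(x) with Lip(r) < 1, which is guaranteed invertible and hence a homeomorphism, maps only the boundary of an input box through it, and checks that the images of interior points stay within the per-coordinate bounds obtained from the boundary image alone.

```python
# A minimal sketch: for a homeomorphism g, the coordinate extrema of g(box)
# are attained on g(boundary of box), so bounding the boundary image bounds
# the whole output set. The residual block and box are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 2))
W *= 0.5 / np.linalg.norm(W, 2)   # spectral norm 0.5 => Lip(r) < 1

def g(x):
    """Invertible residual block x + tanh(x @ W.T); Lip of the residual < 1."""
    return x + np.tanh(x @ W.T)

# Densely sample the boundary of the input box [-1, 1]^2.
t = np.linspace(-1.0, 1.0, 500)
boundary = np.concatenate([
    np.stack([t, np.full_like(t, -1.0)], 1),
    np.stack([t, np.full_like(t, 1.0)], 1),
    np.stack([np.full_like(t, -1.0), t], 1),
    np.stack([np.full_like(t, 1.0), t], 1)])
lo, hi = g(boundary).min(0), g(boundary).max(0)

# Interior points map inside the bounds derived from the boundary alone.
interior = rng.uniform(-1.0, 1.0, size=(2000, 2))
img = g(interior)
print(np.all(img >= lo - 1e-9) and np.all(img <= hi + 1e-9))  # True
```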