Assume-Guarantee Testing
Verification techniques for component-based systems should ideally be able to predict properties of the assembled system through analysis of individual components before assembly. This work introduces such a modular technique in the context of testing. Assume-guarantee testing relies on the (automated) decomposition of key system-level requirements into local component requirements at design time. Developers can verify the local requirements by checking components in isolation; failed checks may indicate violations of system requirements, while valid traces from different components compose via the assume-guarantee proof rule to potentially provide system coverage. These local requirements also form the foundation of a technique for efficient predictive testing of assembled systems: given a correct system run, this technique can predict violations by alternative system runs without constructing those runs. We discuss the application of our approach to testing a multi-threaded NASA application, where we treat threads as components.
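The assume-guarantee proof rule this abstract refers to is, in its classical non-circular form, the following (a standard formulation, not copied from the paper): if component M_1 guarantees property P under assumption A, and component M_2 discharges A unconditionally, then the composed system satisfies P.

```latex
% Classical (non-circular) assume-guarantee proof rule.
% <A> M <P> reads: whenever M runs in an environment satisfying A,
% it satisfies P.
\frac{\langle A \rangle\, M_1\, \langle P \rangle
      \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
```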
Work-in-progress: Assume-Guarantee Reasoning with ioco
This paper presents a combination of the assume-guarantee paradigm and the testing relation ioco. The assume-guarantee paradigm is a "divide and conquer" technique that decomposes the verification of a system into smaller tasks involving the verification of its components. The principal aspect of assume-guarantee reasoning is to consider each component separately, while taking into account assumptions about the context of the component. The testing relation ioco is a formal conformance relation for model-based testing that works on labeled transition systems. Our main result shows that, under certain restrictions, assume-guarantee reasoning can be applied in the context of ioco. This enables testing ioco-conformance of a system by testing its components separately.
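For reference, the ioco relation mentioned here is Tretmans' standard conformance relation (the standard definition, not the paper's restricted variant): an implementation i conforms to a specification s iff, after every suspension trace of s, every output that i can produce (including quiescence) is allowed by s.

```latex
% Tretmans' ioco conformance relation over labeled transition systems.
% Straces(s) are the suspension traces of s; out(. after sigma) is the
% set of outputs (including quiescence \delta) enabled after trace sigma.
i \;\mathbf{ioco}\; s \;\Longleftrightarrow\;
\forall \sigma \in \mathit{Straces}(s):\;
\mathit{out}(i \;\mathbf{after}\; \sigma) \subseteq
\mathit{out}(s \;\mathbf{after}\; \sigma)
```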
Assumption Generation for the Verification of Learning-Enabled Autonomous Systems
Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size (they can have thousands or millions of parameters), lack of formal specifications (DNNs are typically learnt from labeled data, in the absence of any formal requirements), and sensitivity to small changes in the environment. We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties of such autonomous systems. Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties. The synthesized assumptions are the weakest in the sense that they characterize the output sequences of all the possible DNNs that, plugged into the autonomous system, guarantee the required safety properties. The assumptions can be leveraged as run-time monitors over a deployed DNN to guarantee the safety of the overall system; they can also be mined to extract local specifications for use during training and testing of DNNs. We illustrate our approach on a case study taken from the autonomous airplanes domain that uses a complex DNN for perception.
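The run-time monitoring idea from this abstract can be sketched as follows. This is a hypothetical toy version, not the paper's implementation: the synthesized assumption is modeled as a table mapping each abstract system state to the set of DNN outputs under which the safety property still holds, and the monitor flags any output outside that set. All state and output names are illustrative.

```python
# Hypothetical sketch: a synthesized assumption used as a run-time
# monitor over a deployed perception DNN. The assumption maps each
# abstract system state to the DNN outputs that keep the system safe;
# any other output is flagged before it reaches the controller.

# Illustrative synthesized assumption for a taxiing aircraft: near the
# runway edge, only the corrective position estimate is safe.
ASSUMPTION = {
    "on_centerline": {"left", "center", "right"},
    "near_left_edge": {"left"},    # DNN must report "left" to steer back
    "near_right_edge": {"right"},  # DNN must report "right" to steer back
}

def monitor(state, dnn_output):
    """Return True iff the DNN output satisfies the assumption in this state."""
    return dnn_output in ASSUMPTION.get(state, set())

def run_system(trace):
    """Check a sequence of (state, dnn_output) pairs; report the first violation."""
    for step, (state, out) in enumerate(trace):
        if not monitor(state, out):
            return f"violation at step {step}: {out!r} not allowed in {state!r}"
    return "safe"

print(run_system([("on_centerline", "center"), ("near_left_edge", "left")]))
print(run_system([("on_centerline", "center"), ("near_left_edge", "right")]))
```

The same table view of the assumption supports the second use mentioned in the abstract: each entry is a local input-output specification that can drive training and testing of the DNN.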
Compositional Verification for Autonomous Systems with Deep Learning Components
As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems. The problem is difficult, as these are large, complex systems which operate in uncertain environments, requiring data-driven machine-learning components. However, learning techniques such as Deep Neural Networks, widely used today, are inherently unpredictable and lack the theoretical foundations to provide strong assurance guarantees. We present a compositional approach for the scalable, formal verification of autonomous systems that contain Deep Neural Network components. The approach uses assume-guarantee reasoning whereby contracts, encoding the input-output behavior of individual components, allow the designer to model and incorporate the behavior of the learning-enabled components working side-by-side with the other components. We illustrate the approach on an example taken from the autonomous vehicles domain.
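The contract idea in this abstract can be illustrated with a minimal sketch, assuming a two-component pipeline (a perception component feeding a controller) and numeric interval contracts; the components, bounds, and sampling-based check below are all invented for illustration and are not the paper's method.

```python
# Hypothetical sketch of assume-guarantee contracts for a two-component
# pipeline. Each contract is a pair (assumption, guarantee) of
# predicates over a value; the composition is compatible when every
# value permitted by the upstream guarantee is accepted by the
# downstream assumption (checked here by sampling, not proof).

def perception_contract():
    # Assumes a raw sensor reading in [0, 100]; guarantees an estimate
    # within +/-5 of the reading, so outputs lie in [-5, 105].
    return (lambda x: 0 <= x <= 100, lambda y: -5 <= y <= 105)

def controller_contract():
    # Assumes its input estimate lies in [-10, 110]; guarantees a
    # steering command in [-1, 1].
    return (lambda y: -10 <= y <= 110, lambda u: -1 <= u <= 1)

def compatible(upstream_guarantee, downstream_assumption, samples):
    """Check on sample values that every output the upstream component
    may produce is accepted by the downstream assumption."""
    return all(downstream_assumption(v)
               for v in samples if upstream_guarantee(v))

_, g_perception = perception_contract()
a_controller, _ = controller_contract()
print(compatible(g_perception, a_controller, range(-20, 121)))  # True
```

In this framing, a learning-enabled component is handled like any other: its contract records the input-output behavior the rest of the system relies on, without reference to its internals.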
- …