Multi-Model Specifications and Their Application to Classification Systems

Abstract

Many safety-critical systems are required to have their correctness validated prior to deployment. Such validation is typically performed using models of the behaviour that the system is expected to exhibit, and of the conditions it is expected to experience, at run-time. However, these systems may be subject to different requirements under different circumstances; moreover, there may be multiple stakeholders involved, each with a somewhat different perspective on correctness. We examine the use of a multi-model framework based on assumptions (Pre and Rely conditions) and obligations (Post and Guarantee conditions) to represent the workload- and resource-related needs of complex AI system components such as DNN classifiers. We identify three kinds of multi-models that are of particular interest: Independent, Integrated and Hierarchical. All the individual models comprising an independent multi-model must remain valid at all times during run-time, whereas at least one of the models comprising an integrated multi-model must always be valid. With hierarchical multi-models, all models are initially valid, but the component's behaviour may gracefully degrade through a series of models with successively weaker assumptions and obligations (we show that Mixed-Criticality Systems, widely studied in the real-time computing community, are particularly well-suited to representation via hierarchical multi-models). We explain how this modelling framework is intended to be used, and present algorithms for determining the worst-case timing behaviour of systems that are specified using multi-models.
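The abstract summarises the framework rather than defining it formally, but a minimal sketch may help fix intuitions. Assuming each model is represented as a tuple of Pre/Rely assumptions and Post/Guarantee obligations evaluated over an observed run-time state (all names and predicates below are hypothetical and chosen for exposition; this is not the paper's notation or method), the three multi-model kinds can be distinguished by how the validity of the constituent models is combined:

```python
# Illustrative sketch only: one possible encoding of the three multi-model
# kinds described in the abstract. Model, State and the example predicates
# are assumptions introduced for this sketch, not definitions from the paper.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

State = dict  # hypothetical run-time state: observed workload / resource values


@dataclass
class Model:
    """A single model: assumptions (Pre, Rely) and obligations (Post, Guarantee)."""
    name: str
    pre: Callable[[State], bool]        # assumption on the initial state
    rely: Callable[[State], bool]       # assumption on the environment during execution
    post: Callable[[State], bool]       # obligation on the final state
    guarantee: Callable[[State], bool]  # obligation maintained during execution

    def valid(self, state: State) -> bool:
        # A model is treated as valid when its assumptions hold and its
        # obligations are being met in the observed state.
        return (self.pre(state) and self.rely(state)
                and self.post(state) and self.guarantee(state))


def independent_ok(models: Sequence[Model], state: State) -> bool:
    """Independent multi-model: every constituent model must remain valid."""
    return all(m.valid(state) for m in models)


def integrated_ok(models: Sequence[Model], state: State) -> bool:
    """Integrated multi-model: at least one constituent model must be valid."""
    return any(m.valid(state) for m in models)


def hierarchical_mode(models: Sequence[Model], state: State) -> Optional[Model]:
    """Hierarchical multi-model: models ordered from strongest to weakest
    assumptions and obligations; return the strongest model that is still
    valid (the current, possibly degraded, operating mode), or None if even
    the weakest model has failed."""
    for m in models:  # assumed ordered: strongest first
        if m.valid(state):
            return m
    return None


if __name__ == "__main__":
    # Hypothetical two-level hierarchy, loosely inspired by mixed-criticality
    # systems: a "HI" model with stronger assumptions than a degraded "LO" model.
    hi = Model("HI", pre=lambda s: s["load"] <= 0.5, rely=lambda s: True,
               post=lambda s: s["all_deadlines_met"], guarantee=lambda s: True)
    lo = Model("LO", pre=lambda s: s["load"] <= 0.9, rely=lambda s: True,
               post=lambda s: s["hi_crit_deadlines_met"], guarantee=lambda s: True)
    state = {"load": 0.7, "all_deadlines_met": False, "hi_crit_deadlines_met": True}
    print(hierarchical_mode([hi, lo], state).name)  # -> "LO" (gracefully degraded)
```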
