Transfer Assurance for Machine Learning in Autonomous Systems

Abstract

This paper introduces the concept of transfer assurance for Machine Learning (ML) components used as part of an autonomous system (AS). In previous work we developed the first approach for assuring the safety of ML components such that a compelling safety case can be created for their safe deployment. During operation it may be necessary to update an ML component by re-training the model using new or updated development data. If model re-training is required post-deployment, the safety case that was created for the ML component may no longer be valid, since the new model can no longer be assured to meet its safety requirements. In particular, the nature of machine-learnt components means that one may not be able to predict how even small changes in the development data will affect the model and its performance. As a result, current practice would require that a full assurance assessment be undertaken for the re-trained model and that a new safety case be created. Given the desirability of updating ML components during operation, we see it as imperative that the assurance process become more proportionate to the size of the change made to the model, whilst ensuring that assurance can still be demonstrated. Re-training ML components is known to be a costly and complex process, and techniques such as transfer learning have therefore been developed to reduce this burden through incremental development. Such approaches provide inspiration for how the challenge of efficiently assuring updated models could be addressed: by understanding which aspects of a model may have been affected by changes to the development data. We refer to this as transfer assurance, where parts of the assurance case for an ML component can remain fixed whilst other parts are re-assessed.
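To make the analogy with transfer learning concrete, the following is a minimal, illustrative sketch (not the paper's method) of incremental re-training in PyTorch: a previously trained backbone is frozen and only the final classification layer is re-trained on new data, mirroring the idea that parts of a model, and correspondingly parts of its assurance case, remain unchanged while others are revisited. The dataset, the choice of ResNet-18, and the 10-class output are all assumptions made purely for illustration.

```python
# Illustrative transfer-learning sketch (assumed setup, not the paper's approach):
# freeze an existing backbone and re-train only the final layer on new data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Start from a previously trained (and previously assured) model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all existing parameters so the learnt features are left unchanged.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer; suppose the updated task has 10 classes.
model.fc = nn.Linear(model.fc.in_features, 10)

# Synthetic stand-in for the new or updated development data.
new_data = TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
new_data_loader = DataLoader(new_data, batch_size=4)

# Only the new layer's parameters are optimised during re-training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in new_data_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```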
