Towards Quantification of Assurance for Learning-enabled Components
Perception, localization, planning, and control (high-level functions often
organized in a so-called pipeline) are amongst the core building blocks of
modern autonomous (ground, air, and underwater) vehicle architectures. These
functions are increasingly being implemented using learning-enabled components
(LECs), i.e., (software) components leveraging knowledge acquisition and
learning processes such as deep learning. Providing quantified component-level
assurance as part of a wider (dynamic) assurance case can be useful in
supporting both pre-operational approval of LECs (e.g., by regulators), and
runtime hazard mitigation, e.g., using assurance-based failover configurations.
This paper develops a notion of assurance for LECs based on i) identifying the
relevant dependability attributes, and ii) quantifying those attributes and the
associated uncertainty, using probabilistic techniques. We give a practical
grounding for our work using an example from the aviation domain: an autonomous
taxiing capability for an unmanned aircraft system (UAS), focusing on the
application of LECs as sensors in the perception function. We identify the
applicable quantitative measures of assurance, and characterize the associated
uncertainty using a non-parametric Bayesian approach, namely Gaussian process
regression. We additionally discuss the relevance and contribution of LEC
assurance to system-level assurance, the generalizability of our approach, and
the associated challenges.
Comment: 8 pp, 4 figures. Appears in the proceedings of EDCC 2019.
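As a rough illustration of the uncertainty characterization described in this abstract, the sketch below fits a Gaussian process regressor to a hypothetical perception-error measure for an LEC sensor. The data, kernel choice, operating-point variable, and use of scikit-learn are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: observed error of an LEC-based perception sensor
# (e.g., cross-track position error in meters) at sampled operating points
# (e.g., distance along the taxiway). Values are illustrative only.
operating_points = np.linspace(0.0, 100.0, 25).reshape(-1, 1)
rng = np.random.default_rng(0)
observed_error = 0.5 + 0.02 * operating_points.ravel() + rng.normal(0.0, 0.1, 25)

# Non-parametric Bayesian model: GP regression with an RBF kernel plus a
# white-noise term to account for observation noise.
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    operating_points, observed_error
)

# The predictive mean and standard deviation quantify the assurance-relevant
# measure and its uncertainty at unseen operating points.
query = np.linspace(0.0, 120.0, 50).reshape(-1, 1)
mean, std = gpr.predict(query, return_std=True)
```

The widening predictive standard deviation outside the observed operating range is one way such a model can make extrapolation uncertainty explicit.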
Quantifying Assurance in Learning-enabled Systems
Dependability assurance of systems embedding machine learning (ML)
components, so-called learning-enabled systems (LESs), is a key step for
their use in safety-critical applications. In emerging standardization and
guidance efforts, there is a growing consensus in the value of using assurance
cases for that purpose. This paper develops a quantitative notion of assurance
that an LES is dependable, as a core component of its assurance case,
extending our prior work that applied to ML components. Specifically, we
characterize LES assurance in the form of assurance measures: a probabilistic
quantification of confidence that an LES possesses system-level properties
associated with functional capabilities and dependability attributes. We
illustrate the utility of assurance measures by application to a real-world
autonomous aviation system, describing their role in i) guiding high-level,
runtime risk mitigation decisions, and ii) serving as a core component of the
associated dynamic assurance case.
Comment: Author's pre-print version of manuscript accepted for publication in
the Proceedings of the 39th International Conference on Computer Safety,
Reliability, and Security (SAFECOMP 2020).
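The following sketch illustrates, under assumed values, how an assurance measure of the kind described here could be computed as a posterior probability that a system-level property holds, and how it might gate a runtime risk-mitigation decision. The tolerance, confidence level, predictive statistics, and variable names are hypothetical and not drawn from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical posterior from a fitted model (e.g., the GP sketch above):
# predictive mean and standard deviation of the perception error at a few
# operating points. Values are illustrative only.
predicted_error_mean = np.array([0.6, 0.9, 1.3])
predicted_error_std = np.array([0.10, 0.15, 0.40])

# Assumed system-level property: the perception error stays below a 1.0 m
# tolerance (an illustrative bound, not a requirement from the paper).
tolerance_m = 1.0

# Assurance measure: posterior probability that the property holds at each
# operating point, assuming a Gaussian predictive distribution.
assurance_measure = norm.cdf(tolerance_m, loc=predicted_error_mean,
                             scale=predicted_error_std)

# One way such a measure could guide runtime risk mitigation: switch to a
# failover configuration whenever confidence drops below a required level.
required_confidence = 0.99
needs_mitigation = assurance_measure < required_confidence
```

In this reading, the assurance measure is simply a probability of property satisfaction under the model's predictive distribution, which is what allows it to feed both pre-operational arguments and runtime decisions.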