Propagation of epistemic uncertainty in queueing models with unreliable server using chaos expansions
In this paper, we develop a numerical approach based on chaos expansions to
analyze the sensitivity and the propagation of epistemic uncertainty through
queueing systems with breakdowns. Here, the quantity of interest is the
stationary distribution of the model, which is a function of uncertain
parameters. Polynomial chaos provides an efficient alternative to more
traditional Monte Carlo simulations for modelling the propagation of
uncertainty arising from those parameters. Furthermore, polynomial chaos
expansion affords a natural framework for computing Sobol' indices. Such
indices give reliable information on the relative importance of each uncertain
input parameter. Numerical results show the benefit of using polynomial chaos
over standard Monte Carlo simulations when considering statistical moments and
Sobol' indices as output quantities.
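The mechanics behind this abstract can be sketched in a few lines: fit a polynomial chaos expansion (PCE) to a queueing quantity of interest by least squares, then read the statistical moments and first-order Sobol' indices directly off the coefficients. The quantity of interest below (mean number in an M/M/1 queue with uncertain arrival rate lam and service rate mu) and the parameter ranges are illustrative assumptions, not the model from the paper.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical quantity of interest: mean number of customers in an
# M/M/1 queue, L = rho / (1 - rho), with uncertain arrival rate lam
# and service rate mu (ranges are illustrative assumptions).
def qoi(xi):                          # xi in [-1, 1]^2
    lam = 0.5 + 0.1 * xi[:, 0]        # lam ~ U(0.4, 0.6)
    mu = 1.0 + 0.1 * xi[:, 1]         # mu  ~ U(0.9, 1.1)
    rho = lam / mu
    return rho / (1.0 - rho)

def phi(n, x):                        # Legendre basis, orthonormal for U(-1, 1)
    return np.sqrt(2 * n + 1) * Legendre.basis(n)(x)

degree = 4
index_set = [(i, j) for i, j in product(range(degree + 1), repeat=2)
             if i + j <= degree]

# Least-squares fit of the PCE coefficients on random samples.
xi = rng.uniform(-1, 1, size=(2000, 2))
A = np.column_stack([phi(i, xi[:, 0]) * phi(j, xi[:, 1])
                     for i, j in index_set])
coef, *_ = np.linalg.lstsq(A, qoi(xi), rcond=None)

# Mean, variance, and first-order Sobol' indices from the coefficients:
# with an orthonormal basis, the variance is the sum of squared
# coefficients (excluding the constant term), and each first-order
# index collects the terms that depend on one input only.
var = sum(c**2 for (i, j), c in zip(index_set, coef) if (i, j) != (0, 0))
S_lam = sum(c**2 for (i, j), c in zip(index_set, coef) if i > 0 and j == 0) / var
S_mu = sum(c**2 for (i, j), c in zip(index_set, coef) if j > 0 and i == 0) / var
print(f"mean={coef[0]:.4f}  var={var:.5f}  S_lam={S_lam:.3f}  S_mu={S_mu:.3f}")
```

Once the coefficients are in hand, no further sampling of the model is needed: this is the efficiency advantage over estimating moments and Sobol' indices by nested Monte Carlo.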
Towards Quantification of Assurance for Learning-enabled Components
Perception, localization, planning, and control, which are high-level functions
often organized in a so-called pipeline, are amongst the core building blocks of
modern autonomous (ground, air, and underwater) vehicle architectures. These
functions are increasingly being implemented using learning-enabled components
(LECs), i.e., (software) components leveraging knowledge acquisition and
learning processes such as deep learning. Providing quantified component-level
assurance as part of a wider (dynamic) assurance case can be useful in
supporting both pre-operational approval of LECs (e.g., by regulators), and
runtime hazard mitigation, e.g., using assurance-based failover configurations.
This paper develops a notion of assurance for LECs based on i) identifying the
relevant dependability attributes, and ii) quantifying those attributes and the
associated uncertainty, using probabilistic techniques. We give a practical
grounding for our work using an example from the aviation domain: an autonomous
taxiing capability for an unmanned aircraft system (UAS), focusing on the
application of LECs as sensors in the perception function. We identify the
applicable quantitative measures of assurance, and characterize the associated
uncertainty using a non-parametric Bayesian approach, namely Gaussian process
regression. We additionally discuss the relevance and contribution of LEC
assurance to system-level assurance, the generalizability of our approach, and
the associated challenges.
Comment: 8 pp, 4 figures. Appears in the proceedings of EDCC 201
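The uncertainty-characterization step the abstract names, Gaussian process regression, can be sketched with plain NumPy: condition a GP prior on a handful of observations of an assurance measure and read off the posterior mean and variance, where the variance quantifies the remaining epistemic uncertainty between observed operating conditions. The data, kernel, and hyper-parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a quantitative assurance measure (e.g. a perception
# error metric) observed at a few operating points.
X = np.linspace(0.0, 10.0, 8)[:, None]               # operating condition
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(8)   # noisy measurements

def rbf(a, b, length=1.5, var=1.0):
    """Squared-exponential kernel between column vectors a and b."""
    d = a[:, None, 0] - b[None, :, 0]
    return var * np.exp(-0.5 * (d / length) ** 2)

noise = 0.1 ** 2
K = rbf(X, X) + noise * np.eye(len(X))               # train covariance
Xs = np.linspace(0.0, 10.0, 50)[:, None]             # query conditions
Ks = rbf(Xs, X)

# GP posterior: mean tracks the measure, std quantifies the epistemic
# uncertainty in it away from the observed conditions.
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
print(f"posterior std range: [{std.min():.3f}, {std.max():.3f}]")
```

Being non-parametric, the same machinery applies whichever dependability attribute is chosen as the assurance measure; only the observations and the kernel change.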
Examination of Bayesian belief network for safety assessment of nuclear computer-based systems
We report here on a continuation of work on the Bayesian Belief Network (BBN) model described in [Fenton, Littlewood et al. 1998]. As explained in the previous deliverable, our model concerns one part of the safety assessment task for computer- and software-based nuclear systems. We have produced a first complete, functioning version of our BBN model by eliciting a large numerical node probability table (NPT) required for our ‘Design Process Performance’ variable. The requirement for such large numerical NPTs poses some difficult questions about how, in general, large NPTs should be elicited from domain experts. We report on the methods we have devised to support the expert in building and validating a BBN. On the one hand, we have proceeded by eliciting approximate descriptions of the expert’s probabilistic beliefs, in terms of properties like stochastic orderings among distributions; on the other hand, we have explored ways of presenting to the expert visual and algebraic descriptions of relations among variables in the BBN, to assist the expert in an ongoing assessment of the validity of the BBN.
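The core computation in such a model, combining an elicited NPT with parent priors to answer marginal and diagnostic queries, fits in a few lines for a tiny network. The node names and all numbers below are illustrative stand-ins, not the elicited values from the deliverable.

```python
import numpy as np

# Minimal discrete Bayesian network sketch: two binary parents feeding
# a 'Design Process Performance'-style node via a node probability
# table (NPT).  States are ordered (good, poor) / (high, low).
p_staff = np.array([0.7, 0.3])        # prior P(StaffQuality)
p_tools = np.array([0.6, 0.4])        # prior P(ToolSupport)

# NPT: P(Performance = high | StaffQuality, ToolSupport), shape (2, 2).
npt_high = np.array([[0.90, 0.70],
                     [0.50, 0.20]])

# Marginal P(Performance = high): sum over all parent configurations.
p_high = np.einsum('s,t,st->', p_staff, p_tools, npt_high)

# Diagnostic query via Bayes: P(StaffQuality | Performance = high).
joint_high = p_staff[:, None] * p_tools[None, :] * npt_high
p_staff_given_high = joint_high.sum(axis=1) / joint_high.sum()
print(f"P(high)={p_high:.3f}  P(staff good | high)={p_staff_given_high[0]:.3f}")
```

The elicitation problem the abstract describes is precisely that a realistic NPT has far more entries than this 2x2 example, which is why the authors elicit approximate properties (such as stochastic orderings) rather than every cell.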
Validation of Ultrahigh Dependability for Software-Based Systems
Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
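The limit on "testing with stable reliability" mentioned in the abstract can be made concrete with a standard back-of-the-envelope calculation: under a constant failure rate lambda, the probability of surviving t failure-free hours is exp(-lambda * t), so claiming lambda below some bound with confidence c requires t >= -ln(1 - c) / bound of failure-free testing. The numeric target below (1e-9 failures/hour, a figure often quoted for critical avionics) is an illustrative choice, not taken from the paper.

```python
import math

def required_test_hours(bound_per_hour, confidence):
    """Failure-free test duration needed to claim a failure rate below
    bound_per_hour with the given confidence, assuming a constant
    failure rate (exponential time-to-failure)."""
    return -math.log(1.0 - confidence) / bound_per_hour

# Demonstrating 1e-9 failures/hour at 99% confidence:
hours = required_test_hours(1e-9, 0.99)
print(f"{hours:.3e} hours  (~{hours / 8766:.0f} years of continuous testing)")
```

The result is on the order of hundreds of thousands of years of failure-free operation, which is why the abstract concludes that testing alone cannot validate ultra-high dependability claims.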
Towards Accurate Estimation of Error Sensitivity in Computer Systems
Fault injection is an increasingly important method for assessing, measuring, and observing the system-level impact of hardware and software faults in computer systems. This thesis presents the results of a series of experimental studies in which fault injection was used to investigate the impact of bit-flip errors on program execution. The studies were motivated by the fact that transient hardware faults in microprocessors can cause bit-flip errors that can propagate to the microprocessor's instruction set architecture (ISA) registers and main memory. As the rate of such hardware faults is expected to increase with technology scaling, there is a need to better understand how these errors (known as ‘soft errors’) influence program execution, especially in safety-critical systems.

Using ISA-level fault injection, we investigate how five aspects, or factors, influence the error sensitivity of a program. We define error sensitivity as the conditional probability that a bit-flip error in live data in an ISA register or main-memory word will cause a program to produce silent data corruption (SDC; i.e., an erroneous result). We also consider the estimation of a measure called SDC count, which represents the number of ISA-level bit flips that cause an SDC.

The five factors addressed are (a) the inputs processed by a program, (b) the level of compiler optimization, (c) the implementation of the program in the source code, (d) the fault model (single bit flips vs. double bit flips), and (e) the fault-injection technique (inject-on-write vs. inject-on-read). Our results show that these factors affect the error sensitivity in many ways; some factors strongly impact the error sensitivity or SDC count, whereas others show a weaker impact.
For example, our experiments show that single bit flips tend to cause SDCs more often than double bit flips; compiler optimization positively impacts the SDC count but not necessarily the error sensitivity; the error sensitivity varies between 20% and 50% among the programs we tested; and variations in input affect the error sensitivity significantly for most of the tested programs.
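The experimental design behind the error-sensitivity definition can be illustrated with a toy inject-on-read campaign: flip a single bit of a value as the program reads it, compare the result against a fault-free "golden" run, and report the fraction of injections that produced a wrong result. The target program, campaign size, and outcome classification (correct vs. SDC only, with no crash detection) are illustrative simplifications; real campaigns inject into the ISA registers and memory of a native binary.

```python
import random

# Toy target: sum of squares with a 32-bit accumulator.  fault_site/bit
# select which loaded value gets a single bit flipped (inject-on-read).
def program(data, fault_site=None, bit=0):
    acc = 0
    for i, x in enumerate(data):
        if i == fault_site:
            x ^= 1 << bit                  # single bit-flip error
        acc = (acc + x * x) & 0xFFFFFFFF   # 32-bit wraparound masks some flips
    return acc

def error_sensitivity(data, trials=2000, seed=0):
    """Fraction of injected single bit flips that cause an SDC, i.e. an
    estimate of P(SDC | bit flip in live data) for this toy program."""
    rng = random.Random(seed)
    golden = program(data)                 # fault-free reference result
    sdc = 0
    for _ in range(trials):
        site = rng.randrange(len(data))
        bit = rng.randrange(32)
        if program(data, site, bit) != golden:
            sdc += 1                       # silent data corruption
    return sdc / trials

sens = error_sensitivity(list(range(1, 65)))
print(f"estimated error sensitivity: {sens:.3f}")
```

Even in this toy, some flips are masked (high-order bits vanish under the 32-bit wraparound), so the sensitivity is below 1; in real programs, dead values, comparisons, and saturating logic mask far more, which is how sensitivities in the 20-50% range arise.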
Architecture-based Evolution of Dependable Software-intensive Systems
This cumulative habilitation thesis proposes concepts for (i) modelling and analysing dependability based on architectural models of software-intensive systems early in development, (ii) decomposition and composition of modelling languages and analysis techniques to enable more flexibility in evolution, and (iii) bridging the divergent levels of abstraction between data of the operation phase, architectural models, and source code of the development phase.
The safety case and the lessons learned for the reliability and maintainability case
This paper examines the safety case and the lessons learned for the reliability and maintainability case.