
    Propagation of epistemic uncertainty in queueing models with unreliable server using chaos expansions

    In this paper, we develop a numerical approach based on chaos expansions to analyze the sensitivity and the propagation of epistemic uncertainty through a queueing system with breakdowns. Here, the quantity of interest is the stationary distribution of the model, which is a function of uncertain parameters. Polynomial chaos expansions provide an efficient alternative to more traditional Monte Carlo simulations for modelling the propagation of uncertainty arising from those parameters. Furthermore, polynomial chaos expansion affords a natural framework for computing Sobol' indices. Such indices give reliable information on the relative importance of each uncertain input parameter. Numerical results show the benefit of using polynomial chaos over standard Monte Carlo simulations when considering statistical moments and Sobol' indices as output quantities.
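
    The sketch below is a minimal, self-contained illustration of the workflow the abstract describes: build a least-squares polynomial chaos expansion for a queueing quantity of interest, then read first-order Sobol' indices off its coefficients. The M/M/1 mean-queue-length model, the uniform parameter ranges, and all variable names are illustrative assumptions, not the paper's unreliable-server model.

```python
# Minimal sketch (not the paper's breakdown model): least-squares polynomial
# chaos for the mean number in an M/M/1 system with uncertain arrival rate
# lam ~ U(0.4, 0.6) and service rate mu ~ U(0.9, 1.1) -- assumed ranges.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
N, deg = 2000, 4

# Sample the uncertain inputs on [-1, 1] and map them to physical ranges.
xi = rng.uniform(-1.0, 1.0, size=(N, 2))
lam = 0.5 + 0.1 * xi[:, 0]
mu = 1.0 + 0.1 * xi[:, 1]
rho = lam / mu
y = rho / (1.0 - rho)  # quantity of interest: mean number in system

# Tensorized Legendre basis with total degree <= deg; index (0, 0) is the constant.
index = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]

def basis(x):
    cols = []
    for i, j in index:
        ci = np.zeros(i + 1); ci[i] = 1.0
        cj = np.zeros(j + 1); cj[j] = 1.0
        cols.append(legendre.legval(x[:, 0], ci) * legendre.legval(x[:, 1], cj))
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis(xi), y, rcond=None)

# Variance decomposition: E[P_n^2] = 1/(2n+1) for Legendre under U(-1, 1).
norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in index])
var_terms = coef**2 * norms
total_var = var_terms[1:].sum()  # exclude the constant term

S1_lam = var_terms[[k for k, (i, j) in enumerate(index) if i > 0 and j == 0]].sum() / total_var
S1_mu = var_terms[[k for k, (i, j) in enumerate(index) if j > 0 and i == 0]].sum() / total_var
print(f"mean ~ {coef[0]:.4f}, first-order Sobol': lambda {S1_lam:.3f}, mu {S1_mu:.3f}")
```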

    Towards Quantification of Assurance for Learning-enabled Components

    Perception, localization, planning, and control, high-level functions often organized in a so-called pipeline, are amongst the core building blocks of modern autonomous (ground, air, and underwater) vehicle architectures. These functions are increasingly being implemented using learning-enabled components (LECs), i.e., (software) components leveraging knowledge acquisition and learning processes such as deep learning. Providing quantified component-level assurance as part of a wider (dynamic) assurance case can be useful in supporting both pre-operational approval of LECs (e.g., by regulators), and runtime hazard mitigation, e.g., using assurance-based failover configurations. This paper develops a notion of assurance for LECs based on i) identifying the relevant dependability attributes, and ii) quantifying those attributes and the associated uncertainty, using probabilistic techniques. We give a practical grounding for our work using an example from the aviation domain: an autonomous taxiing capability for an unmanned aircraft system (UAS), focusing on the application of LECs as sensors in the perception function. We identify the applicable quantitative measures of assurance, and characterize the associated uncertainty using a non-parametric Bayesian approach, namely Gaussian process regression. We additionally discuss the relevance and contribution of LEC assurance to system-level assurance, the generalizability of our approach, and the associated challenges. Comment: 8 pp, 4 figures. Appears in the proceedings of EDCC 201
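
    The uncertainty-characterization step lends itself to a compact sketch: fit a Gaussian process to an assurance measure observed at sampled operating points and read off the posterior mean and standard deviation. The synthetic data, the "position" input, the error measure, and the kernel choice below are all illustrative assumptions; scikit-learn's GaussianProcessRegressor merely stands in for the authors' non-parametric Bayesian setup.

```python
# Minimal sketch of the non-parametric Bayesian step the abstract mentions:
# Gaussian process regression over a hypothetical assurance measure observed
# at sampled operating points. All data here are synthetic assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 100.0, size=(40, 1))            # e.g., position along taxiway (m)
y = 0.5 + 0.02 * X.ravel() + rng.normal(0, 0.1, 40)  # hypothetical error measure

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The posterior mean and standard deviation give the quantified measure
# together with its epistemic uncertainty at unseen operating points.
X_test = np.linspace(0.0, 100.0, 5).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)
for x, m, s in zip(X_test.ravel(), mean, std):
    print(f"x={x:5.1f}  measure={m:.3f} +/- {2 * s:.3f} (95% band)")
```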

    Validation of Ultrahigh Dependability for Software-Based Systems

    Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The dependability requirements often lie near the limit of the current state of the art, or beyond it, in terms not only of the ability to satisfy them but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, and more informal arguments based on good engineering practice. We state rigorous arguments about the limits of what can be validated with each of these means. Combining evidence from these different sources would seem to raise the levels that can be validated, yet the improvement is not sufficient to solve the problem. It appears that engineering practice must take into account the fact that no solution currently exists for the validation of ultra-high dependability in systems relying on complex software.
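
    The central quantitative obstacle can be made concrete with a standard back-of-envelope calculation: under a Poisson failure assumption, t failure-free test hours support a failure-rate bound lambda at confidence C only when exp(-lambda * t) <= 1 - C. The snippet below (illustrative numbers, not taken from the paper) shows how the required test time explodes as the bound tightens toward ultra-high dependability levels.

```python
# Back-of-envelope illustration of the validation limit for testing with
# stable reliability. Bounds and confidence level are illustrative choices.
import math

def hours_required(rate_bound: float, confidence: float) -> float:
    """Failure-free test hours t needed so that observing zero failures
    rules out failure rates >= rate_bound at the given confidence,
    assuming a Poisson failure process: exp(-rate * t) = 1 - confidence."""
    return -math.log(1.0 - confidence) / rate_bound

for bound in (1e-4, 1e-7, 1e-9):  # failures per hour
    t = hours_required(bound, 0.99)
    print(f"bound {bound:.0e}/h -> {t:.3g} failure-free hours (~{t / 8766:.3g} years)")
```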

    Towards Accurate Estimation of Error Sensitivity in Computer Systems

    Fault injection is an increasingly important method for assessing, measuring and observing the system-level impact of hardware and software faults in computer systems. This thesis presents the results of a series of experimental studies in which fault injection was used to investigate the impact of bit-flip errors on program execution. The studies were motivated by the fact that transient hardware faults in microprocessors can cause bit-flip errors that propagate to the microprocessor's instruction set architecture (ISA) registers and main memory. As the rate of such hardware faults is expected to increase with technology scaling, there is a need to better understand how these errors (known as ‘soft errors’) influence program execution, especially in safety-critical systems. Using ISA-level fault injection, we investigate how five aspects, or factors, influence the error sensitivity of a program. We define error sensitivity as the conditional probability that a bit-flip error in live data in an ISA register or main-memory word will cause a program to produce silent data corruption (SDC; i.e., an erroneous result). We also consider the estimation of a measure called SDC count, which represents the number of ISA-level bit flips that cause an SDC. The five factors addressed are (a) the inputs processed by a program, (b) the level of compiler optimization, (c) the implementation of the program in the source code, (d) the fault model (single bit flips vs. double bit flips), and (e) the fault-injection technique (inject-on-write vs. inject-on-read). Our results show that these factors affect the error sensitivity in many ways; some factors strongly impact the error sensitivity or SDC count, whereas others show a weaker impact. For example, our experiments show that single bit flips tend to cause SDCs more often than double bit flips; compiler optimization positively impacts the SDC count but not necessarily the error sensitivity; the error sensitivity varies between 20% and 50% among the programs we tested; and variations in input affect the error sensitivity significantly for most of the tested programs.
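
    The statistical core of such a campaign reduces to estimating a conditional probability from injection outcomes. In the sketch below, run_with_bit_flip is a hypothetical stand-in for the actual flip-and-run harness, with made-up outcome weights; the Wilson-interval computation shows how an error-sensitivity estimate would be reported with a confidence bound.

```python
# Minimal sketch of the statistical side of an ISA-level fault-injection
# campaign: estimate error sensitivity, P(SDC | bit flip in live state),
# with a Wilson 95% confidence interval. The harness is mocked.
import math
import random

def run_with_bit_flip() -> str:
    """Hypothetical stand-in for one injection experiment: flip one bit in a
    live register/memory word, run the program, classify the outcome.
    The outcome weights here are invented for illustration."""
    return random.choices(["benign", "crash", "sdc"], weights=[60, 15, 25])[0]

def wilson_interval(k: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

random.seed(42)
n = 10_000
outcomes = [run_with_bit_flip() for _ in range(n)]
k = outcomes.count("sdc")  # SDC count for this campaign
lo, hi = wilson_interval(k, n)
print(f"error sensitivity ~ {k / n:.3f}, 95% CI [{lo:.3f}, {hi:.3f}] (SDC count: {k})")
```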

    Architecture-based Evolution of Dependable Software-intensive Systems

    This cumulative habilitation thesis proposes concepts for (i) modelling and analysing dependability based on architectural models of software-intensive systems early in development, (ii) decomposing and composing modelling languages and analysis techniques to enable more flexibility in evolution, and (iii) bridging the divergent levels of abstraction between data from the operation phase and the architectural models and source code of the development phase.

    The safety case and the lessons learned for the reliability and maintainability case

    This paper examines the safety case and the lessons it offers for the reliability and maintainability case.