Propagation of epistemic uncertainty in queueing models with unreliable server using chaos expansions
In this paper, we develop a numerical approach based on chaos expansions to analyze the sensitivity and the propagation of epistemic uncertainty through a queueing system with breakdowns. Here, the quantity of interest is the stationary distribution of the model, which is a function of uncertain parameters. Polynomial chaos provides an efficient alternative to more traditional Monte Carlo simulation for modelling the propagation of uncertainty arising from those parameters. Furthermore, the polynomial chaos expansion affords a natural framework for computing Sobol' indices. Such indices give reliable information on the relative importance of each uncertain input parameter. Numerical results show the benefit of using polynomial chaos over standard Monte Carlo simulation when statistical moments and Sobol' indices are the output quantities.
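The workflow the abstract describes can be sketched end to end on a toy queue. The example below is an assumption: it uses a plain M/M/1 mean queue length as the quantity of interest (not the unreliable-server model of the paper), uniform uncertain rates, a non-intrusive least-squares polynomial chaos fit with Legendre polynomials, and Sobol' indices read directly off the expansion coefficients.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

# Hypothetical setup: M/M/1 queue with uncertain arrival rate lam ~ U(0.4, 0.6)
# and service rate mu ~ U(0.9, 1.1); QoI = stationary mean queue length.
rng = np.random.default_rng(0)
n, deg = 2000, 4
xi = rng.uniform(-1, 1, size=(n, 2))     # standardized inputs on [-1, 1]
lam = 0.5 + 0.1 * xi[:, 0]
mu = 1.0 + 0.1 * xi[:, 1]
rho = lam / mu
y = rho / (1.0 - rho)                    # quantity of interest

# Non-intrusive PCE: tensor-product Legendre basis up to total degree `deg`,
# coefficients fitted by least squares on the sampled model evaluations.
V1, V2 = legvander(xi[:, 0], deg), legvander(xi[:, 1], deg)
terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
A = np.column_stack([V1[:, i] * V2[:, j] for i, j in terms])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Sobol' indices follow directly from the PCE coefficients; each Legendre
# factor P_i has norm 1/(2i+1) under the uniform density on [-1, 1].
norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in terms])
var_terms = coef**2 * norms
total_var = var_terms[1:].sum()          # exclude the mean term (0, 0)
S_lam = sum(v for (i, j), v in zip(terms, var_terms) if i > 0 and j == 0) / total_var
S_mu = sum(v for (i, j), v in zip(terms, var_terms) if i == 0 and j > 0) / total_var
print(f"First-order Sobol' indices: S_lambda={S_lam:.3f}, S_mu={S_mu:.3f}")
```

Once the coefficients are fitted, moments and sensitivity indices come for free from the expansion, which is the efficiency gain over re-running Monte Carlo for each statistic.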
Examination of Bayesian belief network for safety assessment of nuclear computer-based systems
We report here on a continuation of work on the Bayesian Belief Network (BBN) model described in [Fenton, Littlewood et al. 1998]. As explained in the previous deliverable, our model concerns one part of the safety assessment task for computer and software based nuclear systems. We have produced a first complete, functioning version of our BBN model by eliciting a large numerical node probability table (NPT) required for our ‘Design Process Performance’ variable. The requirement for such large numerical NPTs poses some difficult questions about how, in general, large NPTs should be elicited from domain experts. We report on the methods we have devised to support the expert in building and validating a BBN. On the one hand, we have proceeded by eliciting approximate descriptions of the expert’s probabilistic beliefs, in terms of properties like stochastic orderings among distributions; on the other hand, we have explored ways of presenting to the expert visual and algebraic descriptions of relations among variables in the BBN, to assist the expert in an ongoing assessment of the validity of the BBN.
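The elicitation burden the abstract mentions is easy to see on a tiny fragment. The sketch below is illustrative only: the node and parent names are invented stand-ins (not the paper's actual variables), and it shows how an NPT with even two binary parents already needs one elicited probability per parent-state combination, then marginalizes over parents by enumeration.

```python
# Hypothetical two-parent fragment of a BBN: a 'Design Process Performance'
# node conditioned on 'Staff Quality' and 'Tool Support' (names are
# illustrative, not the model in the paper). The NPT grows exponentially
# with the number of parents, which is why elicitation support matters.
p_staff = {"good": 0.7, "poor": 0.3}
p_tools = {"good": 0.6, "poor": 0.4}

# NPT: P(performance = high | staff, tools) -- four rows already for
# two binary parents; k n-state parents need n**k rows.
npt = {("good", "good"): 0.9, ("good", "poor"): 0.7,
       ("poor", "good"): 0.5, ("poor", "poor"): 0.2}

# Marginal P(performance = high), summing over all parent states
p_high = sum(p_staff[s] * p_tools[t] * npt[(s, t)]
             for s in p_staff for t in p_tools)
print(f"P(performance = high) = {p_high:.3f}")  # -> 0.688
```

Validation aids of the kind the paper describes (stochastic orderings, visual summaries) amount to sanity checks on tables like `npt` before they are trusted for inference.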
Towards Accurate Estimation of Error Sensitivity in Computer Systems
Fault injection is an increasingly important method for assessing, measuring and observing the system-level impact of hardware and software faults in computer systems. This thesis presents the results of a series of experimental studies in which fault injection was used to investigate the impact of bit-flip errors on program execution. The studies were motivated by the fact that transient hardware faults in microprocessors can cause bit-flip errors that can propagate to the microprocessor's instruction set architecture (ISA) registers and main memory. As the rate of such hardware faults is expected to increase with technology scaling, there is a need to better understand how these errors (known as ‘soft errors’) influence program execution, especially in safety-critical systems.
Using ISA-level fault injection, we investigate how five aspects, or factors, influence the error sensitivity of a program. We define error sensitivity as the conditional probability that a bit-flip error in live data in an ISA register or main-memory word will cause a program to produce silent data corruption (SDC; i.e., an erroneous result). We also consider the estimation of a measure called SDC count, which represents the number of ISA-level bit flips that cause an SDC. The five factors addressed are (a) the inputs processed by a program, (b) the level of compiler optimization, (c) the implementation of the program in the source code, (d) the fault model (single bit flips vs. double bit flips) and (e) the fault-injection technique (inject-on-write vs. inject-on-read). Our results show that these factors affect the error sensitivity in many ways; some factors strongly impact the error sensitivity or SDC count whereas others show a weaker impact.
For example, our experiments show that single bit flips tend to cause SDCs more often than double bit flips; compiler optimization positively impacts the SDC count but not necessarily the error sensitivity; the error sensitivity varies between 20% and 50% among the programs we tested; and variations in input affect the error sensitivity significantly for most of the tested programs.
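The definitions of error sensitivity and SDC count can be made concrete with a toy campaign. The sketch below is an assumption-laden stand-in for the register- and memory-level campaigns in the thesis: the "program" is a trivial threshold count chosen so that comparisons mask many low-bit flips, and every single-bit injection point in a 32-bit word model is enumerated exhaustively.

```python
def flip_bit(word: int, bit: int) -> int:
    """Inject a single bit-flip error into one data word."""
    return word ^ (1 << bit)

def program(data):
    # Toy workload: count values >= 64; the comparison masks many
    # low-order bit flips, so not every injection causes an SDC.
    return sum(1 for x in data if x >= 64)

data = [3, 7, 42, 100, 255]
golden = program(data)                     # fault-free reference result

# Exhaustive single-bit-flip campaign over every live word and bit position
points = [(i, b) for i in range(len(data)) for b in range(32)]
sdc = 0                                    # SDC count
for i, b in points:
    faulty = list(data)
    faulty[i] = flip_bit(faulty[i], b)
    if program(faulty) != golden:          # silent data corruption
        sdc += 1

# Error sensitivity: conditional probability that a flip causes an SDC
sensitivity = sdc / len(points)
print(f"SDC count = {sdc}, error sensitivity = {sensitivity:.1%}")
```

Even in this toy, the masking effect is visible: flips that do not move a value across the comparison threshold leave the result untouched, which is exactly why sensitivity depends on the program, its input and its compiled form.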
Validation of Ultrahigh Dependability for Software-Based Systems
Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software
The safety case and the lessons learned for the reliability and maintainability case
This paper examines the safety case and the lessons learned for the reliability and maintainability case.
A note on sensitivity analysis for PH approximation (New Developments on Mathematical Decision Making Under Uncertainty)
This paper presents a moment-based approximation for model dependability when the uncertainty of model parameters is considered. The propagation of uncertainty in the model parameters can be estimated by regarding the parameters as random variables. However, statistical models often involve non-exponential distributions such as the Weibull distribution, which leads to a high computational cost for uncertainty analysis. In this paper, we focus on the phase-type (PH) distribution to overcome the computational difficulty for models containing Weibull distributions.
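A minimal version of moment-based PH fitting can be sketched as follows. This is an illustration under simplifying assumptions, not the paper's method: it matches the first two moments of a Weibull distribution with an Erlang distribution, the simplest member of the PH family, which only works when the squared coefficient of variation is at most one.

```python
import math

# Raw moments of Weibull(shape, scale): E[X^r] = scale^r * Gamma(1 + r/shape)
def weibull_moments(shape: float, scale: float):
    m1 = scale * math.gamma(1 + 1 / shape)
    m2 = scale**2 * math.gamma(1 + 2 / shape)
    return m1, m2

# Two-moment Erlang fit: pick the number of phases from the squared
# coefficient of variation (an Erlang-k has cv^2 = 1/k), then set the
# per-phase rate so the mean matches.
def fit_erlang(m1: float, m2: float):
    cv2 = (m2 - m1**2) / m1**2      # squared coefficient of variation
    k = max(1, round(1 / cv2))      # number of phases (requires cv2 <= 1)
    rate = k / m1                   # Erlang-k mean is k / rate
    return k, rate

m1, m2 = weibull_moments(shape=2.0, scale=1.0)
k, rate = fit_erlang(m1, m2)
print(f"Weibull mean = {m1:.4f}; Erlang fit: k = {k}, rate = {rate:.4f}")
```

Once the Weibull is replaced by a PH distribution, the model becomes Markovian, so uncertainty propagation reduces to matrix computations instead of repeated numerical integration, which is the computational saving the abstract refers to.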
Experimental analysis of computer system dependability
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
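Importance sampling, mentioned above as an accelerator for Monte Carlo simulation, is easy to demonstrate on a toy rare-event problem. The example below is an assumption (a shifted Gaussian, not a dependability model from the survey): it estimates $P(X > 5)$ for $X \sim N(0,1)$ by sampling from the tilted proposal $N(5,1)$ and reweighting by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(42)
n, a = 100_000, 5.0

# Naive Monte Carlo: the event {X > 5} has probability ~2.9e-7, so almost
# no standard-normal samples ever land in the rare region.
naive = (rng.standard_normal(n) > a).mean()

# Importance sampling: draw from N(a, 1) so hits are common, then weight
# each sample by the likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a^2/2).
y = rng.standard_normal(n) + a
w = np.exp(-a * y + a**2 / 2)
is_est = np.where(y > a, w, 0.0).mean()

print(f"naive = {naive:.2e}, importance sampling = {is_est:.3e}")
```

With the proposal centered on the rare region, a hundred thousand samples give a tight estimate of a probability near $3 \times 10^{-7}$, whereas the naive estimator typically returns zero at this sample size; the same variance-reduction idea underlies fast simulation of rare system failures.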
Formalising Engineering Judgement on Software Dependability via Belief Networks