RESTART Simulation of Non-Markov Consecutive-K-Out-of-N: F Repairable Systems
The reliability of consecutive-k-out-of-n: F repairable systems with (k−1)-step Markov dependence is studied. The model analyzed in this paper is more general than those of previous studies, since both repair times and component lifetimes are random variables following general distributions. The system has a single repair facility, which adopts a priority repair rule based on system failure risk. Since crude simulation has proved inefficient for highly dependable systems, the RESTART method was used to estimate steady-state unavailability, MTBF and unreliability. Probabilities on the order of 10⁻¹⁶ have been accurately estimated with little computational effort. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty in applying the method is finding a suitable function, called the importance function, to define these regions. Given how easily some model assumptions can be changed under RESTART, the importance function used in this paper could be useful for dependability estimation of many other systems.
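To illustrate the splitting idea behind RESTART, here is a minimal Python sketch that estimates a rare-event probability for a toy Markov chain. The model, function names and parameters (step, importance, thresholds, n_splits) are illustrative assumptions, not the paper's repairable-system simulator; genuine RESTART also kills retrials that fall back below the threshold at which they were created, an efficiency device this sketch omits.

```python
import random

def restart_estimate(step, importance, thresholds, x0,
                     n_trials=10_000, n_splits=5, horizon=1_000):
    """Toy multilevel-splitting sketch in the spirit of RESTART.

    Estimates P(importance of the chain reaches thresholds[-1] within
    `horizon` steps). Each upward crossing of a threshold splits the path
    into `n_splits` retrials, each carrying 1/n_splits of the parent's
    weight, so total weight is conserved and the estimator is unbiased.
    """
    total = 0.0
    for _ in range(n_trials):
        # pending (re)trials: (state, weight, next threshold index, steps left)
        stack = [(x0, 1.0, 0, horizon)]
        while stack:
            x, w, lvl, left = stack.pop()
            while left > 0:
                x = step(x)
                left -= 1
                if importance(x) >= thresholds[lvl]:
                    lvl += 1
                    if lvl == len(thresholds):   # rare event observed
                        total += w
                        break
                    w /= n_splits                # split into retrials
                    for _ in range(n_splits - 1):
                        stack.append((x, w, lvl, left))
    return total / n_trials

# Example (hypothetical): a random walk with downward drift rarely exceeds 24
p = restart_estimate(step=lambda x: x + random.choice((1, -1, -1)),
                     importance=lambda x: x,
                     thresholds=[8, 16, 24], x0=0)
```

The importance function here is simply the walk's position; the paper's contribution is precisely the choice of such a function for repairable systems, which crude position-like proxies do not capture.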
Cross-layer system reliability assessment framework for hardware faults
System reliability estimation during early design phases facilitates informed decisions about integrating effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports are excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate yet fast estimation of computing-system reliability. The backbone of the methodology is a component-based Bayesian model, which calculates system reliability from the masking probabilities of individual hardware and software components while accounting for their complex interactions. Our detailed experimental evaluation across different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimates (FIT rates) compared to statistically significant but slow fault-injection campaigns at the microarchitecture level.
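As a rough illustration of how per-layer masking probabilities combine into a system-level FIT rate, consider the following minimal Python sketch. It assumes independent masking across abstraction layers, and all component names and numbers are hypothetical; the paper's Bayesian model is richer, since it captures the interactions between components rather than treating layers as independent derating factors.

```python
def system_fit(raw_fit, vulnerability):
    """Cross-layer derating sketch (illustrative, not the paper's model).

    raw_fit:       component -> raw FIT rate (failures per 1e9 device-hours)
    vulnerability: component -> {layer: probability a fault is NOT masked}
    Assuming independent masking per layer, a component's effective FIT is
    its raw FIT times the product of its per-layer not-masked probabilities;
    the system FIT is the sum over components.
    """
    total = 0.0
    for comp, fit in raw_fit.items():
        p_propagate = 1.0
        for layer_prob in vulnerability[comp].values():
            p_propagate *= layer_prob   # fault survives masking at this layer
        total += fit * p_propagate
    return total

# Hypothetical numbers for two microarchitectural components
raw = {"register_file": 120.0, "L1_cache": 300.0}
vuln = {"register_file": {"circuit": 0.5, "uarch": 0.15, "software": 0.4},
        "L1_cache":      {"circuit": 0.5, "uarch": 0.05, "software": 0.4}}
print(system_fit(raw, vuln))   # 6.6 effective FIT under these assumptions
```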
Metamodel-based importance sampling for structural reliability analysis
Structural reliability methods aim at computing the probability of failure of systems with respect to some prescribed performance functions. In modern engineering such functions usually require running an expensive-to-evaluate computational model (e.g., a finite element model), so simulation methods, which may require a very large number of model runs, cannot be used directly. Surrogate models such as quadratic response surfaces, polynomial chaos expansions or kriging (built from a limited number of runs of the original model) are then introduced as substitutes for the original model to cope with the computational cost. In practice, however, it is almost impossible to quantify the error made by this substitution. In this paper we propose to use a kriging surrogate of the performance function to build a quasi-optimal importance sampling density. The probability of failure is eventually obtained as the product of an augmented probability, computed by substituting the metamodel for the original performance function, and a correction term which ensures that the estimate is unbiased even if the metamodel is not fully accurate. The approach is applied to analytical and finite element reliability problems and proves efficient in problems with up to 100 random variables.
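A minimal sketch of this metamodel-based importance sampling scheme on a hypothetical two-dimensional toy problem is given below (the kriging surrogate comes from scikit-learn; the performance function, kernel and sample sizes are illustrative assumptions, not the paper's test cases). It computes the failure probability as the product P_f ≈ P_ε × α, where P_ε is the augmented probability obtained from the surrogate alone and α is the correction term evaluated with only a few runs of the true performance function.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy problem: X ~ N(0, I_2), failure when g(x) <= 0.
def g(x):                                   # stand-in for a finite element model
    return 4.0 - x[:, 0] - x[:, 1]          # exact P_f = Phi(-4/sqrt(2))

# 1) Kriging surrogate built from a small design of experiments
X_doe = rng.standard_normal((50, 2)) * 2.0
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
gp.fit(X_doe, g(X_doe))

def pi_tilde(x):                            # P[surrogate predicts failure at x]
    mu, sd = gp.predict(x, return_std=True)
    return norm.cdf(-mu / np.maximum(sd, 1e-12))

# 2) Augmented probability P_eps: cheap, surrogate calls only
X_mc = rng.standard_normal((200_000, 2))
w = pi_tilde(X_mc)
P_eps = w.mean()

# 3) Draw from the quasi-optimal density h ∝ pi_tilde * input pdf by
#    weighted resampling from the pool, then correct the surrogate bias
#    with a handful of true-model evaluations
idx = rng.choice(len(X_mc), size=200, p=w / w.sum())
X_is = X_mc[idx]
alpha = np.mean((g(X_is) <= 0) / pi_tilde(X_is))   # correction term

print(P_eps * alpha, norm.cdf(-4 / np.sqrt(2)))    # estimate vs. exact value
```

The correction term is what makes the estimator unbiased even when the kriging surrogate is inaccurate: in expectation, α rescales P_ε back to the true failure probability, at the cost of only 200 true-model runs here.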