
    Exponential order statistic models of software reliability growth

    Failure times of a software reliability growth process are modeled as order statistics of independent, non-identically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of exponential order statistic models, and the class contains many further examples. Various characterizations, properties, and examples of this class of models are developed and presented.
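
    As an illustration of this model class, the sketch below simulates one realization: each latent fault has an independent exponential detection time with its own rate, and the observed failure times are the sorted detection times. All rates and counts are hypothetical; the Jelinski-Moranda case is recovered by making all rates identical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_eos_failure_times(rates, rng):
    """Simulate one realization of an exponential order statistic (EOS) model.

    Each latent fault i has an independent exponential detection time with
    its own rate; the observed failure times of the growth process are the
    order statistics (the sorted detection times).
    """
    latent = rng.exponential(scale=1.0 / np.asarray(rates, dtype=float))
    return np.sort(latent)

# Jelinski-Moranda special case: N faults sharing a common rate phi
# (both values are hypothetical).
N, phi = 20, 0.5
jm_times = simulate_eos_failure_times(np.full(N, phi), rng)

# A non-identically distributed example with arbitrary per-fault rates.
het_times = simulate_eos_failure_times(np.linspace(0.1, 2.0, N), rng)
print(jm_times[:3], het_times[:3])
```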

    Optimal Operational Strategies for an Inspected Component - Statement of the Problem

    This is the second report on work on time-dependent probabilities initiated in cooperation between the International Atomic Energy Agency (IAEA) and IIASA in 1990. The treatment of the underlying mathematical model is rather theoretical, but the intent is to cover a broad range of applications. The advantage of the problem formulation is that it also enables the inclusion of monetary considerations connected to risks and to the actions taken to reduce them. The model is formulated so that it can be used for computerized optimization of selected decision variables. Originally, the formulation was motivated by the problem of optimizing test intervals at nuclear power plants; in this paper the non-destructive testing of major components is addressed. The main result of the paper is the formulation of an optimal rule for deciding whether continued operation can be considered safe enough. The decision rule integrates the earlier operational history, safety concerns, and economic considerations. Other applications are also proposed for treatment within the modeling framework; one specific problem is the selection of the most suitable time instant for a major repair or retrofit at a plant. The time horizon of the model can be chosen either short-term, stretching over only a few weeks, or long-term, to encompass the complete lifetime of a repository of spent nuclear fuel.
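
    The report's own formulation is not reproduced here, but a generic, textbook-style sketch conveys the flavour of test-interval optimization: inspection cost is traded against the risk cost of a failure that remains undetected until the next inspection. All parameter names and values below are hypothetical.

```python
import numpy as np

# Hypothetical parameters, not taken from the report:
LAM = 1e-3      # exponential failure rate of the component (per hour)
C_INSP = 50.0   # cost of one inspection
C_DOWN = 5.0    # cost per hour the component is failed but undetected

def expected_cost_rate(T):
    """Long-run expected cost per unit time with inspections every T hours.

    A failure in an interval stays undetected until the next inspection;
    for an exponential failure time the expected undetected duration per
    interval is T - (1 - exp(-LAM*T)) / LAM.
    """
    mean_undetected = T - (1.0 - np.exp(-LAM * T)) / LAM
    return (C_INSP + C_DOWN * mean_undetected) / T

# Crude grid search for the cost-optimal test interval.
T_grid = np.linspace(10.0, 5000.0, 2000)
T_star = T_grid[np.argmin(expected_cost_rate(T_grid))]
print(f"approximately optimal test interval: {T_star:.0f} h")
```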

    Statistical procedures for certification of software systems


    On the Reliability Estimation of Stochastic Binary System

    A stochastic binary system is a multi-component on-off system subject to random independent failures of its components. After potential failures, the state of the system is determined by a logical function (called the structure function) that decides whether the system is operational or not. Stochastic binary systems (SBS) serve as a natural generalization of network reliability analysis, where the goal is to find the probability of correct operation of the system (in terms of connectivity, network diameter, or other measures of success). A particular subclass of interest is stochastic monotone binary systems (SMBS), which are characterized by non-decreasing structure functions. We explore the combinatorics of SBS, which provide building blocks for system reliability estimation, looking at minimal non-operational subsystems, called mincuts. One key concept for understanding the underlying combinatorics of SBS is duality. As methods for exact evaluation take exponential time, we discuss the use of Monte Carlo algorithms. In particular, we discuss the F-Monte Carlo method for estimating the reliability polynomial of homogeneous SBS; the Recursive Variance Reduction (RVR) method for SMBS, which builds upon the efficient determination of mincuts; and three additional methods that combine in different ways the well-known techniques of Permutation Monte Carlo and Splitting. These last three methods are based on a stochastic process called the Creation Process, which endows the otherwise static SBS with a temporal evolution. All the methods are compared on different topologies, showing large efficiency gains over the basic Monte Carlo scheme. Funding: Agencia Nacional de Investigación e Innovación; Math-AMSU.
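
    As a point of reference for the comparisons above, here is a minimal sketch of the basic (crude) Monte Carlo scheme, applied to a standard textbook example, the five-component bridge network. The structure function and elementary reliabilities are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_bridge(x):
    """Structure function of the classic five-component bridge network:
    the system works if any minimal path set {1,2}, {4,5}, {1,3,5} or
    {2,3,4} is fully operational (component i is stored in x[i-1])."""
    a, b, c, d, e = x
    return (a & b) | (d & e) | (a & c & e) | (b & c & d)

def crude_mc_reliability(phi, p, n_samples, rng):
    """Basic Monte Carlo estimate of P(phi(X) = 1) for independent
    Bernoulli(p_i) component states."""
    states = rng.random((n_samples, len(p))) < p
    up = np.fromiter((phi(s) for s in states), dtype=bool, count=n_samples)
    return up.mean()

p = np.full(5, 0.9)  # illustrative elementary reliabilities
print(crude_mc_reliability(phi_bridge, p, 100_000, rng))
```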

    Modeling repairable system failure data using an NHPP reliability growth model.

    Stochastic point processes have been widely used to describe the behaviour of repairable systems. The Crow nonhomogeneous Poisson process (NHPP), often known as the Power Law model, is regarded as one of the best models for repairable systems. The goodness-of-fit test rejects the intensity function of the Power Law model, so the log-linear model was fitted and tested for goodness-of-fit. The Weibull Time-to-Event recurrent neural network (WTTE-RNN) framework, a probabilistic deep learning model for failure data, is also explored. However, we find that the WTTE-RNN framework is appropriate only for failure data with independent and identically distributed interarrival times of successive failures, and so cannot be applied to a nonhomogeneous Poisson process.
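
    For concreteness, the sketch below fits the Power Law (Crow) NHPP to a set of failure times by maximum likelihood, using the standard closed-form estimators for failure-truncated data; the failure times are hypothetical.

```python
import numpy as np

def fit_power_law_nhpp(times):
    """Maximum likelihood fit of the Crow (Power Law) NHPP with intensity
    lambda(t) = lam * beta * t**(beta - 1), assuming failure-truncated
    data (observation stops at the last failure)."""
    t = np.sort(np.asarray(times, dtype=float))
    n, T = len(t), t[-1]
    beta = n / np.sum(np.log(T / t))  # the ln(T/t_n) = 0 term is harmless
    lam = n / T**beta                 # makes the expected count match n
    return lam, beta

# Hypothetical cumulative failure times (hours), for illustration only.
failures = [12.0, 35.0, 71.0, 110.0, 190.0, 280.0, 410.0, 600.0]
lam_hat, beta_hat = fit_power_law_nhpp(failures)
print(f"lam = {lam_hat:.4f}, beta = {beta_hat:.3f} (beta < 1 suggests reliability growth)")
```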

    A Computational Framework for Efficient Reliability Analysis of Complex Networks

    With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies, as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to properly treat all uncertainties. The survival signature is a recent development built to effectively analyse complex networks that far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena, such as dependencies, common causes of failure, and imprecise probabilities, without reevaluating the network structure.

    This cumulative dissertation presents several key improvements to the survival signature ecosystem, focused on the structural evaluation of the system as well as the modelling of component failures. A new method is presented in which (inter)dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families, it is possible to account for varying dependence effects. The graph-based design of vine copulas synergises well with the typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasising the ability to represent complicated scenarios with a range of dependent failure modes.

    The numerical effort required to analytically compute the survival signature is prohibitive for large complex systems. This work presents two methods for the approximation of the survival signature. In the first approach, system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method is able to accurately approximate the survival signature with very small errors while drastically reducing the computational demand. Several simple test systems, as well as two real-world situations, are used to show the accuracy and performance. However, with increasing network size and complexity this technique also reaches its limits. A second method is presented in which the numerical demand is reduced further. Here, instead of approximating the whole survival signature, only a few strategically selected values are computed using Monte Carlo simulation and used to build a surrogate model based on normalised radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model, which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore on the network reliability. Because only a few data points are sufficient to build the interval predictor model, even larger systems can be analysed.

    With the rising complexity of not just the system but also the individual components themselves comes the need for the components to be modelled as subsystems in a system-of-systems approach. A study is presented in which a previously developed framework for resilience decision-making is adapted to multidimensional scenarios where the subsystems are represented by survival signatures. The survival signature of the subsystems can be computed ahead of the resilience analysis thanks to the inherent separation of structural information. This enables an efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function, without reevaluating the system structure.

    In addition to the advancements in the field of the survival signature, this work also presents a new framework for uncertainty quantification, developed as a package in the Julia programming language called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is in constant development, and its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
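
    The separation of structure from probability that the survival signature provides can be illustrated with a small sketch for a toy system with one component type. This is illustrative Python, not code from UncertaintyQuantification.jl, and the structure function is hypothetical.

```python
import numpy as np
from itertools import combinations
from math import comb

def survival_signature(phi, n):
    """Survival signature of a coherent system with n exchangeable
    components of a single type: Phi[l] is the probability that the
    system works given that exactly l components work."""
    Phi = np.zeros(n + 1)
    for l in range(n + 1):
        working_sets = combinations(range(n), l)
        Phi[l] = sum(phi(set(c)) for c in working_sets) / comb(n, l)
    return Phi

def reliability_from_signature(Phi, p):
    """System reliability for iid Bernoulli(p) components; only this
    step depends on the probabilistic model, not on the structure."""
    n = len(Phi) - 1
    ls = np.arange(n + 1)
    weights = np.array([comb(n, l) for l in ls]) * p**ls * (1.0 - p)**(n - ls)
    return float(Phi @ weights)

# Toy structure function: components 0 and 1 in series, feeding the
# parallel pair {2, 3}.
phi = lambda up: (0 in up and 1 in up) and (2 in up or 3 in up)
Phi = survival_signature(phi, 4)
print(Phi)                                   # structure evaluated once
print(reliability_from_signature(Phi, 0.9))  # reusable for any p
```

    Once Phi has been computed, the reliability can be re-evaluated for any component probability model without touching the structure function again, which is the separation the abstract emphasises.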