6 research outputs found

    Degradation Models and Implied Lifetime Distributions

    In experiments where failure times are sparse, degradation analysis is useful for estimating failure time distributions in reliability studies. This research investigates the link between a practitioner's selected degradation model and the resulting lifetime model. Simple additive and multiplicative models with single random effects are featured. Results show that seemingly innocuous assumptions about the degradation path create surprising restrictions on the lifetime distribution. These constraints are described in terms of failure rate and distribution classes.
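As a minimal illustration of how a degradation model pins down the lifetime law, the sketch below assumes a multiplicative path D(t) = θ·t with a single lognormal random effect θ and a fixed failure threshold; the implied lifetime T = D_f/θ is then forced to be lognormal. All parameters are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 0.5   # assumed parameters of log(theta) (hypothetical)
D_f = 10.0             # assumed fixed failure threshold

# Multiplicative degradation path D(t) = theta * t with one random effect.
theta = rng.lognormal(mu, sigma, size=100_000)
T = D_f / theta        # a unit fails when D(T) reaches D_f

# log(T) = log(D_f) - log(theta) ~ Normal(log(D_f) - mu, sigma), so the
# chosen path model restricts the lifetime to the lognormal class.
print(np.mean(np.log(T)), np.std(np.log(T)))
```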

    Formal Analysis of Security Metrics and Risk

    Abstract. Security metrics are usually defined informally and, therefore, rigorous analysis of these metrics is a hard task. Such analysis is required to identify the existing relations between security metrics, which try to quantify the same quality: security. Risk, computed as Annualised Loss Expectancy, is often used to give an overall assessment of security as a whole. Risk and security metrics are usually defined separately, and the relations between these indicators have not been considered thoroughly. In this work we fill this gap by providing a formal definition of risk and a formal analysis of the relations between security metrics and risk.
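Annualised Loss Expectancy is conventionally computed as ALE = ARO × SLE (annualised rate of occurrence times single loss expectancy). The snippet below illustrates this standard textbook formulation with made-up figures; it is not the paper's formal definition.

```python
def annualised_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = annualised rate of occurrence * single loss expectancy."""
    return aro * sle

# Hypothetical threat register: name -> (ARO per year, SLE in currency units).
threats = {"phishing": (4.0, 2_500.0), "outage": (0.5, 40_000.0)}

# Aggregate risk over independent threats is the sum of per-threat ALEs.
total_risk = sum(annualised_loss_expectancy(a, s) for a, s in threats.values())
print(total_risk)  # 4*2500 + 0.5*40000 = 30000.0
```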

    Reliability and Condition-Based Maintenance Analysis of Deteriorating Systems Subject to Generalized Mixed Shock Model

    For successful commercialization of evolving devices (e.g., micro-electro-mechanical systems and biomedical devices), there must be new research focusing on reliability models and analysis tools that can assist the manufacturing and maintenance of these devices. These advanced systems may experience multiple failure processes that compete against each other. Two major failure processes are degradation processes (e.g., wear, fatigue, erosion, corrosion) and random shocks. When these failure processes are dependent, predicting the reliability of complex systems is challenging. This research aims to develop reliability models by exploring new aspects of dependency between the competing risks of degradation-based and shock-based failure considering a generalized mixed shock model, and to develop new and effective condition-based maintenance policies based on the developed reliability models. In this research, different aspects of dependency are explored to accurately estimate the reliability of complex systems. When the degradation rate is accelerated as a result of withstanding a particular shock pattern, we develop reliability models with a changing degradation rate for four different shock patterns. When the hard failure threshold reduces due to changes in degradation, we investigate reliability models considering the dependence of the hard failure threshold on the degradation level for two different scenarios. More generally, when the degradation rate and the hard failure threshold can simultaneously transition multiple times, we propose a rich reliability model for a new generalized mixed shock model that combines the extreme shock model, the δ-shock model and the run shock model. This general assumption reflects complex behaviors associated with modern systems and structures that experience multiple sources of external shocks.
Based on the developed reliability models, we introduce new condition-based maintenance strategies that include various maintenance actions (e.g., corrective replacement, preventive replacement, and imperfect repair) to minimize the expected long-run average maintenance cost rate. Decisions on maintenance actions are made based on the health condition of systems, observed through periodic inspection. The reliability and maintenance models developed in this research can provide timely and effective tools for decision-makers in manufacturing to economically optimize operational decisions for improving reliability, quality and productivity.

    Quantitative Evaluation and Reevaluation of Security in Services

    Services are software components or systems designed to support interoperable machine- or application-oriented interaction over a network. The popularity of services is growing because they are easily accessible, very flexible, provide rich functionality, and can be composed into more complex services. During service selection, the user considers not only functional requirements on a service but also security requirements. The user would like assurance that the security of the service satisfies the security requirements before starting to use the service, i.e., before the service is granted access to the user's assets. Moreover, the user wants to be sure that the security of the service continues to satisfy the security requirements during exploitation, which may last for a long period. Pursuing these two goals requires the security of the service to be evaluated before exploitation and continuously reevaluated during exploitation. This thesis develops a framework consisting of several quantitative methods for the evaluation and continuous reevaluation of security in services. The methods should help a user to select a service and to control the service's security level during exploitation. The thesis starts with a formal model for general quantitative security metrics and for risk that may be used for the evaluation of security in services. Next, we adjust the computation of security metrics with a refined model of an attacker. Then, the thesis proposes a general method for evaluating the security of a complex service composed of several simple services using different security metrics. The method helps to select the most secure design of the complex service. In addition, the thesis describes an approach based on the Usage Control (UCON) model for continuous reevaluation of security in services. Finally, the thesis discusses several strategies for cost-effective decision making in the UCON model.
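One simple way to compare the security of alternative designs of a composite service can be sketched as follows. This is not the thesis's formal metric framework: it assumes independent components, each withstanding an attack with a known probability, with the composite breached if any single component is breached.

```python
from math import prod

def composite_security(p_withstand):
    """P(composite withstands an attack), under the simplifying assumptions
    that components are independent and that breaching any one component
    breaches the composite service."""
    return prod(p_withstand)

# Two hypothetical designs of the same complex service:
design_a = [0.99, 0.95, 0.90]   # one strong and one weak component
design_b = [0.97, 0.97, 0.97]   # uniformly hardened components
best = max((design_a, design_b), key=composite_security)
print(best is design_b)  # True: 0.97**3 > 0.99*0.95*0.90
```

Even this toy model shows why design-level comparison matters: the weakest component dominates a series composition.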

    Risk-based inspection planning of rail infrastructure considering operational resilience

    This research proposes a response model for a disrupted railway track inspection plan. The proposed model takes the form of an active risk-acceptance strategy developed under a disruption risk management framework. The response model entails two components working in series: an integrated Nonlinear Autoregressive with eXogenous input Neural Network (iNARXNN) for predicting track measurement data, alongside a risk-based value measure for valuing its output. The neural network combines Bayesian inference, risk aversion and a data-driven modelling approach to ensure a high standard of predictive ability. Testing on a real dataset indicates that the iNARXNN model provides a mean prediction accuracy of 95%, while successfully preserving data characteristics in both the time and frequency domains. This research also proposes a network-based model that quantifies the value of accepting the iNARXNN's outputs. The value is formulated as the ratio of rescheduling cost to the change in risk level from a missed opportunity to repair a defective track, i.e., late defect detection. The value model demonstrates how the resilience action is useful for determining a rescheduling strategy that has (negative) value when dealing with a disrupted track inspection plan.
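The value measure above can be sketched as a ratio of rescheduling cost to the change in risk level from late defect detection. The function name and all figures below are illustrative assumptions, not the paper's exact formulation.

```python
def acceptance_value(rescheduling_cost: float,
                     risk_before: float, risk_after: float) -> float:
    """Illustrative value ratio: cost of rescheduling the inspection divided
    by the change in risk level from late defect detection."""
    return rescheduling_cost / (risk_before - risk_after)

# Hypothetical figures: rescheduling costs 5,000 and reduces the
# late-detection risk level from 0.30 to 0.05.
print(acceptance_value(5_000.0, 0.30, 0.05))  # 20000.0
```

A higher ratio means each unit of risk removed is bought more expensively, which is how such a measure can flag a rescheduling strategy as having negative value under disruption.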

    Multi-State Reliability Analysis of Nuclear Power Plant Systems

    The probabilistic safety assessment of engineering systems involving high-consequence, low-probability events is stochastic in nature due to uncertainties inherent in the time to an event. The event could be a failure, repair, maintenance or degradation associated with system ageing. Accurate reliability prediction accounting for these uncertainties is a precursor to a sound risk assessment model. Stochastic Markov reliability models have been constructed to quantify basic events in a static fault tree analysis as part of the safety assessment process. The models assume that a system transits through various states and that the time spent in a state is statistically random. The system failure probability estimates of these models, assuming constant transition rates, are extensively utilized in industry to obtain the failure frequency of catastrophic events. An example is the core damage frequency in a nuclear power plant where the initiating event is loss of the cooling system. However, the assumption of constant state transition rates for the analysis of safety-critical systems is debatable because these rates do not properly account for variability in the time to an event. One adverse consequence of this assumption is overly conservative reliability prediction, leading to the addition of unnecessary redundancies in modified versions of prototype designs, excess spare inventory, and an expensive maintenance policy with shorter maintenance intervals. The reason for this discrepancy is that a constant transition rate is always associated with an exponential distribution for the time spent in a state. The subject of this thesis is to develop sophisticated mathematical models that improve predictive capabilities and accurately represent the reliability of an engineering system. The generalization of the Markov process called the semi-Markov process is a well-known stochastic process, yet it is not well explored in the reliability analysis of nuclear power plant systems.
The continuous-time, discrete-state semi-Markov process model is a stochastic process model that describes state transitions through a system of integral equations which can be solved using the trapezoidal rule. The primary objective is to determine the probability of being in each state. This process model allows the time spent in each state to be represented by a suitable non-exponential distribution, thus capturing the variability in the time to an event. When an exponential distribution is assumed for all the state transitions, the model reduces to the standard Markov model. This thesis illustrates the proposed concepts using basic examples and then develops advanced case studies for nuclear cooling systems, piping systems, digital instrumentation and control (I&C) systems, fire modelling and system maintenance. The first case study, on the nuclear component cooling water system (NCCW), shows that the proposed technique can be used to solve a fault tree involving redundant repairable components to yield the initiating event probability quantifying the loss of the cooling system. The time to failure of the pump train is assumed to follow a Weibull distribution, and the resulting system failure probability is validated using a Monte Carlo simulation of the corresponding reliability block diagram. Nuclear piping systems develop flaws, leaks and ruptures due to various underlying damage mechanisms. This thesis presents a general model for evaluating rupture frequencies of such repairable piping systems. The proposed model is able to incorporate the effect of ageing-related degradation of piping systems. Time-dependent rupture frequencies are computed and the influence of inspection intervals on the piping rupture probability is investigated. There is increasing interest worldwide in the installation of digital instrumentation and control systems in nuclear power plants. The main feedwater valve (MFV) controller system is used for regulating the water level in a steam generator.
An existing Markov model in the literature is extended to a semi-Markov model to accurately predict the controller system reliability. The proposed model considers variability in the time to output from the computer to the controller, with intrinsic software and mechanical failures. State-of-the-art time-to-flashover fire models used in the nuclear industry are either based on conservative analytical equations or computationally intensive simulation models. The proposed semi-Markov based case study describes an innovative fire growth model that allows prediction of fire development and containment, including time to flashover. The model considers variability in the time taken to transit from one stage of the fire to the next. The proposed model is a reusable framework that can be of importance to product design engineers and fire safety regulators. Operational unavailability risks being over-estimated when a constant degradation rate is assumed in a slowly ageing system. In the last case study, it is shown that variability in the time to degradation has a pronounced effect on the choice of an effective maintenance policy. The proposed model is able to accurately predict the optimal maintenance interval assuming a non-exponential time to degradation. Further, the model reduces to a binary-state Markov model equivalent to a classic probabilistic risk assessment model if the degradation and maintenance states are eliminated. In summary, variability in the time to an event is not properly captured in existing Markov-type reliability models even though they are stochastic and account for uncertainties. The proposed semi-Markov process models are easy to implement, faster than intensive simulations and accurately model the reliability of engineering systems.
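A two-state sketch of the approach the abstract describes: operating time follows a Weibull distribution, repair is exponential, and the Markov renewal integral equations P_ij(t) = δ_ij(1 − H_i(t)) + Σ_k ∫₀ᵗ q_ik(u) P_kj(t − u) du are discretised with the trapezoidal rule. All parameters and the two-state structure are illustrative, not the thesis's case studies.

```python
import numpy as np

# Illustrative parameters: Weibull(k, lam) operating time, exponential(r) repair.
k, lam, r = 2.0, 100.0, 0.1
h, N = 0.5, 400                      # trapezoidal step size and number of steps
t = h * np.arange(N + 1)

def kernel(u):
    """Semi-Markov kernel densities q_ij(u): state 0 -> 1 on failure,
    state 1 -> 0 on repair completion."""
    q = np.zeros((2, 2))
    q[0, 1] = (k / lam) * (u / lam) ** (k - 1) * np.exp(-(u / lam) ** k)
    q[1, 0] = r * np.exp(-r * u)
    return q

def survival(u):
    """diag(1 - H_i(u)): probability of no transition out of state i by u."""
    return np.diag([np.exp(-(u / lam) ** k), np.exp(-r * u)])

Q = [kernel(u) for u in t]
P = np.zeros((N + 1, 2, 2))          # P[n, i, j] = P_ij(t_n)
P[0] = np.eye(2)
A = np.linalg.inv(np.eye(2) - 0.5 * h * Q[0])   # implicit endpoint u = 0
for n in range(1, N + 1):
    conv = 0.5 * Q[n] @ P[0]                    # trapezoid endpoint u = t_n
    for m in range(1, n):
        conv += Q[m] @ P[n - m]                 # interior quadrature points
    P[n] = A @ (survival(t[n]) + h * conv)

availability = P[-1, 0, 0]           # P(operating at t = 200 | operating at 0)
print(round(availability, 3))
```

Replacing the Weibull kernel with an exponential one makes the sojourn times memoryless, and the scheme then reproduces the standard Markov availability model, consistent with the reduction the abstract notes.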