
    Critical Fault-Detecting Time Evaluation in Software with Discrete Compound Poisson Models

    Software developers predict their product's failure rate using reliability growth models that are typically based on nonhomogeneous Poisson processes (NHPPs). In this article, we extend that practice to a nonhomogeneous discrete compound Poisson process, which allows multiple faults of a system to occur at the same time point. Along with traditional reliability metrics such as the average number of failures in a time interval, we propose an alternative reliability index, the critical fault-detecting time, to provide more information for software managers making software quality evaluations and critical market policy decisions. We illustrate the significant potential for improved analysis using wireless failure data as well as simulated data.
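    The extension described above can be sketched by simulating a nonhomogeneous compound Poisson process: candidate failure times come from thinning a homogeneous envelope, and each accepted event carries a discrete batch of faults, so several faults can share one detection time. The decaying intensity and geometric batch distribution below are illustrative assumptions, not the article's fitted model.

```python
import math
import random

def simulate_compound_nhpp(rate_fn, rate_max, horizon, batch_sampler, rng):
    """Nonhomogeneous compound Poisson process via thinning.

    Candidate events arrive at the constant envelope rate rate_max and are
    accepted with probability rate_fn(t) / rate_max, which requires
    rate_fn(t) <= rate_max. Each accepted event carries a discrete batch
    size drawn from batch_sampler (multiple faults at one time point).
    """
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)            # next candidate arrival
        if t > horizon:
            break
        if rng.random() < rate_fn(t) / rate_max:  # thinning step
            events.append((t, batch_sampler(rng)))
    return events

# Hypothetical decaying intensity (reliability growth) and geometric
# batch sizes on {1, 2, ...} via inverse transform; p = 0.6 is arbitrary.
rng = random.Random(42)
events = simulate_compound_nhpp(
    rate_fn=lambda t: 5.0 * math.exp(-0.1 * t),
    rate_max=5.0,
    horizon=50.0,
    batch_sampler=lambda r: 1 + int(math.log(1.0 - r.random()) / math.log(0.6)),
    rng=rng,
)
total_faults = sum(k for _, k in events)  # counts faults, not detection events
```

    Summing batch sizes rather than counting events is what distinguishes the compound process from a plain NHPP fault count.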

    Markov and Semi-Markov Chains, Processes, Systems and Emerging Related Fields

    This book covers a broad range of research results in the field of Markov and semi-Markov chains, processes, systems, and related emerging fields. The authors of the included research papers are well-known researchers in their fields. The book presents the state of the art and ideas for further research for theorists, while also providing directly applicable results for practitioners in diverse areas.

    Modeling Preventive Maintenance in Complex Systems

    This thesis explicitly considers the impact of modeling decisions on the resulting maintenance planning. Incomplete data is common in maintenance planning but is rarely considered explicitly. Robust optimization aims to minimize the impact of uncertainty; here, in contrast, I show how that impact can be explicitly quantified. Doing so allows decision makers to determine whether it is worthwhile to invest in reducing uncertainty about the system or about the effect of maintenance. The thesis consists of two parts.

    Part I uses a case study, based on the US Navy's DDG-51 class of ships, to show how incomplete data arises and how the data can be used to derive models of a system. Analysis of maintenance effort and cost against time suggests that significant effort is expended on numerous small unscheduled maintenance tasks. Some of these corrective tasks are likely the result of deferred maintenance, ultimately decreasing ship reliability. I use a series of graphical tests to identify the underlying failure characteristics of the ship class. The tests suggest that the class follows a renewal process and can be modeled as a single unit, at least for predicting system lifetime.

    Part II considers the impact of uncertainty and modeling decisions on preventive maintenance planning. I review the literature on multi-unit maintenance and provide a conceptual discussion of the impact of deferred maintenance on single- and multi-unit systems. The single-unit assumption can be used without significant loss of accuracy when modeling preventive maintenance decisions, but it leads to underestimating reliability, and hence performance, in multi-unit systems. Next, I consider the two main approaches to modeling maintenance impact, the Kijima Type I and Type II models, and investigate the effect of maintenance level, maintenance interval, and system quality on system lifetime. I quantify the net present value of the system under different maintenance strategies and show how modeling decisions and uncertainty affect how closely the actual system and maintenance policy approach the maximum net present value. Incorrect assumptions about the impact of maintenance on system aging are the most costly, while assumptions about design quality and maintenance level have significant but smaller impacts. In these cases, it is generally better to underestimate quality and to overestimate maintenance level.
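    The Kijima Type I and Type II models mentioned above are virtual-age models of imperfect repair: Type I lets each repair rejuvenate only the age accrued since the last failure, while Type II rejuvenates the whole accumulated age by a factor q. A minimal sketch, assuming a Weibull baseline lifetime (the shape and scale values are illustrative, not the thesis's fitted parameters):

```python
import math
import random

def next_failure_gap(v, beta, eta, rng):
    """Time to next failure given current virtual age v, for a
    Weibull(shape=beta, scale=eta) baseline: inverse-CDF sampling of the
    residual life, i.e. solve S(v + x) / S(v) = u for x."""
    u = 1.0 - rng.random()  # uniform in (0, 1]
    return eta * ((v / eta) ** beta - math.log(u)) ** (1.0 / beta) - v

def simulate_virtual_age(kijima_type, q, beta, eta, n_failures, rng):
    """Simulate successive inter-failure gaps under imperfect repair.

    Type I: v_n = v_{n-1} + q * x_n   (repair affects only the new increment)
    Type II: v_n = q * (v_{n-1} + x_n) (repair rescales the whole age)
    q = 0 recovers good-as-new (renewal); q = 1 recovers bad-as-old.
    """
    v, gaps = 0.0, []
    for _ in range(n_failures):
        x = next_failure_gap(v, beta, eta, rng)
        gaps.append(x)
        v = v + q * x if kijima_type == 1 else q * (v + x)
    return gaps

rng = random.Random(7)
gaps_renewal = simulate_virtual_age(1, 0.0, 2.0, 1000.0, 5, rng)  # good-as-new
gaps_minimal = simulate_virtual_age(2, 1.0, 2.0, 1000.0, 5, rng)  # bad-as-old
```

    Comparing lifetimes under the two update rules for the same q is one way to see how assumptions about maintenance impact drive the modeled system aging.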

    Research reports: 1991 NASA/ASEE Summer Faculty Fellowship Program

    The basic objectives of the programs, which are in their 28th year of operation nationally, are: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA Centers. The faculty fellows spent 10 weeks at MSFC engaged in research projects compatible with their interests and backgrounds, working in collaboration with NASA/MSFC colleagues. This is a compilation of their research reports for summer 1991.

    An investigation of estimation performance for a multivariate Poisson-gamma model with parameter dependency

    Statistical analysis can be overly reliant on naive assumptions of independence between different data-generating processes. Independence assumptions lead to greater uncertainty when estimating the underlying characteristics of processes, because dependency creates an opportunity to boost the effective sample size by incorporating more data into the analysis. However, this assumes the dependency has been appropriately specified; mis-specified dependency can extract misleading information from the data. The main aim of this research is to investigate the impact of incorporating dependency into the analysis. Our motivation is estimating the reliability of items, so we restrict the investigation to homogeneous Poisson processes (HPPs), which can be used to model the rate of occurrence of events such as failures. In an HPP, dependency between rates can arise for numerous reasons: similarity in mechanical designs, failure occurrence due to a common management culture, or comparable failure counts across machines for the same failure modes. Multiple types of dependency are considered: simple linear dependency measured through the Pearson correlation, rank dependencies that capture non-linear relationships, and tail dependencies where the strength of the dependency is stronger in extreme events than in moderate ones.

    Estimating the measure of dependency between correlated processes can be challenging. We ground the research in a Bayes or empirical Bayes inferential framework, where uncertainty in the actual rate of occurrence of a process is modelled with a prior probability distribution. We take the prior to be a Gamma distribution, given its flexibility and its conjugacy with the Poisson process. For dependency modelling between processes we consider copulas, which are a convenient and flexible way of capturing a variety of dependency characteristics between distributions. Together these give a multivariate Poisson-Gamma probability model: the Poisson process captures aleatory uncertainty (the inherent variability in the data), while the Gamma prior describes the epistemic uncertainty. By pooling processes whose underlying mean rates are correlated, we can incorporate data from those processes into the inference and reduce the estimation error.

    Three key research themes are investigated in this thesis. First, we quantify the value of reducing estimation error by incorporating dependency into the analysis, via theoretical analysis and simulation experiments. We show that correctly accounting for dependency can significantly reduce the estimation error; the findings should inform analysts a priori as to whether it is worth pursuing a more complex analysis for which the dependency parameter must be elicited. Second, we examine the consequences of mis-specifying the degree and form of dependency through controlled simulation experiments, showing the relative robustness of different ways of modelling the dependency using copula and Bayesian methods; the findings should inform analysts about the sensitivity of modelling choices. Third, we show how to operationalise different methods for representing dependency through an industry case study. We demonstrate the consequences for a simple decision problem, the provision of spare parts to maintain operation of an industrial process, when dependency between the machines' event rates is appropriately modelled rather than treated as independent.
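    The Poisson-Gamma machinery above can be illustrated with a minimal sketch. The conjugate update shows how pooled evidence shrinks a raw rate towards the prior; the dependent rates are generated here with a shared-Gamma-component construction (a simple stand-in for the thesis's copula models, which require Gamma quantile functions not in the standard library). All parameter values are hypothetical.

```python
import random

def posterior_mean_rate(n_events, exposure, a, b):
    """Conjugate Gamma(a, rate=b) prior with Poisson(lambda * t) counts:
    the posterior is Gamma(a + n, b + t), so the point estimate shrinks
    the raw rate n / t towards the prior mean a / b."""
    return (a + n_events) / (b + exposure)

def correlated_gamma_rates(a, b, rho, rng):
    """Two dependent rates with Gamma(a, rate=b) marginals, built from a
    shared component: lambda_i = G0 + G_i, G0 ~ Gamma(rho * a), giving
    Pearson correlation rho (requires 0 < rho < 1). This mimics, crudely,
    the correlated-prior structure the thesis handles with copulas."""
    scale = 1.0 / b
    shared = rng.gammavariate(rho * a, scale)
    return (shared + rng.gammavariate((1.0 - rho) * a, scale),
            shared + rng.gammavariate((1.0 - rho) * a, scale))

# Shrinkage: prior mean 0.5, raw rate 30/20 = 1.5, posterior in between.
post = posterior_mean_rate(30, 20.0, a=2.0, b=4.0)

# Monte Carlo check that the marginal mean of the dependent rates is a/b.
rng = random.Random(0)
draws = [correlated_gamma_rates(2.0, 4.0, 0.6, rng) for _ in range(20000)]
mean_rate = sum(x for x, _ in draws) / len(draws)
```

    Pooling data across such correlated processes is what reduces the estimation error relative to treating each machine independently.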

    Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk- and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. The document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods needed to implement a comprehensive information repository. The topics addressed cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision-making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).

    Statistical procedures for certification of software systems


    A model for availability growth with application to new generation offshore wind farms

    A model for availability growth is developed to capture the effect of systemic risk prior to construction of a complex system. The model is motivated by new generation offshore wind farms, where investment decisions must be taken before test and operational data are available. We develop a generic model to capture the systemic risks arising from innovation in evolutionary system designs. By modelling the impact of major and minor interventions to mitigate weaknesses and to improve the failure and restoration processes of subassemblies, we are able to measure the growth in availability performance of the system. We describe the choices made in modelling our particular industrial setting using an example for a typical UK Round III offshore wind farm. We obtain point estimates of the expected availability, having populated the simulated model with appropriate judgemental and empirical data. We show the relative impact of modelling systemic risk on system availability performance in comparison with estimates obtained from typical system availability modelling assumptions used in offshore wind applications. While modelling growth in availability is necessary for meaningful decision support in developing complex systems such as offshore wind farms, we also discuss the relative value of explicitly articulating epistemic uncertainties.
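    The mechanism of availability growth through interventions can be sketched with a steady-state alternating-renewal model: each subassembly is up for MTBF on average and down for MTTR, and a design intervention rescales those figures. The subassembly names and all numbers below are hypothetical, not Round III data, and the real model in the paper is a simulation rather than this closed-form series approximation.

```python
def steady_state_availability(mtbf, mttr):
    """Long-run uptime fraction of one subassembly under an alternating
    renewal (failure/restoration) model: MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def system_availability(subassemblies, interventions=()):
    """Availability of a series system of independent subassemblies, each
    given as (MTBF, MTTR) in hours. Each intervention
    (index, mtbf_factor, mttr_factor) rescales one subassembly, mimicking
    a design fix made before construction."""
    adjusted = list(subassemblies)
    for i, mtbf_factor, mttr_factor in interventions:
        mtbf, mttr = adjusted[i]
        adjusted[i] = (mtbf * mtbf_factor, mttr * mttr_factor)
    availability = 1.0
    for mtbf, mttr in adjusted:
        availability *= steady_state_availability(mtbf, mttr)
    return availability

# Hypothetical turbine subassemblies: gearbox, converter, generator.
base = [(4000.0, 200.0), (2500.0, 60.0), (6000.0, 150.0)]
before = system_availability(base)
# A major intervention on the gearbox: fewer failures, faster restoration.
after = system_availability(base, interventions=[(0, 1.5, 0.8)])
```

    Comparing `before` and `after` is the growth step: interventions that improve the failure and restoration processes of weak subassemblies raise the system-level availability estimate.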