
    Statistical Methods for Estimating the Minimum Thickness Along a Pipeline

    Pipeline integrity is important because leaks can result in serious economic or environmental losses. Inspection information from a sample of locations along the pipeline can be used to estimate corrosion levels. The traditional parametric approach to this problem is to estimate the parameters of a specified corrosion distribution and then use those parameters to estimate the minimum thickness in the pipeline. Inferences from this method, however, are highly sensitive to the distributional assumption. Extreme value modeling provides more robust estimation when a sufficient amount of data is available; for example, the block-minima method yields more robust estimates of the minimum thickness in a pipeline. To use the block-minima method, however, one must carefully choose the size of the blocks used in the analysis. In this article, we use simulation to compare the properties of different models for estimating minimum pipeline thickness, investigate the effect of using different block sizes, and illustrate the methods using pipeline inspection data.
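
    As a rough illustration of the block-minima idea (with simulated readings and an arbitrary block size, not the article's data or settings), one can take per-block minima of wall-thickness measurements, fit a generalized extreme value model to them via negation, and read off an estimate of the minimum over the whole pipeline:

```python
# Minimal sketch of the block-minima approach; data, block size, and
# distributions are illustrative assumptions, not the article's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
thickness = rng.normal(loc=10.0, scale=0.5, size=2000)  # hypothetical wall-thickness readings (mm)

block_size = 50                           # the choice of block size matters, as the abstract notes
blocks = thickness.reshape(-1, block_size)  # assumes the sample size is a multiple of block_size
block_minima = blocks.min(axis=1)

# The GEV distribution for minima of X is the GEV for maxima of -X,
# so fit a standard (maxima) GEV to the negated block minima.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(-block_minima)

# Distribution of the minimum over all m blocks (assuming independent blocks):
# P(pipeline min <= t) = 1 - (1 - F_block(t))^m, where F_block(t) = P(block min <= t).
m = block_minima.size
t_grid = np.linspace(block_minima.min() - 1.0, block_minima.max(), 400)
F_block = stats.genextreme.sf(-t_grid, c_hat, loc=loc_hat, scale=scale_hat)  # = P(block min <= t)
F_pipeline = 1.0 - (1.0 - F_block) ** m

# Median of the estimated pipeline-minimum distribution as a point estimate.
est_min = t_grid[np.searchsorted(F_pipeline, 0.5)]
print(f"estimated pipeline minimum thickness: {est_min:.2f} mm")
```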

    Statistical Prediction Based on Censored Life Data


    Product Component Genealogy Modeling and Field‐failure Prediction

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or to respond to changes in the availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, yielding more accurate field-failure predictions. This paper presents methods for parameter estimation and prediction under this generational model and compares them with existing methods through simulation. Our results indicate that the generational model has important practical advantages and outperforms existing methods in predicting field failures.
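
    A toy sketch of the underlying point (hypothetical Weibull parameters; the paper's generational competing-risks model is richer than this): fitting lifetimes pooled across part-number generations can mask a real reliability difference that generation-specific fits recover.

```python
# Toy illustration only: two "generations" of a component with different
# (made-up) Weibull lifetimes, fitted per generation versus pooled.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
gen1 = stats.weibull_min.rvs(1.5, scale=800.0, size=300, random_state=rng)   # older part number
gen2 = stats.weibull_min.rvs(1.5, scale=1500.0, size=300, random_state=rng)  # improved part number

pooled = np.concatenate([gen1, gen2])

# Fit Weibull (shape, scale) with the location fixed at zero; [::2] drops the location.
print("pooled fit (shape, scale):      ", stats.weibull_min.fit(pooled, floc=0)[::2])
print("generation 1 fit (shape, scale):", stats.weibull_min.fit(gen1, floc=0)[::2])
print("generation 2 fit (shape, scale):", stats.weibull_min.fit(gen2, floc=0)[::2])
```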

    Assessing Risk of a Serious Failure Mode Based on Limited Field Data

    Many consumer products are designed and manufactured so that the probability of failure during the technological life of the product is small. Most product units in the field are retired before they fail. Even though the number of failures of such products is small, there is still a need to model and predict field failures for risk assessment in applications that involve safety. Challenges in modeling and predicting failures arise because retirement times are often unknown, few failures have been reported, and there are delays in field-failure reporting. Motivated by an application to assess the risk of failure for a particular product, we develop a statistical prediction procedure that accounts for product retirements and reporting delays. Based on this method, we provide point predictions of the cumulative number of reported failures over a future time period, along with prediction intervals to quantify uncertainty. We also conduct a sensitivity analysis to assess the effects of different assumptions about the failure-time and retirement distributions.
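
    One hedged way to picture such a prediction problem (all distributions below are made-up stand-ins, not the ones estimated in the application) is a Monte Carlo pass over the surviving fleet, in which each unit can fail, retire before failing, or fail but have its report land outside the prediction window because of reporting delay:

```python
# Simplified Monte Carlo sketch of predicting reported failures in a future
# window with retirements and reporting delays; distributions are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n_units = 5000
age_now = rng.uniform(0.0, 5.0, n_units)         # current unit ages in years (hypothetical fleet)
horizon = 2.0                                     # predict over the next two years

fail_dist = stats.weibull_min(2.0, scale=30.0)    # assumed failure-time distribution
retire_dist = stats.weibull_min(3.0, scale=8.0)   # assumed retirement-time distribution
delay_dist = stats.expon(scale=0.25)              # assumed reporting-delay distribution (years)

def one_draw():
    # Draw failure and retirement times conditional on survival to the current age,
    # by inverse-CDF sampling from the left-truncated distributions.
    u = rng.uniform(size=n_units)
    t_fail = fail_dist.ppf(fail_dist.cdf(age_now) + u * fail_dist.sf(age_now))
    u = rng.uniform(size=n_units)
    t_ret = retire_dist.ppf(retire_dist.cdf(age_now) + u * retire_dist.sf(age_now))
    # A failure is counted only if it happens before retirement and its report
    # (failure time plus delay) falls inside the prediction window.
    t_report = t_fail + delay_dist.rvs(size=n_units, random_state=rng)
    reported = (t_fail < t_ret) & (t_report <= age_now + horizon)
    return reported.sum()

draws = np.array([one_draw() for _ in range(500)])
print("point prediction of reported failures:", int(np.median(draws)))
print("approx. 90% prediction interval:", np.percentile(draws, [5, 95]))
```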

    Understanding and Addressing the Unbounded “Likelihood” Problem

    The joint probability density function, evaluated at the observed data, is commonly used as the likelihood function to compute maximum likelihood estimates. For some models, however, there exist paths in the parameter space along which this density-approximation likelihood goes to infinity, and maximum likelihood estimation breaks down. In all applications, observed data are actually discrete because of round-off or grouping error in measurement. The "correct likelihood," based on interval censoring, eliminates the problem of an unbounded likelihood. This article categorizes the models leading to unbounded likelihoods into three groups and illustrates the breakdown of the density approximation with specific examples. Although it is usually possible to infer how the data were rounded, when it is not, one must choose the interval-censoring width, so we study the effect of round-off on estimation. We also give sufficient conditions under which, as the round-off error goes to zero, the joint density yields the same maximum likelihood estimate as the correct likelihood.
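
    A small numerical illustration of the point (using a two-component normal mixture and an assumed round-off width, not necessarily the article's examples): as one component's standard deviation shrinks toward zero at an observed data point, the density-approximation log-likelihood grows without bound, while the interval-censored version stays bounded.

```python
# Sketch of the unbounded-likelihood problem for a 50/50 mixture of
# N(mu, sigma) and N(0, 1); the data and round-off width delta are assumptions.
import numpy as np
from scipy import stats

x = np.array([0.12, 0.91, 1.45, 2.30, 3.07])   # "observed" data, rounded to 2 decimals
delta = 0.01                                    # assumed round-off (measurement resolution)

def density_loglik(mu, sigma):
    # Density-approximation likelihood: mixture density evaluated at the data.
    f = 0.5 * stats.norm.pdf(x, mu, sigma) + 0.5 * stats.norm.pdf(x, 0.0, 1.0)
    return np.sum(np.log(f))

def interval_loglik(mu, sigma):
    # "Correct" likelihood: probability of each round-off interval under the mixture.
    lo, hi = x - delta / 2, x + delta / 2
    p = 0.5 * (stats.norm.cdf(hi, mu, sigma) - stats.norm.cdf(lo, mu, sigma)) \
      + 0.5 * (stats.norm.cdf(hi, 0.0, 1.0) - stats.norm.cdf(lo, 0.0, 1.0))
    return np.sum(np.log(p))

# Fix mu at one observation and let sigma shrink: the density log-likelihood
# increases without bound, while the interval-censored version stays bounded.
for sigma in [1e-1, 1e-4, 1e-8, 1e-12]:
    print(f"sigma={sigma:.0e}  density={density_loglik(x[0], sigma):8.2f}  "
          f"interval={interval_loglik(x[0], sigma):8.2f}")
```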

    The Number of MCMC Draws Needed to Compute Bayesian Credible Bounds

    In the past 20 years, there has been a staggering increase in the use of Bayesian statistical inference, based on Markov chain Monte Carlo (MCMC) methods, to estimate model parameters and other quantities of interest. This trend exists in virtually all areas of engineering and science. In a typical application, researchers report estimates of parametric functions (e.g., quantiles, probabilities, or predictions of future outcomes) and corresponding intervals obtained from MCMC output. One difficulty with inferential methods based on Monte Carlo (MC) is that reported results may be inaccurate due to MC error. MC error, however, can be made arbitrarily small by increasing the number of MC draws. Most users of MCMC methods seem to rely on indirect diagnostics, trial and error, or guesswork to decide how long to run an MCMC algorithm, and the accuracy of MCMC output is rarely reported. Unless careful analysis is done, reported numerical results may contain digits that are completely meaningless. In this article, we describe an algorithm that gives direct guidance on the number of MCMC draws needed to achieve a desired amount of precision (i.e., a specified number of accurate significant digits) for Bayesian credible interval endpoints.
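
    A back-of-the-envelope sketch of the issue (this is not the article's algorithm): the Monte Carlo error of a credible-bound endpoint can be approximated from batches of the chain, and the draw count scaled up until that error is small relative to the digits one wants to report.

```python
# Rough sketch: batch-based Monte Carlo standard error of a credible-bound
# endpoint (a posterior quantile), used to judge whether more draws are needed.
import numpy as np

rng = np.random.default_rng(11)
draws = rng.normal(loc=2.0, scale=0.3, size=20_000)   # stand-in for MCMC output

def quantile_mcse(x, q, n_batches=20):
    # Compute the quantile within each batch and use the spread of the batch
    # quantiles to approximate the MC error of the full-chain estimate.
    batches = np.array_split(x, n_batches)
    batch_q = np.array([np.quantile(b, q) for b in batches])
    return batch_q.std(ddof=1) / np.sqrt(n_batches)

upper = np.quantile(draws, 0.975)
mcse = quantile_mcse(draws, 0.975)
print(f"upper credible bound ~ {upper:.4f} +/- {mcse:.4f} (MC error)")

# If the MC error is too large for the digits being reported, scale up the
# number of draws: for a fixed chain, MC error shrinks roughly like 1/sqrt(n).
target = 0.001
needed = int(len(draws) * (mcse / target) ** 2)
print("approximate draws needed for MC error of +/-", target, ":", needed)
```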