
    A Review of Accelerated Test Models

    Engineers in the manufacturing industries have used accelerated test (AT) experiments for many decades. The purpose of AT experiments is to acquire reliability information quickly. Test units of a material, component, subsystem, or entire system are subjected to higher-than-usual levels of one or more accelerating variables such as temperature or stress. The AT results are then used to predict the life of the units at use conditions. The extrapolation is typically justified (correctly or incorrectly) on the basis of physically motivated models or a combination of empirical model fitting with a sufficient amount of previous experience in testing similar units. The need to extrapolate in both time and the accelerating variables generally necessitates the use of fully parametric models. Statisticians have made important contributions in the development of appropriate stochastic models for AT data [typically a distribution for the response and regression relationships between the parameters of this distribution and the accelerating variable(s)], statistical methods for AT planning (choice of accelerating variable levels and allocation of available test units to those levels), and methods of estimation of suitable reliability metrics. This paper provides a review of many of the AT models that have been used successfully in this area. Comment: Published at http://dx.doi.org/10.1214/088342306000000321 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
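
    The modeling idea summarized above (a parametric life distribution whose parameters are linked to the accelerating variable through a regression relationship) can be made concrete with a short sketch. The example below is an assumed illustration only: an Arrhenius-lognormal model fit by maximum likelihood to made-up temperature-accelerated data with right censoring, not a model or data set from the paper.

```python
# Hedged sketch of an accelerated-test model: lognormal life with an
# Arrhenius regression of the location parameter on absolute temperature,
#   mu(temp_K) = beta0 + beta1 * 11605 / temp_K,
# and a constant log-scale sigma.  Censored units enter the likelihood
# through the survival function.
import numpy as np
from scipy import optimize, stats

def neg_log_lik(theta, time, temp_K, failed):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = beta0 + beta1 * 11605.0 / temp_K          # location on the log-time scale
    z = (np.log(time) - mu) / sigma
    # failures contribute the lognormal density, censored units the survival prob.
    ll = np.where(failed,
                  stats.norm.logpdf(z) - np.log(sigma * time),
                  stats.norm.logsf(z))
    return -np.sum(ll)

# made-up example data: hours on test, test temperature (K), failure indicator
time   = np.array([1200., 2100., 3000., 650., 980., 3000.])
temp_K = np.array([393.,  393.,  393.,  423., 423., 353.])
failed = np.array([True,  True,  False, True, True, False])

fit = optimize.minimize(neg_log_lik, x0=[-10.0, 0.6, 0.0],
                        args=(time, temp_K, failed), method="Nelder-Mead")
beta0, beta1, log_sigma = fit.x
# extrapolate the median life to an assumed use temperature of 323 K
mu_use = beta0 + beta1 * 11605.0 / 323.0
print("estimated median life at 323 K (hours):", np.exp(mu_use))
```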

    Big Data and Reliability Applications: The Complexity Dimension

    Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can generate big data that can be used in reliability analysis. Meeker and Hong (2014) focused on large-scale system operating and environment data (i.e., high-frequency multivariate time series data), and provided examples of how to link such data as covariates to traditional reliability responses such as time to failure, time to recurrence of events, and degradation measurements. This paper extends that discussion by focusing on how to use data with complicated structures in reliability analysis. Such data types include high-dimensional sensor data, functional curve data, and image streams. We first review recent developments in those directions, and then discuss how analytical methods can be developed to tackle the challenging aspects that arise from the complexity feature of big data in reliability applications. The use of modern statistical methods such as variable selection, functional data analysis, scalar-on-image regression, spatio-temporal data models, and machine learning techniques is also discussed. Comment: 28 pages, 7 figures.
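
    As a small illustration of one of the directions mentioned above (functional data analysis of sensor curves), the sketch below reduces simulated per-unit operating curves to functional principal component scores that could then serve as covariates in a failure-time or degradation model. The data and setup are entirely made up.

```python
# Minimal sketch (assumed setup): summarize per-unit sensor curves by their
# leading functional principal component (FPC) scores via an SVD of the
# centered curve matrix.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_times = 40, 200
t = np.linspace(0.0, 1.0, n_times)
# made-up usage/environment curves: a common trend plus unit-level variation
curves = (20 + 5 * t
          + rng.normal(0, 1, (n_units, 1)) * np.sin(2 * np.pi * t)
          + rng.normal(0, 0.3, (n_units, n_times)))

centered = curves - curves.mean(axis=0)
# SVD of the centered curves gives the empirical FPC basis and per-unit scores
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * s                          # unit-level FPC scores
explained = s**2 / np.sum(s**2)

print("variance explained by first two FPCs:", explained[:2])
# scores[:, 0], scores[:, 1], ... can be carried forward as covariates that
# link the operating/environment history to time-to-failure or degradation.
```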

    Prediction of remaining life of power transformers based on left truncated and right censored lifetime data

    Prediction of the remaining life of high-voltage power transformers is an important issue for energy companies because of the need for planning maintenance and capital expenditures. Lifetime data for such transformers are complicated because transformer lifetimes can extend over many decades and transformer designs and manufacturing practices have evolved. We were asked to develop statistically based predictions for the lifetimes of an energy company's fleet of high-voltage transmission and distribution transformers. The company's data records begin in 1980, providing information on installation and failure dates of transformers. Although the dataset contains many units that were installed before 1980, there is no information about units that were installed and failed before 1980. Thus, the data are left truncated and right censored. We use a parametric lifetime model to describe the lifetime distribution of individual transformers. We develop a statistical procedure, based on age-adjusted life distributions, for computing a prediction interval for the remaining life of individual transformers now in service. We then extend these ideas to provide predictions and prediction intervals for the cumulative number of failures, over a range of time, for the overall fleet of transformers. Comment: Published at http://dx.doi.org/10.1214/00-AOAS231 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
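
    A minimal sketch of the data structure described above, under assumed details: a Weibull lifetime model fit by maximum likelihood to left-truncated, right-censored records, where a unit installed before 1980 enters the likelihood conditional on having survived to its age at the start of the record. The numbers are illustrative, not the company's data.

```python
# Hedged sketch: Weibull maximum likelihood with left truncation and right
# censoring.  trunc_age is the unit's age when the data record begins (0 for
# units installed after 1980); failures contribute f(t)/S(trunc_age) and
# survivors contribute S(t)/S(trunc_age).
import numpy as np
from scipy import optimize

def neg_log_lik(theta, age, failed, trunc_age):
    log_eta, log_beta = theta
    eta, beta = np.exp(log_eta), np.exp(log_beta)        # Weibull scale, shape
    logS = -(age / eta) ** beta                           # log S(age)
    logS_trunc = -(trunc_age / eta) ** beta               # log S(truncation age)
    logf = np.log(beta / eta) + (beta - 1) * np.log(age / eta) + logS
    ll = np.where(failed, logf, logS) - logS_trunc        # condition on survival to trunc_age
    return -np.sum(ll)

# made-up transformer records: current age or age at failure (years)
age       = np.array([55., 42., 61., 30., 18., 25., 47., 12.])
failed    = np.array([True, False, True, False, True, False, False, False])
trunc_age = np.array([25., 12., 31., 0., 0., 0., 17., 0.])

fit = optimize.minimize(neg_log_lik, x0=[np.log(60.0), np.log(2.0)],
                        args=(age, failed, trunc_age), method="Nelder-Mead")
eta_hat, beta_hat = np.exp(fit.x)
print("Weibull scale (years):", eta_hat, " shape:", beta_hat)
```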

    Probability of Detection in Structural Health Monitoring

    Structural Health Monitoring (SHM) is being proposed to replace traditional scheduled NDE inspections. This raises questions about how to quantify Probability of Detection (POD) and whether the POD statistical methods of MIL-HDBK-1823 can still be used. The answer depends on the application, the nature of the SHM information, and how that information is mapped into a detect/not-detect decision. This talk will outline some of the issues involved in characterizing POD for different kinds of SHM applications.
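
    For readers unfamiliar with the handbook's approach, the sketch below shows a generic hit/miss POD analysis in the spirit of MIL-HDBK-1823: detection probability modeled as a logistic function of log flaw size, with a point estimate of a90. The crack sizes and outcomes are invented, and nothing here is specific to any particular SHM system.

```python
# Generic hit/miss POD sketch (illustrative only): logistic regression of
# detect / no-detect outcomes on log flaw size, then back out the size at
# which the fitted POD curve reaches 90%.
import numpy as np
from scipy import optimize
from scipy.special import expit

def neg_log_lik(theta, x, hit):
    p = expit(theta[0] + theta[1] * x)                 # POD as a logistic in log size
    return -np.sum(hit * np.log(p) + (1 - hit) * np.log1p(-p))

size = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])  # flaw size, mm
hit  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1])    # 1 = detected
x = np.log(size)

fit = optimize.minimize(neg_log_lik, x0=[0.0, 1.0], args=(x, hit),
                        method="Nelder-Mead")
b0, b1 = fit.x
# a90 point estimate; an a90/95 value would additionally need a confidence
# bound (e.g., delta method or likelihood ratio), which is omitted here.
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print("estimated a90 (mm):", a90)
```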

    On the Residual Lifetime of Surviving Components from a Failed Coherent System

    In this paper, we consider the residual lifetimes of the surviving components of a failed coherent system with n independent and identically distributed components, given that exactly r (r < n) components have failed before time t1 (t1 > 0) and the system itself failed at time t2 (t2 > t1). Some aging properties and preservation results for the residual lives of the surviving components of such systems are obtained. Some examples and applications are also given.
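
    In hedged notation (assumed here, not necessarily the authors' exact formulation), the quantity described above can be written as the conditional survival probability of a component that is still alive at the system failure time:

```latex
% Notation sketch only: X_1, ..., X_n are the i.i.d. component lifetimes,
% T is the system lifetime, and N(t) is the number of components failed by
% time t.  The residual life of a surviving component is studied through
\[
  \Pr\bigl( X_i - t_2 > x \,\bigm|\, N(t_1) = r,\ T = t_2,\ X_i > t_2 \bigr),
  \qquad x > 0,\quad 0 < t_1 < t_2,\quad r < n .
\]
```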

    Using Degradation Models to Assess Pipeline Life

    Longitudinal inspections of thickness at particular locations along a pipeline provide useful information to assess the lifetime of the pipeline. In applications with different mechanisms of corrosion processes, we have observed various types of general degradation paths. We present two applications of fitting a degradation model to describe the corrosion initiation and growth behavior in the pipeline. We use a Bayesian approach for parameter estimation for the degradation model. The failure-time and remaining lifetime distributions are derived from the degradation model, and we compute Bayesian estimates and credible intervals of the failure-time and remaining lifetime distributions for both individual segments and an entire pipeline circuit

    Statistical Tools for the Rapid Development & Evaluation of High-Reliability Products

    Today's manufacturers face increasingly intense global competition. To remain profitable, they are challenged to design, develop, test, and manufacture high-reliability products in ever-shorter product-cycle times and, at the same time, remain within stringent cost constraints. Design, manufacturing, and reliability engineers have developed an impressive array of tools for producing reliable products. These tools will continue to be important. However, due to changes in the way that new product concepts are being developed and brought to market, there is a need for change in the usual methods used for design-for-reliability and for reliability testing, assessment, and improvement programs. This tutorial uses a conceptual degradation-based reliability model to describe the role of, and need for, integration of reliability data sources. These sources include accelerated degradation testing, accelerated life testing (for materials and components), accelerated multifactor robust-design experiments and over-stress prototype testing (for subsystems and systems), and the use of field data (especially early-production data) to produce a robust, high-reliability product and to provide a process for continuing improvement of the reliability of existing and future products. Manufacturers need to develop economical and timely methods of obtaining, at each step of the product design and development process, the information needed to meet overall reliability goals. We emphasize the need for intensive, effective upstream testing of product materials, components, and design concepts.
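
    The phrase "conceptual degradation-based reliability model" can be illustrated with a toy example (assumed here, not the tutorial's model): a random-slope degradation path that induces a failure-time distribution when it crosses a fixed threshold.

```python
# Illustrative sketch: degradation path D(t) = b0 + B1 * t with lognormal
# unit-to-unit slopes B1; a unit "fails" when D(t) crosses the threshold Df.
# The induced failure-time distribution is estimated by simulation.
import numpy as np

rng = np.random.default_rng(1)
b0, Df = 0.0, 10.0                                        # intercept and failure threshold
B1 = rng.lognormal(mean=-1.0, sigma=0.5, size=100_000)    # random degradation rates
T = (Df - b0) / B1                                        # threshold-crossing time per unit

for t in (10.0, 20.0, 40.0):
    print(f"P(failure by t={t:g}): {np.mean(T <= t):.3f}")
```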

    Weibull Prediction Intervals for a Future Number of Failures

    This article evaluates the exact coverage probabilities of approximate prediction intervals for the number of failures that will be observed in a future inspection of a sample of units, based only on the results of the first in-service inspection of the sample. The failure time of such units is modeled with a Weibull distribution having a given shape parameter value. We illustrate the use of the procedures with data from a nuclear power plant heat exchanger. The results suggest that the likelihood-based prediction intervals perform better than the alternatives.
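
    A plug-in version of the prediction problem, under assumed details (a given Weibull shape, a scale estimated from the first-inspection failure count, and a naive binomial interval that ignores parameter uncertainty), might look like the sketch below; the point of the paper is precisely that better-calibrated, likelihood-based intervals outperform such naive alternatives.

```python
# Plug-in sketch (assumed details, illustrative data): Weibull model with
# known shape, scale estimated from the first inspection, and a binomial
# prediction for the number of additional failures by a future inspection.
import numpy as np
from scipy import optimize, stats

beta = 2.0                 # given Weibull shape parameter
n, y1 = 100, 3             # units inspected, failures found at first inspection
t1, t2 = 10.0, 20.0        # first and future inspection times (years)

# estimate the Weibull scale eta by matching the observed failure fraction at t1
F = lambda t, eta: 1.0 - np.exp(-(t / eta) ** beta)
eta_hat = optimize.brentq(lambda eta: F(t1, eta) - y1 / n, 1e-3, 1e6)

# conditional probability that a unit surviving t1 fails by t2
p = (F(t2, eta_hat) - F(t1, eta_hat)) / (1.0 - F(t1, eta_hat))
n_surv = n - y1

# naive plug-in prediction interval for the future number of failures
lower = stats.binom.ppf(0.025, n_surv, p)
upper = stats.binom.ppf(0.975, n_surv, p)
print(f"point prediction: {n_surv * p:.1f}, naive 95% interval: [{lower:.0f}, {upper:.0f}]")
# Note: this interval ignores the uncertainty in eta_hat; calibrated or
# likelihood-based intervals account for it and tend to have better coverage.
```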

    Use of Truncated Regression Methods to Estimate the Shelf Life of a Product from Incomplete Historical Data

    Over a period of time, experiments were conducted to estimate the shelf life of a product. Each trial used a combination of a temperature level and an additive concentration that was used to inhibit spoilage. The policy was to terminate each trial after 270 days, even if the product sample had not yet failed. Particularly at the lower temperatures, some trials ended before the product sample reached the failed state. No records were kept on the number of unfailed samples. Thus, the resulting data were truncated. This paper describes the analysis of the resulting data and the methods that were used to estimate the shelf life distribution of the product.
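
    The key feature of these data is truncation rather than censoring: only samples that failed before the 270-day cutoff were recorded, so each observation contributes f(t)/F(270) to the likelihood. The sketch below illustrates this with an assumed lognormal regression on temperature and additive concentration and invented data, not the data analyzed in the paper.

```python
# Hedged sketch of a right-truncated likelihood: each recorded failure time t
# contributes log f(t) - log F(TAU), where TAU is the 270-day cutoff.  A
# lognormal model with temperature and additive concentration as covariates
# is assumed purely for illustration.
import numpy as np
from scipy import optimize, stats

TAU = 270.0   # truncation time in days

def neg_log_lik(theta, t, temp_C, conc):
    b0, b1, b2, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * temp_C + b2 * conc                  # location of log failure time
    z = (np.log(t) - mu) / sigma
    z_tau = (np.log(TAU) - mu) / sigma
    ll = stats.norm.logpdf(z) - np.log(sigma * t) - stats.norm.logcdf(z_tau)
    return -np.sum(ll)

# made-up recorded failures (days), test temperatures (C), additive conc. (%)
t      = np.array([35., 60., 110., 150., 200., 90., 250., 180.])
temp_C = np.array([50., 50., 40.,  40.,  30.,  50., 30.,  40.])
conc   = np.array([0.5, 1.0, 0.5,  1.0,  0.5,  0.2, 1.0,  0.2])

fit = optimize.minimize(neg_log_lik, x0=[8.0, -0.05, 0.5, 0.0],
                        args=(t, temp_C, conc), method="Nelder-Mead")
print("fitted coefficients (b0, b1, b2, log_sigma):", fit.x)
```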

    Strategy for Planning Accelerated Life Tests with Small Sample Sizes

    Previous work on planning accelerated life tests has been based on large-sample approximations to evaluate test plan properties. In this paper, we use more accurate simulation methods to investigate the properties of accelerated life tests with small sample sizes, where large-sample approximations might not be adequate. These properties include the simulated s-bias and variance for quantiles of the failure-time distribution at use conditions. We focus on using these methods to find practical compromise test plans that use three levels of stress. We also study the effect of having no failures at some test conditions and the effect of using incorrect planning values. We note that the large-sample approximate variance is far from adequate when the probability of zero failures at certain test conditions is not negligible. We suggest a strategy for developing useful test plans that use a small number of test units while meeting constraints on estimation precision and on the probability that there will be zero failures at one or more of the test stress levels.
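
    A bare-bones version of the kind of simulation used for such evaluations might look like the sketch below, which estimates the probability that at least one stress level of a small three-level plan yields zero failures. The model, planning values, and unit allocation are assumed for illustration only.

```python
# Simulation sketch of a small-sample ALT plan (assumed model: lognormal life
# with a location parameter linear in standardized stress).  For each
# simulated experiment, check whether any stress level produced zero failures
# before the censoring time.
import numpy as np

rng = np.random.default_rng(2)
stress  = np.array([1.0, 1.5, 2.0])         # standardized stress levels
n_units = np.array([12, 6, 6])              # allocation of a small sample
b0, b1, sigma = 10.0, -2.0, 0.6             # planning values
censor_time = 1000.0                        # hours on test

n_sim, zero_fail = 5000, 0
for _ in range(n_sim):
    any_zero = False
    for s, n in zip(stress, n_units):
        t = rng.lognormal(mean=b0 + b1 * s, sigma=sigma, size=n)
        if np.all(t > censor_time):         # no failures at this stress level
            any_zero = True
    zero_fail += any_zero

print("P(zero failures at some stress level) ~", zero_fail / n_sim)
```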