
    Accelerated Destructive Degradation Tests Robust to Distribution Misspecification

    Accelerated repeated-measures degradation tests (ARMDTs) take measurements of degradation or performance on a sample of units over time. In some products, measurements are destructive, leading to accelerated destructive degradation test (ADDT) data. For example, testing an adhesive bond requires breaking the test specimen to measure the strength of the bond. Lognormal and Weibull distributions are often used to describe the distribution of product characteristics in life and degradation tests. When the distribution is misspecified, the lifetime quantile, often the quantity of interest to the practitioner, may differ significantly between these two distributions. In this study, under a specific ADDT, we investigate the bias and variance due to distribution misspecification. We suggest robust test plans under the criterion of minimizing the approximate mean squared error.
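    The misspecification issue described in this abstract can be illustrated with a small simulation (all parameter values below are hypothetical, not taken from the paper): data generated from a Weibull distribution are fit with both Weibull and lognormal models, and a low quantile of interest is compared.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated "strength" data from a Weibull distribution (assumed parameters).
true_shape, true_scale = 2.0, 100.0
data = true_scale * rng.weibull(true_shape, size=200)

# Fit both candidate distributions by maximum likelihood.
wb_shape, _, wb_scale = stats.weibull_min.fit(data, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(data, floc=0)

# A low quantile (e.g. the 0.01 quantile) is often the planning target;
# the two fitted models can disagree substantially there even when both
# fit the bulk of the data well.
p = 0.01
q_weibull = stats.weibull_min.ppf(p, wb_shape, scale=wb_scale)
q_lognorm = stats.lognorm.ppf(p, ln_shape, scale=ln_scale)
print(f"0.01 quantile: Weibull fit {q_weibull:.1f}, lognormal fit {q_lognorm:.1f}")
```

    The gap between the two fitted quantiles is the kind of misspecification bias the robust test plans in this paper are designed to guard against.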

    Methods for planning repeated measures degradation tests

    The failure mechanism of an item can often be linked directly to some sort of degradation process. This degradation process eventually weakens the item, which then induces a failure. As system components have become highly reliable, traditional life tests, where the response is time to failure, provide few or no failures during the life of a study. For such situations, degradation data can sometimes provide more information for assessing the item's reliability. Repeated measures degradation is a form of degradation where the engineers are able to make multiple nondestructive measurements of the item's level of degradation. For some items, however, the degradation rates at nominal use conditions are so low that no meaningful information can be extracted. Thus the engineers use accelerating methods to increase the degradation rate. Before a test can be performed, the engineers need to know the number of items to test, the points in time at which to make the measurements, and the values of the accelerating variable to which the units should be exposed in order to achieve the best estimation precision possible. In this thesis we study test planning methods for designing repeated measures degradation and accelerated degradation tests. First, Chapter 2 provides methods for selecting the number of units and the number of measurements per unit for repeated measures degradation tests without acceleration. Selection of these testing parameters is based on the asymptotic standard error of an estimator of a function of the model parameters. These methods can also be used to assess how the estimation precision changes as a function of the number of units and measurements per item. Chapter 3 describes methods for planning repeated measures accelerated degradation tests (RMADTs), where the engineers need to know the accelerated conditions at which the items should be tested. Chapter 4 is similar to Chapter 3, but uses a Bayesian approach for planning RMADTs.
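    The trade-off between the number of units and measurements per unit can be sketched with a deliberately simple model (not the thesis's exact model; all variance components are illustrative): a linear degradation path y_ij = (beta + u_i) t_j + e_ij with random slope u_i ~ N(0, s_u^2) and noise e_ij ~ N(0, s_e^2). The per-unit slope estimate then has variance s_u^2 + s_e^2 / sum_j t_j^2, so the estimator of the mean slope beta has standard error sqrt((s_u^2 + s_e^2 / sum_j t_j^2) / n).

```python
import numpy as np

def slope_se(n_units, n_meas, t_max=100.0, s_u=0.02, s_e=0.5):
    """Asymptotic SE of the mean degradation slope (hypothetical parameters)."""
    t = np.linspace(t_max / n_meas, t_max, n_meas)   # equally spaced times
    per_unit_var = s_u**2 + s_e**2 / np.sum(t**2)
    return np.sqrt(per_unit_var / n_units)

# More units always helps; more measurements per unit helps only until the
# random-slope variance s_u^2 dominates the measurement-error term.
for n, m in [(10, 5), (20, 5), (10, 20), (20, 20)]:
    print(f"n={n:2d}, m={m:2d}: SE = {slope_se(n, m):.4f}")
```

    Evaluating such a formula over a grid of (n, m) pairs is one way to assess, as the abstract describes, how estimation precision changes with test size before any units are committed.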

    New Developments in Planning Accelerated Life Tests

    Accelerated life tests (ALTs) are often used to make timely assessments of the lifetime distribution of materials and components. The goal of many ALTs is estimation of a quantile of a log-location-scale failure time distribution. Much of the previous work on planning accelerated life tests has focused on deriving test-planning methods under a specific log-location-scale distribution. This thesis presents a new approach for computing approximate large-sample variances of maximum likelihood estimators of a quantile of a general log-location-scale distribution with censoring and time-varying stress, based on a cumulative exposure model. This thesis also presents a strategy to develop useful test plans using a small number of test units. We provide an approach to find optimum step-stress accelerated life test plans by using the large-sample approximate variance of the maximum likelihood estimator of a quantile of the failure time distribution at use conditions from a step-stress accelerated life test. In Chapter 2, we show that this approach allows for multi-step stress changes and censoring for general log-location-scale distributions. As an application of this approach, the optimum variance is studied as a function of the shape parameter for both Weibull and lognormal distributions. Graphical comparisons among test plans using step-up, step-down, and constant-stress patterns are also presented. The results show that, depending on the values of the model parameters and the quantile of interest, each of the three test plans can be preferable in terms of optimum variance. In Chapter 3, using sample data from a published paper describing optimum ramp-stress test plans, we show that our approach and the one used in the previous work give the same variance-covariance matrix of the quantile estimator. Then, as an application of this approach, we extend the previous work to a new optimum ramp-stress test plan obtained by simultaneously adjusting the ramp rate and the lower start level of stress. We find that the new optimum test plan can have a smaller variance than that of the optimum ramp-stress test plan previously obtained by adjusting only the ramp rate. We also compare optimum ramp-stress test plans with the more commonly used constant-stress accelerated life test plans. Previous work on planning accelerated life tests has been based on large-sample approximations to evaluate test plan properties. In Chapter 4, we use more accurate simulation methods to investigate the properties of accelerated life tests with small sample sizes, where large-sample approximations might not be expected to be adequate. These properties include the simulated bias and variance for quantiles of the failure-time distribution at use conditions. We focus on using these methods to find practical compromise test plans that use three levels of stress. We also study the effects of not having any failures at test conditions and the effect of using incorrect planning values. We note that the large-sample approximate variance is far from adequate when the probability of zero failures at certain test conditions is not negligible. We suggest a strategy to develop useful test plans using a small number of test units while meeting constraints on the estimation precision and on the probability that there will be zero failures at one or more of the test stress levels.
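    The zero-failure constraint mentioned at the end of this abstract has a simple closed form under assumed planning values: with n units at one stress level and Type I censoring at time t_c, the probability of observing no failures at that level is S(t_c)^n, where S is the Weibull survival function implied by the planning values. A minimal sketch (eta, beta, and t_c below are illustrative assumptions):

```python
import math

def prob_zero_failures(n, t_c, eta, beta):
    """P(no failures by censoring time t_c) for n iid Weibull(beta, eta) units."""
    surv = math.exp(-(t_c / eta) ** beta)
    return surv ** n

# At a low stress level the scale eta is large, so even a modest test can
# easily end with zero failures there, invalidating large-sample variance
# approximations.
for n in (5, 10, 20):
    p0 = prob_zero_failures(n, t_c=1000, eta=3000, beta=1.5)
    print(f"n={n:2d}: P(zero failures) = {p0:.3f}")
```

    A planner can cap this probability at each stress level, which is the kind of constraint the suggested small-sample strategy imposes alongside the precision requirement.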

    Accelerated degradation tests planning with competing failure modes

    Accelerated degradation tests (ADTs) have been widely used to assess the reliability of products with long lifetimes. For many products, environmental stress not only accelerates their degradation rate but also elevates the probability of traumatic shocks. When random traumatic shocks occur during an ADT, it is possible that the degradation measurements cannot be taken afterward, which brings challenges to reliability assessment. In this paper, we propose an ADT optimization approach for products suffering from both degradation failures and random shock failures. The degradation path is modeled by a Wiener process. Under various stress levels, the arrival process of random shocks is assumed to follow a nonhomogeneous Poisson process. Parameters of the acceleration models for both failure modes need to be estimated from the ADT. Three common optimality criteria based on the Fisher information are considered and compared to optimize the ADT plan under a given number of test units and a predetermined test duration. Optimal two- and three-level ADT plans are obtained by numerical methods. We use general equivalence theorems to verify the global optimality of the ADT plans. A numerical example is presented to illustrate the proposed methods. The result shows that the optimal ADT plans in the presence of random shocks differ significantly from traditional ADT plans. Sensitivity analysis is carried out to study the robustness of the optimal ADT plans with respect to changes in the planning inputs.
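    The competing-failure setup this abstract describes can be simulated in a few lines (all parameter values are illustrative, not taken from the paper): a Wiener degradation path with drift races against random shocks from a nonhomogeneous Poisson process, and a unit fails by whichever event comes first.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_unit(drift=0.5, sigma=0.3, threshold=20.0,
                  shock_rate0=0.01, shock_growth=0.02, t_max=100.0, dt=0.1):
    """Return (failure_time, mode): 'degradation', 'shock', or 'censored'."""
    x, t = 0.0, 0.0
    while t < t_max:
        t += dt
        # Wiener degradation increment: N(drift*dt, sigma^2*dt).
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x >= threshold:
            return t, "degradation"
        # NHPP shock intensity lambda(t) = shock_rate0 * exp(shock_growth * t),
        # approximated by a Bernoulli trial on each small interval dt.
        if rng.random() < shock_rate0 * np.exp(shock_growth * t) * dt:
            return t, "shock"
    return t_max, "censored"

results = [simulate_unit() for _ in range(200)]
modes = [m for _, m in results]
print({m: modes.count(m) for m in ("degradation", "shock", "censored")})
```

    Simulations like this are also a practical way to sanity-check a Fisher-information-based optimal plan: the mix of failure modes observed under the plan should carry information about both acceleration models.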

    Models for Data Analysis in Accelerated Reliability Growth

    This work develops new methodologies for analyzing accelerated testing data in the context of a reliability growth program for a complex multi-component system. Each component has multiple failure modes, and the growth program consists of multiple test-fix stages with corrective actions applied at the end of each stage. The first group of methods considers time-to-failure data and test covariates for predicting the final reliability of the system. The time-to-failure of each failure mode is assumed to follow a Weibull distribution with a rate parameter proportional to an acceleration factor. Acceleration factors are specific to each failure mode and test covariates. We develop a Bayesian methodology to analyze the data by assigning a prior distribution to each model parameter, developing a sequential Metropolis-Hastings procedure to sample the posterior distribution of the model parameters, and deriving closed-form expressions to aggregate component reliability information to assess the reliability of the system. The second group of methods considers degradation data for predicting the final reliability of a system. First, we provide a nonparametric methodology for a single degradation process. The methodology utilizes functional data analysis to predict the mean time-to-degradation function and Gaussian processes to capture unit-specific deviations from the mean function. Second, we develop a parametric model for a component with multiple dependent monotone degradation processes. The model considers random effects on the degradation parameters and a parametric life-stress relationship. The assumptions are that degradation increments follow an inverse Gaussian process and that a copula function captures the dependency between them. We develop a Bayesian and a maximum likelihood procedure for estimating the model parameters using a two-stage process: (1) estimate the parameters of the degradation processes as if they were independent, and (2) estimate the parameters of the copula function using the estimated cumulative distribution function of the observed degradation increments as observed data. Simulation studies show the efficacy of the proposed methodologies for analyzing multi-stage reliability growth data.
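    The two-stage estimation idea can be sketched as follows, here using a Gaussian copula with inverse-Gaussian marginals (the copula family and all parameter values are illustrative assumptions, not necessarily the paper's exact model): fit the marginals as if independent, push the data through the fitted CDFs, and estimate the copula parameter from the resulting pseudo-observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulate dependent degradation increments for two processes via a Gaussian
# copula (true correlation 0.7) with inverse-Gaussian marginals.
true_rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=1000)
u = stats.norm.cdf(z)
x1 = stats.invgauss.ppf(u[:, 0], mu=0.5, scale=2.0)
x2 = stats.invgauss.ppf(u[:, 1], mu=1.5, scale=1.0)

# Stage 1: fit each marginal by maximum likelihood, ignoring the dependence.
mu1, _, sc1 = stats.invgauss.fit(x1, floc=0)
mu2, _, sc2 = stats.invgauss.fit(x2, floc=0)

# Stage 2: transform the data with the fitted marginal CDFs and estimate the
# copula correlation from the normal scores of the pseudo-observations.
z1 = stats.norm.ppf(stats.invgauss.cdf(x1, mu1, scale=sc1))
z2 = stats.norm.ppf(stats.invgauss.cdf(x2, mu2, scale=sc2))
rho_hat = np.corrcoef(z1, z2)[0, 1]
print(f"estimated copula correlation: {rho_hat:.3f} (true {true_rho})")
```

    Stage 1 here mirrors step (1) of the abstract, and stage 2 mirrors step (2): the estimated marginal CDF values play the role of observed data for the copula fit.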

    Planning and inference of sequential accelerated life tests

    Ph.D. (Doctor of Philosophy)

    Development of a Prognostic Method for the Production of Undeclared Enriched Uranium

    As global demand for nuclear energy and threats to nuclear security increase, the need for verification of the peaceful application of nuclear materials and technology also rises. In accordance with the Nuclear Nonproliferation Treaty, the International Atomic Energy Agency is tasked with verification of the declared enrichment activities of member states. Due to the increased cost of inspection and verification of a globally growing nuclear energy industry, remote process monitoring has been proposed as part of a next-generation, information-driven safeguards program. To further enhance this safeguards approach, it is proposed that process monitoring data may be used not only to verify the past but also to anticipate the future via prognostic analysis. While prognostic methods exist for health monitoring of physical processes, the literature lacks methods to predict the outcome of decision-based events, such as the production of undeclared enriched uranium. This dissertation introduces a method to predict the time at which a significant quantity of unaccounted material is expected to be diverted during an enrichment process. This method utilizes a particle filter to model the data and provide a Type III (degradation-based) prognostic estimate of the time to diversion of a significant quantity. Measurement noise for the particle filter is estimated using historical data and may be updated with Bayesian estimates from the analyzed data. Dynamic noise estimates are updated based on observed changes in process data. The reliability of the prognostic model for a given range of data is validated via information complexity scores and goodness-of-fit statistics. The developed prognostic method is tested using data produced from the Oak Ridge Mock Feed and Withdrawal Facility, a 1:100 scale test platform for developing gas centrifuge remote monitoring techniques. Four case studies are considered: no diversion, slow diversion, fast diversion, and intermittent diversion. All intervals of diversion and non-diversion were correctly identified, and the significant quantity diversion time was accurately estimated. A diversion of 0.8 kg over 85 minutes was detected after 10 minutes and, after 46 minutes and 40 seconds, the diversion time was predicted to be 84 minutes and 10 seconds, with an uncertainty of 2 minutes and 52 seconds.
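    The core prognostic idea, a filter tracking the hidden diversion rate and extrapolating forward to a threshold, can be illustrated with a minimal bootstrap particle filter. This is only a sketch of the general technique, not the dissertation's method: the drift rate, noise levels, and threshold below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated observations: material-balance readings with a hidden diversion
# drift (kg per time step) plus measurement noise.
true_drift, meas_sd, n_steps = 0.010, 0.05, 200
truth = np.cumsum(np.full(n_steps, true_drift))
obs = truth + meas_sd * rng.standard_normal(n_steps)

# Bootstrap particle filter over the state (level, drift).
n_p = 2000
level = np.zeros(n_p)
drift = rng.normal(0.0, 0.02, n_p)                   # prior over diversion rate
for y in obs:
    drift += rng.normal(0.0, 0.001, n_p)             # process noise on drift
    level += drift
    w = np.exp(-0.5 * ((y - level) / meas_sd) ** 2)  # Gaussian likelihood
    w = w + 1e-300                                   # guard against underflow
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)             # multinomial resampling
    level, drift = level[idx], drift[idx]

# Prognosis: extrapolate each particle forward to the threshold to obtain a
# distribution over the time at which the threshold would be crossed.
sq_threshold = 8.0                                   # hypothetical threshold, kg
remaining = np.where(drift > 1e-6, (sq_threshold - level) / drift, np.inf)
t_hit = n_steps + remaining
print(f"median predicted step of threshold crossing: {np.median(t_hit):.0f}")
```

    Spreading each particle's extrapolation over the posterior in this way is what yields an uncertainty interval around the predicted diversion time, analogous to the interval quoted in the case-study results.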