    Non-destructive one-shot device testing under step-stress model with Weibull lifetime distributions

    One-shot devices are products or equipment that can be used only once, since they are destroyed when tested. However, the destructiveness assumption may not be necessary in many practical applications, such as assessing the effect of temperature on some electronic components, yielding the so-called non-destructive one-shot devices. Further, one-shot devices generally have a large mean lifetime to failure, so accelerated life tests (ALTs) must be performed for inference. A step-stress ALT shortens the lifetime of the products by progressively increasing the stress level to which units are subjected at pre-specified times. The non-destructive devices are then tested at certain inspection times, and surviving units can continue in the experiment, providing extra information. Classical estimation methods based on the maximum likelihood estimator (MLE) enjoy suitable asymptotic properties, but they lack robustness. In this paper, we develop robust inferential methods for non-destructive one-shot devices based on the popular density power divergence (DPD) for estimation and testing under the step-stress model with Weibull lifetime distributions. We theoretically and empirically examine the asymptotic and robustness properties of the minimum DPD estimators and of Wald-type test statistics based on them. Moreover, we develop robust estimators and confidence intervals for some important lifetime characteristics, namely reliability at certain mission times, distribution quantiles, and the mean lifetime of a device. Finally, we analyze the effect of temperature on three electronic components, solar lights, medium-power silicon bipolar transistors, and LED lights, using real data arising from a step-stress ALT.
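
    A minimal sketch of minimum DPD estimation for a plain i.i.d. Weibull sample (the paper's interval-censored step-stress setting is more involved); the tuning constant alpha = 0.5, the contaminated synthetic data, and all names below are illustrative assumptions, with alpha -> 0 recovering the MLE criterion:

```python
import numpy as np
from scipy import integrate, optimize, stats

def dpd_objective(params, data, alpha):
    """Empirical DPD objective for an i.i.d. Weibull(k, lam) model."""
    k, lam = np.exp(params)                      # log-parametrization keeps both positive
    f = lambda x: stats.weibull_min.pdf(x, k, scale=lam)
    # integral term of the DPD: \int f_theta^{1+alpha}, computed numerically
    J, _ = integrate.quad(lambda x: f(x) ** (1.0 + alpha), 0.0, np.inf)
    return J - (1.0 + 1.0 / alpha) * np.mean(f(data) ** alpha)

# synthetic Weibull sample with 5% gross outliers (illustrative values)
rng = np.random.default_rng(0)
data = np.concatenate([10.0 * rng.weibull(2.0, 95), 60.0 * rng.weibull(2.0, 5)])

res = optimize.minimize(dpd_objective, x0=np.log([1.0, float(np.mean(data))]),
                        args=(data, 0.5), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)                   # downweights the outliers vs. the MLE
```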

    Data Analysis and Experimental Design for Accelerated Life Testing with Heterogeneous Group Effects

    In accelerated life tests (ALTs), complete randomization is hardly achievable because of economic and engineering constraints. Typical experimental protocols such as subsampling or random blocks in ALTs result in a grouped structure, which leads to correlated lifetime observations. In this dissertation, a generalized linear mixed model (GLMM) approach is proposed to analyze ALT data and to find the optimal ALT design while accounting for heterogeneous group effects. Two types of ALTs are demonstrated for data analysis. First, constant-stress ALT (CSALT) data with a Weibull failure time distribution are modeled by a GLMM. The marginal likelihood of the observations is approximated by a quadrature rule, and the maximum likelihood (ML) estimation method is applied in an iterative fashion to estimate the unknown parameters, including the variance component of the random effect. Second, step-stress ALT (SSALT) data with random group effects are analyzed in a similar manner, but under the assumption of exponentially distributed failure times in each stress step. Two parameter estimation methods, from the frequentist and Bayesian points of view, are applied and compared with other traditional models through a simulation study and a real example of heterogeneous SSALT data. The proposed random effect model shows superiority in terms of reducing bias and variance in the estimation of the life-stress relationship. The GLMM approach is particularly useful for the optimal experimental design of ALTs while taking the random group effects into account. Specifically, planning ALTs under a nested design structure with random test chamber effects is studied. A greedy two-phased approach shows that different test chamber assignments to stress conditions substantially impact the estimation of the unknown parameters. Then, the D-optimal test plan with two test chambers is constructed by applying the quasi-likelihood approach. Lastly, the optimal ALT planning is extended to the case of multiple sources of random effects, so that the crossed design structure is considered along with the nested structure.
    Doctoral Dissertation, Industrial Engineering, 201
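
    As a sketch of the quadrature idea for the exponential SSALT-style case, the marginal likelihood of each group can integrate a normal random intercept out by Gauss-Hermite quadrature; the parametrization, node count, and all variable names below are assumptions for illustration, not the dissertation's implementation:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize

z, w = np.polynomial.hermite.hermgauss(20)       # Gauss-Hermite nodes/weights

def neg_marginal_loglik(theta, groups, stress, times):
    """Exponential lifetimes, log-linear stress effect, normal random
    group intercept integrated out numerically."""
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)                    # keeps the variance component positive
    ll = 0.0
    for g in np.unique(groups):
        t, s = times[groups == g], stress[groups == g]
        # conditional log-rate at each node: eta_ij = b0 + b1*s + sqrt(2)*sigma*z_k
        eta = b0 + b1 * s[:, None] + np.sqrt(2.0) * sigma * z[None, :]
        cond = np.sum(eta - np.exp(eta) * t[:, None], axis=0)   # log-lik per node
        ll += logsumexp(cond, b=w) - 0.5 * np.log(np.pi)        # quadrature average
    return -ll

# synthetic grouped data, then ML via a generic optimizer
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(10), 8)
stress = np.tile(np.linspace(0.0, 1.0, 8), 10)
b = rng.normal(0.0, 0.5, 10)[groups]
times = rng.exponential(1.0 / np.exp(-1.0 + 2.0 * stress + b))
fit = minimize(neg_marginal_loglik, x0=[0.0, 1.0, 0.0],
               args=(groups, stress, times), method="Nelder-Mead")
```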

    Contributions to planning and analysis of accelerated testing

    Ph.D. (Doctor of Philosophy)

    Use of Response Surfaces in the Design of a Simple Step Stress Accelerated Test Plan

    In designing accelerated testing plans, cost is a factor that is missing from much of the literature. This paper explicitly considers cost by developing an optimization model whose objective is to minimize the cost of a simple step-stress accelerated test plan. Two methodologies are employed. The first is an optimization approach in which the behavior of a series-parallel hardware system over all stages of testing is quantified using a response surface, and an optimization model is then used to determine the settings of the stresses and failure mode modifications for all stages of testing before testing begins. The second, sequential-stage approach generates a response surface from the data of a completed test stage to determine the settings of the stresses and failure mode modifications for the next stage; this process is then repeated for all stages of testing. When the results of the optimization model were validated through simulation, the model was found to overestimate costs. Taking the simulated optimal settings as the true cost, the sequential approach produced suboptimal results, because each stage of testing narrows the search parameters of a solution. However, the sequential-stage approach was found to have costs similar to those of the optimization model. Although the optimization model yields a better solution, it requires much more data up front, whereas the sequential-stage approach does not require information about the system for all stages prior to testing. If the system's behavior is known for all stages prior to testing, the optimization approach is more advantageous; in most cases, however, information is limited and the sequential-stage approach should be used.
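
    A minimal sketch of the sequential-stage idea: fit a quadratic response surface to the costs observed at the settings tried in a completed stage, then minimize the fitted surface to pick the next stage's settings. The synthetic cost function, data, and bounds below are assumptions, not the paper's system model:

```python
import numpy as np
from scipy.optimize import minimize

def design(X):
    """Full quadratic model in two normalized stress factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(30, 2))          # settings tried in the completed stage
true_cost = 5 + 3 * (X[:, 0] - 0.6) ** 2 + 2 * (X[:, 1] - 0.4) ** 2
y = true_cost + rng.normal(0.0, 0.1, 30)         # observed stage costs (synthetic)

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # least-squares surface fit
surface = lambda x: float(design(np.atleast_2d(x)) @ beta)

nxt = minimize(surface, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
# nxt.x holds the stress settings suggested for the next test stage
```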

    Planning and inference of sequential accelerated life tests

    Ph.D. (Doctor of Philosophy)

    Bayesian accelerated life tests for the Weibull distribution under non-informative priors

    In a competitive world where products are designed to last for long periods of time, obtaining time-to-failure data is both difficult and costly. Hence, for products with high reliability, accelerated life testing is required to obtain relevant life-data quickly. This is done by placing the products under higher-than-use stress levels, thereby causing the products to fail prematurely. Part of the analysis of accelerated life-data requires a life distribution that describes the lifetime of a product at a given stress level and a life-stress relationship, which is some function that describes the way in which the life distribution changes across different stress levels. In this thesis it is assumed that the underlying life distribution is the well-known Weibull distribution, with the shape parameter constant over all stress levels and the scale parameter a log-linear function of stress. The primary objective of this thesis is to obtain estimates from Bayesian analysis, and five types of non-informative prior distributions are considered: Jeffreys' prior, reference priors, the maximal data information prior, the uniform prior, and probability matching priors. Since the associated posterior distributions under all the derived non-informative priors are of an unknown form, the propriety of the posterior distributions is assessed to ensure admissible results. For comparison purposes, estimates obtained via the method of maximum likelihood are also considered. Finding these estimates requires solving non-linear equations, hence the Newton-Raphson algorithm is used. A simulation study based on the time-to-failure of accelerated data is conducted to compare the maximum likelihood and Bayesian estimates. As the Bayesian posterior distributions are analytically intractable, two methods for obtaining Bayesian estimates are considered: Markov chain Monte Carlo methods and Lindley's approximation technique. In the simulation study, the posterior means and root mean squared errors of the estimates are considered under the symmetric squared error loss function and two asymmetric loss functions: the LINEX loss function and the general entropy loss function. Furthermore, the coverage rates of the Bayesian Markov chain Monte Carlo and maximum likelihood estimates are found and compared by their average interval lengths. A case study using a dataset of accelerated time-to-failure of an insulating fluid is considered. The fit of these data to the Weibull distribution is studied and compared to that of other popular life distributions. A full simulation study is conducted to illustrate the convergence of the proper posterior distributions. Both maximum likelihood and Bayesian estimates are found for these data, and the deviance information criterion is used to compare the Bayesian estimates across the prior distributions. The case study concludes with reliability estimates of the data at use-stress levels.
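
    A minimal MCMC sketch for this setup: Weibull lifetimes with constant shape k and scale exp(a + b*s), sampled by random-walk Metropolis under a flat prior on (log k, a, b). The flat prior stands in for the thesis's non-informative choices (a Jeffreys-type prior would add a density term to log_post); step size, starting values, and data are assumptions:

```python
import numpy as np
from scipy.stats import weibull_min

def log_post(theta, t, s):
    """Log-posterior: Weibull(shape k, scale exp(a + b*s)), flat prior on theta."""
    log_k, a, b = theta
    scale = np.exp(a + b * s)                    # log-linear life-stress relationship
    return np.sum(weibull_min.logpdf(t, np.exp(log_k), scale=scale))

def metropolis(t, s, n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, np.log(np.mean(t)), 0.0])   # crude starting values
    lp = log_post(theta, t, s)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=3)       # random-walk proposal
        lp_prop = log_post(prop, t, s)
        if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain                                       # discard burn-in before use

s = np.repeat([1.0, 1.5, 2.0], 20)                     # accelerated stress levels
t = np.random.default_rng(1).weibull(1.5, 60) * np.exp(2.0 - 0.8 * s)
chain = metropolis(t, s)
```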

    Estimation of water surface elevation probabilities and associated damages for the Great Salt Lake

    Rising water surface elevations in perennial terminal lakes threaten major damages to shoreline industrial plants, transportation routes, and wetlands. Falling elevations increase pumping costs for industries extracting minerals from the lake water and reduce the quality of shoreline recreation. The managers of these properties need information on future lake level probabilities for planning, and public agencies need information on both probabilities and damages to determine whether lake level control is justified. Standard methods for estimating flood frequency and damages in riverine areas do not work well for terminal lakes because of the interdependency of annual peaks and the long advance warning and duration of flood events. For these reasons, the methods of operational hydrology were used to simulate lake level and shoreline damage sequences for the Great Salt Lake. Both ARMA(1,0) and ARMA(1,1) models were tried for generating multivariate sequences of precipitation, evaporation, and three river flows for 1937-1977. The multivariate Markov model was the only one able to preserve the historical sequences, but recommendations for improved parameter estimation techniques for the ARMA(1,1) model are made to help future users take better advantage of its theoretically greater ability to preserve hydrologic persistence. The Markov model was used to generate 100 125-year sequences as inputs to a lake water balance model, which used them to generate 125-year lake stage sequences. The generated sequences showed lake level probabilities for current land and water use conditions in the tributary area to be affected by known present conditions for about 35 years, after which they stabilize in a normal distribution with mean 4196.42 and standard deviation 4.56. The one-percent high event has a value of 4207.0 and the one-percent low event 4191.5, and the amount by which these values exceed the forecast stages is indicative of the long-term downward trend in lake stage caused by increasing upstream water use. The model was developed with the capability of estimating how future lake level probabilities would be affected by upstream water development and by pumping water from the lake during high stages into the western desert. Data on damages to 21 cost centers were collected, and a damage simulation model was developed to use them to estimate average annual damages under current conditions and the benefits of lake level control efforts. Average annual damages to the mineral industry, railroads, highways, wetlands, and other properties were estimated at $1,550,000 under current conditions. The computer programs for multivariate stochastic flow generation, lake water level simulation, and damage estimation are reproduced and documented in the appendices. The models will be available for future use in re-estimating probabilities and damages as initial lake stages and lake use conditions change, additional years of input data are collected, and the state of the art in stochastic flow generation is refined.
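
    As a toy sketch of the operational-hydrology workflow (not the report's calibrated multivariate models), one can generate synthetic ARMA(1,1) net-inflow traces and push each through a crude water balance to build an empirical distribution of future stages; every coefficient, unit, and name below is an illustrative assumption:

```python
import numpy as np

def arma11(n, phi, theta, mu, sigma, rng):
    """ARMA(1,1): x_t = mu + phi*(x_{t-1} - mu) + e_t + theta*e_{t-1}."""
    x, x_prev, e_prev = np.empty(n), mu, 0.0
    for t in range(n):
        e = rng.normal(0.0, sigma)
        x[t] = mu + phi * (x_prev - mu) + e + theta * e_prev
        x_prev, e_prev = x[t], e
    return x

rng = np.random.default_rng(0)
stages = np.empty((100, 125))
for i in range(100):                              # 100 traces of 125 years each
    inflow = arma11(125, 0.3, 0.2, 2.0, 0.8, rng) # synthetic annual net inflow
    stage = 4200.0                                # assumed starting elevation, feet
    for t in range(125):
        stage += 0.5 * (inflow[t] - 2.0)          # toy stage response to net inflow
        stages[i, t] = stage
# empirical chance that a trace ends above the report's one-percent high stage
p_high = (stages[:, -1] >= 4207.0).mean()
```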

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructure through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in the WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
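
    A minimal sketch of an attention-based BiLSTM classifier of the kind the paper names; the layer sizes, CSI dimensions (90 subcarriers, 200-sample windows), and preprocessing are not given here, so the values below are assumptions:

```python
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    """BiLSTM over CSI time series with additive attention pooling."""
    def __init__(self, n_subcarriers=90, hidden=128, n_classes=12):
        super().__init__()
        self.lstm = nn.LSTM(n_subcarriers, hidden,
                            batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)      # per-timestep attention score
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                          # x: (batch, time, subcarriers)
        h, _ = self.lstm(x)                        # (batch, time, 2*hidden)
        w = torch.softmax(self.score(h).squeeze(-1), dim=1)   # attention weights
        ctx = (h * w.unsqueeze(-1)).sum(dim=1)     # attention-weighted context
        return self.head(ctx)                      # class logits

# shape check on a dummy CSI batch: 16 windows of 200 samples x 90 subcarriers
logits = ABiLSTM()(torch.randn(16, 200, 90))       # -> (16, 12)
```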