
    Predicting Failure times for some Unobserved Events with Application to Real-Life Data

    This study aims to predict failure times for some unobserved units in lifetime experiments. In some practical situations, the experimenter cannot register the failure times of all units during the experiment; this situation can be described by a recently introduced type of censored data called multiply-hybrid censored data. In this paper, the linear failure rate distribution is shown to fit some real-life data well, and several statistical inference approaches are applied to estimate the distribution parameters. A two-sample prediction approach is then applied to extrapolate a new sample that simulates the observed data, in order to predict the failure times of the unobserved units.
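    The abstract gives no code; as a rough illustration only, the Python sketch below fits a linear failure rate distribution to a hypothetical complete (uncensored) sample by maximum likelihood, assuming hazard h(t) = a + b*t. The data and starting values are invented, and the sketch ignores the paper's multiply-hybrid censoring.

        import numpy as np
        from scipy.optimize import minimize

        # Linear failure rate (LFR) distribution: hazard h(t) = a + b*t,
        # cumulative hazard H(t) = a*t + b*t**2/2, pdf f(t) = h(t)*exp(-H(t)).
        def neg_log_lik(params, t):
            a, b = params
            if a <= 0 or b < 0:          # keep the hazard valid
                return np.inf
            haz = a + b * t
            cum_haz = a * t + 0.5 * b * t**2
            return -(np.log(haz).sum() - cum_haz.sum())

        # Hypothetical failure times (hours), invented for illustration.
        t = np.array([12.1, 15.3, 20.7, 24.9, 30.2, 33.8, 41.5, 47.0])

        res = minimize(neg_log_lik, x0=[0.01, 0.001], args=(t,), method="Nelder-Mead")
        a_hat, b_hat = res.x

        # Median lifetime from the fit: solve H(t) = log(2) for t.
        if b_hat > 1e-12:
            t_med = (-a_hat + np.sqrt(a_hat**2 + 2 * b_hat * np.log(2))) / b_hat
        else:
            t_med = np.log(2) / a_hat    # exponential limit when b = 0
        print(f"MLE: a = {a_hat:.4f}, b = {b_hat:.5f}, median = {t_med:.1f}")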

    A Simulation of Type I Right-Censored Data with the Weibull Distribution

    In the maintenance and reliability field, analyses frequently involve censored data. Many reliability articles rely on simulation, but few explain how it is done. The loss of information resulting from unavailable exact failure times negatively impacts the efficiency of reliability analysis. This paper presents four different algorithms for generating random data with different numbers of censored values. The four algorithms are compared, and three criteria are used to select the best one. The Weibull distribution is used to generate the random numbers because it is one of the most widely used distributions in reliability studies. The results of the chosen algorithm are very relevant: with a sample size of n = 50 and m = 1000 simulation cycles, the standard deviation is highest when the Weibull shape factor is beta = 0.5 and slowly decreases until the shape factor equals 5. The percentage error (PE), one of the selected indicators, is much higher when the percentage of censored data is c = 5% and decreases as the shape factor increases. The behaviour differs when the censored-data percentage is c = 20%, where the percentage error (PE) peaks at a shape factor of beta = 1.5. This article presents the algorithm considered best for simulating Type I right-censored data. The algorithm has excellent accuracy, produces i.i.d. random data, and has excellent computational performance.
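    As a concrete illustration of one possible algorithm (a plausible reconstruction, not necessarily any of the four compared in the paper), the Python sketch below generates Type I right-censored Weibull data, choosing the fixed censoring time so that the expected censored proportion matches a target.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_type1_censored(n, beta, eta, p_cens, rng):
            """Draw n Weibull(shape=beta, scale=eta) lifetimes and apply Type I
            (fixed-time) censoring. The censoring time C is chosen so that
            P(T > C) = p_cens, i.e. C = eta * (-log(p_cens))**(1/beta).
            Returns observed times and an event flag (1 = failure, 0 = censored)."""
            t = eta * rng.weibull(beta, size=n)           # i.i.d. lifetimes
            c = eta * (-np.log(p_cens)) ** (1.0 / beta)   # fixed censoring time
            return np.minimum(t, c), (t <= c).astype(int)

        # Settings echoing the abstract: n = 50, beta = 0.5, 5% censoring.
        obs, ev = simulate_type1_censored(n=50, beta=0.5, eta=100.0, p_cens=0.05, rng=rng)
        print(f"censored fraction: {1 - ev.mean():.2f}")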

    Estimating the Parameter of the Exponential Distribution under Type II Censoring from Fuzzy Data

    The problem of estimating the parameter of the Exponential distribution on the basis of a type II censoring scheme is considered when the available data are in the form of fuzzy numbers. The Bayes estimate of the unknown parameter is obtained using the approximation forms of Lindley (1980) and Tierney and Kadane (1986) under the assumption of a gamma prior. The highest posterior density (HPD) estimate of the parameter of interest is also found. A Monte Carlo simulation is used to compare the performances of the different methods, and a real data set is analysed to illustrate the applicability of the proposed methods.
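    For orientation, the sketch below shows the crisp-data (non-fuzzy) baseline, where the gamma prior is conjugate and the Bayes estimate has a closed form; the paper's fuzzy-data setting loses this conjugacy, which is why the Lindley and Tierney-Kadane approximations are needed. The prior hyperparameters and data are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Type II censoring: the test on n units stops at the r-th failure.
        # For exponential rate theta the likelihood is proportional to
        # theta**r * exp(-theta * W), with W = sum of the r observed failure
        # times + (n - r) * x_(r). A Gamma(alpha, beta) prior (rate form) is
        # conjugate: the posterior is Gamma(alpha + r, beta + W).
        n, r, theta_true = 30, 20, 0.5
        x = np.sort(rng.exponential(1 / theta_true, size=n))[:r]  # first r order stats
        W = x.sum() + (n - r) * x[-1]

        alpha, beta = 2.0, 1.0                  # hypothetical prior
        theta_bayes = (alpha + r) / (beta + W)  # posterior mean (squared-error loss)
        theta_mle = r / W                       # classical MLE, for comparison
        print(f"Bayes estimate: {theta_bayes:.3f}, MLE: {theta_mle:.3f}")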

    Confidence Intervals for the Scaled Half-Logistic Distribution under Progressive Type-II Censoring

    Confidence interval construction for the scale parameter of the half-logistic distribution is considered using four different methods. The first two are based on the asymptotic distribution of the maximum likelihood estimator (MLE) and the log-transformed MLE. The last two are based on a pivotal quantity and a generalized pivotal quantity, respectively. The MLE of the scale parameter is obtained using the expectation-maximization (EM) algorithm. Performance is compared with the confidence intervals proposed by Balakrishnan and Asgharzadeh via coverage probabilities, interval lengths, and coverage-to-length ratios. Simulation results support the efficacy of the proposed approaches.
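    As a simplified illustration of the pivotal-quantity idea (shown here for a complete sample rather than a progressively Type II censored one), the sketch below uses the fact that sigma_hat/sigma is a pivot in a scale family and simulates its distribution at sigma = 1; sample sizes and parameter values are invented.

        import numpy as np
        from scipy.stats import halflogistic

        rng = np.random.default_rng(1)

        def mle_scale(x):
            """MLE of the half-logistic scale (location fixed at 0)."""
            _, scale = halflogistic.fit(x, floc=0)
            return scale

        n, sigma_true = 40, 2.5
        x = halflogistic.rvs(scale=sigma_true, size=n, random_state=rng)
        sigma_hat = mle_scale(x)

        # sigma_hat / sigma has the same distribution as the MLE computed
        # from a sample with sigma = 1, so simulate that distribution.
        pivots = np.array([
            mle_scale(halflogistic.rvs(scale=1.0, size=n, random_state=rng))
            for _ in range(2000)
        ])
        lo, hi = np.quantile(pivots, [0.025, 0.975])
        print(f"95% CI for sigma: ({sigma_hat / hi:.2f}, {sigma_hat / lo:.2f})")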

    Imprecise Statistical Methods for Accelerated Life Testing

    Accelerated Life Testing (ALT) is frequently used to obtain information on the lifespan of devices. Testing items under normal conditions can require a great deal of time and expense; to determine the reliability of devices in a shorter period and at lower cost, ALT can often be used. In ALT, a unit is tested under levels of physical stress (e.g. temperature, voltage, or pressure) greater than the unit will experience under normal operating conditions. Under such stress, units tend to fail more quickly, and statistical inference about the lifetime of the units under normal conditions proceeds via extrapolation based on an ALT model.

    This thesis presents a novel method for statistical inference based on ALT data. The method quantifies uncertainty using imprecise probabilities; in particular, it uses Nonparametric Predictive Inference (NPI) at the normal stress level, combining data from tests at that level with data from higher stress levels that have been transformed to the normal stress level. This is achieved by assuming an ALT model in which the relation between different stress levels is modelled by a simple parametric link function. We derive an interval for the parameter of this link function, based on classical hypothesis tests and the idea that, if data from a higher stress level are transformed to the normal stress level, then the transformed data and the original data from the normal stress level should not be distinguishable.

    We consider two scenarios of the method. First, we present the approach with the assumption of Weibull failure time distributions at each stress level, using the likelihood ratio test to obtain the interval for the parameter of the link function. Secondly, we present the method without an assumed parametric distribution at each stress level, using a nonparametric hypothesis test to obtain the interval. To illustrate the possible use of our new statistical method for ALT data, we present an application to support decisions on warranties. A warranty is a contractual commitment between consumer and producer, in which the latter provides post-sale services in case of product failure. We consider pricing basic warranty contracts based on information from ALT data and the use of our novel imprecise probabilistic statistical method.
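    A rough sketch of the second (nonparametric) scenario follows, assuming a power-law link function t0 = t1 * (s1/s0)**gamma, a common ALT choice but not necessarily the thesis's exact form; the interval for the link parameter gamma is taken as the set of values not rejected by a two-sample Kolmogorov-Smirnov test. All data and stress levels are invented.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(7)

        # Hypothetical failure times at the normal stress level s0 and at a
        # higher level s1; the high-stress lifetimes are shortened by a
        # factor (s1/s0)**(-1.5), i.e. the true link parameter is 1.5.
        s0, s1 = 1.0, 2.0
        t_normal = rng.weibull(2.0, 30) * 100.0
        t_high = rng.weibull(2.0, 30) * 100.0 * (s1 / s0) ** -1.5

        # Keep every gamma for which the transformed high-stress data are
        # statistically indistinguishable from the normal-stress data.
        accepted = [g for g in np.linspace(0.0, 3.0, 301)
                    if ks_2samp(t_normal, t_high * (s1 / s0) ** g).pvalue > 0.05]
        print(f"interval for gamma: [{min(accepted):.2f}, {max(accepted):.2f}]")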

    Component Reliability Estimation From Partially Masked and Censored System Life Data Under Competing Risks.

    This research presents new approaches to estimating component reliability distribution parameters from partially masked and/or censored system life data. Such data are common in continuous production environments. The methods were tested on Monte Carlo simulated data and compared with the only alternative suggested in the literature, which failed to converge on many masked datasets. The new methods produce accurate parameter estimates, particularly at low masking levels, and show little bias.

    The first method ignores masked data, treating such observations as censored. It works well if at least two known-cause failures of each component type have been observed and is particularly useful for analysing datasets of any size with a small fraction of masked observations; it provides quick and accurate estimates. A second method performs well when the number of masked observations is small but forms a significant portion of the dataset and/or when the assumption of independent masking does not hold. The third method provides accurate estimates when the dataset is small but contains a large fraction of masked observations and independent masking is assumed. The latter two methods indicate which component most likely caused each masked system failure, albeit at the price of considerable computation time. The methods were implemented in user-friendly software that can be applied to simulated or real-life data, and an application to real-life industrial data is presented.

    This research shows that masked system life data can be used effectively to estimate component life distribution parameters when such data form a large portion of the dataset and few known failures exist. It also demonstrates that a small fraction of masked data in a dataset can safely be treated as censored observations without much effect on the accuracy of the resulting estimates. These results are important as masked system life data are becoming more prevalent in industrial production environments. The results should be useful in continuous manufacturing environments, e.g. in the petrochemical industry, and will likely also interest the electronics and automotive industries, where masked observations are common.
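    The first method takes a particularly simple form when component lifetimes are exponential; the sketch below is a deliberate simplification of the general setting described above: masked failures are treated as censored for every component, so each component's rate estimate is its count of known-cause failures divided by the total time on test. The data and cause coding are invented.

        import numpy as np

        # System lifetimes for a two-component series system. cause: 0 or 1
        # for a failure attributed to a known component, -1 for a masked
        # failure, -2 for an administratively censored unit.
        times = np.array([120.0, 340.0, 95.0, 410.0, 60.0, 500.0, 210.0, 150.0])
        cause = np.array([0,     1,     0,    -1,    0,    -2,    1,    -1])

        total_time = times.sum()        # every unit contributes its exposure
        for k in (0, 1):
            n_k = (cause == k).sum()    # masked (-1) and censored (-2) excluded
            print(f"component {k}: {n_k} failures, "
                  f"rate = {n_k / total_time:.5f} per hour")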

    Bayesian correction for covariate measurement error: a frequentist evaluation and comparison with regression calibration

    Bayesian approaches for handling covariate measurement error are well established, yet arguably still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm; for others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper we first give an overview of the Bayesian approach to handling covariate measurement error and contrast it with regression calibration (RC), arguably the most commonly adopted approach. We then argue that the Bayesian approach has a number of statistical advantages over RC and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to be generally preferable. We then empirically compare the frequentist properties of RC and the Bayesian approach through simulation studies. Finally, the flexibility of the Bayesian approach in handling both measurement error and missing data is illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
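    For readers unfamiliar with RC, the sketch below shows its simplest form under classical measurement error with a known error variance (assumed known purely for illustration; in practice it is estimated from replicate or validation data): the error-prone covariate is replaced by an estimate of E[X | W] before the outcome model is fitted.

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulate a linear outcome model with an error-prone covariate
        # W = X + U (classical measurement error).
        n, beta0, beta1 = 2000, 1.0, 2.0
        sigma_x2, sigma_u2 = 1.0, 0.5
        x = rng.normal(0.0, np.sqrt(sigma_x2), n)
        w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)
        y = beta0 + beta1 * x + rng.normal(0.0, 1.0, n)

        # Regression calibration: E[X | W] = mu + lam * (W - mu), where
        # lam = var(X) / (var(X) + var(U)) is the reliability ratio.
        lam = sigma_x2 / (sigma_x2 + sigma_u2)
        x_hat = w.mean() + lam * (w - w.mean())

        naive = np.polyfit(w, y, 1)[0]   # attenuated: ~ beta1 * lam
        rc = np.polyfit(x_hat, y, 1)[0]  # approximately unbiased for beta1
        print(f"naive slope: {naive:.2f}, RC slope: {rc:.2f}, true: {beta1}")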