418 research outputs found

    Predicting Failure Times for some Unobserved Events with Application to Real-Life Data

    This study aims to predict failure times for unobserved units in lifetime experiments. In some practical situations, the experimenter cannot register the failure times of all units during the experiment; such data can be described by a recently introduced type of censored data called multiply-hybrid censored data. In this paper, the linear failure rate distribution is shown to fit some real-life data well, and several statistical inference approaches are applied to estimate the distribution parameters. A two-sample prediction approach is then applied to extrapolate a new sample that simulates the observed data, predicting the failure times of the unobserved units.
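The linear failure rate (LFR) distribution mentioned above has hazard h(t) = a + b*t, which makes it straightforward to simulate by inverse transform: solving a*t + b*t²/2 = E for an Exp(1) variate E is a quadratic in t. A minimal sketch, assuming b > 0 (the function name `rlfr` and the parameter values are illustrative, not from the paper):

```python
import numpy as np

def rlfr(n, a, b, rng=None):
    """Draw n variates from the linear failure rate (LFR) distribution
    with hazard h(t) = a + b*t and survival S(t) = exp(-(a*t + b*t**2/2)),
    via inverse-transform sampling (assumes a >= 0, b > 0)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    e = -np.log1p(-u)  # Exp(1) variates; solve a*t + b*t**2/2 = e for t
    return (-a + np.sqrt(a * a + 2.0 * b * e)) / b

# Quick check of the empirical survival against S(1) = exp(-(a + b/2))
a, b = 0.5, 1.0
x = rlfr(100_000, a, b, rng=42)
emp = (x > 1.0).mean()
theo = np.exp(-(a * 1.0 + b * 1.0 ** 2 / 2.0))
```

With these parameter values the theoretical survival at t = 1 is exp(-1), and the empirical fraction should match it closely at this sample size.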

    A Simulation of Data Censored Right Type I with Weibull Distribution

    In the maintenance and reliability field, analyses frequently involve censored data. Many reliability articles rely on simulation, but few explain how it is done. The loss of information resulting from unavailable exact failure times negatively impacts the efficiency of reliability analysis. This paper presents four different algorithms for generating random data with varying numbers of censored values. The four algorithms are compared, and three parameters are used to select the best one. The Weibull distribution is used to generate the random numbers because it is one of the most widely used distributions in reliability studies. The results of the chosen algorithm are very relevant: with a sample size of n = 50 and m = 1000 simulation cycles, the standard deviation is highest when the Weibull shape factor is beta = 0.5 and slowly decreases until the shape factor equals 5. The percentage error (PE), one of the selected indicators, is much higher when the percentage of censored data is c = 5%, then decreases as the shape factor increases. The behaviour differs when the censored fraction is c = 20%: the percentage error is highest when the shape factor is beta = 1.5. This article presents the algorithm considered best for simulating right-censored type-I data. The chosen algorithm has excellent accuracy, produces i.i.d. random data, and has excellent computational performance.
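As an illustration of the kind of generator the paper compares, right-censored type-I data can be produced by drawing Weibull lifetimes and truncating at a fixed censoring time tau, with tau chosen so the expected censored fraction equals a target c. This is a sketch of one such generator under those assumptions, not the paper's selected algorithm:

```python
import numpy as np

def simulate_type1_censored_weibull(n, beta, eta, c, rng=None):
    """Simulate a right-censored type-I sample from Weibull(shape=beta, scale=eta).
    The censoring time tau is set so the *expected* censored fraction is c:
    P(T > tau) = exp(-(tau/eta)**beta) = c  =>  tau = eta * (-log(c))**(1/beta).
    Returns observed times and an event indicator (1 = failure, 0 = censored)."""
    rng = np.random.default_rng(rng)
    t = eta * rng.weibull(beta, size=n)          # exact (latent) failure times
    tau = eta * (-np.log(c)) ** (1.0 / beta)     # fixed study-end time
    observed = np.minimum(t, tau)
    event = (t <= tau).astype(int)
    return observed, event

obs, ev = simulate_type1_censored_weibull(10_000, beta=1.5, eta=1.0, c=0.20, rng=0)
cens_frac = 1.0 - ev.mean()   # should be close to the target c = 0.20
```

Note that in a type-I scheme the censored count is random (only its expectation is controlled); schemes that fix the exact number of censored values require a different construction, which is part of what the paper's four algorithms differ on.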

    Inference procedures for the piecewise exponential model when the data are arbitrarily censored

    Lifetime data are often subject to complicated censoring mechanisms. In particular, point inspection schedules result in observations for which the exact failure times are known only to fall in an interval. Furthermore, overlapping intervals occur when more than one inspection schedule is employed. While well-known parametric and nonparametric inference procedures exist, the piecewise exponential (PEX) model provides a flexible alternative. The PEX model is characterized by a piecewise-constant hazard function with specified jump points. The jump points may be determined as a function of the data, giving the model a nonparametric interpretation, or according to physical considerations related to the process but independent of the data. Assumptions concerning the shape of the hazard function can be incorporated into the model.
    The EM algorithm provides a useful method of estimation, particularly as the number of hazard jump points increases. Its convergence is guaranteed even when the MLE lies on the boundary of the parameter space. A version of the EM algorithm is used to construct approximate confidence intervals based on inverting the likelihood ratio test statistic. Asymptotic properties of the PEX estimator are given for certain censoring mechanisms. A Monte Carlo study was done to investigate the effect of a constrained hazard function and of the choice of jump points on the resulting estimate of the survival function. The performance of the likelihood-ratio-based confidence intervals is also evaluated.
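The piecewise-constant hazard described above determines the survival function through the integrated hazard, S(t) = exp(-H(t)), where H(t) accumulates each interval's hazard rate over the time spent in that interval. A minimal sketch of that relationship (the interval encoding and names are illustrative, not the paper's notation):

```python
import numpy as np

def pex_survival(t, jumps, lambdas):
    """Survival function of a piecewise exponential (PEX) model.
    `jumps` are the interior hazard jump points (0 < j1 < j2 < ...) and
    `lambdas` the constant hazard on each of the len(jumps)+1 intervals.
    S(t) = exp(-H(t)) with H(t) the integrated (cumulative) hazard."""
    edges = np.concatenate(([0.0], jumps, [np.inf]))
    t = np.asarray(t, dtype=float)
    # time spent in each hazard interval, clipped at t
    spans = np.clip(t[..., None] - edges[:-1], 0.0, edges[1:] - edges[:-1])
    return np.exp(-(spans * np.asarray(lambdas)).sum(axis=-1))

# e.g. hazard 1.0 on [0, 1) and 2.0 on [1, inf): S(2) = exp(-(1 + 2))
s = pex_survival([0.5, 2.0], [1.0], [1.0, 2.0])
```

With no interior jump points this reduces to an ordinary exponential survival function, which is a convenient sanity check on the encoding.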

    Estimating the Parameter of Exponential Distribution under Type II Censoring From Fuzzy Data

    The problem of estimating the parameter of the exponential distribution on the basis of a type II censoring scheme is considered when the available data are in the form of fuzzy numbers. The Bayes estimate of the unknown parameter is obtained using the approximation forms of Lindley (1980) and Tierney and Kadane (1986) under the assumption of a gamma prior. The highest posterior density (HPD) estimate of the parameter of interest is also found. A Monte Carlo simulation is used to compare the performances of the different methods, and a real data set is investigated to illustrate their applicability.
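For crisp (non-fuzzy) type-II censored data, both the MLE and the gamma-prior Bayes estimate of the exponential rate have closed forms; the fuzzy-data setting above replaces this exact likelihood with one built from the observations' membership functions, which is why the Lindley and Tierney–Kadane approximations are needed. A sketch of the crisp baseline, with illustrative prior parameters:

```python
import numpy as np

def exp_type2_estimates(x_observed, n, a=1.0, b=1.0):
    """Estimates of the exponential rate lambda from type-II censored data:
    the r smallest failure times out of n units on test. Returns the MLE
    r / TTT and the Bayes posterior mean under a Gamma(a, b) prior on the
    rate, whose posterior is Gamma(a + r, b + TTT). The prior parameters
    a, b here are illustrative choices, not the paper's."""
    x = np.sort(np.asarray(x_observed, dtype=float))
    r = x.size
    total = x.sum() + (n - r) * x[-1]   # total time on test (TTT)
    mle = r / total
    bayes = (a + r) / (b + total)       # Gamma posterior mean for the rate
    return mle, bayes

rng = np.random.default_rng(1)
n, r, lam = 200, 150, 2.0
sample = np.sort(rng.exponential(1.0 / lam, size=n))[:r]
mle, bayes = exp_type2_estimates(sample, n)
```

With a diffuse prior the two estimates nearly coincide; the gamma prior's conjugacy is what makes the posterior available in closed form here.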

    Confidence Intervals for the Scaled Half-Logistic Distribution under Progressive Type-II Censoring

    Confidence interval construction for the scale parameter of the half-logistic distribution is considered using four different methods. The first two are based on the asymptotic distribution of the maximum likelihood estimator (MLE) and of the log-transformed MLE; the last two are based on a pivotal quantity and a generalized pivotal quantity, respectively. The MLE of the scale parameter is obtained using the expectation-maximization (EM) algorithm. Performance is compared with the confidence intervals proposed by Balakrishnan and Asgharzadeh via coverage probability, interval length, and coverage-to-length ratio. Simulation results support the efficacy of the proposed approaches.
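For intuition about what is being estimated: in the complete-sample case the half-logistic log-likelihood in the scale parameter is unimodal and can be maximized by a direct one-dimensional search; the progressively censored case in the paper requires the EM algorithm instead. A hedged sketch under the complete-sample assumption (names and search bounds are illustrative):

```python
import numpy as np

def halflogistic_scale_mle(x):
    """MLE of the scale parameter sigma of the half-logistic distribution,
    f(x) = (2/sigma) * exp(-x/sigma) / (1 + exp(-x/sigma))**2 for x > 0,
    by ternary search on the (unimodal) log-likelihood. Complete-sample
    case only; progressive type-II censoring needs EM as in the paper."""
    x = np.asarray(x, dtype=float)
    n = x.size

    def negloglik(sigma):
        z = x / sigma
        return -(n * np.log(2.0 / sigma) - z.sum()
                 - 2.0 * np.log1p(np.exp(-z)).sum())

    lo, hi = 1e-6, 10.0 * x.mean()      # bracket chosen generously
    for _ in range(200):                # shrink interval by 1/3 per step
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if negloglik(m1) < negloglik(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

rng = np.random.default_rng(7)
sigma_true = 2.0
data = np.abs(rng.logistic(0.0, sigma_true, size=5000))  # |logistic| is half-logistic
sigma_hat = halflogistic_scale_mle(data)
```

The folded-logistic sampling trick works because the half-logistic is the absolute value of a zero-centered logistic variate with the same scale.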

    Component Reliability Estimation From Partially Masked and Censored System Life Data Under Competing Risks.

    This research presents new approaches to the estimation of component reliability distribution parameters from partially masked and/or censored system life data. Such data are common in continuous production environments. The methods were tested on Monte Carlo simulated data and compared to the only alternative suggested in the literature, which failed to converge on many masked datasets. The new methods produce accurate parameter estimates, particularly at low masking levels, and show little bias.
    The first method ignores masked data and treats them as censored observations. It works well if at least 2 known-cause failures of each component type have been observed and is particularly useful for analysing datasets of any size with a small fraction of masked observations; it provides quick and accurate estimates. A second method performs well when the number of masked observations is small but forms a significant portion of the dataset, and/or when the assumption of independent masking does not hold. The third method provides accurate estimates when the dataset is small but contains a large fraction of masked observations and independent masking is assumed. The latter two methods also indicate which component most likely caused each masked system failure, albeit at the price of much computation time. The methods were implemented in user-friendly software that can be applied to simulated or real-life data, and an application to real-life industrial data is presented.
    This research shows that masked system life data can be used effectively to estimate component life distribution parameters when such data form a large portion of the dataset and few known failures exist. It also demonstrates that a small fraction of masked data in a dataset can safely be treated as censored observations without much effect on the accuracy of the resulting estimates. These results are important as masked system life data are becoming more prevalent in industrial production environments. The research results are gauged to be useful in continuous manufacturing environments, e.g. in the petrochemical industry, and will likely also interest the electronics and automotive industries, where masked observations are common.
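The "treat masked as censored" idea above can be sketched for the simplest case of exponential components in a series system, where each unit's system lifetime contributes exposure to every component and only known-cause failures count as events. The model choice and cause coding here are assumptions for illustration, not the paper's actual estimators:

```python
import numpy as np

def exp_component_rates(times, causes, n_components):
    """MLE of exponential component failure rates from series-system data.
    `causes[i]` is the index of the component that caused failure i, or -1
    when the cause is masked (or the unit is censored). Masked observations
    are treated as censored: they contribute exposure time to every
    component but count as a failure for none, so with a small masked
    fraction the resulting downward bias is correspondingly small."""
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    exposure = times.sum()   # every surviving component accrues the full system time
    return np.array([(causes == k).sum() / exposure
                     for k in range(n_components)])

# Illustrative check: two exponential components with rates 1.0 and 0.5,
# with 10% of failure causes masked at random.
rng = np.random.default_rng(3)
lam = np.array([1.0, 0.5])
t = rng.exponential(1.0 / lam, size=(20_000, 2))   # latent component lifetimes
sys_t = t.min(axis=1)                              # series system fails at the minimum
cause = t.argmin(axis=1)
cause[rng.uniform(size=cause.size) < 0.10] = -1    # mask 10% of causes
est = exp_component_rates(sys_t, cause, 2)
```

With 10% masking the estimates sit roughly 10% below the true rates, illustrating both why the method is safe at small masking fractions and why the abstract's other two methods are needed when masked observations dominate.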