
    Improved approximate confidence intervals for censored data

    This dissertation includes three papers. The first paper compares different procedures for computing confidence intervals for parameters and quantiles of the Weibull distribution with Type I censored data. The methods can be classified into three groups. The first group contains methods based on the commonly used normal approximation for the distribution of (possibly transformed) studentized maximum likelihood estimators. The second group contains methods based on the likelihood ratio statistic and its modifications. The methods in the third group use a parametric bootstrap approach, including the use of bootstrap-type simulation to calibrate the procedures in the first two groups. We use Monte Carlo simulation to investigate the finite-sample properties of these procedures. Exceptional cases, which are due to problems caused by Type I censoring, are noted. The second paper extends the results of Jensen (1993) and shows that the distribution of the signed square root likelihood ratio statistic can be approximated by its bootstrap distribution to second-order accuracy when data are censored. Similar results apply to likelihood ratio statistics. Our simulation study, based on Type I censored data and the two-parameter Weibull model, shows that the bootstrap signed square root likelihood ratio statistic and its modification outperform other methods, such as bootstrap-t and BCa, in constructing one-sided confidence bounds. The third paper describes existing methods and develops new methods for constructing simultaneous confidence bands for a cumulative distribution function (cdf). Our results build on extensions of previous work by Cheng and Iles (1983, 1988). A general approach is presented for constructing two-sided simultaneous confidence bands for a continuous parametric model cdf from complete and censored data using standard large-sample approximations, and these bands are then extended and compared to corresponding simulation- or bootstrap-calibrated versions of the same methods. Both two-sided and one-sided simultaneous confidence bands for location-scale parameter models are discussed in detail, including situations with complete and censored data. A simulation for the Weibull distribution and Type I censored data is given. We illustrate the implementation of the methods with an application to estimating the probability of detection (POD) used to assess nondestructive evaluation (NDE) capability.
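
    As a concrete illustration of the first group of methods above, the following sketch (a toy example of mine, not code from the dissertation) fits a two-parameter Weibull model to Type I censored data by maximum likelihood and forms a normal-approximation (Wald) confidence interval for the shape parameter on the log scale. The sample size, censoring time, and use of the BFGS inverse-Hessian for standard errors are simplifying assumptions.

        # A minimal sketch (a toy example, not the dissertation's code) of the first group of
        # methods: a Wald-type interval for the Weibull shape parameter under Type I censoring,
        # based on a normal approximation for the log-transformed maximum likelihood estimator.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        beta_true, eta_true, tau, n = 2.0, 100.0, 120.0, 50
        t = eta_true * rng.weibull(beta_true, n)      # latent failure times
        y = np.minimum(t, tau)                        # observed times, Type I censored at tau
        delta = (t <= tau).astype(float)              # 1 = failure observed, 0 = censored

        def negloglik(theta):
            beta, eta = np.exp(theta)                 # log-parameterization keeps both positive
            z = y / eta
            logf = np.log(beta / eta) + (beta - 1.0) * np.log(z) - z**beta   # log-density
            logS = -z**beta                                                  # log-survival
            return -np.sum(delta * logf + (1.0 - delta) * logS)

        fit = minimize(negloglik, x0=np.log([1.0, y.mean()]), method="BFGS")
        se = np.sqrt(np.diag(fit.hess_inv))           # rough standard errors on the log scale
        wald_ci = np.exp(fit.x[0] + np.array([-1.96, 1.96]) * se[0])
        print("shape MLE:", np.exp(fit.x[0]).round(3), "approximate 95% CI:", wald_ci.round(3))

    Likelihood-ratio and bootstrap-calibrated intervals, the other two groups compared in the first paper, would replace the last two lines with an inversion of the likelihood ratio statistic or a bootstrap recalibration of the same pivot.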

    Small-sample corrections for score tests in Birnbaum-Saunders regressions

    In this paper we deal with the issue of performing accurate small-sample inference in the Birnbaum-Saunders regression model, which can be useful for modeling lifetime or reliability data. We derive a Bartlett-type correction for the score test and numerically compare the corrected test with the usual score test, the likelihood ratio test, and its Bartlett-corrected version. Our simulation results suggest that the corrected test we propose is more reliable than the other tests. (To appear in Communications in Statistics - Theory and Methods, http://www.informaworld.com/smpp/title~content=t71359723)
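
    As general background (this is the standard form of such corrections, not the specific coefficients derived in the paper), a Bartlett-type correction of the score statistic S_R is usually written in the Cordeiro-Ferrari form

        S_R^* = S_R \{ 1 - (c + b\,S_R + a\,S_R^2) \},

    where a, b, and c are O(n^{-1}) quantities built from cumulants of log-likelihood derivatives and are specific to the model under test; the corrected statistic S_R^* is then referred to the usual chi-squared distribution.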

    Local likelihood estimation of complex tail dependence structures, applied to U.S. precipitation extremes

    To disentangle the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity, which allows us to gain flexibility. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed by taking advantage of distributed computing resources and the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach can adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which have important implications for regional risk assessment of extreme precipitation events.
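
    Since the abstract emphasizes the embarrassingly parallel structure of the local estimation, here is a minimal sketch (not the authors' code) of that structure: each grid point is fitted independently from the data in its own neighborhood, so the fits map directly onto a process pool. The function fit_local_model is a placeholder for a local censored-likelihood fit, not the factor copula model itself.

        # A minimal sketch of the embarrassingly parallel local-estimation step: every grid
        # point is fitted independently from the data in its own neighborhood, so the fits map
        # directly onto a process pool.  fit_local_model is a placeholder for a local
        # censored-likelihood fit, not the authors' factor copula model.
        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def fit_local_model(task):
            grid_point, neighborhood_data = task
            # stand-in for maximizing a local censored likelihood over the neighborhood
            theta_hat = float(np.mean(neighborhood_data))
            return grid_point, theta_hat

        def local_fits(grid_points, neighborhoods, max_workers=4):
            tasks = list(zip(grid_points, neighborhoods))
            with ProcessPoolExecutor(max_workers=max_workers) as pool:
                return dict(pool.map(fit_local_model, tasks))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            grid = [(i, j) for i in range(5) for j in range(5)]      # coarse toy grid
            data = [rng.normal(size=100) for _ in grid]              # one set of exceedances per point
            print(len(local_fits(grid, data)), "local fits completed")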

    Computational Methods in Survival Analysis

    Survival analysis is widely used in medical science, pharmaceutics, reliability, financial engineering, and many other fields to analyze positive random phenomena defined by event occurrences of particular interest. In the reliability field, we are concerned with the time to failure of some physical component such as an electronic device or a machine part. This article briefly describes statistical survival techniques developed recently from the standpoint of statistical computational methods, focusing on obtaining good estimates of distribution parameters by simple calculations based on the first moment, on conditional likelihood for eliminating nuisance parameters, and on approximations of the likelihood. The method of partial likelihood (Cox, 1972, 1975) was originally proposed from the viewpoint of conditional likelihood: it avoids estimating the nuisance baseline hazard and thereby yields simple and good estimates of the structural parameters. However, when failure times are heavily tied, computing the exact partial likelihood becomes impractical. Approximations of the partial likelihood have therefore been studied; they are described in a later section, and a good approximation method is explained. We believe that better approximation methods and better statistical models play an important role in greatly lessening the computational burden.
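
    Since the tie problem is central here, the sketch below (a toy example, not the article's code) evaluates the Cox partial log-likelihood with Breslow's approximation for tied failure times, one common approximation of this kind (not necessarily the method the article recommends).

        # A toy illustration (not from the article) of Breslow's approximation to the Cox
        # partial log-likelihood when failure times are tied; Efron's approximation modifies
        # the risk-set denominator for tied events and is usually more accurate.
        import numpy as np

        def breslow_partial_loglik(beta, time, status, x):
            """Cox partial log-likelihood with Breslow's correction for ties (single covariate)."""
            eta = x * beta                              # linear predictor
            loglik = 0.0
            for t in np.unique(time[status == 1]):      # loop over distinct event times
                events = (time == t) & (status == 1)    # tied failures at time t
                risk = time >= t                        # risk set at time t
                d = events.sum()                        # number of tied failures
                loglik += eta[events].sum() - d * np.log(np.exp(eta[risk]).sum())
            return loglik

        # small data set with tied failure times (status: 1 = failure, 0 = censored)
        time   = np.array([2.0, 2.0, 3.0, 4.0, 4.0, 5.0])
        status = np.array([1, 1, 0, 1, 1, 0])
        x      = np.array([0.5, 1.2, 0.3, 0.8, 1.5, 0.1])
        print(breslow_partial_loglik(0.7, time, status, x))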

    Likelihood Based Estimation in the Logistic Model with Time Censored Data

    Inference procedures based on the likelihood function are considered for the logistic distribution with time-censored data. The finite-sample performance of the maximum likelihood estimator, as well as that of the large-sample inferential procedures based on the Wald, Rao, and likelihood ratio statistics, is investigated. It is found that the confidence intervals obtained from the asymptotic normal distribution of the maximum likelihood estimator are not accurate. It is also found that interval estimation based on the Wald and Rao statistics requires a much larger sample size than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy.
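
    To make the three test statistics concrete, here is a minimal sketch (my own toy setup, not the paper's study) computing the Wald, score (Rao), and likelihood ratio statistics for a hypothesis about the location parameter of a logistic distribution under Type I censoring; the scale is fixed at 1 and the derivatives are taken numerically purely to keep the example short.

        # A toy setup (not the paper's study) comparing the Wald, score (Rao), and likelihood
        # ratio statistics for H0: mu = mu0 in a logistic model with Type I censoring at tau.
        import numpy as np
        from scipy.stats import logistic, chi2
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(0)
        mu_true, tau, n = 5.0, 6.0, 40
        t = rng.logistic(loc=mu_true, scale=1.0, size=n)
        y, delta = np.minimum(t, tau), (t <= tau).astype(float)

        def loglik(mu):
            return np.sum(delta * logistic.logpdf(y, loc=mu) +
                          (1.0 - delta) * logistic.logsf(y, loc=mu))

        mu_hat = minimize_scalar(lambda m: -loglik(m), bounds=(0.0, 10.0), method="bounded").x

        def d1(f, x, h=1e-3):                 # central first derivative
            return (f(x + h) - f(x - h)) / (2.0 * h)

        def d2(f, x, h=1e-3):                 # central second derivative
            return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

        mu0 = 4.5
        wald  = (mu_hat - mu0) ** 2 * (-d2(loglik, mu_hat))   # information evaluated at the MLE
        score = d1(loglik, mu0) ** 2 / (-d2(loglik, mu0))     # information evaluated at the null
        lr    = 2.0 * (loglik(mu_hat) - loglik(mu0))
        print(wald, score, lr, "chi-square(1) 5% critical value:", chi2.ppf(0.95, 1))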

    Statistical Inference for the Modified Weibull Model Based on the Generalized Order Statistics

    In recent years, a new class of models has been proposed to accommodate bathtub-shaped failure rate functions. The modified Weibull distribution is one of these models; it generalizes the Weibull distribution and is capable of modeling lifetime data with bathtub-shaped or increasing failure rates. In this paper, conditional inference is applied to construct confidence intervals for its parameters based on generalized order statistics. To measure the performance of this approach relative to the asymptotic maximum likelihood estimates (AMLEs), simulation studies have been carried out for different values of sample sizes and shape parameters. The simulation results indicate that the conditional intervals possess good statistical properties and can perform quite well compared to the AMLE intervals even when the sample size is extremely small. Finally, a numerical example is given to illustrate the confidence intervals developed in this paper.
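
    For reference, one widely used "modified Weibull" parameterization is that of Lai, Xie and Murthy (2003), with cdf F(t) = 1 - exp(-a t^b e^{lam t}); the sketch below (an illustration under this assumed parameterization, which may differ from the one used in the paper) evaluates its hazard function and shows the bathtub shape that arises when b < 1 and lam > 0.

        # Hazard of one widely used "modified Weibull" parameterization (Lai, Xie and Murthy,
        # 2003), with cdf F(t) = 1 - exp(-a * t**b * exp(lam * t)); the paper's parameterization
        # may differ.  For b < 1 and lam > 0 the hazard is bathtub shaped.
        import numpy as np

        def hazard(t, a=0.5, b=0.5, lam=0.2):
            return a * t ** (b - 1.0) * (b + lam * t) * np.exp(lam * t)

        t = np.linspace(0.05, 10.0, 200)
        h = hazard(t)
        print("hazard falls until t =", round(float(t[np.argmin(h)]), 2), "and rises afterwards")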

    Predicting the Number of Future Events

    This paper describes prediction methods for the number of future events from a population of units associated with an on-going time-to-event process. Examples include the prediction of warranty returns and the prediction of the number of future product failures that could cause serious threats to property or life. Important decisions, such as whether a product recall should be mandated, are often based on such predictions. Data, generally right-censored (and sometimes left-truncated and right-censored), are used to estimate the parameters of a time-to-event distribution. This distribution can then be used to predict the number of events over future periods of time. Such predictions are sometimes called within-sample predictions and differ from other prediction problems considered in most of the prediction literature. This paper shows that the plug-in (also known as estimative or naive) prediction method is not asymptotically correct (i.e., for large amounts of data, the coverage probability always fails to converge to the nominal confidence level). However, a commonly used prediction calibration method is shown to be asymptotically correct for within-sample predictions, and two alternative predictive-distribution-based methods that perform better than the calibration method are presented and justified.
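
    The sketch below (an illustrative toy setup, not the paper's procedure or data) shows the plug-in version of a within-sample prediction: a Weibull model is fitted to Type I censored data, and the estimated conditional failure probabilities of the still-surviving units are summed to predict the number of failures over the next time interval. The naive prediction bound ignores the sampling variability of the parameter estimates, which is the kind of deficiency that calibration methods are designed to correct.

        # An illustrative toy setup (not the paper's procedure or data): a "within-sample"
        # plug-in prediction of how many of the still-surviving units will fail in the next
        # dt time units, based on a Weibull model fitted to Type I censored data.  The naive
        # binomial prediction bound ignores parameter uncertainty.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import binom

        rng = np.random.default_rng(7)
        shape, scale, tau = 1.5, 10.0, 6.0
        t = scale * rng.weibull(shape, 200)
        y, delta = np.minimum(t, tau), (t <= tau).astype(float)    # Type I censored data

        def negloglik(theta):
            b, e = np.exp(theta)
            z = y / e
            return -np.sum(delta * (np.log(b / e) + (b - 1.0) * np.log(z) - z**b)
                           + (1.0 - delta) * (-z**b))

        b_hat, e_hat = np.exp(minimize(negloglik, np.log([1.0, y.mean()]), method="BFGS").x)

        def S(u):                                                  # fitted Weibull survival
            return np.exp(-(u / e_hat) ** b_hat)

        dt = 3.0
        at_risk = y[delta == 0]                                    # units still unfailed at tau
        p = 1.0 - S(at_risk + dt) / S(at_risk)                     # conditional failure prob.
        print("predicted failures in the next", dt, "time units:", round(p.sum(), 1),
              "naive 90% bounds:", (binom.ppf(0.05, p.size, p.mean()),
                                    binom.ppf(0.95, p.size, p.mean())))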

    A Reliability Case Study on Estimating Extremely Small Percentiles of Strength Data for the Continuous Improvement of Medium Density Fiberboard Product Quality

    The objective of this thesis is to better estimate extremely small percentiles of strength distributions for measuring the failure process in continuous improvement initiatives. These percentiles are of great interest to companies, oversight organizations, and consumers concerned with product safety and reliability. The thesis investigates the lower percentiles for the quality of medium density fiberboard (MDF). The international industry standard for measuring quality for MDF is internal bond (IB, a tensile strength test). The results of the thesis indicate that the smaller percentiles are crucial, especially the first percentile and lower ones. The thesis starts by introducing the background, study objectives, and previous work done in the area of MDF reliability. It also reviews key components of total quality management (TQM) principles, strategies for reliability data analysis and modeling, information and data quality philosophy, and the data preparation steps used in the research study.

    As in many real-world cases, the internal bond data in material failure analysis do not perfectly follow the normal distribution. There was evidence from the study to suggest that MDF has potentially different failure modes for early failures. Forcing the normality assumption may lead to inaccurate predictions and poor product quality. We introduce a novel forced censoring technique that more closely fits the lower tails of strength distributions, where these smaller percentiles are impacted most. In this thesis, the forced censoring technique is implemented as a software module using JMP® Scripting Language (JSL) to expedite data processing, which is key for real-time manufacturing settings. Results show that the Weibull distribution models the data best and provides percentile estimates that are neither too conservative nor too risky. Further analyses are performed to build an accelerated common-shape Weibull model for these two product types using the JMP® Survival and Reliability platform. The use of JSL helps to automate the tasks of fitting an accelerated Weibull model and testing model homogeneity in the shape parameter. At the end of the modeling stage, a package script is written to readily provide field engineers with customized reporting for model visualization, parameter estimation, and percentile forecasting. Furthermore, using the powerful tools of Splida and S Plus, bootstrap estimates of the small percentiles demonstrate the improved intervals obtained by our forced censoring approach and the fitted model, including the common shape assumption. Additionally, relatively more advanced Bayesian methods are employed to predict the low percentiles of this particular product type, which has a rather limited number of observations.

    Model interpretability, cross-validation strategy, result comparisons, and habitual assessment of practical significance are particularly stressed and exercised throughout the thesis. Overall, the approach in the thesis is parsimonious and suitable for real-time manufacturing settings. It follows a consistent strategy in statistical analysis, which leads to more accurate product conformance evaluation, and may also reduce the cost of destructive testing and data management through a reduced frequency of testing. If adopted, the approach may prevent field failures and improve product safety. The philosophy and analytical methods presented in the thesis also apply to other strength distributions and lifetime data.
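
    As a rough illustration of the forced-censoring idea described above (written in Python rather than the thesis's JMP/JSL implementation, with made-up data and an arbitrary censoring threshold), all observations above a chosen strength value are treated as right-censored so that the Weibull maximum likelihood fit is driven by the lower tail, and the first percentile is then read off the fitted distribution.

        # A rough illustration of the forced-censoring idea, written in Python rather than the
        # thesis's JMP/JSL implementation; the data and the censoring threshold are made up.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        ib = 120.0 * rng.weibull(4.0, 500)          # stand-in for internal bond (IB) strengths
        c = np.quantile(ib, 0.30)                   # force-censor everything above this value
        y = np.minimum(ib, c)
        delta = (ib <= c).astype(float)             # 1 = fully observed, 0 = forced-censored

        def negloglik(theta):
            b, e = np.exp(theta)                    # Weibull shape and scale, kept positive
            z = y / e
            return -np.sum(delta * (np.log(b / e) + (b - 1.0) * np.log(z) - z**b)
                           + (1.0 - delta) * (-z**b))

        b_hat, e_hat = np.exp(minimize(negloglik, np.log([1.0, y.mean()]), method="BFGS").x)
        p01 = e_hat * (-np.log(1.0 - 0.01)) ** (1.0 / b_hat)    # fitted first percentile
        print("estimated 1st percentile of IB strength:", round(float(p01), 1))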