
    Estimation of Inverse Weibull Distribution Under Type-I Hybrid Censoring

    Hybrid censoring is a mixture of the Type-I and Type-II censoring schemes. This paper presents statistical inference for the Inverse Weibull distribution when the data are Type-I hybrid censored. First, we consider the maximum likelihood estimators of the unknown parameters and observe that they cannot be obtained in closed form. We then obtain the Bayes estimators and the corresponding highest posterior density credible intervals of the unknown parameters, under the assumption of independent gamma priors, using an importance sampling procedure. We also compute approximate Bayes estimators using Lindley's approximation technique. A simulation study and a real data analysis are performed to compare the proposed Bayes estimators with the maximum likelihood estimators. Comment: This paper is under review in the Austrian Journal of Statistics and will likely be published there.
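
    Since the likelihood equations have no closed-form solution, the MLEs must be found numerically. The following minimal Python sketch (not the authors' code; the parameterization f(x) = a b x^-(b+1) exp(-a x^-b), the simulated data, and the choice of optimizer are all illustrative assumptions) maximizes the Type-I hybrid censored log-likelihood directly.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(theta, failures, n, tau):
            # Observed failures contribute log f(x_i); the n - d units still
            # running at the stopping time tau contribute log S(tau).
            a, b = theta
            if a <= 0 or b <= 0:
                return np.inf
            x = np.asarray(failures)
            d = len(x)
            log_f = np.log(a) + np.log(b) - (b + 1) * np.log(x) - a * x ** (-b)
            log_surv = np.log1p(-np.exp(-a * tau ** (-b)))  # log(1 - F(tau))
            return -(log_f.sum() + (n - d) * log_surv)

        # Illustrative Type-I hybrid censored sample: the test stops at
        # tau = min(r-th failure, T); only failures before tau are recorded.
        rng = np.random.default_rng(1)
        n, r, T = 30, 20, 2.5
        sample = np.sort(rng.weibull(2.0, n) ** -1)  # Inverse Weibull draws (a = 1)
        tau = min(sample[r - 1], T)
        failures = sample[sample <= tau]

        fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(failures, n, tau),
                       method="Nelder-Mead")
        print("MLE of (a, b):", fit.x)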

    Parameter Estimation and Prediction of Future Failures in the Log-Logistic Distributions Based on Hybrid-Censored Data

    The main purpose of this thesis is to study the prediction of future observations from a log-logistic distribution based on hybrid censored samples. We study parameter point estimation and interval estimation, and construct several point predictors, namely the Maximum Likelihood Predictor (MLP), the Best Unbiased Predictor (BUP), and the Conditional Median Predictor (CMP). Several prediction intervals are also constructed, including intervals based on pivotal quantities and highest-density intervals (HDI). A simulation study is run in R to investigate and compare the performance of all point predictors and prediction intervals. It is observed that the BUP is the best point predictor and the HDI is the best prediction interval.
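
    For concreteness, a conditional median predictor can be written down directly from the log-logistic CDF. The sketch below (illustrative only, not the thesis code; alpha and beta are assumed to have been estimated already) predicts the failure time of a unit still surviving at the censoring time tau as the median of its conditional remaining-life distribution.

        def loglogistic_cdf(x, alpha, beta):
            # F(x) = 1 / (1 + (x / alpha)**(-beta)); alpha: scale, beta: shape
            return 1.0 / (1.0 + (x / alpha) ** (-beta))

        def loglogistic_quantile(p, alpha, beta):
            # Inverse CDF: q(p) = alpha * (p / (1 - p))**(1 / beta)
            return alpha * (p / (1.0 - p)) ** (1.0 / beta)

        def conditional_median_predictor(tau, alpha, beta):
            # Median of Y | Y > tau: solve F(y) = F(tau) + 0.5 * (1 - F(tau)).
            p = loglogistic_cdf(tau, alpha, beta)
            return loglogistic_quantile(p + 0.5 * (1.0 - p), alpha, beta)

        # Example: a unit censored at tau = 3.0 under fitted alpha = 2.0, beta = 1.5
        print(conditional_median_predictor(3.0, 2.0, 1.5))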

    New statistical methods in risk assessment by probability bounds

    In recent years, we have seen a diverse range of crises and controversies concerning food safety, animal health and environmental risks, including foot and mouth disease, dioxins in seafood, GM crops and, more recently, the safety of Irish pork. This has led to the recognition that the handling of uncertainty in risk assessments needs to be more rigorous and transparent, so that decision makers and the public can be better informed about the limitations of scientific advice. The expression of the uncertainty may be qualitative or quantitative, but it must be well documented. Various approaches to quantifying uncertainty exist, but none are yet generally accepted amongst mathematicians, statisticians, natural scientists and regulatory authorities. In this thesis we discuss the current risk assessment guidelines, which describe the deterministic methods that are mainly used for risk assessments. Probabilistic methods have many advantages, however, and we review some that have been proposed for risk assessment. We then develop new methods to overcome some of the problems with the current approaches. We consider the inclusion of various uncertainties and examine robustness to the prior distribution for Bayesian methods. We compare nonparametric methods with parametric methods, and we combine a nonparametric method with a Bayesian method to investigate the effect of using different assumptions for different random quantities in a model. These new methods provide alternatives for risk analysts to use in the future.

    New methods for modelling EQ-5D-5L value sets: an application to English data

    Background: The EQ-5D is a widely used questionnaire that describes and values health-related quality of life. Recently, a five-level version was developed, and updated methods to estimate values for all health states are required. Data: 996 respondents representative of the English general population completed Time Trade-Off (TTO) and Discrete Choice Experiment (DCE) tasks. Methods: We estimate models, with and without interactions, using DCE data only, TTO data only, and TTO/DCE data combined. TTO data are interpreted as both left and right censored. Heteroskedasticity and preference heterogeneity between individuals are accounted for. We use maximum likelihood estimation in combination with Bayesian methods, and the final model is chosen using the deviance information criterion (DIC). Results: Censoring and accounting for heteroskedasticity have important effects on parameter estimation. For the DCE, models with different dimension parameters and similar level parameters perform best. For both the TTO and the combined DCE/TTO data, models with parameters for all dimensions and levels perform best, as judged by the DIC. Accounting for heterogeneity improves fit, and a multinomial model with three latent groups has the lowest DIC. Conclusion: Studies to elicit values for the EQ-5D-5L need new approaches to estimate the underlying value function. This paper presents approaches which suit the characteristics of these data and recognise preference heterogeneity.
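
    To make the censoring treatment concrete, the sketch below (a minimal illustration, not the paper's model; the linear predictor, normal errors, and the censoring bounds of -1 and 1 are assumptions made here) shows a log-likelihood for TTO responses treated as both left and right censored.

        import numpy as np
        from scipy.stats import norm

        def censored_log_lik(beta, sigma, X, y, lower=-1.0, upper=1.0):
            # TTO values at the bounds are treated as censored: we only know
            # the latent valuation lies beyond the bound, not its exact value.
            mu = X @ beta
            return np.where(
                y <= lower, norm.logcdf((lower - mu) / sigma),          # left-censored
                np.where(y >= upper, norm.logsf((upper - mu) / sigma),  # right-censored
                         norm.logpdf(y, loc=mu, scale=sigma)),          # fully observed
            ).sum()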

    Knowledge Discovery from Complex Event Time Data with Covariates

    In certain engineering applications, such as reliability engineering, complex types of data are encountered that require novel methods of statistical analysis. Handling covariates properly while managing missing values is a challenging task, and these issues arise frequently in reliability data analysis. Specifically, accelerated life testing (ALT) is usually conducted by exposing test units of a product to harsher-than-normal conditions to expedite the failure process. The resulting lifetime and/or censoring data are often modeled by a probability distribution along with a life-stress relationship. However, if the chosen probability distribution and life-stress relationship cannot adequately describe the underlying failure process, the resulting reliability prediction will be misleading. In seeking new mathematical and statistical tools to facilitate the modeling of such data, a critical question is: can we find a family of versatile probability distributions, along with a general life-stress relationship, to model complex lifetime data with covariates? In this dissertation, a more general method is proposed for modeling lifetime data with covariates.

    Reliability estimation based on complete failure-time data, or on failure-time data with certain types of censoring, has been extensively studied in statistics and engineering. However, the actual failure times of individual components are unavailable in many applications; instead, only aggregate failure-time data are collected by actual users, for technical and/or economic reasons. When dealing with such data for reliability estimation, practitioners face the challenge of selecting the underlying failure-time distribution and the corresponding statistical inference methods. So far, only the Exponential, Normal, Gamma and Inverse Gaussian (IG) distributions have been used in analyzing aggregate failure-time data, because these distributions have closed-form expressions for such data. This limited choice of probability distributions cannot satisfy the extensive needs of a variety of engineering applications. Phase-type (PH) distributions are robust and flexible in modeling failure-time data, as they can mimic a large collection of probability distributions of nonnegative random variables arbitrarily closely by adjusting their model structures. In this dissertation, PH distributions are utilized, for the first time, in reliability estimation based on aggregate failure-time data. To this end, a maximum likelihood estimation (MLE) method and a Bayesian alternative are developed. For the MLE method, an expectation-maximization (EM) algorithm is developed to estimate the model parameters, and the corresponding Fisher information is used to construct confidence intervals for the quantities of interest. For the Bayesian method, a procedure for performing point and interval estimation is also introduced. Several numerical examples show that the proposed PH-based reliability estimation methods are quite flexible and alleviate the burden of selecting a probability distribution when the underlying failure-time distribution is general or even unknown.
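
    As a small illustration of the phase-type machinery (a hedged sketch, not the dissertation's EM algorithm or its aggregate-data likelihood: the two-phase Coxian structure, individual failure times, and direct numerical optimization are simplifying assumptions made here), the PH density f(x) = pi exp(Tx) t, with exit-rate vector t = -T1, can be fitted as follows.

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        def ph_logpdf(x, pi, T):
            # Phase-type density: f(x) = pi @ expm(T * x) @ t, with t = -T @ 1
            t = -T @ np.ones(len(pi))
            return np.log(pi @ expm(T * x) @ t)

        def neg_log_lik(params, data):
            l1, l2, p = params  # phase exit rates and the branching probability
            if l1 <= 0 or l2 <= 0 or not (0.0 < p <= 1.0):
                return np.inf
            pi = np.array([1.0, 0.0])                  # always start in phase 1
            T = np.array([[-l1, p * l1],
                          [0.0, -l2]])                 # two-phase Coxian generator
            return -sum(ph_logpdf(x, pi, T) for x in data)

        data = np.random.default_rng(0).gamma(2.0, 1.0, 200)  # toy failure times
        fit = minimize(neg_log_lik, x0=[1.0, 1.0, 0.5], args=(data,),
                       method="Nelder-Mead")
        print("Fitted (l1, l2, p):", fit.x)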

    Variable Selection in Accelerated Failure Time (AFT) Frailty Models: An Application of Penalized Quasi-Likelihood

    Variable selection is one of the standard ways of selecting models for large-scale datasets. It has applications in many fields of study, especially in large multi-center clinical trials. One of the prominent methods for variable selection is penalized likelihood, which is both consistent and efficient. However, penalized selection becomes significantly more challenging in the presence of random (frailty) effects, and more complicated still when censoring is involved, since the marginal log-likelihood may then have no closed-form solution. We therefore apply the penalized quasi-likelihood (PQL) approach, which approximates the solution of such a likelihood. In addition, we introduce an adaptive penalty function that performs selection on both the fixed and the frailty effects in a left-censored dataset under a parametric AFT frailty model. We also compare our penalty function with other established procedures in terms of how accurately they choose the significant coefficients and shrink the non-significant coefficients to zero.
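
    As a point of reference for what a penalized AFT objective looks like (an illustrative stand-in, not the paper's adaptive PQL: the log-normal AFT form, the plain L1 penalty, and the omission of the frailty terms are simplifying assumptions made here), a left-censored penalized criterion can be written as follows.

        import numpy as np
        from scipy.stats import norm

        def penalized_aft_objective(beta, sigma, lam, X, log_t, left_censored):
            # Log-normal AFT model: log T = X @ beta + sigma * eps, eps ~ N(0, 1)
            z = (log_t - X @ beta) / sigma
            ll = np.where(left_censored,
                          norm.logcdf(z),                  # event occurred by log_t
                          norm.logpdf(z) - np.log(sigma))  # exactly observed event
            return -ll.sum() + lam * np.abs(beta).sum()    # L1 penalty on fixed effects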

    Survival Regression Models With Dependent Bayesian Nonparametric Priors

    We present a novel Bayesian nonparametric model for regression in survival analysis. Our model builds on the classical neutral-to-the-right model of Doksum and on the Cox proportional hazards model of Kim and Lee. The use of a vector of dependent Bayesian nonparametric priors allows us to efficiently model the hazard as a function of covariates while allowing nonproportionality; the model can be seen as having competing latent risks. We characterize the posterior of the underlying dependent vector of completely random measures and study the asymptotic behavior of the model. We show how an MCMC scheme can provide Bayesian inference for posterior means and credible intervals. The method is illustrated using simulated and real data. Supplementary materials for this article are available online.

    Bayesian Analysis of Censored Spatial Data Based on a Non-Gaussian Model
