
    Shrinkage Estimation and Prediction for Joint Type-II Censored Data from Two Burr-XII Populations

    The main objective of this paper is to apply linear and pretest shrinkage estimation techniques to the parameters of two two-parameter Burr-XII distributions. Furthermore, predictions for future observations are made using both classical and Bayesian methods within a joint type-II censoring scheme. The efficiency of the shrinkage estimates is compared with that of the maximum likelihood and Bayesian estimates obtained through the expectation-maximization algorithm and the importance sampling method, as developed by Akbari Bargoshadi et al. (2023) in "Statistical inference under joint type-II censoring data from two Burr-XII populations", published in Communications in Statistics - Simulation and Computation. For the Bayesian estimations, both informative and non-informative prior distributions are considered, and various loss functions, including squared error, linear-exponential (LINEX), and generalized entropy, are taken into account. Approximate confidence, credible, and highest probability density intervals are calculated. To evaluate the performance of the estimation methods, a Monte Carlo simulation study is conducted, and two real datasets are used to illustrate the proposed methods. Comment: 33 pages, 33 tables.
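    For reference, and not quoted from the paper itself, the three loss functions named above have standard forms with closed-form Bayes estimators (posterior expectations taken with respect to \pi(\theta \mid \mathbf{x})):

        L_{SE}(\hat\theta,\theta) = (\hat\theta-\theta)^2, \qquad \hat\theta_{SE} = E(\theta \mid \mathbf{x})
        L_{LINEX}(\hat\theta,\theta) = e^{a(\hat\theta-\theta)} - a(\hat\theta-\theta) - 1, \qquad \hat\theta_{LINEX} = -\tfrac{1}{a}\,\ln E\!\left(e^{-a\theta} \mid \mathbf{x}\right), \quad a \neq 0
        L_{GE}(\hat\theta,\theta) = (\hat\theta/\theta)^c - c\,\ln(\hat\theta/\theta) - 1, \qquad \hat\theta_{GE} = \left[E\!\left(\theta^{-c} \mid \mathbf{x}\right)\right]^{-1/c}, \quad c \neq 0

    provided the posterior expectations exist; the squared-error estimator is the posterior mean, while the signs of a and c control whether over- or under-estimation is penalized more heavily.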

    Bayesian One Sample Prediction of Future GOS’s From A Class of Finite Mixture Distributions Based On Generalized Type-I Hybrid Censoring Scheme

    In this paper, Bayesian prediction intervals for future generalized order statistics (GOSs) from a mixture of two components belonging to a class of continuous distributions are computed under a generalized Type-I hybrid censoring scheme. We consider the one-sample prediction technique. A mixture of two Weibull components is given as an application, and our results are specialized to upper order statistics and upper record values. The results are obtained using the Markov Chain Monte Carlo (MCMC) algorithm.
    Keywords: Generalized order statistics; Bayesian prediction; One-sample scheme; Finite mixtures; Generalized Type-I hybrid censoring scheme; MCMC algorithm
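    For context, in one common parameterization (not necessarily the paper's notation), the two-component Weibull mixture density is

        f(x \mid \theta) = p\,\alpha_1\beta_1 x^{\beta_1-1} e^{-\alpha_1 x^{\beta_1}} + (1-p)\,\alpha_2\beta_2 x^{\beta_2-1} e^{-\alpha_2 x^{\beta_2}}, \qquad x > 0,

    and the one-sample Bayesian predictive density of a future observation Y, given the observed censored sample \mathbf{x}, is

        f(y \mid \mathbf{x}) = \int f(y \mid \theta)\,\pi(\theta \mid \mathbf{x})\,d\theta \;\approx\; \frac{1}{M}\sum_{m=1}^{M} f\!\left(y \mid \theta^{(m)}\right),

    where \theta^{(1)},\dots,\theta^{(M)} are MCMC draws from the posterior; prediction bounds follow by inverting the corresponding predictive survival function.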

    Modified Weibull distributions in reliability engineering

    Ph.D. (Doctor of Philosophy)

    Vol. 15, No. 2 (Full Issue)


    Safety and Reliability - Safe Societies in a Changing World

    The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include:
    - foundations of risk and reliability assessment and management
    - mathematical methods in reliability and safety
    - risk assessment
    - risk management
    - system reliability
    - uncertainty analysis
    - digitalization and big data
    - prognostics and system health management
    - occupational safety
    - accident and incident modeling
    - maintenance modeling and applications
    - simulation for safety and reliability analysis
    - dynamic risk and barrier management
    - organizational factors and safety culture
    - human factors and human reliability
    - resilience engineering
    - structural reliability
    - natural hazards
    - security
    - economic analysis in risk management

    Full Bayesian Methods to Handle Missing Data in Health Economic Evaluation

    Trial-based economic evaluations are performed on individual-level data, which almost invariably contain missing values. Missingness represents a threat to the analysis because any statistical method makes assumptions about the unobserved values that cannot be verified from the data at hand; when these assumptions are not realistic, they can lead to biased inferences and mislead the cost-effectiveness assessment. We start by investigating current missing-data handling in economic evaluations and provide recommendations about how information on missingness and the related methods should be reported in the analysis. We illustrate the pitfalls and issues that affect the methods used in routine analyses, which typically do not account for the intrinsic complexities of the data and rarely include sensitivity analyses to the missingness assumptions. We propose to overcome these problems using a full Bayesian approach. We use two case studies to demonstrate the benefits of our approach, which allows for a flexible specification of the model to jointly handle the complexities of the data and the uncertainty around the missing values. Finally, we present a longitudinal bivariate model to handle nonignorable missingness. The model extends the standard approach by accounting for all observed data, for which a flexible parametric model is specified. Missing data are handled through a combination of identifying restrictions and sensitivity parameters. First, a benchmark scenario is specified, and then plausible nonignorable departures are assessed using alternative prior distributions on the sensitivity parameters. The model is applied to, and motivated by, one of the two case studies considered.
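    As a minimal, self-contained sketch of the sensitivity-parameter idea described above (not the authors' longitudinal bivariate model; the data, the prior, and the delta offset below are illustrative assumptions), one can contrast a missing-at-random benchmark with a nonignorable departure by placing a prior on an offset for the unobserved costs:

import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical trial arm: 200 participants, 150 with observed costs, 50 missing.
observed_costs = rng.gamma(shape=2.0, scale=500.0, size=150)
n_total = 200
p_missing = 1 - observed_costs.size / n_total

n_draws = 5000

# Posterior draws for the mean observed cost (normal approximation to the posterior).
mean_obs = rng.normal(observed_costs.mean(),
                      observed_costs.std(ddof=1) / np.sqrt(observed_costs.size),
                      size=n_draws)

# Benchmark scenario: missing at random, so the overall mean equals the observed-case mean.
mar_mean = mean_obs

# Nonignorable departure: unobserved costs are shifted by a sensitivity parameter delta,
# with a prior that encodes how far from MAR we are willing to depart (illustrative values).
delta = rng.normal(loc=300.0, scale=100.0, size=n_draws)
mnar_mean = (1 - p_missing) * mean_obs + p_missing * (mean_obs + delta)

for label, draws in (("MAR benchmark", mar_mean), ("MNAR departure", mnar_mean)):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{label}: mean cost {draws.mean():.0f} (95% interval {lo:.0f} to {hi:.0f})")

    Widening or re-centring the prior on delta plays the role of the alternative prior distributions on the sensitivity parameters mentioned in the abstract.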