Evaluation of stability of directly standardized rates for sparse data using simulation methods.
Background
Directly standardized rates (DSRs) adjust for differing age distributions across populations and enable, for example, disease rates to be compared directly between populations. They are routinely published, but there is concern that a DSR is not valid when it is based on a "small" number of events. The aim of this study was to determine the threshold below which a DSR should not be published when analyzing real data in England.
Methods
Standard Monte Carlo simulation techniques were used assuming the number of events in 19 age groups (i.e., 0–4, 5–9, ... 90+ years) follow independent Poisson distributions. The total number of events, age specific risks, and the population sizes in each age group were varied. For each of 10,000 simulations the DSR (using the 2013 European Standard Population weights), together with the coverage of three different methods (normal approximation, Dobson, and Tiwari modified gamma) of estimating the 95% confidence intervals (CIs), were calculated.
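The DSR and Dobson confidence interval described above can be sketched as follows. This is a minimal illustration, not the authors' simulation code: the function name, the example counts, and the population sizes are hypothetical, and the Dobson CI is computed via the standard exact (chi-squared) Poisson interval on the total event count.

```python
import numpy as np
from scipy.stats import chi2

def dsr_dobson_ci(events, populations, weights, alpha=0.05):
    """Directly standardized rate with a Dobson 95% CI.

    events, populations: per-age-group event counts and person-years.
    weights: standard-population weights (e.g. 2013 ESP), any scale.
    """
    events = np.asarray(events, dtype=float)
    populations = np.asarray(populations, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights

    dsr = np.sum(w * events / populations)            # point estimate
    var = np.sum(w**2 * events / populations**2)      # variance of the DSR
    d = events.sum()                                  # total observed events

    # Exact (chi-squared based) Poisson CI for the total count d
    d_lo = chi2.ppf(alpha / 2, 2 * d) / 2 if d > 0 else 0.0
    d_hi = chi2.ppf(1 - alpha / 2, 2 * (d + 1)) / 2

    # Dobson transformation of the count CI onto the DSR scale
    scale = np.sqrt(var / d) if d > 0 else 0.0
    return dsr, dsr + scale * (d_lo - d), dsr + scale * (d_hi - d)
```

In a coverage simulation, one would draw Poisson counts for each age group, call this function on each draw, and record how often the interval contains the true DSR.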
Results
The normal approximation was, as expected, not suitable for use when fewer than 100 events occurred. The Tiwari method and the Dobson method of calculating confidence intervals produced similar estimates and either was suitable when the expected or observed numbers of events were 10 or greater. The accuracy of the CIs was not influenced by the distribution of the events across categories (i.e., the degree of clustering, the age distributions of the sampling populations, and the number of categories with no events occurring in them).
Conclusions
DSRs should not be given when the total observed number of events is less than 10. The Dobson method might be considered the preferred method because its formula is simpler than that of the Tiwari method and its coverage is slightly more accurate.
Quantifying the Uncertainty in Optimal Experiment Schemes via Monte-Carlo Simulations
In designing life-testing experiments, experimenters typically establish an optimal experiment scheme based on a particular parametric lifetime model. In most applications, the true lifetime model is unknown and must be specified before the optimal experiment scheme can be determined. Misspecification of the lifetime model may lead to a substantial loss of efficiency in the statistical analysis. Moreover, the determination of the optimal experiment scheme usually relies on asymptotic statistical theory, so the scheme may not be optimal in finite samples. This chapter provides a general framework for quantifying the sensitivity and uncertainty of the optimal experiment scheme due to misspecification of the lifetime model. To illustrate the methodology, analytical and Monte Carlo methods are employed to evaluate the robustness of the optimal experiment scheme for progressive Type-II censored experiments under the location-scale family of distributions.
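A Monte Carlo study of progressive Type-II censored experiments needs a way to simulate such samples. The sketch below uses the well-known Balakrishnan-Sandhu uniform-spacings algorithm; it is an illustrative assumption on my part that this is how the chapter's simulations are set up, and the function name and example censoring scheme are hypothetical.

```python
import numpy as np

def prog_type2_sample(n, scheme, quantile, rng=None):
    """Simulate one progressive Type-II censored sample.

    n: total units on test; scheme: R_1..R_m, the number of surviving
    units withdrawn at each of the m observed failures (sum(scheme) + m
    must equal n); quantile: inverse CDF of the assumed lifetime model.
    Uses the Balakrishnan-Sandhu uniform-spacings algorithm.
    """
    rng = np.random.default_rng(rng)
    R = np.asarray(scheme)
    m = len(R)
    assert R.sum() + m == n, "censoring scheme inconsistent with n"

    w = rng.uniform(size=m)
    # Exponents i + R_m + R_{m-1} + ... + R_{m-i+1}, i = 1..m
    e = np.arange(1, m + 1) + np.cumsum(R[::-1])
    v = w ** (1.0 / e)
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}: progressively censored
    # uniform order statistics, transformed through the model's quantile
    u = 1.0 - np.cumprod(v[::-1])
    return quantile(u)  # increasing observed failure times
```

For example, `prog_type2_sample(10, [1, 1, 0, 0, 3], lambda u: -np.log(1 - u))` draws one standard-exponential sample with 5 observed failures out of 10 units.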
Point and Interval Estimation of Weibull Parameters Based on Joint Progressively Censored Data
Ordering properties of the smallest order statistics from generalized Birnbaum–Saunders models with associated random shocks
Expectation–maximization algorithm for system-based lifetime data with unknown system structure
Confidence intervals for ratio of two Poisson rates using the method of variance estimates recovery
Maximum Product Spacing Estimation of Weibull Distribution Under Adaptive Type-II Progressive Censoring Schemes
Asymptotic properties of maximum likelihood estimators based on progressive Type-II censoring
Keywords: Asymptotic theory, Consistency, Maximum likelihood, Progressive Type-II censoring, Missing information principle