Modern Statistical Models and Methods for Estimating Fatigue-Life and Fatigue-Strength Distributions from Experimental Data
Engineers and scientists have been collecting and analyzing fatigue data
since the 1800s to ensure the reliability of life-critical structures.
Applications include (but are not limited to) bridges, building structures,
aircraft and spacecraft components, ships, ground-based vehicles, and medical
devices. Engineers need to estimate S-N relationships (Stress or Strain versus
Number of cycles to failure), typically with a focus on estimating small
quantiles of the fatigue-life distribution. Estimates from this kind of model
are used as input to models (e.g., cumulative damage models) that predict
failure-time distributions under varying stress patterns. Also, design
engineers need to estimate lower-tail quantiles of the closely related
fatigue-strength distribution. The history of applying incorrect statistical
methods is nearly as long, and such practices continue to the present. Examples
include treating the applied stress (or strain) as the response and the number
of cycles to failure as the explanatory variable in regression analyses
(because of the need to estimate strength distributions) and ignoring or
otherwise mishandling censored observations (known as runouts in the fatigue
literature). The first part of the paper reviews the traditional modeling
approach where a fatigue-life model is specified. We then show how this
specification induces a corresponding fatigue-strength model. The second part
of the paper presents a novel alternative modeling approach where a
fatigue-strength model is specified and a corresponding fatigue-life model is
induced. We explain and illustrate the important advantages of this new
modeling approach.
Comment: 93 pages, 27 page
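The two pitfalls flagged above have concrete remedies: keep (log) cycles to failure as the response in the regression, and let runouts enter the likelihood through the survival probability rather than being discarded or treated as failures. A minimal sketch, assuming an illustrative lognormal fatigue-life model and made-up S-N data (not from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical S-N data: stress amplitude (MPa), cycles, and a runout flag
# (True = test stopped before failure, a right-censored observation).
# All values are illustrative, not real fatigue data.
stress = np.array([300., 300., 250., 250., 200., 200., 180., 180.])
cycles = np.array([2.1e4, 3.0e4, 8.5e4, 1.2e5, 6.0e5, 9.0e5, 2.0e6, 2.0e6])
runout = np.array([False, False, False, False, False, False, True, True])

def negloglik(theta):
    """Lognormal fatigue-life model: log10(N) ~ Normal(b0 + b1*log10(S), sigma).

    Failures contribute the density; runouts contribute the survival
    probability P(N > n), which is how censoring enters correctly."""
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = (np.log10(cycles) - (b0 + b1 * np.log10(stress))) / sigma
    ll = np.where(runout,
                  norm.logsf(z),                   # runout: log P(N > n)
                  norm.logpdf(z) - np.log(sigma))  # failure: log density
    return -ll.sum()

fit = minimize(negloglik, x0=[10.0, -3.0, -1.0], method="Nelder-Mead")
b0, b1, log_sigma = fit.x

# A lower-tail quantile of the fatigue-life distribution at a given stress,
# e.g. the 0.01 quantile at S = 220 MPa:
q01 = 10 ** (b0 + b1 * np.log10(220.0) + norm.ppf(0.01) * np.exp(log_sigma))
```

As the abstract notes, specifying a fatigue-life model induces a corresponding fatigue-strength model; with this fit one could, for instance, solve the fitted relationship for stress at a fixed number of cycles.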
Order-statistics-based inferences for censored lifetime data and financial risk analysis
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis focuses on applying order-statistics-based inferences to lifetime analysis and financial risk measurement. The first problem arises from fitting the Weibull distribution to progressively censored and accelerated life-test data. A new order-statistics-based inference is proposed for both parameter and confidence-interval estimation. The second problem can be summarised as adapting the inference used in the first problem to fitting the generalised Pareto distribution, especially when the sample size is small. With some modifications, the proposed inference is compared with classical methods and several relatively new methods that have emerged from the recent literature. The third problem studies a distribution-free approach to forecasting financial volatility, which is essentially the standard deviation of financial returns. Classical models of this approach use the interval between two symmetric extreme quantiles of the return distribution as a proxy for volatility. Two new models are proposed, which use intervals of expected shortfalls and of expectiles instead of the interval of quantiles. The different models are compared on empirical stock-index data.
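The volatility proxies in the third problem can be illustrated on simulated returns. A minimal sketch; the tail level, sample, and normal rescaling constant below are illustrative choices, not the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.02, size=1000)  # simulated daily returns

alpha = 0.05  # tail level; illustrative choice

# Classical proxy: interval between two symmetric extreme quantiles.
q_lo, q_hi = np.quantile(returns, [alpha, 1 - alpha])
quantile_interval = q_hi - q_lo

# Expected-shortfall variant: interval between the mean returns beyond
# each extreme quantile (the two tail expected shortfalls).
es_lo = returns[returns <= q_lo].mean()
es_hi = returns[returns >= q_hi].mean()
es_interval = es_hi - es_lo

# Rescaled by a distribution-dependent constant, either interval proxies
# the standard deviation; for the normal distribution the 5%/95%
# inter-quantile range is 2 * 1.6449 * sigma.
sigma_proxy = quantile_interval / (2 * 1.6449)
```

The expected-shortfall interval is always at least as wide as the quantile interval, since tail means lie beyond the corresponding quantiles; the appeal of such proxies is that they are distribution-free up to the rescaling constant.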
Finally, attention is drawn to heteroskedastic quantile regression. The
proposed joint modelling approach, which makes use of the parametric link between
quantile regression and the asymmetric Laplace distribution, can provide estimates
of the regression quantile and of the log-linear heteroskedastic scale simultaneously.
Furthermore, the use of the expectation of the check function as a measure of
quantile deviation is discussed.
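The parametric link mentioned above rests on the fact that minimizing the check (pinball) loss at level tau is equivalent to maximum likelihood estimation of the location under an asymmetric Laplace distribution with skewness parameter tau. A minimal sketch of the quantile-regression half on simulated heteroskedastic data (the data and quantile level are illustrative, not the thesis's specification):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0.0, 1.0, n)
# Heteroskedastic data: the scale grows log-linearly in x (illustrative).
y = 1.0 + 2.0 * x + np.exp(-1.0 + 1.5 * x) * rng.normal(size=n)

tau = 0.9  # target quantile level

def check_loss(beta):
    """Check (pinball) loss rho_tau(u) = u * (tau - I(u < 0)).

    Minimizing this over beta fits the tau-th regression quantile, and is
    equivalent to asymmetric-Laplace maximum likelihood for the location."""
    u = y - (beta[0] + beta[1] * x)
    return np.sum(u * (tau - (u < 0)))

fit = minimize(check_loss, x0=[0.0, 0.0], method="Nelder-Mead")
b0, b1 = fit.x  # intercept and slope of the fitted 0.9 quantile line
```

The joint model in the thesis additionally parameterizes the scale and estimates both components at once; the sketch above shows only the quantile line, which is the piece the asymmetric-Laplace link identifies.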
Nonparametric and semiparametric inference on quantile lost lifespan
A new summary measure for time-to-event data, termed the lost lifespan, is proposed, in which the existing concept of reversed percentile residual life, or percentile inactivity time, is recast to show that it can be used in routine analysis to summarize life lost. The lost lifespan describes the distribution of time lost due to experiencing an event of interest before some specified time point. An estimating equation approach is adopted so that the variance of the quantile estimator can be obtained without estimating the probability density function of the underlying time-to-event distribution. A K-sample test statistic is proposed to test the ratio of quantile lost lifespans. Simulation studies are performed to assess the finite-sample properties of the proposed statistic in terms of coverage probability and power. The concept of life lost is then extended to a regression setting to analyze covariate effects on the quantiles of the distribution of the lost lifespan under right censoring. An estimating equation, a variance estimator, and a minimum dispersion statistic for testing the significance of regression parameters are proposed and evaluated via simulation studies. The proposed approach reveals several advantages over existing methods for analyzing time-to-event data, which is illustrated with a breast cancer dataset from a Phase III clinical trial conducted by the National Surgical Adjuvant Breast and Bowel Project.
Public Health Significance: The analysis of time-to-event data can provide important information about new treatments and therapies, particularly in clinical trial settings. The methods provided in this dissertation will allow public health researchers to analyze the effectiveness of new treatments in terms of a new summary measure, life lost. In addition to providing statistical advantages over existing methods, analyzing time-to-event data in terms of the lost lifespan provides a more straightforward interpretation beneficial to clinicians, patients, and other stakeholders.
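For a specified time point tau, a subject's lost lifespan is the time lost to an event occurring before that point, i.e. tau - min(T, tau). A complete-data sketch of a quantile of this measure on simulated times (the dissertation's estimating-equation machinery is what additionally handles right censoring and supplies variance estimates):

```python
import numpy as np

rng = np.random.default_rng(2)
event_times = rng.exponential(scale=5.0, size=1000)  # simulated times (years)

tau = 10.0  # specified time point; illustrative choice

# Lost lifespan before tau: tau - T for subjects with an event by tau,
# zero for those still event-free at tau.
lost = np.maximum(tau - event_times, 0.0)

# Median lost lifespan: half the subjects lose more than this, half less.
median_lost = np.quantile(lost, 0.5)
```

The point mass at zero (subjects event-free at tau) is part of the distribution being summarized, which is why quantiles, rather than the mean alone, are a natural target here.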