
    Nonparametric Inferences for the Hazard Function with Right Truncation

    Incompleteness is a major feature of time-to-event data. As one type of incompleteness, truncation refers to the unobservability of the time-to-event variable when it is smaller (or greater) than the truncation variable. A truncated sample can involve left truncation, right truncation, or both. Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies of right truncation, Lagakos et al. (1988) proposed transforming a right-truncated variable into a left-truncated one and then applying existing methods to the transformed variable. The reverse-time hazard function is introduced through this transformation; however, this quantity has no natural interpretation. Gaps remain in inference for the ordinary forward-time hazard function with right-truncated data. This dissertation discusses variance estimation for the cumulative hazard estimator, a one-sample log-rank test, and comparison of hazard rate functions among finitely many independent samples in the context of right truncation. First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators for the cumulative hazard estimator. Some revisions to these variance estimators are suggested in this dissertation and evaluated in a Monte Carlo study. Second, this dissertation studies hypothesis testing for right-truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is discussed first, followed by a family of weighted tests for comparison among K samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte Carlo study. Finally, this dissertation studies nonparametric inference for the hazard rate function with right-truncated data. The kernel smoothing technique is used to estimate the hazard rate function. A Monte Carlo study investigates the uniform kernel-smoothed estimator and its variance estimator. The uniform, Epanechnikov, and biweight kernel estimators are applied to an example of blood-transfusion-related AIDS data
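
    The reverse-time transformation at the heart of the Lagakos et al. (1988) approach is easy to illustrate. The sketch below, a minimal illustration rather than the dissertation's estimator, maps right-truncated pairs to left-truncated ones and accumulates a Nelson-Aalen-type hazard on the reversed time scale; the function name, the choice of tau, and the simulated data are assumptions made for the example.

```python
import numpy as np

def reverse_time_cumhaz(t, r, tau=None):
    """Nelson-Aalen-type cumulative hazard on the reversed time scale.

    t : observed event times, sampled only when t <= r (right truncation)
    r : right-truncation times
    With s = tau - t and l = tau - r, the pair (s, l) is left truncated
    (s is observed only when s >= l), so standard risk-set counting applies.
    """
    t, r = np.asarray(t, float), np.asarray(r, float)
    if tau is None:
        tau = max(t.max(), r.max())            # any tau >= all times works
    s, l = tau - t, tau - r
    grid = np.sort(np.unique(s))
    increments = np.empty_like(grid)
    for k, u in enumerate(grid):
        at_risk = np.sum((l <= u) & (u <= s))  # left-truncation risk set
        increments[k] = np.sum(s == u) / at_risk
    return grid, np.cumsum(increments)

# illustrative usage with simulated right-truncated data
rng = np.random.default_rng(0)
t_all = rng.exponential(1.0, 5000)
r_all = rng.uniform(0.0, 2.0, 5000)
keep = t_all <= r_all                          # t is observed only if t <= r
grid, cumhaz = reverse_time_cumhaz(t_all[keep], r_all[keep])
```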

    Semiparametric Multivariate Accelerated Failure Time Model with Generalized Estimating Equations

    The semiparametric accelerated failure time model is not as widely used as the Cox relative risk model, mainly because of computational difficulties. Recent developments in least-squares estimation and induced-smoothing estimating equations provide promising tools to make accelerated failure time models more attractive in practice. For semiparametric multivariate accelerated failure time models, we propose a generalized estimating equations approach that accounts for multivariate dependence through working correlation structures. The marginal error distributions can be either identical, as in sequential-event settings, or different, as in parallel-event settings. Some regression coefficients can be shared across margins as needed. The initial estimator is a rank-based estimator with Gehan's weight, obtained from an induced-smoothing approach for computational ease. The resulting estimator is consistent and asymptotically normal, with its variance estimated through a multiplier resampling method. In a simulation study, our estimator was up to three times as efficient as the initial estimator, especially under stronger multivariate dependence and heavier censoring. Two real examples demonstrate the utility of the proposed method
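
    The induced-smoothing device mentioned above replaces the indicator in Gehan's rank estimating function with a normal distribution function, so the score becomes differentiable and can be handed to a standard root finder. Below is a minimal single-margin sketch under assumed notation; the function name, the bandwidth choice r_ij^2 = ||x_i - x_j||^2 / n, and the simulated data are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import root
from scipy.stats import norm

def smoothed_gehan_score(beta, y, X, delta):
    """Induced-smoothed Gehan estimating function for an AFT model:
    U(beta) = sum_i sum_j delta_i (x_i - x_j) Phi((e_j - e_i) / r_ij),
    with residuals e_i = y_i - x_i' beta on the log-time scale.
    """
    n = len(y)
    e = y - X @ beta
    dX = X[:, None, :] - X[None, :, :]       # (n, n, p): x_i - x_j
    de = e[None, :] - e[:, None]             # (n, n): e_j - e_i
    r = np.sqrt(np.sum(dX ** 2, axis=2) / n)
    r[r == 0] = 1.0                          # diagonal terms vanish anyway
    w = delta[:, None] * norm.cdf(de / r)    # smoothed, delta_i-weighted indicator
    return np.einsum('ij,ijk->k', w, dX) / n ** 2

# illustrative usage on simulated log-scale data with censoring
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
logT = X @ np.array([0.5, -0.3]) + rng.normal(size=n)
C = rng.normal(1.0, 1.0, size=n)             # censoring on the log scale
delta = (logT <= C).astype(float)
obs = np.minimum(logT, C)
fit = root(smoothed_gehan_score, x0=np.zeros(2), args=(obs, X, delta))
print(fit.x)                                  # should be near (0.5, -0.3)
```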

    Empirical Likelihood Inferences in Survival Analysis

    In survival analysis, different regression models are used to estimate the effects of covariates on survival time. The proportional hazards model is the most commonly applied, but it does not always fit real data well. Other models, such as proportional odds models and additive hazards models, are useful alternatives. Motivated by this limitation, this dissertation investigates empirical likelihood methods for inference in semiparametric transformation models and accelerated failure time models. The proposed empirical likelihood methods address several challenging open problems, including semiparametric transformation models under length-biased sampling and semiparametric analysis based on weighted estimating equations with missing covariates. In addition, a more computationally efficient method called jackknife empirical likelihood (JEL) is proposed, which permits statistical inference for the accelerated failure time model without computing the limiting variance. We show that, under certain regularity conditions, the empirical log-likelihood ratio test statistic converges to a standard chi-squared distribution. Finally, computational algorithms are developed for the proposed empirical likelihood and jackknife empirical likelihood methods. Extensive simulation studies of coverage probabilities and average lengths of confidence intervals for the regression parameters indicate good finite-sample performance under various settings. Furthermore, for each model, real data sets are analyzed to illustrate the proposed methods
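
    The JEL recipe itself is generic: form jackknife pseudo-values of the estimator and apply Owen's empirical likelihood to their mean, avoiding any variance estimation. The sketch below implements that recipe for a scalar statistic; the function name and the sample-mean example are assumptions for illustration, and the dissertation's construction for the accelerated failure time model is more involved.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def jel_statistic(estimator, data, theta0):
    """-2 log jackknife empirical likelihood ratio at theta0.

    Pseudo-values V_i = n*T_n - (n-1)*T_{n-1,-i} are treated as
    approximately i.i.d.; Owen's EL for their mean is evaluated at
    theta0, and the statistic is asymptotically chi-squared(1).
    """
    n = len(data)
    t_full = estimator(data)
    t_loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    z = (n * t_full - (n - 1) * t_loo) - theta0   # centered pseudo-values
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                             # theta0 outside convex hull
    # solve sum z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier
    eps = 1e-10
    lo, hi = (-1 + eps) / z.max(), (-1 + eps) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

# illustrative usage: JEL test that an exponential mean equals 2
rng = np.random.default_rng(1)
x = rng.exponential(2.0, 200)
print(jel_statistic(np.mean, x, 2.0), chi2.ppf(0.95, 1))
```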

    A Simple Test for the Absence of Covariate Dependence in Hazard Regression Models

    This paper extends commonly used tests for equality of hazard rates in a two-sample or k-sample setup to the situation where the covariate under study is continuous. In other words, we test the hypothesis that the conditional hazard rate is the same for all covariate values, against the omnibus alternative as well as more specific alternatives, when the covariate is continuous. The tests developed are particularly useful for detecting a trend in the underlying conditional hazard rates, or changepoint-trend alternatives. Asymptotic distributions of the test statistics are established and small-sample properties of the tests are studied. An application to the effect of aggregate Q on corporate failure in the UK shows evidence of a trend in the covariate effect, whereas a Cox regression model failed to detect evidence of any covariate effect. Finally, we discuss an important extension to testing for proportionality of hazards in the presence of individual-level frailty with arbitrary distribution
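
    For intuition about trend alternatives, the discrete analogue of such a test can be written down directly. The sketch below computes a standard log-rank trend statistic across ordered groups, a crude proxy for the paper's continuous-covariate tests obtained by binning the covariate; the function name, scores, and simulated data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def logrank_trend(time, event, group_scores):
    """Log-rank trend test across ordered groups (Tarone-type scores)."""
    scores = np.unique(group_scores)
    u = var = 0.0
    for t in np.sort(np.unique(time[event == 1])):
        at_risk = time >= t
        n = at_risk.sum()
        d = ((time == t) & (event == 1)).sum()
        if n < 2:
            continue
        ng = np.array([(at_risk & (group_scores == s)).sum() for s in scores])
        dg = np.array([((time == t) & (event == 1) & (group_scores == s)).sum()
                       for s in scores])
        sbar = np.dot(scores, ng) / n
        u += np.dot(scores, dg - d * ng / n)
        # hypergeometric variance of the score-weighted event count
        var += d * (n - d) / (n - 1) * np.dot(ng, (scores - sbar) ** 2) / n
    z = u / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))            # two-sided p-value

# illustrative usage: three ordered groups with increasing hazard
rng = np.random.default_rng(2)
g = np.repeat([0.0, 1.0, 2.0], 100)          # group scores = bin labels
t_true = rng.exponential(1.0 / (1.0 + 0.5 * g))
c = rng.uniform(0.0, 2.0, g.size)
print(logrank_trend(np.minimum(t_true, c), (t_true <= c).astype(int), g))
```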

    Capturing Uncertainty in Fatigue Life Data

    Time-to-failure (TTF) data, also referred to as life data, are investigated across a wide range of scientific disciplines and are collected mainly through experiments whose objective is to predict performance under service conditions. Fatigue life data are times, measured in cycles, until complete fracture of a material under cyclic loading. Fatigue life data show large variation, which is often overlooked or not rigorously investigated when developing predictive life models. This research develops a statistical model to capture the dispersion in fatigue life data, which can be used to extend deterministic life models into probabilistic life models. Additionally, a predictive life model is developed using failure-time regression methods. The predictive life and dispersion models are investigated as a dual response using nonparametric methods. After model adequacy is examined, a Bayesian extension and other applications of the model are discussed
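
    As a point of reference for how dispersion in life data is commonly quantified, the sketch below fits a two-parameter Weibull distribution to simulated cycles-to-failure data with SciPy; the data and parameter values are illustrative assumptions, and this baseline is not the dissertation's dual-response model.

```python
import numpy as np
from scipy.stats import weibull_min

# simulated cycles-to-failure at a single stress level (illustrative only)
rng = np.random.default_rng(7)
cycles = weibull_min.rvs(c=1.8, scale=2.0e5, size=50, random_state=rng)

# two-parameter Weibull fit: the shape parameter c captures dispersion,
# the scale parameter is the characteristic life
c, loc, scale = weibull_min.fit(cycles, floc=0)
b10 = weibull_min.ppf(0.10, c, scale=scale)   # life at 10% failure probability
print(f"shape={c:.2f}, characteristic life={scale:.3g}, B10={b10:.3g} cycles")
```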

    Novel Statistical Methods for Censored Medical Cost and Breast Cancer Data

    Recent studies show that appropriate statistical analysis of cost data may lead to more cost-effective medical treatments, resulting in substantial cost savings. Although the mean is widely accepted as a summary of medical costs, heavy censoring and heavy skewness make it strongly affected by missing or extremely large values. Therefore, quantiles of medical costs, such as the median cost, are more reasonable summaries of the cost data. In the first part of this dissertation, we propose to use empirical likelihood (EL) methods based on influence functions and jackknife techniques to construct confidence regions for regression parameters in median cost regression models with censored data. We further propose EL-based confidence intervals for the median cost given covariates. Compared with existing normal-approximation-based confidence intervals, our proposed intervals have better coverage accuracy. In practice, a large proportion of patients have zero costs. In the second part, we propose fiducial-quantity- and EL-based inference for the mean of zero-inflated censored medical costs using the method of variance estimates recovery (MOVER). We also provide EL-based confidence intervals for upper quantiles of censored medical costs with many zero observations. Simulation studies compare the coverage probabilities of the proposed EL-based methods and the existing normal-approximation-based methods. The novel EL-based methods show better finite-sample performance than existing methods, especially when the censoring proportion is high. In the third part of this dissertation, we focus on evaluating breast cancer recurrence risk. For early-stage tumor recurrence, existing methods lack overall survival-prediction power. Preliminary studies show that centrosome amplification has a strong latent correlation with tumor progression. We therefore propose a novel quantitative centrosome amplification score to stratify patients' cancer recurrence risk. We show that patients with a higher centrosome amplification score have a significantly higher probability of experiencing cancer recurrence after accounting for demographic conditions, which could serve as a useful reference for future work on early-stage breast cancer
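
    MOVER recovers a confidence interval for a sum of parameters from separate intervals for each component. Writing log(mu) = log(p) + log(mu_pos) for the zero-inflated mean makes the technique applicable; the sketch below combines a Wilson interval for p with a log-scale Wald interval for the positive-part mean. This is a toy, uncensored version under assumed names; the dissertation's versions use fiducial and EL component intervals and handle censoring.

```python
import numpy as np
from scipy.stats import norm

def mover_zero_inflated_mean(costs, level=0.95):
    """MOVER-style CI for the mean of zero-inflated (uncensored) costs."""
    z = norm.ppf(0.5 + level / 2)
    costs = np.asarray(costs, float)
    n = len(costs)
    pos = costs[costs > 0]
    k, p_hat, m_hat = len(pos), len(pos) / n, pos.mean()
    # Wilson interval for P(cost > 0)
    centre = (p_hat + z ** 2 / (2 * n)) / (1 + z ** 2 / n)
    half = (z * np.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
            / (1 + z ** 2 / n))
    lp, up = np.log(centre - half), np.log(centre + half)
    # log-scale Wald interval for the mean of the positive costs
    se = pos.std(ddof=1) / (np.sqrt(k) * m_hat)
    lm, um = np.log(m_hat) - z * se, np.log(m_hat) + z * se
    # MOVER combination for the sum log(p) + log(mu_pos)
    a, b = np.log(p_hat), np.log(m_hat)
    low = a + b - np.sqrt((a - lp) ** 2 + (b - lm) ** 2)
    high = a + b + np.sqrt((up - a) ** 2 + (um - b) ** 2)
    return np.exp(low), np.exp(high)

# illustrative usage: 30% zero costs, lognormal positive costs
rng = np.random.default_rng(4)
costs = np.where(rng.random(300) < 0.3, 0.0, rng.lognormal(8.0, 1.0, 300))
print(mover_zero_inflated_mean(costs))
```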

    Application of Time-to-Event Methods in the Assessment of Safety in Clinical Trials

    Since randomized controlled trials (RCTs) are typically designed and powered for efficacy rather than safety, power is an important concern in analyzing the effect of treatment on the occurrence of adverse events (AEs). These outcomes are often time-to-event outcomes, which are naturally subject to right censoring due to early patient withdrawal. In the analysis of the treatment effect on such an outcome, gains in efficiency, and thus power, can be achieved by exploiting covariate information. We apply the targeted maximum likelihood methodology to the estimation of treatment-specific survival at a fixed end point for right-censored survival outcomes. This approach provides a method for covariate adjustment that, under no or uninformative censoring, requires no additional parametric modeling assumptions and, under informative censoring, is consistent given consistent estimation of the censoring mechanism or the conditional hazard for survival. Thus, the targeted maximum likelihood estimator has two important advantages over the Kaplan-Meier estimator: 1) it exploits covariates to improve efficiency, and 2) it is consistent in the presence of informative censoring. These properties are demonstrated through simulation studies. Extensions of the methodology are provided for non-randomized post-market safety studies and for the inclusion of time-dependent covariates
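
    The full targeted maximum likelihood estimator is too long to sketch here, but the plug-in (G-computation) estimator it starts from shows how covariate information enters. The sketch below, an assumption-laden toy and not the paper's method, fits a logistic regression for surviving past a fixed time point and averages the treatment-specific predictions over the covariate distribution; TMLE would add a targeting (fluctuation) step and censoring weights on top of this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def plugin_survival(W, A, Y):
    """Plug-in (G-computation) estimate of treatment-specific survival
    P(Y = 1 | A = a), averaging model predictions over the observed
    covariates.  Assumes Y indicates survival past a fixed time t0 and,
    for simplicity, no censoring before t0."""
    model = LogisticRegression(max_iter=1000).fit(np.column_stack([A, W]), Y)
    return {a: model.predict_proba(
                np.column_stack([np.full(len(W), a), W]))[:, 1].mean()
            for a in (0, 1)}

# illustrative simulated trial
rng = np.random.default_rng(3)
n = 1000
W = rng.normal(size=(n, 2))
A = rng.integers(0, 2, size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 * A + W[:, 0] - 0.5 * W[:, 1])))
Y = rng.binomial(1, p)
print(plugin_survival(W, A, Y))
```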

    Asymptotic properties of mean survival estimate based on the Kaplan–Meier curve with an extrapolated tail

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/90597/1/pst514.pd