
    Asymptotic results for maximum likelihood estimators in joint analysis of repeated measurements and survival time

    Maximum likelihood estimation has been extensively used in the joint analysis of repeated measurements and survival time. However, there is a lack of theoretical justification for the asymptotic properties of the maximum likelihood estimators. This paper intends to fill this gap. Specifically, we prove the consistency of the maximum likelihood estimators and derive their asymptotic distributions. The maximum likelihood estimators are shown to be semiparametrically efficient. Published at http://dx.doi.org/10.1214/009053605000000480 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
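    The abstract does not reproduce the model, but a standard shared-random-effects joint model of this type (a generic formulation, assumed here for illustration) couples a mixed-effects longitudinal model with a proportional hazards model through common random effects:

        Y_i(t) = X_i(t)^\top \beta + Z_i(t)^\top b_i + \varepsilon_i(t), \qquad b_i \sim N(0, \Sigma),
        \lambda_i(t \mid b_i) = \lambda_0(t) \exp\{ W_i^\top \gamma + \alpha\, Z_i(t)^\top b_i \},

    with the baseline hazard \lambda_0(\cdot) left unspecified; it is this nonparametric component that makes the maximum likelihood estimator semiparametric and its efficiency nontrivial to establish.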

    Hazard models with varying coefficients for multivariate failure time data

    Statistical estimation and inference for marginal hazard models with varying coefficients for multivariate failure time data are important subjects in survival analysis. A local pseudo-partial likelihood procedure is proposed for estimating the unknown coefficient functions. A weighted average estimator is also proposed in an attempt to improve the efficiency of the estimator. The consistency and asymptotic normality of the proposed estimators are established, and standard error formulas for the estimated coefficients are derived and empirically tested. To reduce the computational burden of the maximum local pseudo-partial likelihood estimator, a simple and useful one-step estimator is proposed. Statistical properties of the one-step estimator are established, and simulation studies are conducted to compare the performance of the one-step estimator to that of the maximum local pseudo-partial likelihood estimator. The results show that the one-step estimator can save computational cost without compromising performance, both asymptotically and empirically, and that an optimal weighted average estimator is more efficient than the maximum local pseudo-partial likelihood estimator. A data set from the Busselton Population Health Surveys is analyzed to illustrate our proposed methodology. Published at http://dx.doi.org/10.1214/009053606000001145 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
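    In generic notation (an assumption about the model class, not a quotation from the paper), a marginal hazard model with varying coefficients lets the log relative risk change with an exposure variable W:

        \lambda(t \mid Z, W) = \lambda_0(t) \exp\{ \beta(W)^\top Z \}.

    The local pseudo-partial likelihood estimates \beta(w) by weighting each subject's partial likelihood contribution with a kernel K_h(W_i - w) = K\{(W_i - w)/h\}/h, so that only subjects with W_i near w drive the local fit; the one-step estimator then replaces the full Newton iteration at each w by a single Newton update from a good initial value.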

    Tuning Parameter Selection in Cox Proportional Hazards Model with a Diverging Number of Parameters

    Regularized variable selection is a powerful tool for identifying the true regression model from a large number of candidates by applying penalties to the objective functions. The penalty functions typically involve a tuning parameter that controls the complexity of the selected model. The ability of the regularized variable selection methods to identify the true model critically depends on the correct choice of the tuning parameter. In this study we develop a consistent tuning parameter selection method for the regularized Cox's proportional hazards model with a diverging number of parameters. The tuning parameter is selected by minimizing the generalized information criterion. We prove that, for any penalty that possesses the oracle property, the proposed tuning parameter selection method identifies the true model with probability approaching one as the sample size increases. Its finite sample performance is evaluated by simulations. Its practical use is demonstrated in The Cancer Genome Atlas (TCGA) breast cancer data.
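    A generic form of the criterion (the paper's exact definition may differ) is

        \mathrm{GIC}(\lambda_n) = -2\, \ell_n\{\hat\beta(\lambda_n)\} + a_n\, \mathrm{df}(\lambda_n),

    where \ell_n is the log partial likelihood, \mathrm{df}(\lambda_n) counts the nonzero estimated coefficients, and a_n is a model-size penalty (a_n = \log n gives a BIC-type criterion). Below is a minimal Python sketch of a grid search over the tuning parameter, using lifelines' L1-penalized Cox fitter as a stand-in for the paper's penalty; the GIC form, the zero threshold, and a_n = log n are assumptions, not the paper's specification:

        import numpy as np
        from lifelines import CoxPHFitter

        def select_tuning_parameter(df, duration_col, event_col, lambdas):
            """Return the penalty value minimizing a BIC-type generalized
            information criterion, together with the full GIC path."""
            n = len(df)
            a_n = np.log(n)  # BIC-type model-size penalty (an assumption)
            scores = {}
            for lam in lambdas:
                # L1-penalized Cox fit; the paper covers oracle penalties such as
                # SCAD, which lifelines lacks, so the lasso stands in here.
                cph = CoxPHFitter(penalizer=lam, l1_ratio=1.0)
                cph.fit(df, duration_col=duration_col, event_col=event_col)
                df_lam = int((cph.params_.abs() > 1e-8).sum())  # nonzero coefficients
                scores[lam] = -2.0 * cph.log_likelihood_ + a_n * df_lam
            return min(scores, key=scores.get), scores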

    Variable Selection for Case-Cohort Studies with Failure Time Outcome

    Case-cohort designs are widely used in large cohort studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large, so an efficient variable selection method is necessary. In this paper, we study the properties of variable selection using the smoothly clipped absolute deviation penalty in a case-cohort design with a diverging number of parameters. We establish the consistency and asymptotic normality of the maximum penalized pseudo-partial likelihood estimator, and show that the proposed variable selection procedure is consistent and has an asymptotic oracle property. Simulation studies compare the finite sample performance of the procedure with Akaike information criterion- and Bayesian information criterion-based tuning parameter selection methods. We make recommendations for use of the procedures in case-cohort studies, and apply them to the Busselton Health Study.
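    The smoothly clipped absolute deviation (SCAD) penalty of Fan and Li is defined through its derivative,

        p_\lambda'(\theta) = \lambda \Big\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_+}{(a - 1)\lambda}\, I(\theta > \lambda) \Big\}, \qquad \theta > 0,\ a > 2,

    with a = 3.7 the conventional choice; the penalized pseudo-partial likelihood then maximizes \ell_n(\beta) - n \sum_j p_\lambda(|\beta_j|). The penalty is lasso-like near zero, so small coefficients are set exactly to zero, and flat for large |\beta_j|, so large coefficients are left nearly unpenalized, which is what yields the oracle property.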

    Improving the efficiency of estimation in the additive hazards model for stratified case-cohort design with multiple diseases

    The case-cohort study design has often been used in studies of a rare disease, or for a common disease when some biospecimens need to be preserved for future studies. A case-cohort study design consists of a random sample, called the subcohort, and all or a portion of the subjects with the disease of interest. One advantage of the case-cohort design is that the same subcohort can be used for studying multiple diseases. Stratified random sampling is often used for the subcohort. Additive hazards models are often preferred in studies where the risk difference, instead of the relative risk, is of main interest. Existing methods do not fully use the available covariate information. We propose a more efficient estimator that makes full use of available covariate information for the additive hazards model with data from a stratified case-cohort design with rare (the traditional situation) and non-rare (the generalized situation) diseases. We propose an estimating equation approach with a new weight function. The proposed estimators are shown to be consistent and asymptotically normally distributed. Simulation studies show that the proposed method using all available information leads to an efficiency gain, and that stratification of the subcohort improves efficiency when the strata are highly correlated with the covariates. Our proposed method is applied to data from the Atherosclerosis Risk in Communities (ARIC) study.
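    In the Lin-Ying additive hazards model the covariates shift the hazard rather than multiply it,

        \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z(t),

    so \beta is a risk difference per unit of covariate. A weighted estimating equation of the generic form

        U(\beta) = \sum_i \int_0^\tau w_i(t)\, \{ Z_i(t) - \bar{Z}_w(t) \}\, \{ dN_i(t) - Y_i(t)\, \beta^\top Z_i(t)\, dt \} = 0,

    with w_i(t) correcting for the case-cohort sampling (e.g., inverse selection probabilities within each stratum), keeps the estimator consistent under the biased sampling; the paper's specific new weight function is its contribution and is not reproduced here.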

    Quantile regression models for current status data

    Current status data arise frequently in demography, epidemiology, and econometrics, where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and the observation time. An M-estimator, computed using the concave-convex procedure, is developed for parameter estimation, and its confidence intervals are constructed using a subsampling method. Asymptotic properties of the estimator are derived using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging.
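    In generic notation, the model and the observed data take the form

        Q_\tau(T \mid X) = X^\top \beta(\tau), \qquad \text{observed: } (C, \delta, X) \text{ with } \delta = I(T \le C),

    so T itself is never observed, only its current status \delta at the examination time C. The resulting M-estimation objective is nonconvex; the concave-convex procedure handles this by writing the objective as a sum of a convex and a concave function and iteratively majorizing the concave part by its tangent line, which reduces each iteration to a convex program.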

    Case-Cohort Analysis with Accelerated Failure Time Model

    In a case–cohort design, covariates are assembled only for a subcohort that is randomly selected from the entire cohort and for any additional cases outside the subcohort. This design is appealing for large cohort studies of rare disease, especially when the exposures of interest are expensive to ascertain for all the subjects. We propose statistical methods for analyzing case–cohort data with a semiparametric accelerated failure time model, which interprets the covariate effects as accelerating or decelerating the time to failure. Asymptotic properties of the proposed estimators are developed. The finite sample properties of the case–cohort estimator and its efficiency relative to the full-cohort estimator are assessed via simulation studies. A real example from a study of cardiovascular disease is provided to illustrate the estimation procedure.
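    The semiparametric accelerated failure time model is linear on the log scale,

        \log T = \beta^\top Z + \varepsilon,

    with the error distribution left unspecified, so e^{\beta^\top Z} directly stretches or shrinks the time to failure. Under case-cohort sampling a natural estimating strategy (stated generically here, not necessarily the paper's exact construction) is to weight the usual rank-based AFT estimating functions by inverse selection probabilities, so that the subcohort plus the sampled cases mimic the full cohort.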

    Additive transformation models for clustered failure time data

    We propose a class of additive transformation risk models for clustered failure time data. Our models are motivated by the usual additive risk model for independent failure times and incorporate a frailty with mean one and constant variance, providing a natural generalization of the additive risk model from univariate to multivariate failure times. An estimating equation approach based on the marginal hazard function is proposed. Under the assumption that cluster sizes are completely random, we show that the resulting estimators of the regression coefficients are consistent and asymptotically normal. We also provide goodness-of-fit test statistics for choosing the transformation. Simulation studies and a real data analysis are conducted to examine the finite-sample performance of our estimators.
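    The abstract does not display the model; one natural frailty version of the additive risk model (an assumption here, not the paper's exact specification) is

        \lambda_{ij}(t \mid Z_{ij}, \xi_i) = \xi_i\, \{ \lambda_0(t) + \beta^\top Z_{ij}(t) \}, \qquad E(\xi_i) = 1,\ \mathrm{Var}(\xi_i) = \sigma^2,

    where \xi_i is a cluster-level frailty inducing within-cluster dependence; with \sigma^2 = 0 this collapses to the usual additive risk model for independent failure times, and the choice of frailty distribution (the transformation) is what the goodness-of-fit statistics are designed to check.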

    Additive Mixed Effect Model for Clustered Failure Time Data

    We propose an additive mixed effect model to analyze clustered failure time data. The proposed model assumes an additive structure and includes a random effect as an additional component. Our model imitates the commonly used mixed effect models in repeated measurement analysis, but in the context of hazard regression; it can also be viewed as a parallel development of the gamma-frailty model within additive model structures. We develop estimating equations for parameter estimation and propose a way of assessing the distribution of the latent random effect in the presence of large clusters. We establish the asymptotic properties of the proposed estimator. The small sample performance of our method is demonstrated via extensive simulation studies. Finally, we apply the proposed model to analyze data from a diabetic study and a treatment trial for congestive heart failure.
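    Mirroring a linear mixed model, the hazard here plausibly takes the form (a sketch consistent with the abstract, not a quotation from the paper)

        \lambda_{ij}(t \mid Z_{ij}, b_i) = \lambda_0(t) + \beta^\top Z_{ij}(t) + b_i, \qquad E(b_i) = 0,

    so the cluster effect b_i enters additively on the hazard scale, in contrast to the gamma-frailty model, where the cluster effect multiplies the hazard.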

    Joint covariate-adjusted score test statistics for recurrent events and a terminal event

    Recurrent event data are frequently encountered in clinical trials and may be stopped by a terminal event. In many applications it is of interest to assess the treatment efficacy simultaneously with respect to both the recurrent events and the terminal event. In this paper we propose joint covariate-adjusted score test statistics based on joint models of recurrent events and a terminal event. No assumptions on the functional form of the covariates are needed. Simulation results show that the proposed tests can improve efficiency over tests based on a covariate-unadjusted model. The proposed tests are applied to the SOLVD data for illustration.
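    Generically, with U(\theta) the score vector for the treatment effects in a joint model of the recurrent and terminal event processes and \hat{I} a consistent estimator of its covariance under the null, a joint score test statistic takes the familiar form

        T = U(0)^\top \hat{I}^{-1} U(0) \;\xrightarrow{d}\; \chi^2_q \quad \text{under } H_0,

    with q degrees of freedom for the q effects tested jointly; covariate adjustment enters through the model from which U is derived, without requiring a functional form for the covariate effects.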