
    Asymptotics for non-parametric likelihood estimation with doubly censored multivariate failure times

    This paper considers non-parametric estimation of a multivariate failure time distribution function when only doubly censored data are available, as occurs in many settings such as epidemiological studies. In these settings, each of the multivariate failure times of interest is defined as the elapsed time between an initial event and a subsequent event, and the observations on both events can suffer censoring. As a consequence, estimation of the multivariate distribution is much more complicated, both theoretically and practically, than for multivariate right-censored or interval-censored failure time data. Although several procedures have been proposed for the problem, they are only ad hoc approaches, as the asymptotic properties of the resulting estimates are essentially unknown. We investigate both the consistency and the convergence rate of a commonly used non-parametric estimate and show that as the dimension of the multivariate failure time increases, or the number of censoring intervals decreases, the convergence rate of the non-parametric estimate decreases and is slower than with multivariate singly right-censored or interval-censored data.
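    A minimal sketch of the data structure described above (my own toy illustration, not code from the paper): each failure time is the gap between an initial and a subsequent event, and because both events are only located within examination intervals, the gap itself is only known up to an interval. All names, distributions, and the visit grid are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical setup: true initial-event times E, gap times T (the failure
# times of interest), and subsequent-event times S = E + T.
E = rng.uniform(0.0, 2.0, size=n)
T = rng.exponential(1.0, size=n)
S = E + T

# Hypothetical examination schedule: each event is only located within the
# interval between consecutive visits.
visits = np.arange(0.0, 41.0, 0.5)

def bracket(t):
    """Return the visit interval (l, r] containing time t."""
    i = np.searchsorted(visits, t)
    return visits[i - 1], visits[i]

# Doubly censored observation of each gap: if E lies in (el, er] and S lies
# in (sl, sr], then T = S - E is only known to lie within [sl - er, sr - el].
obs = []
for e, s in zip(E, S):
    (el, er), (sl, sr) = bracket(e), bracket(s)
    obs.append((max(sl - er, 0.0), sr - el))
```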

    Pairwise Comparison Estimation of Censored Transformation Models

    In this paper a pairwise comparison estimation procedure is proposed for the regression coefficients in a censored transformation model. The main advantage of the new estimator is that it can accommodate covariate-dependent censoring without requiring smoothing parameters, trimming procedures, or stringent tail behavior restrictions. We also modify the pairwise estimator for other variations of the transformation model and propose estimators for the transformation function itself, as well as for the regression coefficients in heteroskedastic and panel data models. The estimators are shown to converge at the parametric (root-n) rate, and the results of a small-scale simulation study indicate that they perform well in finite samples. We illustrate our estimator using the Stanford Heart Transplant data and marriage length data from the CPS fertility supplement.
    Keywords: transformation models, pairwise comparison, maximum rank correlation, duration analysis
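    The pairwise-comparison idea builds on maximum rank correlation estimation. A toy uncensored sketch (a hedged illustration of the underlying principle, not the paper's censored estimator; all data and names are made up) maximizes the fraction of concordant pairs between the outcome and a linear index, without ever estimating the monotone transformation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical data from a transformation model T = h(x'b + e) with h
# monotone (here h = exp) and true coefficients b = (1, 2); the first
# coefficient is normalized to 1, so only beta2 is estimated.
X = rng.normal(size=(n, 2))
y = np.exp(X @ np.array([1.0, 2.0]) + rng.normal(size=n))

def rank_corr(beta2):
    """Fraction of pairs on which y and the index x1 + beta2*x2 agree in
    ordering; maximizing this is the pairwise-comparison objective."""
    idx = X[:, 0] + beta2 * X[:, 1]
    yi, yj = y[:, None], y[None, :]
    ii, jj = idx[:, None], idx[None, :]
    return np.mean((yi > yj) == (ii > jj))

# Grid search over the free coefficient; h never appears in the objective.
grid = np.linspace(0.0, 4.0, 81)
beta_hat = grid[int(np.argmax([rank_corr(b) for b in grid]))]
```

Because the objective depends on the data only through pairwise orderings, it is invariant to any monotone transformation of the outcome, which is what makes the approach robust to the unknown transformation function.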

    Empirical Efficiency Maximization

    It has long been recognized that covariate adjustment can increase precision, even when it is not strictly necessary. The phenomenon is particularly emphasized in clinical trials, whether using continuous, categorical, or censored time-to-event outcomes. Adjustment is often straightforward when a discrete covariate partitions the sample into a handful of strata, but becomes more involved when modern studies collect copious amounts of baseline information on each subject. This dilemma helped motivate locally efficient estimation for coarsened data structures, as surveyed in the books of van der Laan and Robins (2003) and Tsiatis (2006). Here one fits a relatively small working model for the full data distribution, often by maximum likelihood, giving a nuisance parameter fit used in an estimating equation for the parameter of interest. The usual advertisement is that the estimator is asymptotically efficient if the working model is correct, but otherwise is still consistent and asymptotically normal. However, the working model will almost always be misspecified in practice, and standard likelihood-based fits can then estimate the parameter of interest poorly. We propose a new method, empirical efficiency maximization, to target the element of a working model minimizing the asymptotic variance of the resulting parameter estimate, whether or not the working model is correctly specified. Our procedure is illustrated in three examples. It is shown to be a potentially major improvement over existing covariate adjustment methods for estimating disease prevalence in two-phase epidemiological studies, treatment effects in two-arm randomized trials, and marginal survival curves. Numerical asymptotic efficiency calculations demonstrate gains relative to standard locally efficient estimators.
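    The core idea can be conveyed with a toy covariate-adjusted mean (a hedged sketch of the principle, not one of the paper's examples; all data and names are hypothetical): every working-model coefficient b yields a consistent augmented estimator, so b is chosen to minimize the estimator's empirical variance rather than to fit the working model by likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Hypothetical data: the linear working model b*(X - mean(X)) for E[Y|X]
# is deliberately misspecified, since the truth is nonlinear.
X = rng.normal(size=n)
Y = np.sin(2.0 * X) + 0.3 * rng.normal(size=n)

def augmented_estimate(b):
    """Covariate-adjusted estimate of E[Y] and a variance proxy. The point
    estimate is the same for every b; only its variance depends on b."""
    resid = Y - b * (X - X.mean())
    return resid.mean(), resid.var() / n

# Empirical efficiency maximization in miniature: pick b to minimize the
# estimator's own empirical variance, not a likelihood criterion.
grid = np.linspace(-2.0, 2.0, 161)
variances = [augmented_estimate(b)[1] for b in grid]
b_opt = grid[int(np.argmin(variances))]
mu_hat, var_opt = augmented_estimate(b_opt)
```

The unadjusted estimator corresponds to b = 0, so by construction the variance-targeted choice can only do at least as well on the empirical criterion.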

    A Cox proportional hazards model for mid-point imputed interval-censored data

    There has been increasing interest in survival analysis with interval-censored data, where the event of interest (such as infection with a disease) is not observed exactly but is only known to happen between two examination times. Because research has long focused on right-censored data, many statistical tests and techniques are available for right-censoring, whereas methods for interval-censoring are not nearly as abundant. In this study, right-censoring methods are used to fit a proportional hazards model to interval-censored data. The interval-censored observations were transformed using mid-point imputation, a method which assumes that an event occurs at the midpoint of its recorded interval. The results gave conservative regression estimates, but a comparison with the conventional methods showed that the estimates were not significantly different. However, the censoring mechanism and interval lengths should be given serious consideration before deciding to use mid-point imputation on interval-censored data.
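    Mid-point imputation itself is simple to sketch (a hedged illustration; the intervals and names below are made up): replace each finite interval (L, R] by its midpoint with an event indicator of 1, and treat observations with R = infinity as right-censored at L, so that standard right-censoring software applies:

```python
import numpy as np

# Hypothetical interval-censored observations (L, R]; R = inf marks a
# subject who was still event-free at the last examination time L.
intervals = [(0.0, 2.0), (1.0, 3.0), (4.0, np.inf), (2.5, 5.5)]

def midpoint_impute(intervals):
    """Map interval-censored data to (time, event) pairs that standard
    right-censoring software (e.g. a Cox model fitter) can consume."""
    times, events = [], []
    for lo, hi in intervals:
        if np.isinf(hi):                  # right-censored at last exam
            times.append(lo)
            events.append(0)
        else:                             # impute the interval midpoint
            times.append((lo + hi) / 2.0)
            events.append(1)
    return np.array(times), np.array(events)

times, events = midpoint_impute(intervals)
```

The resulting (time, event) pairs can be passed directly to any proportional hazards routine for right-censored data, which is exactly the shortcut whose validity the abstract cautions depends on the censoring mechanism and the interval lengths.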

    Penalized log-likelihood estimation for partly linear transformation models with current status data

    We consider partly linear transformation models applied to current status data. The unknown quantities are the transformation function, a linear regression parameter and a nonparametric regression effect. It is shown that the penalized MLE for the regression parameter is asymptotically normal and efficient and converges at the parametric rate, although the penalized MLEs for the transformation function and nonparametric regression effect are only n^{1/3} consistent. Inference for the regression parameter based on a block jackknife is investigated. We also study computational issues and demonstrate the proposed methodology with a simulation study. The transformation models and partly linear regression terms, coupled with new estimation and inference techniques, provide flexible alternatives to the Cox model for current status data analysis.
    Published at http://dx.doi.org/10.1214/009053605000000444 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
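    The block jackknife used here for inference can be sketched generically (a hedged toy with the sample mean standing in for the penalized MLE; the block count and all names are assumptions): recompute the estimator with one block of observations deleted at a time, then combine the spread of the leave-one-block-out estimates into a standard error:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=5.0, size=120)   # hypothetical data

def block_jackknife_se(data, estimator, n_blocks=10):
    """Delete-one-block jackknife standard error: re-estimate leaving out
    one block at a time, then scale the spread of the estimates."""
    blocks = np.array_split(data, n_blocks)
    leave_out = np.array([
        estimator(np.concatenate(blocks[:k] + blocks[k + 1:]))
        for k in range(n_blocks)
    ])
    m = n_blocks
    return np.sqrt((m - 1) / m * np.sum((leave_out - leave_out.mean()) ** 2))

# Toy use with the sample mean in place of the penalized MLE.
se = block_jackknife_se(x, np.mean)
```

Resampling whole blocks rather than single observations keeps the number of re-fits modest, which matters when each re-fit is itself a penalized optimization.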

    Locally Efficient Estimation with Bivariate Right Censored Data

    Estimation for bivariate right-censored data is a problem that has received much study over the past 15 years. In this paper we propose a new class of estimators for the bivariate survivor function based on locally efficient estimation. The locally efficient estimator takes bivariate estimators Fn and Gn of the distributions of the time variables T1, T2 and the censoring variables C1, C2, respectively, and maps them to the resulting estimator. If Fn and Gn are consistent estimators of F and G, respectively, then the resulting estimator will be nonparametrically efficient (hence the term "locally efficient"). However, if either Fn or Gn (but not both) is not a consistent estimator of F or G, respectively, then the estimator will still be consistent and asymptotically normally distributed. We propose a locally efficient estimator which uses a consistent non-parametric estimator for G and allows the user to supply a lower dimensional (semi-parametric or parametric) model for F. Since the estimator we choose for G will be consistent, the resulting locally efficient estimator will always be consistent and asymptotically normal, and our simulation studies indicate that using a lower dimensional model for F gives excellent small-sample performance. In addition, our algorithm for calculating the efficient influence curve at the true distributions F and G also yields the efficiency bound, which can be used to calculate relative efficiencies for any bivariate estimator. In this paper we introduce the locally efficient estimator for bivariate right-censored data, present an asymptotic theorem, present the results of simulation studies, and perform a brief data analysis illustrating the use of the locally efficient estimator.
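    A one-dimensional analogue conveys the plug-in flavor of such estimators (a hedged sketch of the general idea, not the bivariate estimator of the paper; all data and names are hypothetical): estimate the censoring survivor function G consistently by Kaplan-Meier, then recover the failure survivor S from P(X > t) = S(t)G(t) under independent censoring:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Hypothetical right-censored data: failure times T with S(t) = exp(-t),
# independent censoring times C; we observe X = min(T, C) and delta.
T = rng.exponential(1.0, size=n)
C = rng.exponential(2.0, size=n)
X = np.minimum(T, C)
delta = (T <= C).astype(int)

def km_survival(times, events, t):
    """Kaplan-Meier survivor estimate at time t (continuous data, no ties)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    s, at_risk = 1.0, len(times)
    for time, ev in zip(times, events):
        if time > t:
            break
        if ev:
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return s

# Plug in a consistent nonparametric estimate of the censoring survivor G
# (note the flipped event indicator), then invert P(X > t) = S(t) * G(t).
t0 = 1.0
G_hat = km_survival(X, 1 - delta, t0)
S_hat = np.mean(X > t0) / G_hat
```

The robustness claim in the abstract mirrors this structure: as long as the censoring-distribution estimate is consistent, the plug-in estimate of the survivor function remains consistent even when the model supplied for F is wrong.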

    Asymptotics for a Class of Dynamic Recurrent Event Models

    Asymptotic properties, both consistency and weak convergence, of estimators arising in a general class of dynamic recurrent event models are presented. The class of models takes into account the impact of interventions after each event occurrence, the impact of accumulating event occurrences, the induced informative and dependent right-censoring mechanism due to the data-accrual scheme, and the effect of covariate processes on the recurrent event occurrences. The class subsumes as special cases many of the recurrent event models that have been considered in biostatistics, reliability, and the social sciences. The asymptotic properties presented are potentially useful for developing goodness-of-fit and model validation procedures, confidence interval and confidence band constructions, and hypothesis testing procedures for the finite- and infinite-dimensional parameters of a general class of dynamic recurrent event models, albeit models without frailties.

    Bayesian Regularisation in Structured Additive Regression Models for Survival Data

    During recent years, penalized likelihood approaches have attracted a lot of interest both in the area of semiparametric regression and for the regularization of high-dimensional regression models. In this paper, we introduce a Bayesian formulation that combines both aspects into a joint regression model, with a focus on hazard regression for survival times. While Bayesian penalized splines form the basis for estimating nonparametric and flexible time-varying effects, regularization of high-dimensional covariate vectors is based on scale-mixture-of-normals priors. This class of priors keeps a (conditionally) Gaussian prior for the regression coefficients at the predictor stage of the model but introduces suitable mixture distributions for the Gaussian variance to achieve regularization. The scale mixture property makes it possible to devise general and adaptive Markov chain Monte Carlo simulation algorithms for fitting a variety of hazard regression models. In particular, unifying algorithms based on iteratively weighted least squares proposals can be employed both for regularization and for penalized semiparametric function estimation. Since sampling-based estimates no longer have the variable selection property well known for the lasso in frequentist analyses, we additionally consider spike-and-slab priors that introduce a further mixing stage to separate influential from redundant parameters. We demonstrate the different shrinkage properties in three simulation settings and apply the methods to the PBC Liver dataset.
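    The scale-mixture device can be illustrated concretely (a hedged sketch using the Park and Casella Bayesian-lasso representation; the penalty value and all names are made up): drawing a variance tau^2 from an exponential distribution and then beta | tau^2 ~ N(0, tau^2) yields a marginal Laplace prior, while the conditional Gaussianity is what lets IWLS-type MCMC updates be reused:

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 1.5           # hypothetical lasso-type penalty parameter
n = 200_000

# Scale mixture of normals: tau2 ~ Exponential(rate = lam^2 / 2), then
# beta | tau2 ~ N(0, tau2). Marginally beta is Laplace(scale = 1 / lam),
# yet conditionally on tau2 it stays Gaussian, so conditionally Gaussian
# update steps remain available for the regression coefficients.
tau2 = rng.exponential(scale=2.0 / lam**2, size=n)
beta = rng.normal(0.0, np.sqrt(tau2))

# Direct Laplace draws for comparison; the two samples should agree in
# distribution (e.g. matching variances 2 / lam^2).
direct = rng.laplace(scale=1.0 / lam, size=n)
```

Swapping the exponential mixing distribution for another choice changes the implied shrinkage prior while leaving the conditionally Gaussian update structure intact, which is the adaptivity the abstract emphasizes.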

    Semiparametric Regression During 2003–2007

    Semiparametric regression is a fusion between parametric regression and nonparametric regression, and the title of a book that we published on the topic in early 2003. We review developments in the field during the five-year period since the book was written. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement, and widespread application.