
    Beyond first-order asymptotics for Cox regression

    To go beyond standard first-order asymptotics for Cox regression, we develop parametric bootstrap and second-order methods. In general, computing P-values beyond first order requires more model specification than the likelihood function does. It is problematic to specify a censoring mechanism in enough detail to be taken seriously, and conditioning on censoring does not appear to be a viable alternative. We circumvent this by employing a reference censoring model that matches the extent and timing of the observed censoring. Our primary proposal is a parametric bootstrap method that uses this reference censoring model to simulate inferential repetitions of the experiment. We show that the most important part of the improvement on first-order methods, the part pertaining to fitting nuisance parameters, is insensitive to the assumed censoring model. This is supported by numerical comparisons of our proposal with parametric bootstrap methods based on the usual random censoring models, which are far less attractive to implement. As an alternative to our primary proposal, we provide a second-order method that requires less computing effort while giving more insight into the nature of the improvement on first-order methods. The parametric bootstrap method, however, is more transparent, and hence is our primary proposal. Indications are that first-order partial likelihood methods are usually adequate in practice, so we are not advocating routine use of the proposed methods. It is nevertheless useful to see how best to check on first-order approximations, or improve on them, when this is expressly desired.
    Comment: Published at http://dx.doi.org/10.3150/13-BEJ572 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
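    The core idea of the reference censoring model — hold each subject's potential censoring time fixed at its observed follow-up time and resimulate only the event times from the fitted model — can be sketched as follows. This is a minimal toy illustration, not the paper's method: the data, the exponential event-time model, and the rate estimator are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed data (hypothetical): follow-up times and status
# (1 = event observed, 0 = right-censored).
time = np.array([2.1, 3.5, 1.2, 4.0, 2.8, 5.1])
status = np.array([1, 0, 1, 1, 0, 1])

# Fit a simple exponential event-time model by maximum likelihood:
# hat_lambda = (number of events) / (total time at risk).
lam_hat = status.sum() / time.sum()

def bootstrap_replicate(rng):
    """One inferential repetition under a reference censoring model:
    each subject's potential censoring time is fixed at its observed
    follow-up time, and only the event time is redrawn."""
    t_event = rng.exponential(1.0 / lam_hat, size=time.size)
    t_obs = np.minimum(t_event, time)        # censor at observed follow-up
    d_obs = (t_event <= time).astype(int)    # new event indicator
    return d_obs.sum() / t_obs.sum()         # re-estimated rate

# Bootstrap distribution of the rate estimator under the reference model.
reps = np.array([bootstrap_replicate(rng) for _ in range(2000)])
```

    The simulated replicates reproduce the extent and timing of the observed censoring by construction, which is the point of the reference model: no separate censoring mechanism needs to be specified.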

    Semiparametric linear regression with censored data and stochastic regressors

    We propose three new estimation procedures for the linear regression model with randomly right-censored data when the distribution function of the error term is unspecified, the regressors are stochastic, and the distribution function of the censoring variable is not necessarily the same for all observations ("unequal censoring"). The proposed procedures are derived by combining techniques that produce accurate estimates under "equal censoring" with kernel-conditional Kaplan-Meier estimates. The performance of six estimation procedures (the three proposed methods and three alternatives) is compared by means of Monte Carlo experiments.
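    The Kaplan-Meier estimator is the building block these procedures condition on. A minimal unconditional version (assuming distinct, i.e. untied, observation times; the paper's kernel-conditional variant additionally weights by regressor proximity) can be sketched as:

```python
import numpy as np

def kaplan_meier(time, status):
    """Kaplan-Meier survival estimate at the distinct event times.
    status: 1 = event observed, 0 = right-censored. Assumes no ties."""
    time = np.asarray(time, float)
    status = np.asarray(status, int)
    order = np.argsort(time)
    time, status = time[order], status[order]
    n = time.size
    surv = 1.0
    points = []
    for i, (t, d) in enumerate(zip(time, status)):
        at_risk = n - i                      # subjects still under observation
        if d == 1:                           # events shrink the survival curve
            surv *= (at_risk - 1) / at_risk
            points.append((t, surv))
        # censored observations only reduce the future risk set
    return points

km = kaplan_meier([1.0, 2.0, 2.5, 3.0], [1, 0, 1, 1])
# Events at t = 1.0, 2.5, 3.0; the censoring at t = 2.0 enters only
# through the risk-set size.
```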

    A Churn for the Better: Localizing Censorship using Network-level Path Churn and Network Tomography

    Recent years have seen the Internet become a key vehicle for citizens around the globe to express political opinions and organize protests. This fact has not gone unnoticed, with countries around the world repurposing network management tools (e.g., URL filtering products) and protocols (e.g., BGP, DNS) for censorship. However, repurposing these products can have unintended international impact, which we refer to as "censorship leakage". While there have been anecdotal reports of censorship leakage, there has yet to be a systematic study of censorship leakage at a global scale. In this paper, we combine a global censorship measurement platform (ICLab) with a general-purpose technique -- boolean network tomography -- to identify which AS on a network path is performing censorship. At a high level, our approach exploits BGP churn to narrow down the set of potential censoring ASes by over 95%. We exactly identify 65 censoring ASes and find that the anomalies introduced by 24 of the 65 censoring ASes have an impact on users located in regions outside the jurisdiction of the censoring AS, resulting in the leaking of regional censorship policies.
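    The boolean-tomography logic behind this localization can be sketched with set operations. All AS names and measurements below are hypothetical; the sketch only shows why path churn shrinks the candidate set: a censoring AS must lie on every path where the anomaly was seen and on no clean path.

```python
# Each measurement pairs the set of ASes on the path with whether the
# request was censored. BGP churn changes the path over time, so repeated
# measurements traverse different AS sets and narrow the intersection.
measurements = [
    ({"AS1", "AS2", "AS5"}, True),   # censored path
    ({"AS1", "AS3", "AS5"}, True),   # after churn: different path, still censored
    ({"AS1", "AS4", "AS6"}, False),  # clean path exonerates its ASes
]

candidates = None
for path, censored in measurements:
    if censored:
        # Censoring AS must appear on every censored path.
        candidates = set(path) if candidates is None else candidates & path
for path, censored in measurements:
    if not censored and candidates:
        # ...and on no uncensored path.
        candidates -= path

# candidates is now the minimal set consistent with all observations.
```

    In this toy run the intersection of the two censored paths is {AS1, AS5}, and the clean path removes AS1, leaving AS5 as the localized censor.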

    Building Prediction Models for Dementia: The Need to Account for Interval Censoring and the Competing Risk of Death

    Indiana University-Purdue University Indianapolis (IUPUI)
    Context. Prediction models for dementia are crucial for informing clinical decision making in older adults. Previous models have used genotype and age to obtain risk scores for Alzheimer's disease, one of the most common forms of dementia (Desikan et al., 2017). However, these models do not account for the fact that the time of dementia onset is unknown, lying between the last negative and the first positive dementia diagnosis time (interval censoring). Instead, they use the time of diagnosis, which is greater than or equal to the true onset time. Furthermore, they do not account for the competing risk of death, which is frequent among older adults.
    Objectives. To develop a prediction model for dementia that accounts for interval censoring and the competing risk of death, and to compare its predictions with those of a naïve analysis that ignores both.
    Methods. We apply the semiparametric sieve maximum likelihood (SML) approach to simultaneously model the cumulative incidence function (CIF) of dementia and death while accounting for interval censoring (Bakoyannis, Yu, & Yiannoutsos, 2017). The SML is implemented in the R package intccr. The CIF curves of dementia from the SML and the naïve approach are compared using data from the Indianapolis-Ibadan Dementia Project.
    Results. For healthier individuals at baseline, the naïve approach underestimated the incidence of dementia relative to the SML, as a consequence of interval censoring. For individuals in poorer health at baseline, the naïve approach appeared to overestimate the CIF, because older individuals in poor health have an elevated risk of death.
    Conclusions. The SML method, which accounts for the competing risk of death along with interval censoring, should be used for fitting prediction/prognostic models of dementia to inform clinical decision making in older adults. Without controlling for the competing risk of death and interval censoring, current models can provide invalid predictions of the CIF of dementia.
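    The direction of the interval-censoring bias described above can be seen in a toy simulation (all numbers hypothetical; the competing risk of death is deliberately omitted here, so this shows only the interval-censoring effect): using the first positive exam time instead of the true onset time shifts the incidence curve to the right.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true onset times, only ever bracketed by periodic exams.
onset = rng.exponential(5.0, size=10_000)   # true (unobserved) onset times
visits = np.arange(0.0, 20.0, 2.0)          # exams every 2 years up to year 18

# First exam at which dementia is detected = right endpoint of the
# censoring interval; onsets after the last exam go undetected here.
idx = np.searchsorted(visits, onset, side="right")
first_positive = np.where(onset < visits[-1],
                          visits[np.minimum(idx, visits.size - 1)],
                          np.inf)

t = 5.0
true_cif = (onset <= t).mean()              # fraction truly demented by t
naive_cif = (first_positive <= t).mean()    # detection by the year-4 exam only
# naive_cif < true_cif: the naive curve understates early incidence,
# matching the underestimation reported for healthy baseline subjects.
```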

    A semi-Markov model for stroke with piecewise-constant hazards in the presence of left, right and interval censoring.

    This paper presents a parametric method of fitting semi-Markov models with piecewise-constant hazards in the presence of left, right and interval censoring. We investigate transition intensities in a three-state illness-death model with no recovery. We relax the Markov assumption by adjusting the intensity for the transition from state 2 (illness) to state 3 (death) for the time spent in state 2 through a time-varying covariate. This requires the exact time of the transition from state 1 (healthy) to state 2. When the data are subject to left or interval censoring, this time is unknown. In the estimation of the likelihood, we handle interval censoring by integrating out all possible times for the transition from state 1 to state 2. For left censoring, we use an Expectation-Maximisation inspired algorithm. A simulation study illustrates the performance of the method. The proposed combination of statistical procedures provides great flexibility. We illustrate the method in an application using data on stroke onset for the older population from the UK Medical Research Council Cognitive Function and Ageing Study.
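    The "integrate out the unknown transition time" step can be sketched numerically for a single subject. Everything below is a hypothetical simplification: a two-piece hazard for 1→2, a constant 2→3 hazard with the duration effect omitted, and a trapezoidal quadrature over the censoring interval.

```python
import numpy as np

grid = np.array([0.0, 5.0, 10.0])   # change points of the piecewise hazard
lam12 = np.array([0.02, 0.05])      # healthy->ill hazard on [0,5) and [5,10)
lam23 = 0.10                        # ill->death hazard (duration effect omitted)

def h12(u):
    """Piecewise-constant 1->2 hazard at time u."""
    return lam12[np.searchsorted(grid, u, side="right") - 1]

def H12(u):
    """Cumulative 1->2 hazard: integral of the step function up to u."""
    widths = np.clip(u - grid[:-1], 0.0, np.diff(grid))
    return float(np.sum(lam12 * widths))

# Subject interval-censored for 1->2 in [a, b], death observed at d:
# the likelihood contribution integrates over the unknown transition time u.
a, b, d = 2.0, 8.0, 9.0
u = np.linspace(a, b, 2001)
f = np.array([h12(ui) * np.exp(-H12(ui))        # density of entering state 2 at u
              * np.exp(-lam23 * (d - ui))       # surviving in state 2 until d
              for ui in u])
# Trapezoidal rule for the integral over [a, b].
contribution = float(np.sum((f[:-1] + f[1:]) * np.diff(u)) / 2.0)
```

    In the full likelihood this contribution would also carry the 2→3 hazard at the death time and the duration-dependent covariate adjustment; the sketch shows only the integration that replaces the unknown transition time.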

    Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection.

    Feature selection is essential in the medical domain; however, the presence of censoring, the defining feature of survival analysis, complicates the process. Most survival feature selection methods are based on Cox's proportional hazards model, even though machine learning classifiers would often be preferred; censoring prevents such classifiers from being applied directly to survival data. Among the few works that employ machine learning classifiers, the partial logistic artificial neural network with automatic relevance determination is a well-known method that deals with censoring and performs feature selection for survival data. However, it depends on data replication to handle censoring, which leads to unbalanced and biased prediction results, especially in highly censored data. Other methods cannot deal with high censoring at all. Therefore, in this article, a new hybrid feature selection method is proposed that offers a solution to high levels of censoring. It combines support vector machine, neural network, and K-nearest neighbor classifiers using simple majority voting and a new weighted majority voting method based on a survival metric to construct a multiple classifier system. The new hybrid feature selection process uses the multiple classifier system as a wrapper method and merges it with an iterated feature ranking filter method to further reduce the feature set. Two endovascular aortic repair datasets containing 91% censored patients, collected from two centers, were used to construct a multicenter study to evaluate the performance of the proposed approach. The results showed that the proposed technique outperformed the individual classifiers and variable selection methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in terms of log-rank test p-values, sensitivity, and concordance index.
    This indicates that the proposed classifier is more powerful in correctly predicting the risk of re-intervention, enabling doctors to select patients' future follow-up plans.
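    The voting step of such a multiple classifier system can be sketched in a few lines. The weights and votes below are hypothetical; the sketch only shows the mechanism of weighting each classifier's vote by a survival metric such as its validation concordance index.

```python
def weighted_vote(votes, weights):
    """Weighted majority vote over binary (0/1) classifier outputs.
    weights: e.g. each classifier's concordance index on validation data."""
    score = sum(w * (1 if v == 1 else -1) for v, w in zip(votes, weights))
    return 1 if score > 0 else 0

def simple_majority(votes):
    """Unweighted majority vote for comparison."""
    return 1 if sum(votes) * 2 >= len(votes) + 1 else 0

# Three base classifiers (SVM, NN, k-NN stand-ins) vote on one patient's
# re-intervention risk; hypothetical validation c-indices as weights.
votes = [1, 0, 1]
c_index = [0.70, 0.55, 0.65]
risk_weighted = weighted_vote(votes, c_index)   # stronger voters dominate
risk_simple = simple_majority(votes)
```

    A wrapper-style feature selection would evaluate candidate feature subsets by retraining the base classifiers and scoring the combined vote, keeping the subset with the best survival metric.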

    Likelihood Estimation for Censored Random Vectors

    This article shows how to construct a likelihood for a general class of censoring problems. This likelihood is proven to be valid, i.e. its maximiser is consistent and the corresponding root-n estimator is asymptotically efficient and normally distributed under regularity conditions. The method generalises ordinary maximum likelihood estimation as well as several standard estimators for censoring problems (e.g. tobit type I to tobit type V).
    Keywords: Censored variables; Limited dependent variables; Multivariate methods; Random censoring; Likelihood
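    The simplest member of the class the article generalises, the tobit type I log-likelihood, can be sketched from first principles: uncensored observations contribute a normal density term, observations censored at the threshold contribute a normal probability term. The data and parameter values below are hypothetical.

```python
import math

def norm_logpdf(z):
    """Log density of the standard normal at z."""
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit1_loglik(beta, sigma, x, y, c=0.0):
    """Tobit type I: y* = beta * x + e, e ~ N(0, sigma^2),
    and y = max(y*, c) is observed (left-censoring at c)."""
    ll = 0.0
    for xi, yi in zip(x, y):
        mu = beta * xi
        if yi > c:   # uncensored: density contribution
            ll += norm_logpdf((yi - mu) / sigma) - math.log(sigma)
        else:        # censored at c: probability contribution
            ll += math.log(norm_cdf((c - mu) / sigma))
    return ll

# Hypothetical data: the first observation sits at the censoring point.
ll = tobit1_loglik(beta=1.0, sigma=1.0, x=[0.5, 1.0, 2.0], y=[0.0, 1.2, 2.5])
```

    The article's construction extends this density-plus-probability decomposition to general censored random vectors, where the censoring region can be multivariate.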