264 research outputs found

    Bayesian semiparametric inference for multivariate doubly-interval-censored data

    Based on a data set obtained in a dental longitudinal study conducted in Flanders (Belgium), the joint time-to-caries distribution of permanent first molars was modeled as a function of covariates. This involves an analysis of multivariate continuous doubly-interval-censored data since: (i) the emergence time of a tooth and the time it experiences caries were recorded yearly, and (ii) events on teeth of the same child are dependent. To model the joint distribution of the emergence times and the times to caries, we propose a dependent Bayesian semiparametric model. A major feature of the proposed approach is that survival curves can be estimated without imposing assumptions such as proportional hazards, additive hazards, proportional odds, or accelerated failure time. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/10-AOAS368.
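    The "doubly-interval-censored" structure above can be made concrete: with yearly exams, both the emergence time and the caries time are known only up to the year-long interval that brackets them. A minimal Python sketch (the helper and the ages are illustrative, not from the paper):

```python
import math

def yearly_intervals(emergence, caries):
    """Record two event times only up to the yearly exam that brackets
    each of them, yielding a doubly-interval-censored observation.
    (Illustrative helper, not from the paper.)"""
    e_l = math.floor(emergence)
    c_l = math.floor(caries)
    return (e_l, e_l + 1), (c_l, c_l + 1)

# A tooth emerging at age 6.3 that develops caries at age 9.7 is
# recorded as emergence in (6, 7] and caries in (9, 10].
e_int, c_int = yearly_intervals(6.3, 9.7)
```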

    Recent progresses in outcome-dependent sampling with failure time data

    An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is a failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of a study and improve efficiency. We review recent progress and advances in research on ODS designs with failure time data, including work on related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design, and interval sampling design.
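    The sampling scheme itself can be sketched in a few lines of Python: a supplemental sample is drawn only from subjects whose event was observed, on top of a simple random sample. The helper and toy cohort are hypothetical, and a real ODS analysis must also correct the estimator for the biased sampling:

```python
import random

def ods_sample(cohort, n_srs, n_supp, seed=0):
    """Outcome-dependent sampling sketch (hypothetical helper, not from
    any package): draw a simple random subsample, then a supplemental
    sample restricted to subjects whose event was observed (delta == 1),
    mirroring the idea of oversampling the most informative subjects."""
    rng = random.Random(seed)
    srs = rng.sample(cohort, n_srs)                 # simple random sample
    cases = [s for s in cohort if s["delta"] == 1]  # observed failures
    supplement = rng.sample(cases, n_supp)          # oversample cases
    return srs, supplement

# Toy cohort: failure indicator delta (1 = event observed, 0 = censored).
cohort = [{"id": i, "delta": 1 if i % 4 == 0 else 0} for i in range(100)]
srs, supp = ods_sample(cohort, n_srs=20, n_supp=10)
```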

    Conditional transformation models


    Geoadditive hazard regression for interval censored survival times

    The Cox proportional hazards model is the most commonly used method for analyzing the impact of covariates on continuous survival times. In its classical form, the Cox model was introduced in the setting of right-censored observations. In practice, however, other sampling schemes are frequently encountered, so extensions allowing for interval and left censoring or left truncation are clearly desirable. Furthermore, many applications require more flexible modeling of covariate information than the usual linear predictor. For example, effects of continuous covariates are likely to be of nonlinear form, or spatial information may need to be included appropriately. Further extensions should allow for time-varying effects of covariates or covariates that are themselves time-varying. Such models relax the assumption of proportional hazards. We propose a regression model for the hazard rate that combines and extends the above-mentioned features on the basis of a unifying Bayesian model formulation. Nonlinear and time-varying effects as well as the baseline hazard rate are modeled by penalized splines. Spatial effects can be included based on either Markov random fields or stationary Gaussian random fields. The model allows for arbitrary combinations of left, right and interval censoring as well as left truncation. Estimation is based on a reparameterisation of the model as a variance components mixed model. The variance parameters, corresponding to inverse smoothing parameters, can then be estimated via an approximate marginal likelihood approach. As an application we present an analysis of childhood mortality in Nigeria, where the interval-censoring framework also allows us to deal with the problem of heaped survival times caused by memory effects. In a simulation study we investigate the effect of ignoring interval censoring.
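    The likelihood behind any such interval-censoring framework rests on one identity: a failure known only to lie in (L, R] contributes S(L) - S(R), with R = infinity covering right censoring. A minimal sketch with a constant hazard (an exponential model of my own choosing, far simpler than the paper's penalized-spline formulation):

```python
import math

def surv(t, lam):
    """Exponential survival function S(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

def log_lik(intervals, lam):
    """Log-likelihood for interval-censored data: a subject known only
    to fail in (left, right] contributes S(left) - S(right); an infinite
    right endpoint encodes right censoring (contribution S(left))."""
    ll = 0.0
    for left, right in intervals:
        p = surv(left, lam) - (0.0 if right == math.inf else surv(right, lam))
        ll += math.log(p)
    return ll

# Three subjects: failed in (1, 2], failed in (0, 1], right-censored at 3.
data = [(1.0, 2.0), (0.0, 1.0), (3.0, math.inf)]
# Crude grid search for the maximum-likelihood rate.
lam_hat = max((l / 100 for l in range(1, 300)), key=lambda l: log_lik(data, l))
```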

    General Semiparametric Shared Frailty Model Estimation and Simulation with frailtySurv

    The R package frailtySurv for simulating and fitting semiparametric shared frailty models is introduced. Package frailtySurv implements semiparametric consistent estimators for a variety of frailty distributions, including gamma, log-normal, inverse Gaussian and power variance function, and provides consistent estimators of the standard errors of the parameter estimators. The parameter estimators are asymptotically normally distributed, so statistical inference based on the results of this package, such as hypothesis testing and confidence intervals, can be performed using the normal distribution. Extensive simulations demonstrate the flexibility and correct implementation of the estimators. Two case studies performed with publicly available datasets demonstrate the applicability of the package. In the Diabetic Retinopathy Study, the onset of blindness is clustered by patient, and in a large hard drive failure dataset, failure times are thought to be clustered by hard drive manufacturer and model.
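    Since this listing carries no code, here is a Python (rather than R) sketch of the data-generating mechanism such shared frailty models describe: a gamma frailty with mean 1 and variance theta multiplies each cluster's baseline hazard, inducing within-cluster dependence (Kendall's tau equals theta / (theta + 2) in the gamma case). The helper is illustrative and is not part of frailtySurv:

```python
import random

def simulate_cluster(n, theta, lam, rng):
    """One cluster under a shared gamma frailty model: the frailty
    w ~ Gamma(shape=1/theta, scale=theta) has mean 1 and variance theta
    and multiplies a constant baseline hazard lam, so each member's
    failure time is Exponential(rate = w * lam)."""
    w = rng.gammavariate(1.0 / theta, theta)  # shared within the cluster
    return [rng.expovariate(w * lam) for _ in range(n)]

rng = random.Random(42)
clusters = [simulate_cluster(2, theta=1.0, lam=0.5, rng=rng) for _ in range(500)]

# Estimate Kendall's tau between the two cluster members; for a gamma
# frailty with theta = 1 it should be near theta / (theta + 2) = 1/3.
conc = sum((a[0] - b[0]) * (a[1] - b[1]) > 0
           for i, a in enumerate(clusters) for b in clusters[:i])
pairs = len(clusters) * (len(clusters) - 1) // 2
tau_hat = 2 * conc / pairs - 1
```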

    Methods for Clustered Competing Risks Data and Causal Inference using Instrumental Variables for Censored Time-to-event Data

    In this dissertation, we propose new methods for analysis of clustered competing risks data (Chapters 1 and 2) and for instrumental variable (IV) analysis of univariate censored time-to-event data and competing risks data (Chapters 3 and 4). In Chapter 1, we propose estimating center effects through cause-specific proportional hazards frailty models that allow correlation among a center’s cause-specific effects. To evaluate center performance, we propose a directly standardized excess cumulative incidence (ECI) measure. We apply our methods to evaluate Organ Procurement Organizations with respect to (i) receipt of a kidney transplant and (ii) death on the wait-list. In Chapter 2, we propose to model the effects of cluster and individual-level covariates directly on the cumulative incidence functions of each risk through a semiparametric mixture component model with cluster-specific random effects. Our model permits joint inference on all competing events and provides estimates of the effects of clustering. We apply our method to multicenter competing risks data. In Chapter 3, we turn our focus to causal inference in the censored time-to-event setting in the presence of unmeasured confounders. We develop weighted IV estimators of the complier average causal effect on the restricted mean survival time. Our method accommodates instrument-outcome confounding and covariate dependent censoring. We establish the asymptotic properties, derive easily implementable variance estimators, and apply our method to compare modalities for end stage renal disease (ESRD) patients using national registry data. In Chapter 4, we develop IV analysis methods for competing risks data. Our method permits simultaneous inference of exposure effects on the absolute risk of all competing events and accommodates exposure dependent censoring. 
    We apply the methods to compare dialytic modalities for ESRD patients with respect to risk of death from (i) cardiovascular diseases and (ii) other causes. PhD dissertation in Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144110/1/shdharma_1.pd
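    The estimand in Chapter 3, the restricted mean survival time, is simply the area under the survival curve up to a horizon tau, and is easy to compute numerically. A sketch of the estimand itself (my illustration, not the dissertation's IV estimator):

```python
import math

def rmst(surv, tau, n=10000):
    """Restricted mean survival time: the integral of S(t) from 0 to
    tau, approximated here by a left Riemann sum over n steps."""
    h = tau / n
    return sum(surv(i * h) * h for i in range(n))

# Exponential survival with rate 0.2 and horizon tau = 5 has the
# closed form (1 - exp(-0.2 * 5)) / 0.2.
value = rmst(lambda t: math.exp(-0.2 * t), tau=5.0)
```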

    Nagging: A scalable, fault-tolerant, paradigm for distributed search

    This paper describes Nagging, a technique for parallelizing search in a heterogeneous distributed computing environment. Nagging exploits the speedup anomaly often observed when parallelizing problems by playing multiple reformulations of the problem, or portions of the problem, against each other. Nagging is both fault tolerant and robust to long message latencies. In this paper, we show how nagging can be used to parallelize several different algorithms drawn from the artificial intelligence literature, and describe how nagging can be combined with partitioning, the more traditional search parallelization strategy. We present a theoretical analysis of the advantage of nagging with respect to partitioning, and give empirical results obtained on a cluster of 64 processors that demonstrate nagging's effectiveness and scalability as applied to A* search, alpha-beta minimax game tree search, and the Davis-Putnam algorithm.
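    The essence of nagging, several reformulations racing on the same problem with the first answer winning, can be sketched with Python threads (a shared-memory toy, not the paper's fault-tolerant distributed protocol; all names are illustrative):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def solve(problem, order):
    """Hypothetical solver: scans candidates in the given order.
    Reordering is a cheap 'reformulation' that can change search time."""
    for x in order:
        if problem(x):
            return x
    return None

def nag(problem, reformulations):
    """Minimal sketch of the nagging idea: launch the same problem under
    several reformulations and return whichever answer finishes first,
    cancelling any naggers that have not started."""
    with ThreadPoolExecutor(max_workers=len(reformulations)) as pool:
        futures = [pool.submit(solve, problem, order) for order in reformulations]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        return next(iter(done)).result()

# Find the positive root of x^2 == 169; two scan orders race each other.
answer = nag(lambda x: x * x == 169, [range(1000), range(999, -1, -1)])
```

In the real system the reformulations run on separate machines, and a finished nagger's answer lets the others be aborted, which is where the fault tolerance comes from.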

    Multivariate Probit Models for Interval-Censored Failure Time Data

    Survival analysis is an important branch of statistics that analyzes time-to-event data. The events of interest can be death, disease occurrence, the failure of a machine part, and so on. One important feature of this type of data is censoring: the time to event is not observed exactly, due to loss to follow-up or to non-occurrence of the event of interest before the trial ends. Censored data are commonly observed in clinical trials and epidemiological studies, since monitoring a person's health over time after treatment is often required in medical or health studies. In this dissertation we focus on multivariate interval-censored data, a special type of survival data. By multivariate interval-censored data we mean that there are multiple failure time events of interest, and that these failure times are known only to lie within certain intervals instead of being observed exactly. These events can be associated because they share common characteristics. Multivariate interval-censored data have drawn increasing attention in epidemiological, social-behavioral and medical studies, in which subjects are examined multiple times and several events of interest are assessed at the observation times. Some methods are available in the literature for analyzing multivariate interval-censored failure time data, and various models have been developed for regression analysis. However, due to the complicated correlation structure between events, analyzing this type of survival data is much more difficult, and new efficient methodologies are needed. Chapter 1 of this dissertation illustrates the important concepts of interval-censored data with several real data examples. A literature review of existing regression models and approaches is included as well.
    Chapter 2 introduces a new normal-frailty multivariate probit model for regression analysis of interval-censored failure time data and proposes an efficient Bayesian approach to obtain parameter estimates. Simulations and an analysis of a real data set are conducted to evaluate and illustrate the performance of this new method. The approach proves efficient and yields accurate estimates of both the regression parameters and the baseline survival function. Several appealing properties of the model are discussed. Chapter 3 proposes a more general multivariate probit model for multivariate interval-censored data. This model allows arbitrary correlation among the correlated survival times. A new Gibbs sampler is proposed for the joint estimation of the regression parameters, the baseline CDF, and the correlation parameters. Chapter 4 extends the normal-frailty multivariate probit model to allow arbitrary pairwise correlations. Simulation studies are conducted to explore the underlying relationship between the normal-frailty multivariate probit model and the general multivariate probit model.
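    The latent-variable idea behind a normal-frailty probit model can be sketched directly: a shared normal frailty enters both latent traits, so the two binary failure indicators it generates are positively associated. A toy Monte Carlo check (my illustration of the mechanism, not the dissertation's Gibbs sampler; all names are made up):

```python
import random

def simulate_subject(beta_x, sigma_b, rng):
    """Normal-frailty probit sketch: a subject-level normal frailty b is
    shared by both latent traits, so the two binary indicators derived
    from them are positively associated."""
    b = rng.gauss(0.0, sigma_b)            # shared frailty
    z1 = beta_x + b + rng.gauss(0.0, 1.0)  # latent variable, event 1
    z2 = beta_x + b + rng.gauss(0.0, 1.0)  # latent variable, event 2
    return (z1 > 0), (z2 > 0)              # observed binary indicators

rng = random.Random(7)
draws = [simulate_subject(0.0, 1.0, rng) for _ in range(20000)]
both = sum(a and b for a, b in draws) / len(draws)
indep = (sum(a for a, _ in draws) / len(draws)) ** 2
# The shared frailty makes joint occurrence more likely than under
# independence: here the latent correlation is sigma_b^2 / (1 + sigma_b^2).
```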