
    Geoadditive hazard regression for interval censored survival times

    The Cox proportional hazards model is the most commonly used method for analyzing the impact of covariates on continuous survival times. In its classical form, the Cox model was introduced in the setting of right-censored observations. In practice, however, other sampling schemes are frequently encountered, so extensions allowing for interval and left censoring or left truncation are clearly desirable. Furthermore, many applications require more flexible modeling of covariate information than the usual linear predictor. For example, effects of continuous covariates are likely to be of nonlinear form, or spatial information may need to be included appropriately. Further extensions should allow for time-varying effects of covariates or covariates that are themselves time-varying. Such models relax the assumption of proportional hazards. We propose a regression model for the hazard rate that combines and extends the above-mentioned features on the basis of a unifying Bayesian model formulation. Nonlinear and time-varying effects as well as the baseline hazard rate are modeled by penalized splines. Spatial effects can be included based on either Markov random fields or stationary Gaussian random fields. The model allows for arbitrary combinations of left, right and interval censoring as well as left truncation. Estimation is based on a reparameterisation of the model as a variance components mixed model. The variance parameters, corresponding to inverse smoothing parameters, can then be estimated using an approximate marginal likelihood approach. As an application we present an analysis of childhood mortality in Nigeria, where the interval censoring framework also allows us to deal with the problem of heaped survival times caused by memory effects. In a simulation study we investigate the effect of ignoring interval censoring.
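    The censoring scheme determines how each observation enters the likelihood. As a rough illustration only (the paper uses a penalized-spline baseline hazard within a Bayesian mixed-model framework, not the parametric form below), here is a Python sketch of the four likelihood contributions, with an arbitrary Weibull baseline standing in for the flexible hazard:

```python
import numpy as np
from scipy.stats import weibull_min

def loglik_contrib(obs, shape, scale):
    """Log-likelihood contribution of one observation under a Weibull
    baseline (a stand-in for the paper's penalized-spline hazard).
    obs = (kind, a, b) with kind in {"exact", "right", "left", "interval"}."""
    kind, a, b = obs
    S = lambda t: weibull_min.sf(t, shape, scale=scale)   # survival function
    f = lambda t: weibull_min.pdf(t, shape, scale=scale)  # density
    if kind == "exact":      # event observed at a: density
        return np.log(f(a))
    if kind == "right":      # event known to occur after a
        return np.log(S(a))
    if kind == "left":       # event known to occur before b
        return np.log(1.0 - S(b))
    # interval censoring: event in (a, b]
    return np.log(S(a) - S(b))

# Hypothetical mixed sample combining all four censoring types.
data = [("exact", 2.0, None), ("right", 3.0, None),
        ("left", None, 1.5), ("interval", 1.0, 2.5)]
ll = sum(loglik_contrib(o, 1.5, 2.0) for o in data)
```

    The interval, left and right contributions for a common partition of the time axis sum to probability one, which is a quick sanity check on any implementation.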

    Conditional transformation models


    General Semiparametric Shared Frailty Model Estimation and Simulation with frailtySurv

    The R package frailtySurv for simulating and fitting semi-parametric shared frailty models is introduced. Package frailtySurv implements semi-parametric consistent estimators for a variety of frailty distributions, including gamma, log-normal, inverse Gaussian and power variance function, and provides consistent estimators of the standard errors of the parameter estimators. The parameter estimators are asymptotically normally distributed, and therefore statistical inference based on the results of this package, such as hypothesis testing and confidence intervals, can be performed using the normal distribution. Extensive simulations demonstrate the flexibility and correct implementation of the estimators. Two case studies performed with publicly available datasets demonstrate the applicability of the package. In the Diabetic Retinopathy Study, the onset of blindness is clustered by patient, and in a large hard drive failure dataset, failure times are thought to be clustered by the hard drive manufacturer and model.
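    To make the shared frailty idea concrete: each cluster draws one multiplicative frailty, which scales every member's hazard and so induces within-cluster dependence. The sketch below simulates this data-generating model in Python with a mean-one gamma frailty and an exponential baseline; it is not frailtySurv's API (which is R) and the parameter values are illustrative:

```python
import numpy as np

def simulate_shared_frailty(n_clusters, cluster_size, theta, base_rate, rng):
    """Simulate clustered survival times with a shared gamma frailty:
    Z_i ~ Gamma(1/theta, scale=theta) has mean 1 and variance theta;
    within cluster i, T_ij ~ Exponential(rate = Z_i * base_rate)."""
    Z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
    rates = np.repeat(Z * base_rate, cluster_size)
    T = rng.exponential(1.0 / rates)
    cluster = np.repeat(np.arange(n_clusters), cluster_size)
    return cluster, T

rng = np.random.default_rng(0)
cluster, T = simulate_shared_frailty(2000, 2, theta=0.5, base_rate=1.0, rng=rng)
```

    A larger theta means a more variable frailty and therefore stronger positive association between failure times in the same cluster.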

    A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects

    Background: Multivariate analysis of interval censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models which additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and no assessment of statistical uncertainty. Methods: MCMC procedures combined with imputation are used to implement hierarchical models for interval censored data within a Bayesian framework. Results: Two examples from clinical practice demonstrate the handling of clustered interval censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available from CRAN. Conclusion: The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
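    The "MCMC combined with imputation" strategy is a data-augmentation scheme: alternate between imputing an exact event time inside each censoring interval and updating the model parameters given the completed data. The Python sketch below shows the idea for a deliberately simple exponential model with a conjugate gamma prior; survBayes's actual model (proportional hazards with frailties) is far richer, and the data here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interval-censored data: each event lies in (L_i, R_i].
L = np.array([0.5, 1.0, 0.2, 2.0])
R = np.array([1.5, 3.0, 1.0, 4.0])

def sample_trunc_exp(rate, lo, hi, rng):
    """Inverse-CDF draw from Exponential(rate) truncated to (lo, hi]."""
    u = rng.uniform(size=lo.shape)
    Flo, Fhi = 1 - np.exp(-rate * lo), 1 - np.exp(-rate * hi)
    return -np.log(1 - (Flo + u * (Fhi - Flo))) / rate

# Data augmentation: impute exact times, then update the rate under a
# conjugate Gamma(a0, b0) prior on the exponential rate.
a0, b0 = 1.0, 1.0
rate, draws = 1.0, []
for it in range(2000):
    T = sample_trunc_exp(rate, L, R, rng)                    # imputation step
    rate = rng.gamma(a0 + len(T), 1.0 / (b0 + T.sum()))      # parameter step
    draws.append(rate)
post_mean = np.mean(draws[500:])  # posterior mean after burn-in
```

    Because the imputed times are always consistent with the observed intervals, the sampler targets the correct posterior without ever needing the interval-censored likelihood in closed form.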

    Methods for Clustered Competing Risks Data and Causal Inference using Instrumental Variables for Censored Time-to-event Data

    In this dissertation, we propose new methods for analysis of clustered competing risks data (Chapters 1 and 2) and for instrumental variable (IV) analysis of univariate censored time-to-event data and competing risks data (Chapters 3 and 4). In Chapter 1, we propose estimating center effects through cause-specific proportional hazards frailty models that allow correlation among a center’s cause-specific effects. To evaluate center performance, we propose a directly standardized excess cumulative incidence (ECI) measure. We apply our methods to evaluate Organ Procurement Organizations with respect to (i) receipt of a kidney transplant and (ii) death on the wait-list. In Chapter 2, we propose to model the effects of cluster and individual-level covariates directly on the cumulative incidence functions of each risk through a semiparametric mixture component model with cluster-specific random effects. Our model permits joint inference on all competing events and provides estimates of the effects of clustering. We apply our method to multicenter competing risks data. In Chapter 3, we turn our focus to causal inference in the censored time-to-event setting in the presence of unmeasured confounders. We develop weighted IV estimators of the complier average causal effect on the restricted mean survival time. Our method accommodates instrument-outcome confounding and covariate dependent censoring. We establish the asymptotic properties, derive easily implementable variance estimators, and apply our method to compare modalities for end stage renal disease (ESRD) patients using national registry data. In Chapter 4, we develop IV analysis methods for competing risks data. Our method permits simultaneous inference of exposure effects on the absolute risk of all competing events and accommodates exposure dependent censoring. 
    We apply the methods to compare dialytic modalities for ESRD patients with respect to risk of death from (i) cardiovascular diseases and (ii) other causes.
    PhD, Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/144110/1/shdharma_1.pd
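    The core IV idea behind Chapter 3 can be seen in a stripped-down setting. With a randomized instrument, the complier average causal effect on the restricted mean survival time is the intention-to-treat effect divided by the compliance difference (a Wald-type ratio). The Python sketch below assumes no censoring and simulated data; the dissertation's weighted estimators additionally handle covariate-dependent censoring and instrument-outcome confounding:

```python
import numpy as np

def iv_rmst(Z, D, T, tau):
    """Wald-type IV estimate of the complier average causal effect on the
    restricted mean survival time E[min(T, tau)]. Assumes no censoring."""
    Y = np.minimum(T, tau)
    itt = Y[Z == 1].mean() - Y[Z == 0].mean()          # ITT effect on RMST
    compliance = D[Z == 1].mean() - D[Z == 0].mean()   # first-stage effect
    return itt / compliance

# Hypothetical trial: randomized instrument Z, imperfect compliance driven
# by an unobserved type U, treatment D doubling mean survival.
rng = np.random.default_rng(2)
n = 20000
Z = rng.integers(0, 2, n)
U = rng.uniform(size=n)
D = ((Z == 1) & (U < 0.8)) | ((Z == 0) & (U < 0.2))
T = rng.exponential(np.where(D, 2.0, 1.0))
est = iv_rmst(Z, D.astype(float), T, tau=3.0)
```

    Restricting to min(T, tau) keeps the estimand finite and interpretable as extra survival time gained by tau among compliers.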

    Non-compliance and missing data in health economic evaluation

    Health economic evaluations face the issues of non-compliance and missing data. Here, non-compliance is defined as non-adherence to a specific treatment, and occurs within randomised controlled trials (RCTs) when participants depart from their random assignment. Missing data arise if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling non-compliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some of the appropriate methods to handle them, with an application to a health economic evaluation that uses data from an RCT. In an RCT the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context, such as the correlation between individuals' costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, which assume the data are Missing At Random, but also sensitivity analyses that recognise the data may be missing according to the true, unobserved values, that is, Missing Not At Random. Future studies should subject the assumptions behind methods for handling non-compliance and missing data to thorough sensitivity analyses. Modern machine learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of non-compliance and missing data.
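    The inverse probability weighting approach mentioned above can be sketched in a few lines: under Missing At Random, weighting each respondent by the inverse of their response probability removes the selection bias of the complete-case mean. For clarity the Python example below uses the true response probabilities; in practice they would be estimated, e.g. by logistic regression, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000
x = rng.normal(size=n)                       # fully observed covariate
y = 2.0 + 1.0 * x + rng.normal(size=n)       # outcome, true mean 2.0

# Missing At Random: response probability depends only on observed x.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))
obs = rng.uniform(size=n) < p_obs

naive = y[obs].mean()                        # biased: respondents have larger x
w = 1.0 / p_obs[obs]                         # inverse probability weights
ipw = np.sum(w * y[obs]) / np.sum(w)         # weighted (Hajek) estimator
```

    The complete-case mean overstates the true mean here because respondents tend to have larger x and hence larger y; the weighted estimator corrects this, but only under the MAR assumption, which is exactly why the text recommends sensitivity analyses for Missing Not At Random mechanisms.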

    Posterior Inference in Bayesian Quantile Regression with Asymmetric Laplace Likelihood

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/135059/1/insr12114.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/135059/2/insr12114_am.pd
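    The asymmetric Laplace working likelihood in the title connects to quantile regression through a simple identity: maximizing the asymmetric Laplace likelihood in the location parameter is equivalent to minimizing the quantile check loss, so the posterior mode recovers the classical quantile estimate. A minimal Python sketch of that equivalence, with simulated data:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0}).
    The asymmetric Laplace log-density in its location parameter mu is
    -rho_tau(y - mu)/sigma up to terms free of mu, so ALD-likelihood
    maximization over mu is check-loss minimization."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(4)
y = rng.normal(size=5001)
tau = 0.25

# Minimize the summed check loss over a grid of candidate locations;
# the minimizer should match the sample tau-quantile.
grid = np.linspace(-3, 3, 2001)
losses = np.array([check_loss(y - m, tau).sum() for m in grid])
m_hat = grid[np.argmin(losses)]
```

    This equivalence is what makes the asymmetric Laplace a convenient working likelihood; the paper's concern is that the resulting posterior spread is not automatically calibrated for inference, which is why adjusted posterior intervals are studied.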