
    Regression Modeling for Recurrent Events Possibly with an Informative Terminal Event Using R Package reReg

    Recurrent event analyses have found a wide range of applications in biomedicine, public health, and engineering, among others, where study subjects may experience a sequence of events of interest during follow-up. The R package reReg offers a comprehensive collection of practical and easy-to-use tools for regression analysis of recurrent events, possibly in the presence of an informative terminal event. The regression framework is a general scale-change model which encompasses the popular Cox-type model, the accelerated rate model, and the accelerated mean model as special cases. Informative censoring is accommodated through a subject-specific frailty without any need for parametric specification. Different regression models are allowed for the recurrent event process and the terminal event. Visualization and simulation tools are also included.
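The general scale-change rate model that unifies these special cases can be written as λ(t | X) = Z λ0(t e^{Xα}) e^{Xβ}, where Z is the subject-specific frailty; fixing (α, β) recovers each named submodel. A minimal Python sketch of the rate function (this is not reReg itself; the Weibull-type baseline, covariate value, and coefficients are illustrative assumptions, and the frailty is set to 1):

```python
import math

def scale_change_rate(t, x, alpha, beta, k=1.5):
    """Rate of the general scale-change model,
    lambda(t | x) = lambda0(t * exp(x*alpha)) * exp(x*beta),
    with an illustrative Weibull-type baseline lambda0(s) = k * s**(k-1)."""
    baseline = lambda s: k * s ** (k - 1)
    return baseline(t * math.exp(x * alpha)) * math.exp(x * beta)

# Special cases of (alpha, beta):
#   alpha = 0     -> Cox-type proportional rates model
#   beta  = 0     -> accelerated rate model
#   alpha = beta  -> accelerated mean model
```

Setting α = 0 leaves only the multiplicative factor e^{Xβ} acting on the baseline, which is the Cox-type form; the other two cases instead (or additionally) rescale the time axis.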

    Improved Dynamic Predictions from Joint Models of Longitudinal and Survival Data with Time-Varying Effects using P-splines

    In the field of cardio-thoracic surgery, valve function is monitored over time after surgery. The motivation for our research comes from a study which includes patients who received a human tissue valve in the aortic position. These patients are followed prospectively over time by standardized echocardiographic assessment of valve function. Loss of follow-up could be caused by valve intervention or the death of the patient. One of the main characteristics of the human valve is that its durability is limited. Therefore, it is of interest to obtain a prognostic model in order for the physicians to monitor trends in valve function over time and plan their next intervention, accounting for the characteristics of the data. Several authors have focused on deriving predictions under the standard joint modeling of longitudinal and survival data framework that assumes a constant effect for the coefficient that links the longitudinal and survival outcomes. However, in our case this may be a restrictive assumption. Since the valve degenerates, the association between the biomarker and survival may change over time. To improve dynamic predictions we propose a Bayesian joint model that allows a time-varying coefficient to link the longitudinal and the survival processes, using P-splines. We evaluate the performance of the model in terms of discrimination and calibration, while accounting for censoring.
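A time-varying association of this kind is typically expanded as α(t) = Σ_k c_k B_k(t), a linear combination of B-spline basis functions whose coefficients carry a difference penalty (the P-spline idea). A small sketch of evaluating such a basis via the Cox-de Boor recursion, with made-up knots and coefficients (illustrative only, not the authors' implementation):

```python
def bspline_basis(t, knots, degree):
    """Evaluate every B-spline basis function of the given degree at t,
    using the Cox-de Boor recursion. knots must be non-decreasing."""
    n_basis = len(knots) - degree - 1
    # Degree-0 basis: indicator of the knot span containing t.
    B = [1.0 if knots[i] <= t < knots[i + 1] else 0.0
         for i in range(len(knots) - 1)]
    for d in range(1, degree + 1):
        for i in range(len(knots) - d - 1):
            left = right = 0.0
            if knots[i + d] != knots[i]:
                left = (t - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            if knots[i + d + 1] != knots[i + 1]:
                right = ((knots[i + d + 1] - t)
                         / (knots[i + d + 1] - knots[i + 1]) * B[i + 1])
            B[i] = left + right
    return B[:n_basis]

def time_varying_alpha(t, coefs, knots, degree=3):
    """alpha(t) = sum_k c_k * B_k(t): the spline-expanded association."""
    return sum(c * b for c, b in zip(coefs, bspline_basis(t, knots, degree)))
```

In a fitted P-spline model the coefficients `coefs` are estimated (here they are placeholders); the penalty on their differences is what keeps α(t) smooth.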

    survHE: Survival Analysis for Health Economic Evaluation and Cost-Effectiveness Modeling

    Survival analysis features heavily in health economic evaluation, an increasingly important component of medical research. In this setting, it is important to estimate the mean time to the survival endpoint using limited information (typically from randomized trials) and thus it is useful to consider parametric survival models. In this paper, we review the features of the R package survHE, specifically designed to wrap several tools to perform survival analysis for economic evaluation. In particular, survHE embeds both a standard, frequentist analysis (through the R package flexsurv) and a Bayesian approach, based on Hamiltonian Monte Carlo (via the R package rstan) or integrated nested Laplace approximation (with the R package INLA). Using this composite approach, we obtain maximum flexibility and are able to pre-compile a wide range of parametric models, with a view to simplifying the modelers' work and allowing them to move away from non-optimal workflows, including spreadsheets (e.g., Microsoft Excel).
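The mean time to the survival endpoint under a parametric model is the integral of the survival function, ∫₀^∞ S(t) dt, which is why the parametric form matters: it allows extrapolation beyond the observed trial follow-up. A small sketch for a Weibull model, comparing the closed-form mean against a numeric restricted-mean integral (the parameter values are illustrative; this is not survHE code):

```python
import math

def weibull_mean_survival(scale, shape):
    """Closed-form mean survival time for S(t) = exp(-(t/scale)**shape):
    scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

def restricted_mean_survival(scale, shape, horizon, n=200_000):
    """Restricted mean survival time: trapezoidal-rule integral of the
    Weibull survival function S(t) on [0, horizon]."""
    survival = lambda t: math.exp(-(t / scale) ** shape)
    h = horizon / n
    total = 0.5 * (survival(0.0) + survival(horizon))
    total += sum(survival(i * h) for i in range(1, n))
    return total * h
```

With a long enough horizon the restricted mean approaches the full mean; in economic evaluation the restricted version is often reported directly over the decision time horizon.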

    SmoothHazard: An R package for fitting regression models to interval-censored observations of illness-death models

    The irreversible illness-death model describes the pathway from an initial state to an absorbing state either directly or through an intermediate state. This model is frequently used in medical applications where the intermediate state represents illness and the absorbing state represents death. In many studies, disease onset times are not known exactly. This happens, for example, if the disease status of a patient can only be assessed at follow-up visits. In this situation the disease onset times are interval-censored. This article presents the SmoothHazard package for R. It implements algorithms for simultaneously fitting regression models to the three transition intensities of an illness-death model where the transition times to the intermediate state may be interval-censored and all the event times can be right-censored. The package parses the individual data structure of the subjects in a data set to find the individual contributions to the likelihood. The three baseline transition intensity functions are modelled by Weibull distributions or alternatively by M-splines in a semi-parametric approach. For a given set of covariates, the estimated transition intensities can be combined into predictions of cumulative event probabilities and life expectancies.
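For intuition on how the three transition intensities combine into cumulative event probabilities, consider the simplest case of constant intensities a01 (healthy to ill), a02 (healthy to dead), and a12 (ill to dead), where the Markov illness-death model has closed-form state-occupation probabilities (SmoothHazard itself uses Weibull or M-spline intensities; constant intensities are an illustrative simplification):

```python
import math

def illness_death_probs(t, a01, a02, a12):
    """State-occupation probabilities at time t for a Markov illness-death
    model with constant transition intensities, starting healthy at time 0.
    Returns (P(healthy), P(ill), P(dead))."""
    lam = a01 + a02                       # total intensity out of "healthy"
    p00 = math.exp(-lam * t)
    if abs(lam - a12) > 1e-12:
        p01 = a01 / (lam - a12) * (math.exp(-a12 * t) - math.exp(-lam * t))
    else:                                 # limiting case lam == a12
        p01 = a01 * t * math.exp(-lam * t)
    return p00, p01, 1.0 - p00 - p01
```

These are the solutions of the Kolmogorov forward equations for this three-state chain; with time-varying intensities the same probabilities are obtained by numerical integration, which is what packages like SmoothHazard do internally in spirit.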

    Methods for Population Adjustment with Limited Access to Individual Patient Data: A Review and Simulation Study

    Population-adjusted indirect comparisons estimate treatment effects when access to individual patient data is limited and there are cross-trial differences in effect modifiers. Popular methods include matching-adjusted indirect comparison (MAIC) and simulated treatment comparison (STC). There is limited formal evaluation of these methods and whether they can be used to accurately compare treatments. Thus, we undertake a comprehensive simulation study to compare standard unadjusted indirect comparisons, MAIC and STC across 162 scenarios. This simulation study assumes that the trials are investigating survival outcomes and measure continuous covariates, with the log hazard ratio as the measure of effect. MAIC yields unbiased treatment effect estimates under no failures of assumptions. The typical usage of STC produces bias because it targets a conditional treatment effect where the target estimand should be a marginal treatment effect. The incompatibility of estimates in the indirect comparison leads to bias as the measure of effect is non-collapsible. Standard indirect comparisons are systematically biased, particularly under stronger covariate imbalance and interaction effects. Standard errors and coverage rates are often valid in MAIC but the robust sandwich variance estimator underestimates variability where effective sample sizes are small. Interval estimates for the standard indirect comparison are too narrow and STC suffers from bias-induced undercoverage. MAIC provides the most accurate estimates and, with lower degrees of covariate overlap, its bias reduction outweighs the loss in effective sample size and precision under no failures of assumptions. An important future objective is the development of an alternative formulation to STC that targets a marginal treatment effect.
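The core of MAIC is method-of-moments weighting: individual patient data from one trial receive weights w_i = exp(a·x_i), with a chosen so that the weighted covariate means match the aggregate means reported for the comparator trial. A one-covariate sketch with a Newton solve (illustrative only; real MAIC balances several covariates at once and typically pairs the weights with a robust sandwich variance):

```python
import math

def maic_weights(x, target_mean, iters=50):
    """MAIC-style weights for one continuous covariate: w_i = exp(a * x_i),
    with a chosen by Newton's method so the weighted mean of x matches
    target_mean (method-of-moments matching). Weights are rescaled to sum
    to the sample size."""
    a = 0.0
    for _ in range(iters):
        w = [math.exp(a * xi) for xi in x]
        sw = sum(w)
        m = sum(wi * xi for wi, xi in zip(w, x)) / sw             # weighted mean
        v = sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / sw  # weighted variance
        a -= (m - target_mean) / v                                # Newton step
    w = [math.exp(a * xi) for xi in x]
    sw = sum(w)
    return [wi * len(x) / sw for wi in w]

def effective_sample_size(w):
    """ESS = (sum w)^2 / sum w^2; a small ESS signals poor covariate overlap."""
    return sum(w) ** 2 / sum(wi ** 2 for wi in w)
```

The ESS makes the trade-off described above concrete: the further the target mean sits from the observed covariate distribution, the more extreme the weights and the smaller the effective sample size.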

    A theoretical and methodological framework for machine learning in survival analysis: Enabling transparent and accessible predictive modelling on right-censored time-to-event data

    Survival analysis is an important field of Statistics concerned with making time-to-event predictions with 'censored' data. Machine learning, specifically supervised learning, is the field of Statistics concerned with using state-of-the-art algorithms in order to make predictions on unseen data. This thesis looks at unifying these two fields as current research into the two is still disjoint, with 'classical survival' on one side and supervised learning (primarily classification and regression) on the other. This PhD aims to improve the quality of machine learning research in survival analysis by focusing on transparency, accessibility, and predictive performance in model building and evaluation. This is achieved by examining historic and current proposals and implementations for models and measures (both classical and machine learning) in survival analysis and making novel contributions. In particular this includes: i) a survey of survival models including a critical and technical survey of almost all supervised learning model classes currently utilised in survival, as well as novel adaptations; ii) a survey of evaluation measures for survival models, including key definitions, proofs and theorems for survival scoring rules that had previously been missing from the literature; iii) introduction and formalisation of composition and reduction in survival analysis, with a view on increasing transparency of modelling strategies and improving predictive performance; iv) implementation of several R software packages, in particular mlr3proba for machine learning in survival analysis; and v) the first large-scale benchmark experiment on right-censored time-to-event data with 24 survival models and 66 datasets.
    Survival analysis has many important applications in medical statistics, engineering and finance, and as such requires the same level of rigour as other machine learning fields such as regression and classification; this thesis aims to make this clear by describing a framework from prediction and evaluation to implementation.
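Evaluating predictions on right-censored data rests on estimators such as Kaplan-Meier, which many survival scoring rules also use to reweight for censoring. A minimal pure-Python sketch of the estimator (illustrative; this is not mlr3proba code):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator of the survival function from right-censored
    data. times: observed times; events: 1 for an observed event, 0 for
    censoring. Returns (event_times, survival), where survival[j] is the
    estimate S(t) just after event_times[j]."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    event_times, survival = [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = at_t = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            at_t += 1
            d += data[i][1]
            i += 1
        if d > 0:                                  # update only at event times
            s *= 1.0 - d / n_at_risk
            event_times.append(t)
            survival.append(s)
        n_at_risk -= at_t                          # events and censorings leave
    return event_times, survival
```

Censored observations reduce the risk set without forcing the curve to drop, which is exactly the information a naive regression-style evaluation would discard.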