48 research outputs found

    Review and Comparison of Computational Approaches for Joint Longitudinal and Time‐to‐Event Models

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/151312/1/insr12322.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/151312/2/insr12322_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/151312/3/Supplement_ReviewComputationalJointModels_final.pd

    Joint modeling of longitudinal outcomes and survival using latent growth modeling approach in a mesothelioma trial

    Get PDF
    Joint modeling of longitudinal and survival data can provide more efficient and less biased estimates of treatment effects by accounting for the associations between the two data types. Sponsors of oncology clinical trials routinely and increasingly include patient-reported outcome (PRO) instruments to evaluate the effect of treatment on symptoms, functioning, and quality of life, yet published reports of these trials typically do not include jointly modeled analyses and results. We formulated several joint models based on a latent growth model for the longitudinal PRO data and a Cox proportional hazards model for the survival data, with the two components linked through either a latent growth trajectory or shared random effects. We applied these models to data from a randomized phase III oncology clinical trial in mesothelioma, compared the results derived under different model specifications, and showed that joint modeling may yield improved estimates of the overall treatment effect.
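    The shared-random-effects linkage mentioned above is conventionally written as a pair of submodels. The sketch below is a generic formulation with a linear latent growth trajectory, not this paper's exact specification; the symbols (fixed effects beta, random effects b_i, baseline covariates w_i, association parameters alpha) are illustrative:

        % Longitudinal submodel: latent linear growth trajectory m_i(t) for the PRO score
        y_{ij} = \underbrace{(\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t_{ij}}_{m_i(t_{ij})} + \varepsilon_{ij},
        \qquad \varepsilon_{ij} \sim N(0, \sigma^2), \quad (b_{0i}, b_{1i})^\top \sim N(0, D)

        % Survival submodel: Cox proportional hazards, sharing the random effects
        h_i(t) = h_0(t) \exp\!\left( \gamma^\top w_i + \alpha_0 b_{0i} + \alpha_1 b_{1i} \right)

    Linking through the latent trajectory instead replaces \alpha_0 b_{0i} + \alpha_1 b_{1i} with \alpha\, m_i(t), so that the hazard at time t depends on the current value of the latent PRO trajectory rather than on the individual growth parameters directly.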

    Semiparametric theory and empirical processes in causal inference

    Full text link
    In this paper we review important aspects of semiparametric theory and empirical processes that arise in causal inference problems. We begin with a brief introduction to the general problem of causal inference, and go on to discuss estimation and inference for causal effects under semiparametric models, which allow parts of the data-generating process to be unrestricted if they are not of particular interest (i.e., nuisance functions). These models are very useful in causal problems because the outcome process is often complex and difficult to model, and information may be available only about the treatment process (at best). Semiparametric theory gives a framework for benchmarking efficiency and constructing estimators in such settings. In the second part of the paper we discuss empirical process theory, which provides powerful tools for understanding the asymptotic behavior of semiparametric estimators that depend on flexible nonparametric estimators of nuisance functions. These tools are crucial for incorporating machine learning and other modern methods into causal inference analyses. We conclude by examining related extensions and future directions for work in semiparametric causal inference.
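    As one concrete instance of the kind of estimator this framework supports, here is a minimal cross-fitted doubly robust (AIPW) sketch for the average treatment effect in Python. The choice of estimand, the random-forest nuisance models, and the function name are illustrative assumptions of mine; the paper reviews the general theory rather than any particular implementation:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
        from sklearn.model_selection import KFold

        def aipw_ate(X, A, Y, n_splits=2, seed=0):
            """Cross-fitted AIPW estimate of E[Y^1 - Y^0] (hypothetical helper).

            Nuisance functions (propensity score and outcome regressions) are
            fit on held-out folds so the final estimator can use flexible
            machine learning, in the spirit of the sample-splitting arguments
            the paper reviews.
            """
            n = len(Y)
            phi = np.zeros(n)  # estimated efficient influence function values
            for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
                # Propensity score pi(x) = P(A = 1 | X = x), clipped away from 0 and 1
                ps = RandomForestClassifier(random_state=seed).fit(X[train], A[train])
                pi = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
                # Outcome regressions mu_a(x) = E[Y | A = a, X = x]
                mu1 = RandomForestRegressor(random_state=seed).fit(
                    X[train][A[train] == 1], Y[train][A[train] == 1])
                mu0 = RandomForestRegressor(random_state=seed).fit(
                    X[train][A[train] == 0], Y[train][A[train] == 0])
                m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
                a, y = A[test], Y[test]
                # Uncentered efficient influence function for the ATE
                phi[test] = a / pi * (y - m1) - (1 - a) / (1 - pi) * (y - m0) + m1 - m0
            return phi.mean(), phi.std(ddof=1) / np.sqrt(n)  # estimate, standard error

    The influence-function correction makes the estimator insensitive to small errors in either nuisance estimate, which is why root-n inference can survive the use of slower-converging machine learning fits; establishing this rigorously is exactly what the empirical process tools above are used for.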

    Estimation After a Group Sequential Trial

    No full text
    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (Statistical Methods in Medical Research, 2012) and Milanzi et al. (Properties of estimators in exponential family settings with observation-based stopping rules, 2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, ..., n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator (CLE) provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
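    The point that simulations can falsely suggest conditional bias is easy to reproduce with a toy Monte Carlo. The sketch below uses the two-value setting N = n or N = 2n from the earlier work; the stopping boundary and all numeric choices are hypothetical, picked only for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        mu, n, reps = 0.0, 50, 100_000
        boundary = 0.1  # hypothetical stage-1 stopping threshold (my choice)

        marginal, given_n, given_2n = [], [], []
        for _ in range(reps):
            x1 = rng.normal(mu, 1.0, n)      # stage 1 of the trial
            if x1.mean() > boundary:          # stop early: N = n
                est, N = x1.mean(), n
            else:                             # continue to stage 2: N = 2n
                x = np.concatenate([x1, rng.normal(mu, 1.0, n)])
                est, N = x.mean(), 2 * n
            marginal.append(est)
            (given_n if N == n else given_2n).append(est)

        print("bias, marginalized over N:", np.mean(marginal) - mu)  # small
        print("bias given N = n:         ", np.mean(given_n) - mu)   # clearly positive
        print("bias given N = 2n:        ", np.mean(given_2n) - mu)  # negative

    Conditional on early stopping the sample average looks biased upward, and conditional on continuation it looks biased downward, while the bias marginalized over the sample size is far smaller and vanishes asymptotically, in line with the results described above.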