22 research outputs found

    Dynamic Methods for the Prediction of Survival Outcomes using Longitudinal Biomarkers

    Full text link
In medical research, predicting the probability of a time-to-event outcome is often of interest. Along with failure time data, we may longitudinally observe disease markers that can influence survival. These time-dependent covariates provide additional information that can improve the predictive capability of survival models. It is desirable to use a patient's changing marker information to produce updated survival predictions at future time points, which can in turn direct individualized care decisions. In this dissertation, we develop methods that incorporate time-dependent marker information collected during follow-up with the aim of dynamic prediction and inference.

In Chapter I, we compare two methods of dynamic prediction with a longitudinal binary marker, represented by an illness-death model. Joint modeling is a unified, principled approach that produces consistent predictions over time; however, it requires restrictive distributional assumptions and can involve computationally intensive estimation. Landmarking fits a Cox model at a sequence of prediction, or "landmark", times and is easily implemented, but does not produce a valid prediction function. We explore the theoretical justification and predictive capabilities of these methods, and propose extensions within the landmark framework to provide a better approximation to the true joint model.

In Chapter II, we present an approximate approach for obtaining dynamic predictions that combines the advantages of joint modeling and landmarking. We specify the marginal marker and failure time distributions conditional on surviving up to a prediction time, and use a Gaussian copula to link them over time with an association function. We use a single model for the time-to-event outcome from which the conditional survival is derived, achieving a greater level of consistency than landmarking. Estimation is conducted using a two-stage approach that reduces the computational burden associated with joint modeling.

In Chapter III, we introduce a model that incorporates the effects of a partially observed marker on failure time. We consider the marker to represent an underlying stochastic risk process that accumulates over time until a failure is experienced. We model this increasing risk as a Lévy bridge process that has a multiplicative effect on the cumulative hazard. Using the mathematically tractable properties of the gamma process, we derive the marginal and conditional survival functions, and demonstrate estimation when the process is observed at the survival time. This approach can be extended to multiple measurement times, and applied to a variety of markers and disease settings where the correct marker distribution is not known or difficult to specify.

PhD, Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/147583/1/ksuresh_1.pd
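As a point of reference for the methods summarized above, the display below sketches the standard dynamic-prediction target and the landmark Cox approximation in generic notation; the symbols (T for the failure time, Y for the marker, s for the landmark time) are illustrative and not taken from the dissertation.

```latex
% Dynamic prediction target: survival beyond horizon t, conditional on
% being event-free at landmark time s and on the marker history up to s.
\[
  \pi(t \mid s) \;=\; \Pr\bigl(T > t \,\bigm|\, T > s,\ \bar{Y}(s)\bigr),
  \qquad t > s,
\]
% where \bar{Y}(s) = \{ Y(u) : 0 \le u \le s \}. A joint model obtains
% \pi(t \mid s) from a single probability model for (Y, T); the landmark
% approach instead refits, at each s, a Cox model that treats the current
% marker value as a fixed covariate:
\[
  \lambda(t \mid Y(s), s) \;=\; \lambda_{0}(t \mid s)\,
    \exp\bigl\{\beta(s)\, Y(s)\bigr\}, \qquad t > s.
\]
```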

    Comparison of joint modeling and landmarking for dynamic prediction under an illness‐death model

    Full text link
Dynamic prediction incorporates time‐dependent marker information accrued during follow‐up to improve personalized survival prediction probabilities. At any follow‐up, or “landmark”, time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness‐death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time‐dependent covariate for predicting death following radiation therapy.

Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/140034/1/bimj1778.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/140034/2/bimj1778_am.pd
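As a rough illustration of the landmark Cox approach discussed above, the sketch below fits a separate Cox model at each landmark time using lifelines. The column names, data layout, and prediction horizon w are illustrative assumptions, not the paper's code.

```python
# A minimal landmarking sketch (illustrative column names and horizon; not the
# authors' code). At each landmark time s: keep subjects still event-free at s,
# carry their most recent marker value forward, administratively censor at the
# horizon s + w, and fit a Cox model. Dynamic predictions at s come from the
# model fit at that landmark.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def landmark_cox(visits, outcomes, landmark_times, w):
    """visits: long format (id, obs_time, marker); outcomes: one row per id (id, time, event)."""
    models = {}
    for s in landmark_times:
        # Risk set at the landmark: subjects still event-free and in follow-up at s.
        at_risk = outcomes[outcomes["time"] > s].copy()
        # Most recent marker value observed at or before s (last observation carried forward).
        marker_s = (
            visits[visits["obs_time"] <= s]
            .sort_values("obs_time")
            .groupby("id")["marker"]
            .last()
            .rename("marker_s")
            .reset_index()
        )
        df = at_risk.merge(marker_s, on="id", how="inner")
        # Administrative censoring at the prediction horizon s + w.
        df["event"] = np.where(df["time"] > s + w, 0, df["event"])
        df["time"] = np.minimum(df["time"], s + w)
        cph = CoxPHFitter()
        cph.fit(df[["time", "event", "marker_s"]], duration_col="time", event_col="event")
        models[s] = cph
    return models
```

Refitting at each landmark time is what makes landmarking easy to implement, but it is also why, as the article notes, the implied baseline hazards and covariate effects need not be mutually consistent across landmark times.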

    A prediction model for colon cancer surveillance data

    Full text link
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/112258/1/sim6500-sup-0001-Supplementary1.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/112258/2/sim6500.pd

    31st Annual Meeting and Associated Programs of the Society for Immunotherapy of Cancer (SITC 2016) : part two

    Get PDF
Background: The immunological escape of tumors represents one of the main obstacles to the treatment of malignancies. The blockade of PD-1 or CTLA-4 receptors represented a milestone in the history of immunotherapy. However, immune checkpoint inhibitors seem to be effective in specific cohorts of patients. It has been proposed that their efficacy relies on the presence of an immunological response. Thus, we hypothesized that disruption of the PD-L1/PD-1 axis would synergize with our oncolytic vaccine platform PeptiCRAd.

Methods: We used murine B16OVA in vivo tumor models and flow cytometry analysis to investigate the immunological background.

Results: First, we found that high-burden B16OVA tumors were refractory to combination immunotherapy. However, with a more aggressive schedule, tumors with a lower burden were more susceptible to the combination of PeptiCRAd and PD-L1 blockade. The therapy significantly increased the median survival of mice (Fig. 7). Interestingly, the reduced growth of contralaterally injected B16F10 cells suggested the presence of a long-lasting immunological memory also against non-targeted antigens. Concerning the functional state of tumor-infiltrating lymphocytes (TILs), we found that all the immune therapies enhanced the percentage of activated (PD-1pos TIM-3neg) T lymphocytes and reduced the amount of exhausted (PD-1pos TIM-3pos) cells compared to placebo. As expected, we found that PeptiCRAd monotherapy could increase the number of antigen-specific CD8+ T cells compared to other treatments. However, only the combination with PD-L1 blockade could significantly increase the ratio between activated and exhausted pentamer-positive cells (p = 0.0058), suggesting that by disrupting the PD-1/PD-L1 axis we could decrease the amount of dysfunctional antigen-specific T cells. We observed that the anatomical location deeply influenced the state of CD4+ and CD8+ T lymphocytes. In fact, TIM-3 expression was increased 2-fold on TILs compared to splenic and lymphoid T cells. In the CD8+ compartment, the expression of PD-1 on the surface seemed to be restricted to the tumor micro-environment, while CD4+ T cells had a high expression of PD-1 also in lymphoid organs. Interestingly, we found that the levels of PD-1 were significantly higher on CD8+ T cells than on CD4+ T cells in the tumor micro-environment (p < 0.0001).

Conclusions: We demonstrated that the efficacy of immune checkpoint inhibitors might be strongly enhanced by their combination with cancer vaccines. PeptiCRAd was able to increase the number of antigen-specific T cells, and PD-L1 blockade prevented their exhaustion, resulting in long-lasting immunological memory and increased median survival.

    R37 REDS Pilot and Scale Development

    No full text
The purpose of this project is to develop a questionnaire measure of reactance, self-exemption, disbelief, and source derogation (REDS). This was preliminary work for research conducted under NCI R37CA25492.

    A copula‐based approach for dynamic prediction of survival with a binary time‐dependent covariate

    Full text link
Dynamic prediction methods incorporate longitudinal biomarker information to produce updated, more accurate predictions of conditional survival probability. There are two approaches for obtaining dynamic predictions: (1) a joint model of the longitudinal marker and survival process, and (2) an approximate approach that specifies a model for a specific component of the joint distribution. In the case of a binary marker, an illness‐death model is an example of a joint modeling approach that is unified and produces consistent predictions. However, previous literature has shown that approximate approaches, such as landmarking, with additional flexibility can have good predictive performance. One such approach proposes using a Gaussian copula to model the joint distribution of conditional continuous marker and survival distributions. It has the advantage of specifying established, flexible models for the marginals, for which goodness‐of‐fit can be assessed, and estimation is straightforward to implement in standard software. In this article, we provide a Gaussian copula approach for dynamic prediction that accommodates a binary marker using a continuous latent variable formulation. We compare the predictive performance of this approach to joint modeling and landmarking using simulations, and demonstrate its use for obtaining dynamic predictions in an application to a prostate cancer study.

Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/170255/1/sim9102_am.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/170255/2/sim9102-sup-0001-Supinfo.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/170255/3/sim9102.pd
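To make the construction above concrete, the display below sketches a generic Gaussian copula linking the two conditional marginals at a prediction time s. The notation is illustrative and not taken from the article; per the abstract, a binary marker would enter through a continuous latent variable whose dichotomization yields the observed value.

```latex
% Gaussian copula joining the marker and failure-time marginals, each
% conditional on survival to the prediction time s. Phi_2 is the bivariate
% standard normal CDF with correlation rho(s); Phi^{-1} is the standard
% normal quantile function.
\[
  \Pr\bigl(Y \le y,\ T \le t \,\bigm|\, T > s\bigr)
  \;=\;
  \Phi_{2}\!\Bigl(\Phi^{-1}\!\bigl\{F_{Y \mid s}(y)\bigr\},\;
                  \Phi^{-1}\!\bigl\{F_{T \mid s}(t)\bigr\};\ \rho(s)\Bigr).
\]
% Dynamic predictions follow by conditioning on the observed marker value,
% with F_{T|s} derived from a single time-to-event model rather than a
% separate fit at every landmark time.
```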

    The new injury severity score underestimates true injury severity in a resource-constrained setting

    No full text
Background: The new injury severity score (NISS) is widely used within trauma outcomes research. NISS is a composite anatomic severity score derived from the Abbreviated Injury Scale (AIS) protocol. It has been postulated that NISS underestimates trauma severity in resource-constrained settings, which may contribute to erroneous research conclusions. We formally compare NISS to an expert panel's assessment of injury severity in South Africa.

Methods: This was a retrospective chart review of adult trauma patients seen in a tertiary trauma center. Randomly selected medical records were reviewed by an AIS-certified rater who assigned an AIS severity score for each anatomic injury. A panel of five South African trauma experts independently reviewed the same charts and assigned consensus severity scores using a similar scale for comparability. NISS was calculated as the sum of the squares of the three highest assigned severity scores per patient. The difference in average NISS between the rater and the expert panel was assessed using multivariable linear mixed effects regression adjusted for patient demographics, injury mechanism, and injury type.

Results: Of 49 patients with 190 anatomic injuries, the majority were male (n = 38), the average age was 36 years (range 18–80), and injuries were either penetrating (n = 23) or blunt (n = 26), resulting in 4 deaths. Mean NISS was 16 (SD 15) for the AIS rater compared to 28 (SD 20) for the expert panel. Adjusted for potential confounders, the AIS rater NISS was on average 11 points (95% CI: 7, 15) lower than the expert panel NISS (p < 0.001). Injury type was an effect modifier, with the difference between the AIS rater and expert panel being greater for penetrating versus blunt injury (16 vs. 7; p = 0.04). Crush injury was not well captured by the AIS protocol.

Conclusion: NISS may underestimate the 'true' injury severity in a middle-income country trauma hospital, particularly for patients with penetrating injury.
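For concreteness, the NISS calculation described in the Methods reduces to a short computation; the sketch below is a generic implementation, not code from the study.

```python
# New Injury Severity Score: the sum of squares of a patient's three highest
# AIS severity scores (1-6 scale), regardless of body region.
def niss(ais_scores):
    """ais_scores: iterable of AIS severity scores, one per documented injury."""
    top_three = sorted(ais_scores, reverse=True)[:3]
    return sum(score ** 2 for score in top_three)

# Example: injuries scored 4, 3, 3, and 2 give NISS = 16 + 9 + 9 = 34.
print(niss([4, 3, 3, 2]))  # 34
```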

    A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features

    No full text
Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as “black boxes”. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and aid physicians in making medical decisions. In the field of medical imaging analysis specifically, the most widely used methods for explaining deep learning-based model predictions are saliency maps that highlight important areas of an image; however, they do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values to explain outcome predictions from complex prediction models built on medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate the use of this workflow by developing and explaining a prediction model that uses MRI data from glioma patients to predict a genetic mutation.
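As a rough sketch of the kind of workflow the pipeline describes, the example below trains a model on tabular radiomics features and explains its predictions with Shapley values via the shap package. The file name, feature set, outcome label, and model choice are illustrative assumptions, not the authors' implementation.

```python
# Illustrative radiomics + Shapley-value workflow (hypothetical file and column
# names; not the authors' pipeline).
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Tabular radiomics features (e.g., shape, intensity, texture) extracted from
# segmented MRI regions, with a binary outcome such as mutation status.
data = pd.read_csv("radiomics_features.csv")
y = data.pop("mutation_status")
X_train, X_test, y_train, y_test = train_test_split(data, y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Shapley values attribute each individual prediction to specific radiomics
# features, giving the per-patient, per-feature explanations that can populate
# a clinician-facing dashboard.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary: which radiomics features drive the model's predictions overall.
shap.summary_plot(shap_values, X_test)
```

Unlike saliency maps, the explanations here are tied to named, well-defined radiomics predictors, which is the interpretability gain the abstract emphasizes.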