Using predictions from a joint model for longitudinal and survival data to inform the optimal time of intervention in an abdominal aortic aneurysm screening programme.
Joint models of longitudinal and survival data can be used to predict the risk of a future event occurring based on the evolution of an endogenous biomarker measured repeatedly over time. This has led naturally to the use of dynamic predictions that update each time a new longitudinal measurement is provided. In this paper, we show how such predictions can be utilised within a fuller decision modelling framework, in particular to allow planning of future interventions for patients under a 'watchful waiting' care pathway. Through the objective of maximising expected life-years, the predicted risks associated with not intervening (e.g. the occurrence of severe sequelae) are balanced against the risks associated with the intervention (e.g. operative risks). Our example involves patients under surveillance in an abdominal aortic aneurysm screening programme, where a joint longitudinal and survival model is used to associate longitudinal measurements of aortic diameter with the risk of aneurysm rupture. We illustrate how the decision to intervene, which is currently based on a diameter measurement greater than a certain threshold, could be made more personalised and dynamic through the application of a decision modelling approach.
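The core decision rule described above, intervening only when doing so maximises expected life-years, can be sketched as follows. All numbers and function names here are illustrative assumptions, not the fitted model or risks from the paper:

```python
# Illustrative sketch of the expected-life-years trade-off described in the
# abstract. The probabilities and life-year inputs are hypothetical placeholders;
# in the paper these would come from a joint longitudinal-survival model.

def expected_life_years_waiting(p_rupture, ly_no_rupture, ly_rupture):
    """Expected life-years under 'watchful waiting', given the model's
    predicted rupture risk p_rupture over the planning horizon."""
    return (1 - p_rupture) * ly_no_rupture + p_rupture * ly_rupture

def expected_life_years_repair(p_operative_death, ly_post_repair):
    """Expected life-years if electing surgical repair now, accounting
    for operative mortality risk."""
    return (1 - p_operative_death) * ly_post_repair

def should_intervene(p_rupture, p_operative_death,
                     ly_no_rupture=10.0, ly_rupture=0.5, ly_post_repair=9.5):
    """Recommend intervention when repair maximises expected life-years.
    Default life-year values are purely illustrative."""
    return (expected_life_years_repair(p_operative_death, ly_post_repair)
            > expected_life_years_waiting(p_rupture, ly_no_rupture, ly_rupture))
```

Under these toy inputs, a high predicted rupture risk tips the balance toward repair, while a low risk favours continued surveillance; the dynamic element comes from re-evaluating `p_rupture` each time a new diameter measurement updates the joint model.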
AplusB: A Web Application for Investigating A + B Designs for Phase I Cancer Clinical Trials
In phase I cancer clinical trials, the maximum tolerated dose of a new drug is often found by a dose-escalation method known as the A + B design. We have developed an interactive web application, AplusB, which computes and returns exact operating characteristics of A + B trial designs. The application has a graphical user interface (GUI), requires no programming knowledge and is free to access and use on any device that can open an internet browser. A customised report is available for download for each design that contains tabulated operating characteristics and informative plots, which can then be compared with other dose-escalation methods. We present a step-by-step guide on how to use this application and provide several illustrative examples of its capabilities.
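To make the idea of operating characteristics for an A + B design concrete, the following is a minimal simulation sketch for the common 3 + 3 special case (the AplusB application computes these characteristics exactly; simulation is used here only for illustration, and the dose-toxicity probabilities are assumed inputs):

```python
import random

def simulate_3plus3(tox_probs, seed=None):
    """Simulate one 3+3 trial over doses with true DLT probabilities
    `tox_probs`. Returns the index of the recommended MTD, or -1 if
    even the lowest dose is deemed too toxic."""
    rng = random.Random(seed)
    dose = 0
    while dose < len(tox_probs):
        # First cohort of 3 patients at the current dose.
        dlts = sum(rng.random() < tox_probs[dose] for _ in range(3))
        if dlts == 1:
            # Exactly 1/3 DLTs: expand with a second cohort of 3.
            dlts += sum(rng.random() < tox_probs[dose] for _ in range(3))
        if dlts <= 1:
            dose += 1          # 0/3 or <=1/6 DLTs: escalate
        else:
            return dose - 1    # >=2 DLTs: MTD is the previous dose
    return len(tox_probs) - 1  # all doses cleared; recommend the highest

def mtd_selection_probs(tox_probs, n_trials=2000, seed=1):
    """Estimate one operating characteristic: the probability each dose
    (index 0 = 'none tolerable') is selected as the MTD."""
    rng = random.Random(seed)
    counts = [0] * (len(tox_probs) + 1)
    for _ in range(n_trials):
        counts[simulate_3plus3(tox_probs, rng.random()) + 1] += 1
    return [c / n_trials for c in counts]
```

Repeating this over a grid of dose-toxicity scenarios gives simulated analogues of the tabulated characteristics (MTD selection probabilities, expected sample size, and so on) that the web application reports exactly.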
Biases incurred from non-random repeat testing of haemoglobin levels in blood donors. Selective testing and its implications
To help prevent anaemia, blood donors are required to undergo a haemoglobin test to ensure levels are not too low before donation. It is therefore important to have an accurate testing device and strategy to ensure donors are not being inappropriately bled. A recent study in blood donors used a selective testing strategy: if a donor's haemoglobin level is below the level required for donation, then another reading is taken, and if this occurs again, a third and final reading is used. This strategy can reduce the average number of readings required per donor compared to taking three measurements for all donors. However, the final decision‐making measurement will on average be higher than a single measurement. In this paper, a selective testing strategy is compared against other strategies. Individual‐level biases are derived for the selective strategy and are shown to depend on how close a donor's true haemoglobin level is to the donation threshold and the magnitude of error in the testing device. A simulation study was conducted using the distribution of haemoglobin levels from a large donor population to investigate the effects different strategies have on population performance. We consider scenarios based on varying the measurement device bias and error, including differential biases that depend on the underlying haemoglobin level. Discriminatory performance is shown to be affected when using the selective testing strategies, especially when measurement error is large and when differential bias is present in the device. We recommend that the average of a number of readings should be used in preference to selective testing strategies if multiple measurements are available.
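The upward bias of the selective strategy near the donation threshold can be demonstrated with a small simulation. This is a toy sketch, assuming normally distributed, unbiased measurement error; the threshold and error standard deviation below are illustrative values, not those of the study:

```python
import random

def selective_reading(true_hb, threshold, sd, rng):
    """Selective strategy: re-test (up to 3 readings) only while the
    current reading falls below the donation threshold; the last
    reading taken is the decision measurement."""
    for _ in range(3):
        reading = rng.gauss(true_hb, sd)  # unbiased device with error sd
        if reading >= threshold:
            return reading
    return reading  # third and final reading used regardless

def mean_decision_measurement(true_hb, threshold=125.0, sd=5.0,
                              n=20000, seed=42):
    """Monte Carlo estimate of the average decision measurement for a
    donor with a given true haemoglobin level (g/L)."""
    rng = random.Random(seed)
    return sum(selective_reading(true_hb, threshold, sd, rng)
               for _ in range(n)) / n
```

For a donor whose true level sits exactly at the threshold, the decision measurement is biased several g/L upwards (readings below the threshold are preferentially discarded), whereas for donors far above the threshold the bias vanishes, matching the individual-level behaviour described in the abstract.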
Toxicity-dependent feasibility bounds for the escalation with overdose control approach in phase I cancer trials
Phase I trials of anti-cancer therapies aim to identify a maximum tolerated dose (MTD), defined as the dose that causes unacceptable toxicity in a target proportion of patients. Both rule-based and model-based methods have been proposed for MTD recommendation. The escalation with overdose control (EWOC) approach is a model-based design where the dose assigned to the next patient is one that, given all available data, has a posterior probability of exceeding the MTD equal to a pre-specified value known as the feasibility bound. The aim is to conservatively dose-escalate and approach the MTD, avoiding severe overdosing early on in a trial. The EWOC approach has been applied in practice with the feasibility bound either fixed or varying throughout a trial, yet some of the methods may recommend incoherent dose-escalation, that is, an increase in dose after observing severe toxicity at the current dose. We present examples where varying feasibility bounds have been used in practice, and propose a toxicity-dependent feasibility bound approach that guarantees coherent dose-escalation and incorporates the desirable features of other EWOC approaches. We show via detailed simulation studies that the toxicity-dependent feasibility bound approach provides improved MTD recommendation properties compared with the original EWOC approach for both discrete and continuous doses across most dose-toxicity scenarios, with comparable performance to other approaches without recommending incoherent dose escalation.
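The EWOC dose-selection step can be sketched as taking the feasibility-bound quantile of the posterior distribution of the MTD. The following is a minimal illustration; the posterior samples and the toxicity-dependent update rule shown are hypothetical stand-ins, not the paper's exact specification:

```python
def ewoc_next_dose(mtd_posterior_samples, feasibility_bound):
    """EWOC selection sketch: choose the largest dose d such that
    P(d > MTD | data) <= feasibility_bound, i.e. the feasibility-bound
    empirical quantile of the posterior distribution of the MTD."""
    s = sorted(mtd_posterior_samples)
    k = int(feasibility_bound * len(s))
    return s[max(k - 1, 0)]

def update_feasibility_bound(alpha, step=0.05, alpha_max=0.5,
                             dlt_observed=False):
    """Toy toxicity-dependent bound: relax (increase) the bound as the
    trial accrues, but hold it fixed after a dose-limiting toxicity so
    the next recommended dose cannot rise (coherent escalation).
    Step sizes and cap are illustrative only."""
    if dlt_observed:
        return alpha
    return min(alpha + step, alpha_max)
```

With a small bound (e.g. 0.25), the recommended dose sits well into the lower tail of the MTD posterior, which is what makes early severe overdosing unlikely; letting the bound grow only in the absence of toxicity is one simple way to capture the coherence property the paper requires.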
Misspecification of at-risk periods and distributional assumptions in estimating COPD exacerbation rates: The resultant bias in treatment effect estimation.
In trials comparing the rate of chronic obstructive pulmonary disease exacerbation between treatment arms, the rate is typically calculated on the basis of the whole of each patient's follow-up period. However, the true time a patient is at risk should exclude periods in which an exacerbation episode is occurring, because a patient cannot be at risk of another exacerbation episode until recovered. We used data from two chronic obstructive pulmonary disease randomized controlled trials and compared treatment effect estimates and confidence intervals when using two different definitions of the at-risk period. Using a simulation study, we examined the bias in the estimated treatment effect and the coverage of the confidence interval under these two definitions of the at-risk period. We investigated how the sample size required for a given power changes on the basis of the definition of at-risk period used. Our results showed that treatment efficacy is underestimated when the at-risk period does not take account of exacerbation duration, and the power to detect a statistically significant result is slightly diminished. Correspondingly, using the correct at-risk period, some modest savings in required sample size can be achieved. Using the proposed at-risk period that excludes recovery times requires formal definitions of the beginning and end of an exacerbation episode, and we recommend that these always be predefined in a trial protocol.
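The distinction between the two at-risk period definitions can be shown with a short sketch. The episode representation below (start/end days within follow-up) is an assumption for illustration:

```python
def exacerbation_rate_naive(follow_up_days, episodes):
    """Rate per patient-year using the whole follow-up period,
    as typically done in trials."""
    return len(episodes) / (follow_up_days / 365.25)

def exacerbation_rate_at_risk(follow_up_days, episodes):
    """Rate per patient-year using the true at-risk time: days spent
    within exacerbation episodes are excluded, since a patient cannot
    have a new onset until recovered.
    `episodes` is a list of (start_day, end_day) tuples."""
    days_in_episode = sum(end - start for start, end in episodes)
    at_risk_days = follow_up_days - days_in_episode
    return len(episodes) / (at_risk_days / 365.25)
```

Because the at-risk denominator is smaller, the corrected rate is always at least as large as the naive one; when episode durations differ between arms, this shifts the estimated treatment effect, which is the bias mechanism the simulation study quantifies.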
Patient characteristics of the full cohort (W≥0) and four sub-cohorts selected by a minimum lookback window requirement.
The use of repeated blood pressure measures for cardiovascular risk prediction: a comparison of statistical models in the ARIC study
Many prediction models have been developed for the risk assessment and the prevention of cardiovascular disease in primary care. Recent efforts have focused on improving the accuracy of these prediction models by adding novel biomarkers to a common set of baseline risk predictors. Few have considered incorporating repeated measures of the common risk predictors. Through application to the Atherosclerosis Risk in Communities study and simulations, we compare models that use simple summary measures of the repeat information on systolic blood pressure, such as (i) baseline only; (ii) last observation carried forward; and (iii) cumulative mean, against more complex methods that model the repeat information using (iv) ordinary regression calibration; (v) risk-set regression calibration; and (vi) joint longitudinal and survival models. In comparison with the baseline-only model, we observed modest improvements in discrimination and calibration using the cumulative mean of systolic blood pressure, but little further improvement from any of the complex methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
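The three simple summary measures compared in the abstract, (i)-(iii), are straightforward to compute from a patient's measurement history. A minimal sketch, assuming measurements are held as a chronologically ordered list:

```python
def baseline_only(measurements):
    """(i) Use the first (baseline) systolic blood pressure measurement."""
    return measurements[0]

def last_observation_carried_forward(measurements):
    """(ii) Use the most recent available measurement."""
    return measurements[-1]

def cumulative_mean(measurements):
    """(iii) Use the mean of all measurements to date, which averages
    out within-person measurement variability."""
    return sum(measurements) / len(measurements)
```

The cumulative mean's modest advantage reported in the abstract is consistent with its role as a noise-reducing summary; the more complex methods, (iv)-(vi), instead model the longitudinal trajectory explicitly and feed it into the survival component.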
Standardised mortality ratio (SMR) and 95% confidence interval by follow-up time-since-entry, in years.
Reference line of SMR = 1 in red.
Standardised mortality ratio (SMR) by age group, over follow-up period in years.
Split to show the initial high mortality rate trend (5a) and the lower mortality rate after year 2 (5b). Reference line of SMR = 1 in red.
