How can health economics be used in the design and analysis of adaptive clinical trials? A qualitative analysis
Introduction
Adaptive designs offer a flexible approach, allowing changes to a trial based on examination of the accumulating data as the trial progresses. Adaptive clinical trials are becoming a popular choice, as the prudent use of finite research budgets and accurate decision-making are priorities for healthcare providers around the world. The methods of health economics, which aim to maximise the health gained for money spent, could be incorporated into the design and analysis of adaptive clinical trials to make them more efficient. We aimed to understand the perspectives of stakeholders in health technology assessments to inform recommendations for the use of health economics in adaptive clinical trials.
Methods
A qualitative study explored the attitudes of key stakeholders—including researchers, decision-makers and members of the public—towards the use of health economics in the design and analysis of adaptive clinical trials. Data were collected using interviews and focus groups (29 participants). A framework analysis was used to identify themes in the transcripts.
Results
It was considered that answering the clinical research question should be the priority in a clinical trial, notwithstanding the importance of cost-effectiveness for decision-making. Concerns raised by participants included handling the volatile nature of cost data at interim analyses; implementing this approach in global trials; resourcing adaptive trials which are designed and adapted based on health economic outcomes; and training stakeholders in these methods so that they can be implemented and appropriately interpreted.
Conclusion
The use of health economics in the design and analysis of adaptive clinical trials has the potential to increase the efficiency of health technology assessments worldwide. Recommendations are made concerning the development of methods allowing the use of health economics in adaptive clinical trials, and suggestions are given to facilitate their implementation in practice.
A paradigm shift in cystic fibrosis nutritional care: clinicians' views on the management of patients with overweight and obesity
Background
Overweight and obesity among people with cystic fibrosis (pwCF) have become more prevalent since the widespread adoption of CF transmembrane conductance regulator (CFTR) modulator therapies, and present a new challenge for nutritional care. We aimed to explore how clinicians working in CF care approach the management of adults with overweight and obesity.
Methods
We conducted semi-structured interviews with 20 clinicians (6 physiotherapists, 6 doctors and 8 dietitians) working in 15 adult CF centres in the United Kingdom. The interviews explored their perspectives on, and current practices in, caring for people with CF and overweight/obesity. Data were analysed using reflexive thematic analysis.
Results
Four main themes were identified: 1) challenges of raising the topic of overweight and obesity in the CF clinic (e.g., clinician-patient rapport and concerns around weight stigma); 2) the changing landscape of assessment due to CF-specific causes of weight gain (e.g., impact of CFTR modulators and the CF legacy diet); 3) the presence of clinical equipoise for weight management due to the lack of CF-specific evidence on the consequences of obesity and intentional weight loss (e.g., unclear consequences for respiratory outcomes and the risk of weight-related co-morbidities); and 4) opportunities for a safe, effective, and acceptable weight management treatment for people with CF (e.g., working collaboratively within current multidisciplinary CF care).
Conclusions
Approaching weight management in the CF setting is complex. Trials are needed to address the equipoise around weight management interventions in this group, and CF-specific issues should be considered when developing such interventions.
Point estimation for adaptive trial designs II: Practical considerations and guidance
In adaptive clinical trials, the conventional end-of-trial point estimate of a treatment effect is prone to bias, that is, a systematic tendency to deviate from its true value. As stated in recent FDA guidance on adaptive designs, it is desirable to report estimates of treatment effects that reduce or remove this bias. However, it may be unclear which of the available estimators are preferable, and their use remains rare in practice. This article is the second in a two-part series that studies the issue of bias in point estimation for adaptive trials. Part I provided a methodological review of approaches to remove or reduce the potential bias in point estimation for adaptive designs. In part II, we discuss how bias can affect standard estimators and assess the negative impact this can have. We review current practice for reporting point estimates and illustrate the computation of different estimators using a real adaptive trial example (including code), which we use as a basis for a simulation study. We show that while on average the values of these estimators can be similar, for a particular trial realization they can give noticeably different values for the estimated treatment effect. Finally, we propose guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design. The issue of bias should be considered throughout the whole lifecycle of an adaptive design, with the estimation strategy prespecified in the statistical analysis plan. When available, unbiased or bias-reduced estimates are to be preferred.
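The kind of estimation bias described here can be made concrete with a small Monte Carlo simulation. The sketch below is not the trial example or code referred to in the article; the two-stage design, efficacy boundary and effect size are illustrative assumptions. It estimates the bias of the naive end-of-trial mean-difference estimate in a trial that may stop early for efficacy.

```python
# Minimal sketch: bias of the naive end-of-trial estimator in a two-stage
# design that may stop early for efficacy. All design parameters (sample
# sizes, boundary, true effect) are illustrative assumptions, not values
# from the trial example used in the article.
import numpy as np

rng = np.random.default_rng(1)
true_delta = 0.2          # true treatment effect (standardised mean difference)
n1, n2 = 50, 50           # per-arm sample sizes at stages 1 and 2
z_stop = 2.3              # efficacy boundary for the stage-1 z-statistic
n_sims = 50_000

estimates, stopped_early = [], []
for _ in range(n_sims):
    trt1 = rng.normal(true_delta, 1.0, n1)
    ctl1 = rng.normal(0.0, 1.0, n1)
    diff1 = trt1.mean() - ctl1.mean()
    if diff1 / np.sqrt(2 / n1) >= z_stop:     # stop early: stage-1 estimate only
        estimates.append(diff1)
        stopped_early.append(True)
    else:                                     # continue: pool both stages
        trt2 = rng.normal(true_delta, 1.0, n2)
        ctl2 = rng.normal(0.0, 1.0, n2)
        estimates.append(np.concatenate([trt1, trt2]).mean()
                         - np.concatenate([ctl1, ctl2]).mean())
        stopped_early.append(False)

estimates, stopped_early = np.array(estimates), np.array(stopped_early)
print(f"overall bias of naive estimator: {estimates.mean() - true_delta:+.4f}")
print(f"bias given early stopping:       {estimates[stopped_early].mean() - true_delta:+.4f}")
print(f"bias given continuation:         {estimates[~stopped_early].mean() - true_delta:+.4f}")
```

In this setup the naive estimate tends to overstate the effect when the trial stops early and understate it when the trial continues; the overall bias depends on how likely early stopping is, which is the sort of distortion the article assesses for real adaptive designs.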
A review of clinical trials with an adaptive design and health economic analysis
An adaptive design uses data collected as a clinical trial progresses to inform modifications to the trial. Hence, adaptive designs and health economics both aim to facilitate efficient and accurate decision-making. However, it is unclear whether the two methods are considered together in the design, analysis and reporting of trials. This review aims to establish how health economic outcomes are utilised in the design, analysis and reporting of adaptive designs.

Registered and published trials up to August 2016 with an adaptive design and a health economic analysis were identified. The use of health economics in the design, analysis and reporting of each trial was assessed. Summary statistics are presented, and recommendations are formed based on the research team's experiences and a practical interpretation of the results.

Thirty-seven trials with an adaptive design and a health economic analysis were identified. It was not clear whether the health economic analysis accounted for the adaptive design in 17/37 trials where this was thought necessary, nor whether health economic outcomes were utilised at the interim analysis for 18/19 trials with results. The reporting of health economic results was sub-optimal for the trials with published results (17/19).

Appropriate consideration is rarely given to the health economic analysis of adaptive designs. Opportunities to utilise health economic outcomes in the design and analysis of adaptive trials are being missed. Further work is needed to establish whether adaptive designs and health economic analyses can be used together to increase the efficiency of health technology assessments without compromising accuracy.
Cost-Effectiveness of Robot-Assisted Radical Cystectomy vs Open Radical Cystectomy for Patients With Bladder Cancer
IMPORTANCE: The value to payers of robot-assisted radical cystectomy with intracorporeal urinary diversion (iRARC) compared with open radical cystectomy (ORC) for patients with bladder cancer is unclear. OBJECTIVES: To compare the cost-effectiveness of iRARC with that of ORC. DESIGN, SETTING, AND PARTICIPANTS: This economic evaluation used individual patient data from a randomized clinical trial at 9 surgical centers in the United Kingdom. Patients with nonmetastatic bladder cancer were recruited from March 20, 2017, to January 29, 2020. The analysis used a health service perspective and a 90-day time horizon, with supplementary analyses exploring patient benefits up to 1 year. Deterministic and probabilistic sensitivity analyses were undertaken. Data were analyzed from January 13, 2022, to March 10, 2023. INTERVENTIONS: Patients were randomized to receive either iRARC (n = 169) or ORC (n = 169). MAIN OUTCOMES AND MEASURES: Costs of surgery were calculated using surgery timings and equipment costs, with other hospital costs based on counts of activity. Quality-adjusted life-years (QALYs) were calculated from European Quality of Life 5-Dimension 5-Level instrument responses. Prespecified subgroup analyses were undertaken based on patient characteristics and type of diversion. RESULTS: A total of 305 patients with available outcome data were included in the analysis, with a mean (SD) age of 68.3 (8.1) years; 241 (79.0%) were men. Robot-assisted radical cystectomy was associated with statistically significant reductions in admissions to intensive therapy (6.35% [95% CI, 0.42%-12.28%]) and readmissions to hospital (14.56% [95% CI, 5.00%-24.11%]), but an increase in theater time (31.35 [95% CI, 13.67-49.02] minutes). The additional cost of iRARC per patient was £1124 (95% CI, -£576 to £2824). Robot-assisted radical cystectomy had a much higher probability of being cost-effective for subgroups defined by age, tumor stage, and performance status. CONCLUSIONS AND RELEVANCE: In this economic evaluation of surgery for patients with bladder cancer, iRARC reduced short-term morbidity and some associated costs. While the resulting cost-effectiveness ratio was in excess of thresholds used by many publicly funded health systems, patient subgroups were identified for which iRARC had a high probability of being cost-effective. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT03049410
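To make the health-economic quantities in this abstract concrete, the sketch below works through the core of a trial-based cost-effectiveness analysis: incremental costs, incremental QALYs, the ICER, and a probabilistic sensitivity analysis via bootstrapping. All patient-level values are synthetic placeholders, not data from the iRARC/ORC trial, and the £20,000-per-QALY threshold is simply a commonly quoted reference value.

```python
# Minimal sketch of a trial-based cost-effectiveness analysis with a
# bootstrap probabilistic sensitivity analysis. Per-patient costs and QALYs
# are synthetic placeholders, not data from the iRARC/ORC trial; the
# £20,000-per-QALY threshold is simply a commonly quoted reference value.
import numpy as np

rng = np.random.default_rng(7)
n = 150                                                    # illustrative per-arm size
cost_irarc = rng.gamma(shape=20.0, scale=600.0, size=n)    # ~£12,000 mean
cost_orc   = rng.gamma(shape=20.0, scale=550.0, size=n)    # ~£11,000 mean
qaly_irarc = rng.beta(8, 40, size=n) * 0.25                # 90-day QALYs
qaly_orc   = rng.beta(7, 40, size=n) * 0.25

d_cost = cost_irarc.mean() - cost_orc.mean()
d_qaly = qaly_irarc.mean() - qaly_orc.mean()
print(f"incremental cost:  £{d_cost:,.0f}")
print(f"incremental QALYs: {d_qaly:.4f}")
print(f"ICER:              £{d_cost / d_qaly:,.0f} per QALY gained")

# Probabilistic sensitivity analysis: bootstrap the incremental net monetary
# benefit (NMB) at a £20,000-per-QALY willingness-to-pay threshold.
threshold, n_boot = 20_000, 5_000
nmb = np.empty(n_boot)
for b in range(n_boot):
    i = rng.integers(0, n, n)      # resample patients (with replacement) per arm
    j = rng.integers(0, n, n)
    dc = cost_irarc[i].mean() - cost_orc[j].mean()
    dq = qaly_irarc[i].mean() - qaly_orc[j].mean()
    nmb[b] = threshold * dq - dc
print(f"P(cost-effective at £20,000/QALY): {(nmb > 0).mean():.2f}")
```

The bootstrap resamples patients within each arm, so the within-patient pairing of costs and QALYs is preserved; repeating the final probability across a range of thresholds would trace out a cost-effectiveness acceptability curve, and deterministic sensitivity analyses would vary individual inputs one at a time.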
Point estimation for adaptive trial designs
Recent FDA guidance on adaptive clinical trial designs defines bias as "a systematic tendency for the estimate of treatment effect to deviate from its true value", and states that it is desirable to obtain and report estimates of treatment effects that reduce or remove this bias. In many adaptive designs, the conventional end-of-trial point estimates of the treatment effects are prone to bias, because they do not take into account the potential and realised trial adaptations. While much of the methodological development on adaptive designs has tended to focus on control of type I error rates and power considerations, the question of biased estimation has received less attention. This article addresses this issue by providing a comprehensive overview of proposed approaches to remove or reduce the potential bias in point estimation of treatment effects in an adaptive design, as well as illustrating how to implement them. We first discuss how bias can affect standard estimators and critically assess the negative impact this can have. We then describe and compare proposed unbiased and bias-adjusted estimators of treatment effects for different types of adaptive designs. Furthermore, we illustrate the computation of different estimators in practice using a real trial example. Finally, we propose a set of guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design.
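One generic way to reduce the bias discussed here is a simulation-based (plug-in) correction: estimate the bias of the naive estimator at its own value and subtract it. The sketch below applies a single-step version of this idea to a hypothetical two-stage design with early stopping for efficacy; it illustrates the general approach rather than any of the specific estimators reviewed in the article, and every design parameter is an assumption.

```python
# Minimal sketch of a simulation-based (plug-in) bias adjustment for a
# hypothetical two-stage design with early stopping for efficacy. This
# illustrates the general idea of bias-reduced estimation; it is not one of
# the specific estimators reviewed in the article, and the design parameters
# are assumptions.
import numpy as np

n1, n2, z_stop = 50, 50, 2.3   # per-arm stage sizes and efficacy boundary

def run_trial(delta, rng):
    """Simulate one trial under effect `delta`; return the naive estimate."""
    trt1 = rng.normal(delta, 1.0, n1)
    ctl1 = rng.normal(0.0, 1.0, n1)
    diff1 = trt1.mean() - ctl1.mean()
    if diff1 / np.sqrt(2 / n1) >= z_stop:      # early stop for efficacy
        return diff1
    trt2 = rng.normal(delta, 1.0, n2)
    ctl2 = rng.normal(0.0, 1.0, n2)
    return (np.concatenate([trt1, trt2]).mean()
            - np.concatenate([ctl1, ctl2]).mean())

def bias_adjusted(naive_est, n_sims=50_000, seed=2):
    """Single-step correction: subtract the simulated bias of the naive
    estimator, evaluated at the naive estimate itself."""
    rng = np.random.default_rng(seed)
    sims = np.array([run_trial(naive_est, rng) for _ in range(n_sims)])
    estimated_bias = sims.mean() - naive_est
    return naive_est - estimated_bias

# Example: a trial realisation whose naive estimate was 0.35
naive = 0.35
print(f"naive estimate:         {naive:.3f}")
print(f"bias-adjusted estimate: {bias_adjusted(naive):.3f}")
```

For a realisation with a large naive estimate, the adjustment pulls the estimate back towards smaller values; iterating the correction, re-evaluating the bias at the adjusted value until convergence, is one route to a bias-adjusted maximum likelihood estimate.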
Adaptive designs in clinical trials: why use them, and how to run and report them
Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial’s course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label to refer to certain methods that could be deemed controversial or that have been inadequately implemented.
We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants. Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.
Costs and staffing resource requirements for adaptive clinical trials: quantitative and qualitative results from the Costing Adaptive Trials project.
BACKGROUND: Adaptive designs offer great promise in improving the efficiency and patient benefit of clinical trials. An important barrier to further increased use is a lack of understanding about which additional resources are required to conduct a high-quality adaptive clinical trial, compared to a traditional fixed design. The Costing Adaptive Trials (CAT) project investigated which additional resources may be required to support adaptive trials. METHODS: We conducted a mock costing exercise amongst seven Clinical Trials Units (CTUs) in the UK. Five scenarios were developed, derived from funded clinical trials, where a non-adaptive version and an adaptive version were described. Each scenario represented a different type of adaptive design. CTU staff were asked to provide the costs and staff time they estimated would be needed to support the trial, categorised into specified areas (e.g. statistics, data management, trial management). This was calculated separately for the non-adaptive and adaptive version of the trial, allowing paired comparisons. Interviews with 10 CTU staff who had completed the costing exercise were conducted by qualitative researchers to explore reasons for similarities and differences. RESULTS: Estimated resources associated with conducting an adaptive trial were always (moderately) higher than for the non-adaptive equivalent. The median increase was between 2 and 4% for all scenarios, except for sample size re-estimation, which was 26.5% (as the adaptive design could lead to a lengthened study period). The highest increase was for statistical staff, with lower increases for data management and trial management staff. The percentage increase in resources varied across different CTUs. The interviews identified possible explanations for differences, including (1) experience in adaptive trials, (2) the complexity of the non-adaptive and adaptive designs, and (3) the extent of non-trial-specific core infrastructure funding the CTU had. CONCLUSIONS: This work sheds light on the additional resources required to adequately support a high-quality adaptive trial. The percentage increase in costs for supporting an adaptive trial was generally modest and should not be a barrier to adaptive designs being cost-effective to use in practice. Informed by the results of this research, guidance for investigators and funders will be developed on appropriately resourcing adaptive trials.
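The quantitative side of the costing exercise amounts to paired percentage comparisons. The sketch below uses entirely hypothetical figures (none are CAT project data, and the scenario labels are invented) simply to show the shape of the calculation.

```python
# Hypothetical paired costing figures (total £ per scenario) for one CTU.
# These numbers and scenario labels are invented for illustration only;
# they are not data from the CAT project.
import statistics

non_adaptive = {"scenario A": 410_000, "scenario B": 520_000,
                "scenario C": 700_000, "scenario D": 480_000,
                "scenario E": 560_000}
adaptive     = {"scenario A": 425_000, "scenario B": 535_000,
                "scenario C": 715_000, "scenario D": 607_000,
                "scenario E": 575_000}

# Paired percentage increase for each scenario, then the median across scenarios
increases = {k: 100 * (adaptive[k] - non_adaptive[k]) / non_adaptive[k]
             for k in non_adaptive}
for scenario, pct in increases.items():
    print(f"{scenario}: +{pct:.1f}%")
print(f"median increase: +{statistics.median(increases.values()):.1f}%")
```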