    Handling Covariates in the Design of Clinical Trials

    There has been a split in the statistics community about the need for taking covariates into account in the design phase of a clinical trial. There are many advocates of using stratification and covariate-adaptive randomization to promote balance on certain known covariates. However, balance does not always promote efficiency or ensure that more patients are assigned to the better treatment. We describe these procedures, including model-based procedures, for incorporating covariates into the design of clinical trials, and give examples where balance, efficiency and ethical considerations may be in conflict. We advocate a new class of procedures, covariate-adjusted response-adaptive (CARA) randomization procedures, that attempt to optimize both efficiency and ethical considerations, while maintaining randomization. We review all these procedures, present a few new simulation studies, and conclude with our philosophy. Comment: Published at http://dx.doi.org/10.1214/08-STS269 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
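
    Illustrative note: among the covariate-adaptive procedures contrasted with CARA randomization above, minimization is a classical example. Below is a minimal Python sketch of Pocock-Simon minimization with a biased coin; it is an illustration under stated assumptions (two arms, categorical covariates, illustrative parameter p), not the CARA procedures advocated in the paper.

        import random

        def pocock_simon(assignments, covariates, new_patient, p=0.8):
            """Assign an incoming patient to arm 0 or 1 by Pocock-Simon minimization.

            assignments: arms (0/1) of patients enrolled so far
            covariates:  dicts of categorical covariate levels for those patients
            new_patient: dict of covariate levels for the incoming patient
            p:           probability of choosing the imbalance-minimizing arm
            """
            imbalance = {0: 0, 1: 0}
            for arm in (0, 1):
                # Hypothetical total imbalance if the new patient joined `arm`,
                # summed over the patient's own covariate levels.
                for cov, level in new_patient.items():
                    counts = [0, 0]
                    for a, c in zip(assignments, covariates):
                        if c[cov] == level:
                            counts[a] += 1
                    counts[arm] += 1
                    imbalance[arm] += abs(counts[0] - counts[1])
            if imbalance[0] == imbalance[1]:
                return random.randint(0, 1)           # tie: fair coin
            best = min(imbalance, key=imbalance.get)  # imbalance-minimizing arm
            return best if random.random() < p else 1 - best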

    RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a software package with a user interface, developed in MATLAB, for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
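
    Illustrative note: RARtool's MATLAB interface is not reproduced here. To show the kind of response-adaptive randomization procedure such software simulates, here is a Python sketch of the doubly-adaptive biased coin design of Hu and Zhang, targeting the optimal allocation of Rosenberger et al. for binary outcomes. This is a simplification: the paper addresses censored time-to-event outcomes, and the parameters (gamma, burn-in) are illustrative assumptions.

        import math
        import random

        def dbcd_probability(x, rho, gamma=2.0):
            """Doubly-adaptive biased coin (Hu-Zhang): probability of assigning
            the next patient to arm 1, given the current proportion x on arm 1
            and the currently estimated target allocation rho for arm 1."""
            num = rho * (rho / x) ** gamma
            den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
            return num / den

        def simulate_rar_trial(p0, p1, n=200, burn_in=20):
            """Simulate a two-arm trial with binary successes, targeting the
            allocation sqrt(p1) / (sqrt(p0) + sqrt(p1)) to arm 1 (the optimal
            allocation minimizing expected failures for fixed power)."""
            arms, outcomes = [], []
            for i in range(n):
                if i < burn_in:
                    arm = i % 2                      # equal randomization to start
                else:
                    s, m = [0.5, 0.5], [1.0, 1.0]    # +0.5 smoothing avoids 0/1 rates
                    for a, y in zip(arms, outcomes):
                        s[a] += y
                        m[a] += 1
                    ph0, ph1 = s[0] / m[0], s[1] / m[1]
                    rho = math.sqrt(ph1) / (math.sqrt(ph0) + math.sqrt(ph1))
                    x = arms.count(1) / len(arms)
                    arm = 1 if random.random() < dbcd_probability(x, rho) else 0
                arms.append(arm)
                outcomes.append(1 if random.random() < (p0, p1)[arm] else 0)
            return arms, outcomes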

    Investigating the value of glucodensity analysis of continuous glucose monitoring data in type 1 diabetes: an exploratory analysis

    Introduction: Continuous glucose monitoring (CGM) devices capture longitudinal data on interstitial glucose levels and are increasingly used to characterize the dynamics of diabetes metabolism. Given the complexity of CGM data, it is crucial to extract important patterns hidden in these data through efficient visualization and statistical analysis techniques. Methods: In this paper, we adopted the concept of glucodensity and, using a subset of data from an ongoing clinical trial in pediatric individuals and young adults with new-onset type 1 diabetes, performed a cluster analysis of glucodensities. We assessed the differences among the identified clusters using analysis of variance (ANOVA) with respect to residual pancreatic beta-cell function and some standard CGM-derived parameters such as time in range, time above range, and time below range. Results: Distinct CGM data patterns were identified using cluster analysis based on glucodensities. Statistically significant differences were shown among the clusters with respect to baseline levels of a pancreatic beta-cell function surrogate (C-peptide) and with respect to time in range and time above range. Discussion: Our findings provide supportive evidence for the value of glucodensity in the analysis of CGM data. Challenges in the modeling of CGM data include the unbalanced data structure, missing observations, and many known and unknown confounders, which underscores the importance of, and provides opportunities for, an approach integrating clinical, statistical, and data science expertise in the analysis of these data.
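
    Illustrative note: the abstract does not spell out the clustering pipeline. One common way to cluster glucodensities, sketched below in Python, represents each subject's CGM distribution by its quantile function (for distributions on the real line, the 2-Wasserstein distance equals the L2 distance between quantile functions) and applies k-means to those vectors. The grid and parameter choices are assumptions for illustration, not the paper's exact method.

        import numpy as np
        from sklearn.cluster import KMeans

        QGRID = np.linspace(0.01, 0.99, 99)  # probability grid for quantiles

        def glucodensity_quantiles(glucose):
            """Summarize one subject's CGM readings (e.g., mg/dL) by the
            empirical quantile function evaluated on QGRID. Euclidean distance
            between such vectors approximates the 2-Wasserstein distance
            between the underlying glucose distributions."""
            return np.quantile(np.asarray(glucose, dtype=float), QGRID)

        def cluster_glucodensities(cgm_by_subject, n_clusters=3, seed=0):
            """cgm_by_subject: list of 1-D arrays of glucose readings, one per
            subject. Returns one cluster label per subject."""
            X = np.vstack([glucodensity_quantiles(g) for g in cgm_by_subject])
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
            return km.fit_predict(X)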

    Description of the Method for Evaluating Digital Endpoints in Alzheimer Disease Study: Protocol for an Exploratory, Cross-sectional Study

    BACKGROUND: More sensitive and less burdensome efficacy end points are urgently needed to improve the effectiveness of clinical drug development for Alzheimer disease (AD). Although conventional end points lack sensitivity, digital technologies hold promise for amplifying the detection of treatment signals and capturing cognitive anomalies at earlier disease stages. Using digital technologies and combining several test modalities allow for the collection of richer information about cognitive and functional status, which is not ascertainable via conventional paper-and-pencil tests. OBJECTIVE: This study aimed to assess the psychometric properties, operational feasibility, and patient acceptance of 10 promising technologies that are to be used as efficacy end points to measure cognition in future clinical drug trials. METHODS: The Method for Evaluating Digital Endpoints in Alzheimer Disease study is an exploratory, cross-sectional, noninterventional study that will evaluate the ability of 10 digital technologies to accurately classify participants into 4 cohorts according to the severity of cognitive impairment and dementia. Moreover, this study will assess the psychometric properties of each of the tested digital technologies, including the acceptable range to assess ceiling and floor effects, concurrent validity to correlate digital outcome measures to traditional paper-and-pencil tests in AD, reliability to compare test and retest, and responsiveness to evaluate the sensitivity to change in a mild cognitive challenge model. This study included 50 eligible male and female participants (aged between 60 and 80 years), of whom 13 (26%) were amyloid-negative, cognitively healthy participants (controls); 12 (24%) were amyloid-positive, cognitively healthy participants (presymptomatic); 13 (26%) had mild cognitive impairment (predementia); and 12 (24%) had mild AD (mild dementia). This study involved 4 in-clinic visits. During the initial visit, all participants completed all conventional paper-and-pencil assessments. During the following 3 visits, the participants underwent a series of novel digital assessments. RESULTS: Participant recruitment and data collection began in June 2020 and continued until June 2021; hence, data collection occurred during the COVID-19 (SARS-CoV-2) pandemic. Data were successfully collected from all digital technologies to evaluate statistical and operational performance and patient acceptance. This paper reports the baseline demographics and characteristics of the study population as well as the study's progress during the pandemic. CONCLUSIONS: This study was designed to generate feasibility insights and validation data to help advance novel digital technologies in clinical drug development. The lessons learned from this study will help guide future methods for assessing novel digital technologies and inform clinical drug trials in early AD, aiming to enhance clinical end point strategies with digital technologies. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/35442. Peer reviewed.

    Omecamtiv mecarbil in chronic heart failure with reduced ejection fraction, GALACTIC‐HF: baseline characteristics and comparison with contemporary clinical trials

    Aims: The safety and efficacy of the novel selective cardiac myosin activator omecamtiv mecarbil in patients with heart failure with reduced ejection fraction (HFrEF) are being tested in the Global Approach to Lowering Adverse Cardiac outcomes Through Improving Contractility in Heart Failure (GALACTIC‐HF) trial. Here we describe the baseline characteristics of participants in GALACTIC‐HF and how these compare with those of other contemporary trials. Methods and Results: Adults with established HFrEF, New York Heart Association (NYHA) functional class ≄ II, EF ≀ 35%, elevated natriuretic peptides, and either a current hospitalization for HF or a history of hospitalization or emergency department visit for HF within a year were randomized to either placebo or omecamtiv mecarbil (pharmacokinetic‐guided dosing: 25, 37.5, or 50 mg bid). 8256 patients [male (79%), non‐white (22%), mean age 65 years] were enrolled with a mean EF of 27%, ischemic etiology in 54%, NYHA class II in 53% and III/IV in 47%, and median NT‐proBNP of 1971 pg/mL. HF therapies at baseline were among the most effectively employed in contemporary HF trials. GALACTIC‐HF randomized patients representative of recent HF registries and trials, with substantial numbers of patients also having characteristics understudied in previous trials, including more patients from North America (n = 1386), enrolled as inpatients (n = 2084), with systolic blood pressure < 100 mmHg (n = 1127), with estimated glomerular filtration rate < 30 mL/min/1.73 m2 (n = 528), and treated with sacubitril‐valsartan at baseline (n = 1594). Conclusions: GALACTIC‐HF enrolled a well‐treated, high‐risk population from both inpatient and outpatient settings, which will provide a definitive evaluation of the efficacy and safety of this novel therapy, as well as inform its potential future implementation.

    Randomization in Clinical Trials: Can We Eliminate Bias?

    Randomization plays a fundamental role in clinical trials. While many modern clinical trials employ restricted, stratified or covariate-adaptive randomization designs that pursue balance in treatment assignments and balance across important covariates, some clinical trials call for response-adaptive or covariate-adjusted response-adaptive (CARA) randomization designs to address multiple experimental objectives primarily related to statistical efficiency and ethical considerations. In this paper, we elicit key principles of the well-conducted randomized clinical trial and explore the role of randomization and other important design tools in achieving valid and credible results. We give special attention to response-adaptive and CARA randomization designs, which have a firm theoretical basis, but are more complex and more vulnerable to operational biases than traditional randomization designs. We conclude that modern advances in information technology, rigorous planning, and adherence to the key principles of the well-conducted clinical trial should enable successful implementation of response-adaptive and CARA randomization designs in the near future.

    Novel Statistical Designs for Phase I/II and Phase II Clinical Trials With Dose-Finding Objectives

    In modern drug development, there has been an increasing interest in adaptive clinical trials: research designs that allow judicious modification of certain aspects of an ongoing clinical trial based on prespecified criteria, according to accumulating data, to achieve predetermined experimental objectives. A particularly important application of adaptive designs is in the phase I and II stages of drug development. Many novel adaptive designs have been proposed in the context of phase I oncology trials of cytotoxic agents, where acceptable toxicity frequently translates into therapeutic response. However, an assessment of efficacy measurements based on biomarkers in early development is also very important. The current paper gives an overview of adaptive designs for early development studies that utilize efficacy measurements in design adaptation rules. These include seamless phase I/II designs, where efficacy and safety considerations are both incorporated in dose-finding objectives, and phase II dose-response studies, which typically aim at establishing a dose-response relationship with respect to some efficacy outcome and at identifying the most promising doses to be tested in subsequent confirmatory trials. The authors discuss statistical, logistical, and regulatory aspects of these designs and provide perspectives on their applications in modern clinical trials.
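
    Illustrative note: among the model-based dose-finding designs such overviews cover, the continual reassessment method (CRM) is the canonical phase I example; a minimal grid-based Python sketch follows. The skeleton, prior variance (1.34, a common default), and target toxicity rate are illustrative assumptions, not the authors' specific designs.

        import numpy as np

        def crm_select_dose(doses_tried, tox_outcomes, skeleton, target=0.25):
            """One-parameter CRM (power model): p_i = skeleton_i ** exp(a),
            with prior a ~ N(0, 1.34). The posterior is computed on a uniform
            grid; the recommended dose is the one whose posterior mean
            toxicity probability is closest to the target."""
            a = np.linspace(-4.0, 4.0, 801)        # grid over the model parameter
            prior = np.exp(-a ** 2 / (2 * 1.34))   # unnormalized normal density
            like = np.ones_like(a)
            for d, y in zip(doses_tried, tox_outcomes):  # d: dose index, y: 1 if DLT
                p = skeleton[d] ** np.exp(a)
                like = like * (p if y else 1.0 - p)
            post = prior * like
            post = post / post.sum()               # discrete posterior weights
            ptox = np.array([(s ** np.exp(a) * post).sum() for s in skeleton])
            return int(np.argmin(np.abs(ptox - target)))

        # e.g., with skeleton [0.05, 0.12, 0.25, 0.40] and one DLT seen at dose 2:
        # crm_select_dose([0, 1, 2], [0, 0, 1], [0.05, 0.12, 0.25, 0.40])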

    Exact Bayesian Inference Comparing Binomial Proportions, With Application to Proof-of-Concept Clinical Trials.

    The authors revisit the problem of exact Bayesian inference comparing two independent binomial proportions. Numerical integration in R is used to compute exact posterior distribution functions, probability densities, and quantiles of the risk difference, relative risk, and odds ratio. An application of the methodology is given in the context of randomized comparative proof-of-concept clinical trials that are driven by evaluation of quantitative criteria combining statistical significance and clinical relevance. A two-stage adaptive design based on predictive probability of success is proposed and its operating characteristics are studied via Monte Carlo simulation. The authors conclude that exact Bayesian methods provide an elegant and efficient way to facilitate design and analysis of proof-of-concept studies.
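
    Illustrative note: the paper implements the computations in R; the same exact-posterior calculation can be sketched in Python with scipy. Under independent Beta(a, b) priors, the posterior probability that the risk difference exceeds a clinical-relevance threshold delta reduces to a one-dimensional integral. Function name and defaults are illustrative.

        from scipy import stats
        from scipy.integrate import quad

        def prob_risk_diff_exceeds(x1, n1, x2, n2, delta=0.0, a=1.0, b=1.0):
            """Exact posterior probability P(p1 - p2 > delta | data) for two
            independent binomial samples (x successes out of n) under
            independent Beta(a, b) priors: integrate over p2 the posterior
            density of p2 times the posterior upper tail of p1."""
            post1 = stats.beta(a + x1, b + n1 - x1)
            post2 = stats.beta(a + x2, b + n2 - x2)
            integrand = lambda p2: post2.pdf(p2) * post1.sf(p2 + delta)
            value, _ = quad(integrand, 0.0, 1.0)
            return value

        # e.g., 14/20 vs. 8/20 responders, requiring a risk difference > 0.1:
        # prob_risk_diff_exceeds(14, 20, 8, 20, delta=0.1)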

    Balancing the Objectives of Statistical Efficiency and Allocation Randomness in Randomized Controlled Trials

    Various restricted randomization procedures are available to achieve equal (1:1) allocation in a randomized clinical trial. However, for some procedures, there is a nonnegligible probability of imbalance in the final numbers, which may result in an underpowered study. It is important to assess such probability at the study planning stage and make adjustments in the design if needed. In this paper, we perform a quantitative assessment of the tradeoff between randomness, balance, and power of restricted randomization designs targeting equal allocation. First, we study the small-sample performance of biased coin designs with known asymptotic properties and identify a design with an excellent balance–randomness tradeoff. Second, we investigate the issue of randomization-induced treatment imbalance and the corresponding risk of an underpowered study. We propose two risk mitigation strategies: increasing the total sample size or fine-tuning the biased coin parameter to obtain the least restrictive randomization procedure that attains the target power with a high, user-defined probability for the given sample size. Additionally, we investigate an approach for finding the most balanced design that satisfies a constraint on the chosen measure of randomness. Our proposed methodology is simple and yet generalizable to more complex settings, such as trials with stratified randomization and multi-arm trials with possibly unequal randomization ratios.
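
    Illustrative note: the abstract does not name the specific biased coin designs studied; Efron's biased coin design is the canonical member of the family. The Monte Carlo sketch below (Python, with illustrative n, p, and imbalance threshold) shows how the probability of a large terminal imbalance, the quantity behind the risk assessment described above, can be estimated.

        import random

        def efron_bcd_imbalance(n, p=2/3):
            """Efron's biased coin design: on a tie, toss a fair coin;
            otherwise assign the under-represented arm with probability p.
            Returns the absolute terminal imbalance |N1 - N0| after n patients."""
            n1 = 0
            for i in range(n):
                d = 2 * n1 - i            # current imbalance N1 - N0
                if d == 0:
                    prob1 = 0.5           # tie: fair coin
                elif d < 0:
                    prob1 = p             # arm 1 under-represented
                else:
                    prob1 = 1 - p         # arm 1 over-represented
                n1 += random.random() < prob1
            return abs(2 * n1 - n)

        # Monte Carlo estimate of P(|N1 - N0| > 6) in a 50-patient trial:
        draws = [efron_bcd_imbalance(50) for _ in range(10_000)]
        print(sum(im > 6 for im in draws) / len(draws))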