
    Generalized estimating equations to estimate the ordered stereotype logit model for panel data

    The ordered stereotype logit model is a flexible regression model for ordinal response variables: it models the effects of the predictor variables as the product of regression parameters that are invariant over categories and category-specific scalar effects. In this article, we propose a generalized estimating equations (GEE) approach to estimate the ordered stereotype logit model for panel data based on working covariance matrices, which are not required to be correctly specified. A simulation study compares the performance of GEE estimators based on various working correlation matrices and on working covariance matrices using local odds ratios. Estimation of the model is illustrated using a real-world dataset. The simulation results suggest that GEE estimation of this model is feasible in medium-sized and large samples, and that estimators based on local odds ratios, as realized in this study, tend to be less efficient than estimators based on a working correlation matrix. For low true correlations, the efficiency gains seem to be rather small, and if the working covariance structure is too flexible, the corresponding estimator may even be less efficient than the GEE estimator assuming independence. As for GEE estimators more generally, if the true correlations over time are high, a working covariance structure close to the true structure can lead to considerable efficiency gains compared with assuming independence.
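    As a minimal sketch of the model this abstract describes (not the authors' estimation code), the category probabilities of the ordered stereotype logit model can be computed directly; the parameter values below are hypothetical:

```python
import numpy as np

def stereotype_probs(x, alpha, phi, beta):
    """Category probabilities under the ordered stereotype logit model.

    log[P(Y=k|x) / P(Y=1|x)] = alpha[k] + phi[k] * (x @ beta),
    with the usual identification alpha[0] = phi[0] = 0 and, for an
    *ordered* model, monotone scores 0 = phi[0] <= ... <= phi[K-1] = 1.
    """
    eta = np.asarray(alpha) + np.asarray(phi) * (np.asarray(x) @ np.asarray(beta))
    e = np.exp(eta - eta.max())          # numerically stabilised softmax
    return e / e.sum()

# Hypothetical parameters: K = 4 categories, p = 2 covariates.
alpha = [0.0, 0.5, 0.2, -0.4]
phi   = [0.0, 0.3, 0.7, 1.0]            # monotone category scores
beta  = [1.2, -0.8]                     # invariant over categories
p = stereotype_probs([0.5, 1.0], alpha, phi, beta)
print(p.round(3))
```

    The GEE approach then stacks score-type equations for these probabilities across the panel waves, linked by the chosen working covariance structure.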

    Investigation of one-stage meta-analysis methods for joint longitudinal and time-to-event data through simulation and real data application

    Background: Joint modeling of longitudinal and time-to-event data is often advantageous over separate longitudinal or time-to-event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The current literature on joint modeling focuses mainly on the analysis of single studies, with few methods available for the meta-analysis of joint data from multiple studies. Methods: We investigate a variety of one-stage methods for the meta-analysis of joint longitudinal and time-to-event outcome data. These methods are applied to the INDANA dataset to investigate longitudinally measured systolic blood pressure together with each of time to death, time to myocardial infarction, and time to stroke. Results are compared to separate longitudinal or time-to-event meta-analyses. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results: The performance of the examined one-stage joint meta-analytic models varied. Models that accounted for between-study heterogeneity performed better than models that ignored it. Of the examined methods for accounting for between-study heterogeneity, under the examined association structure, fixed-effect approaches appeared preferable, whereas methods involving a baseline hazard stratified by study were the least time intensive. Conclusions: One-stage joint meta-analytic models that accounted for between-study heterogeneity using a mix of fixed effects or a stratified baseline hazard were reliable; however, the examined models that included study-level random effects in the association structure were less reliable.

    Semiparametric Multivariate Accelerated Failure Time Model with Generalized Estimating Equations

    The semiparametric accelerated failure time model is not as widely used as the Cox relative risk model, mainly due to computational difficulties. Recent developments in least-squares estimation and induced-smoothing estimating equations provide promising tools to make accelerated failure time models more attractive in practice. For semiparametric multivariate accelerated failure time models, we propose a generalized estimating equations approach to account for the multivariate dependence through working correlation structures. The marginal error distributions can be either identical, as in sequential event settings, or different, as in parallel event settings. Some regression coefficients can be shared across margins as needed. The initial estimator is a rank-based estimator with Gehan's weight, obtained from an induced-smoothing approach for computational ease. The resulting estimator is consistent and asymptotically normal, with its variance estimated through a multiplier resampling method. In a simulation study, our estimator was up to three times as efficient as the initial estimator, especially under stronger multivariate dependence and higher censoring percentages. Two real examples demonstrate the utility of the proposed method.
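    A rough sketch of the Gehan-weighted rank objective underlying the initial estimator: minimising this convex loss solves the Gehan estimating equation for a univariate AFT model. The simulated data, the uncensored setup, and the optimiser choice here are illustrative, not the paper's induced-smoothing implementation:

```python
import numpy as np
from scipy.optimize import minimize

def gehan_loss(beta, logT, delta, X):
    """Gehan rank loss: (1/n) * sum_{i,j} delta_i * max(0, e_j - e_i),
    where e_i(beta) = log T_i - x_i' beta and delta_i flags observed events."""
    e = logT - X @ beta
    diff = e[None, :] - e[:, None]       # diff[i, j] = e_j(beta) - e_i(beta)
    return (delta[:, None] * np.clip(diff, 0.0, None)).sum() / len(e)

rng = np.random.default_rng(42)
n, beta_true = 300, 1.0
X = rng.standard_normal((n, 1))
logT = X[:, 0] * beta_true + rng.gumbel(size=n)   # AFT on the log-time scale
delta = np.ones(n)                                # no censoring in this sketch

fit = minimize(gehan_loss, x0=np.zeros(1), args=(logT, delta, X),
               method="Nelder-Mead")
print(f"Gehan rank estimate: {fit.x[0]:.2f}")
```

    The loss is piecewise linear and convex, which is why a derivative-free method suffices here; the induced-smoothing version replaces the implicit indicator with a smooth surrogate so that standard Newton-type solvers apply.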

    Crude incidence in two-phase designs in the presence of competing risks

    Background: In many studies, some information might not be available for the whole cohort; some covariates, or even the outcome, might be ascertained only in selected subsamples. Such studies are part of a broad category termed two-phase studies; common examples include the nested case-control and case-cohort designs. For two-phase studies, appropriate weighted survival estimates have been derived; however, no estimator of cumulative incidence accounting for competing events has been proposed. This is relevant in the presence of multiple types of events, where estimation of event-type-specific quantities is needed for evaluating outcome. Methods: We develop a nonparametric estimator of the cumulative incidence function of events accounting for possible competing events. It handles a general sampling design through weights derived from the sampling probabilities. The variance is derived from the influence function of the subdistribution hazard. Results: The proposed method shows good performance in simulations. It is applied to estimate the crude incidence of relapse in childhood acute lymphoblastic leukemia in groups defined by a genotype not available for everyone, in a cohort of nearly 2000 patients, where death due to toxicity acted as a competing event. In a second example, the aim was to estimate engagement in care in a cohort of HIV patients in a resource-limited setting, where for some patients the outcome itself was missing due to loss to follow-up. A sampling-based approach was used to ascertain the outcome in a subsample of lost patients and to obtain a valid estimate of connection to care. Conclusions: A valid estimator for the cumulative incidence of events accounting for competing risks under a general sampling design from an infinite target population is derived.
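    The weighted estimator the abstract outlines can be sketched as a weighted Aalen-Johansen-type computation. This toy implementation (the function name and interface are my own, and the influence-function variance is omitted) shows the mechanics; with equal weights and no censoring it reduces to the observed event-type proportion:

```python
import numpy as np

def weighted_cif(time, cause, weight, target=1):
    """Weighted nonparametric cumulative incidence for the `target` cause.

    cause: 0 = censored, 1, 2, ... = competing event types.
    weight: inverse sampling-probability weight per subject (two-phase design).
    Returns event times and CIF values as a right-continuous step function.
    """
    order = np.argsort(time)
    t, c, w = time[order], cause[order], weight[order]
    n = len(t)
    surv, cif = 1.0, 0.0          # weighted all-cause KM S(t-), running CIF
    times, values = [], []
    i = 0
    while i < n:
        j = i
        while j < n and t[j] == t[i]:
            j += 1                                 # group tied times
        at_risk = w[i:].sum()                      # weighted risk set
        d_target = w[i:j][c[i:j] == target].sum()  # weighted target events
        d_any = w[i:j][c[i:j] > 0].sum()           # weighted events, any cause
        cif += surv * d_target / at_risk
        surv *= 1.0 - d_any / at_risk
        times.append(t[i]); values.append(cif)
        i = j
    return np.array(times), np.array(values)

# Full-cohort sanity check: equal weights, no censoring.
time = np.array([1.0, 2.0, 3.0, 4.0])
cause = np.array([1, 2, 1, 1])
times, cif = weighted_cif(time, cause, np.ones(4), target=1)
print(times, cif)                 # final CIF equals 3/4, the cause-1 share
```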

    Designs for clinical trials with time-to-event outcomes based on stopping guidelines for lack of benefit

    Background: The pace of novel medical treatments and approaches to therapy has accelerated in recent years. Unfortunately, many potential therapeutic advances do not fulfil their promise when subjected to randomized controlled trials. It is therefore highly desirable to speed up the process of evaluating new treatment options, particularly in phase II and phase III trials. To help realize such an aim, in 2003, Royston and colleagues proposed a class of multi-arm, two-stage trial designs intended to eliminate poorly performing contenders at a first stage (point in time). Only treatments showing a predefined degree of advantage against a control treatment were allowed through to a second stage. Arms that survived the first-stage comparison on an intermediate outcome measure entered a second stage of patient accrual, culminating in comparisons against control on the definitive outcome measure. The intermediate outcome is typically on the causal pathway to the definitive outcome (i.e. the features that cause an intermediate event also tend to cause a definitive event), an example in cancer being progression-free and overall survival. Although the 2003 paper alluded to multi-arm trials, most of the essential design features concerned only two-arm trials. Here, we extend the two-arm designs to allow an arbitrary number of stages, thereby increasing flexibility by building in several 'looks' at the accumulating data. Such trials can terminate at any of the intermediate stages or at the final stage. Methods: We describe the trial design and the mathematics required to obtain the timing of the 'looks' and the overall significance level and power of the design. We support our results by extensive simulation studies. As an example, we discuss the design of the STAMPEDE trial in prostate cancer. Results: The mathematical results on significance level and power are confirmed by the computer simulations. Our approach compares favourably with methodology based on beta spending functions and on monitoring only a primary outcome measure for lack of benefit of the new treatment. Conclusions: The new designs are practical and are supported by theory. They hold considerable promise for speeding up the evaluation of new treatments in phase II and III trials.
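    To make the overall-significance-level calculation concrete, here is a toy Monte Carlo of a two-stage lack-of-benefit design. The thresholds and information fraction are illustrative (not STAMPEDE's actual design parameters); the sketch relies on the standard group-sequential result that, under the null, stage-wise z statistics are bivariate normal with correlation equal to the square root of the information fraction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000
info_frac = 0.5          # fraction of final information at the interim look
c1, c2 = 0.5, 1.96       # lack-of-benefit hurdle; final-stage critical value

# Under H0: z1 standard normal, corr(z1, z2) = sqrt(info_frac).
z1 = rng.standard_normal(n_sim)
z2 = np.sqrt(info_frac) * z1 + np.sqrt(1 - info_frac) * rng.standard_normal(n_sim)

passed = z1 > c1                 # arm shows enough benefit to continue
alpha_overall = (passed & (z2 > c2)).mean()
alpha_fixed = (z2 > c2).mean()   # single-stage design for comparison
print(f"continue past interim: {passed.mean():.3f}  "
      f"overall alpha: {alpha_overall:.4f}  fixed-design alpha: {alpha_fixed:.4f}")
```

    Because an arm can only be rejected for efficacy if it first clears the interim hurdle, the overall type I error is at most that of the corresponding single-stage design; the paper's mathematics computes these quantities exactly rather than by simulation.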

    Interim analyses of data as they accumulate in laboratory experimentation

    BACKGROUND: Techniques for interim analysis, the statistical analysis of results while they are still accumulating, are highly developed in the setting of clinical trials. But in the setting of laboratory experiments, such analyses are usually conducted secretly and with no provision for the necessary adjustment of the Type I error rate. DISCUSSION: Laboratory researchers, from ignorance or by design, often analyse their results before the final number of experimental units (humans, animals, tissues or cells) has been reached. If this is done in an uncontrolled fashion, the pejorative term 'peeking' has been applied. A statistical penalty must be exacted, because if enough interim analyses are conducted, and if the outcome of the trial is on the borderline between 'significant' and 'not significant', ultimately one of the analyses will result in the magical P = 0.05. I suggest that Armitage's technique of matched-pairs sequential analysis should be considered. The conditions for using this technique are ideal: almost unlimited opportunity for matched pairing, and a short time between commencement of a study and its completion. Both the Type I and Type II error rates are controlled, and the maximum number of pairs necessary to achieve an outcome, whether P ≤ 0.05 or P > 0.05, can be estimated in advance. SUMMARY: Laboratory investigators, if they are to be honest, must adjust the critical value of P if they analyse their data repeatedly. I suggest they should consider employing matched-pairs sequential analysis in designing their experiments.
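    The 'peeking' penalty is easy to demonstrate by simulation. This sketch (a z-test with known unit variance, chosen for simplicity; the sample sizes and number of looks are arbitrary) repeatedly tests accumulating null data and shows the Type I error rate climbing well above the nominal 0.05:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_max, look_every = 4000, 100, 10
false_pos = 0
for _ in range(n_sim):
    x = rng.standard_normal(n_max)             # H0 true: mean 0, known sd 1
    for n in range(look_every, n_max + 1, look_every):
        z = x[:n].mean() * math.sqrt(n)
        p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
        if p < 0.05:                           # a 'peek' rejects: stop early
            false_pos += 1
            break
rate = false_pos / n_sim
print(f"Type I error with {n_max // look_every} unadjusted looks: {rate:.3f}")
```

    With ten unadjusted looks the realized error rate is several times the nominal level, which is exactly the inflation that sequential designs such as Armitage's matched-pairs procedure are built to control.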

    Adaptive design methods in clinical trials – a review

    In recent years, the use of adaptive design methods in clinical research and development, based on accrued data, has become very popular due to their flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to trial and/or statistical procedures of ongoing clinical trials. However, it is a concern that the actual patient population after the adaptations could deviate from the originally targeted patient population, and consequently the overall Type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of trial and/or statistical procedures of ongoing trials may result in a totally different trial that is unable to address the scientific/medical questions the trial intended to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. The impacts of ad hoc adaptations (protocol amendments), the challenges in by-design (prospective) adaptations, and the obstacles of retrospective adaptations are described. Strategies for the use of adaptive designs in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade, intended for multiple myeloma and non-Hodgkin's lymphoma, are given. Practical issues commonly encountered when implementing adaptive design methods in clinical trials are also discussed.

    Efficacy of the mRNA-1273 SARS-CoV-2 vaccine at completion of blinded phase

    BACKGROUND At interim analysis in a phase 3, observer-blinded, placebo-controlled clinical trial, the mRNA-1273 vaccine showed 94.1% efficacy in preventing coronavirus disease 2019 (Covid-19). After emergency use of the vaccine was authorized, the protocol was amended to include an open-label phase. Final analyses of efficacy and safety data from the blinded phase of the trial are reported. METHODS We enrolled volunteers who were at high risk for Covid-19 or its complications; participants were randomly assigned in a 1:1 ratio to receive two intramuscular injections of mRNA-1273 (100 μg) or placebo, 28 days apart, at 99 centers across the United States. The primary end point was prevention of Covid-19 illness with onset at least 14 days after the second injection in participants who had not previously been infected with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The data cutoff date was March 26, 2021. RESULTS The trial enrolled 30,415 participants; 15,209 were assigned to receive the mRNA-1273 vaccine, and 15,206 to receive placebo. More than 96% of participants received both injections, 2.3% had evidence of SARS-CoV-2 infection at baseline, and the median follow-up was 5.3 months in the blinded phase. Vaccine efficacy in preventing Covid-19 illness was 93.2% (95% confidence interval [CI], 91.0 to 94.8), with 55 confirmed cases in the mRNA-1273 group (9.6 per 1000 person-years; 95% CI, 7.2 to 12.5) and 744 in the placebo group (136.6 per 1000 person-years; 95% CI, 127.0 to 146.8). The efficacy in preventing severe disease was 98.2% (95% CI, 92.8 to 99.6), with 2 cases in the mRNA-1273 group and 106 in the placebo group, and the efficacy in preventing asymptomatic infection starting 14 days after the second injection was 63.0% (95% CI, 56.6 to 68.5), with 214 cases in the mRNA-1273 group and 498 in the placebo group. Vaccine efficacy was consistent across ethnic and racial groups, age groups, and participants with coexisting conditions. 
No safety concerns were identified. CONCLUSIONS The mRNA-1273 vaccine continued to be efficacious in preventing Covid-19 illness and severe disease at more than 5 months, with an acceptable safety profile, and protection against asymptomatic infection was observed.
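    As a quick plausibility check (not the trial's stratified Cox analysis), the headline efficacy figures can be approximated from the reported numbers alone:

```python
# Incidence rates per 1000 person-years reported for the blinded phase
rate_vaccine, rate_placebo = 9.6, 136.6
ve_overall = 1 - rate_vaccine / rate_placebo      # VE = 1 - incidence rate ratio

# Severe disease: crude case ratio (follow-up was similar in both arms)
ve_severe = 1 - 2 / 106

print(f"VE (Covid-19 illness) ~ {ve_overall:.1%}; VE (severe) ~ {ve_severe:.1%}")
```

    These crude ratios land within a few tenths of a percentage point of the model-based estimates reported above (93.2% and 98.2%), which come from the trial's time-to-event analysis rather than simple rate ratios.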