
    Analysis of multicenter clinical trials with very low event rates

    INTRODUCTION: In a five-arm randomized clinical trial (RCT) with stratified randomization across 54 sites, we encountered low primary outcome event proportions, resulting in multiple sites with zero events either overall or in one or more study arms. In this paper, we systematically evaluated different statistical methods of accounting for center in settings with low outcome event proportions.
    METHODS: We conducted a simulation study and a reanalysis of a completed RCT to compare five popular methods of estimating an odds ratio for multicenter trials with stratified randomization by center: (i) no center adjustment, (ii) random intercept model, (iii) Mantel-Haenszel model, (iv) generalized estimating equation (GEE) with an exchangeable correlation structure, and (v) GEE with small sample correction (GEE-small sample correction). We varied the total number of participants (200, 500, 1000, 5000), number of centers (5, 50, 100), control group outcome percentage (2%, 5%, 10%), true odds ratio (1, > 1), intra-class correlation coefficient (ICC) (0.025, 0.075), and distribution of participants across the centers (balanced, skewed).
    RESULTS: Mantel-Haenszel methods generally performed poorly in terms of power and bias and led to the exclusion of participants from the analysis because some centers had no events. Failure to account for center in the analysis generally led to lower power and type I error rates than the other methods, particularly with ICC = 0.075. GEE had an inflated type I error rate except in some settings with a large number of centers. GEE-small sample correction maintained the type I error rate at the nominal level but suffered from reduced power and convergence issues in some settings when the number of centers was small. Random intercept models generally performed well in most scenarios, except with a low event rate (i.e., the 2% scenario) and a small total sample size (n ≤ 500), when all methods had issues.
    DISCUSSION: Random intercept models generally performed best across most scenarios. GEE-small sample correction performed well when the number of centers was large. We do not recommend the use of Mantel-Haenszel, GEE, or models that do not account for center. When the expected event rate is low, we suggest that the statistical analysis plan specify an alternative method in the case of non-convergence of the primary method.
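For readers who want to see the kind of comparison described above in code, the sketch below simulates a simplified two-arm, stratified multicenter trial with a low control event rate and computes three of the five estimators: no center adjustment, GEE with an exchangeable working correlation, and the Mantel-Haenszel pooled odds ratio. It is a minimal illustration, not the trial's analysis code; the design values (50 centers, 20 participants per center, a roughly 5% control event rate, a true odds ratio of 1.5) and the use of Python's statsmodels are assumptions made here for demonstration. A random intercept (mixed-effects logistic) model would typically be fitted with dedicated GLMM software and is omitted.

```python
# Minimal sketch (not the trial's actual analysis code): simulate a two-arm,
# multicenter trial with a low event rate and compare three of the estimators
# discussed above -- unadjusted logistic regression, GEE with an exchangeable
# working correlation, and the Mantel-Haenszel pooled odds ratio.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.contingency_tables import StratifiedTable

rng = np.random.default_rng(42)
n_centers, n_per_center = 50, 20          # assumed design, for illustration only
true_log_or, center_sd = np.log(1.5), 0.3 # assumed treatment effect and center-effect SD

rows = []
for c in range(n_centers):
    center_effect = rng.normal(0.0, center_sd)
    for _ in range(n_per_center):
        arm = int(rng.integers(0, 2))                  # 1:1 randomization within center
        lin = np.log(0.05 / 0.95) + true_log_or * arm + center_effect
        y = rng.random() < 1.0 / (1.0 + np.exp(-lin))  # ~5% control event probability
        rows.append((c, arm, int(y)))
df = pd.DataFrame(rows, columns=["center", "arm", "event"])

# (i) no center adjustment: ordinary logistic regression
X = sm.add_constant(df["arm"])
unadj = sm.Logit(df["event"], X).fit(disp=False)

# (iv) GEE with exchangeable working correlation, clustering on center
gee = sm.GEE(df["event"], X, groups=df["center"],
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()

# (iii) Mantel-Haenszel pooled OR over the 2x2 tables by center
# (centers with zero events contribute little or nothing, as noted above)
tables = [pd.crosstab(g["arm"], g["event"])
            .reindex(index=[0, 1], columns=[0, 1], fill_value=0)
            .to_numpy()
          for _, g in df.groupby("center")]
mh = StratifiedTable(tables)

print("unadjusted OR:", np.exp(unadj.params["arm"]))
print("GEE OR:       ", np.exp(gee.params["arm"]))
print("MH pooled OR: ", mh.oddsratio_pooled)
```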

    Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods

    We gratefully acknowledge Rachel Schaperow, MedStar Health Research Institute, for editing the manuscript. Disclaimer: The opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the Indian Health Service.
    Kidney and cardiovascular disease are widespread among populations with a high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989–1991), 2 (1993–1995), and 3 (1998–1999) was used to examine the performance of five methods of handling missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
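As a rough illustration of three of the approaches compared above (listwise deletion, mean of serial measures, and multiple imputation), the sketch below generates hypothetical longitudinal eGFR-like data in which the third-exam value goes missing more often when the observed second-exam value is low. The column names, the missingness mechanism, and the use of scikit-learn's IterativeImputer are assumptions for demonstration only; this is not the Strong Heart Study analysis, and the adjacent-value and pattern-mixture methods are not shown.

```python
# Illustrative sketch only (hypothetical column names, not SHS code): compare
# listwise deletion, mean-of-serial-measures, and multiple imputation for a
# longitudinal renal-function variable measured at three exams.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 500
egfr1 = rng.normal(90, 15, n)
egfr2 = egfr1 - rng.normal(3, 5, n)
egfr3 = egfr2 - rng.normal(3, 5, n)
df = pd.DataFrame({"egfr_exam1": egfr1, "egfr_exam2": egfr2, "egfr_exam3": egfr3})

# Make exam-3 values missing more often when the observed exam-2 eGFR is low
p_miss = 1.0 / (1.0 + np.exp((df["egfr_exam2"] - 70) / 5))
df.loc[rng.random(n) < p_miss, "egfr_exam3"] = np.nan

# 1) Listwise deletion: analyse complete cases only
listwise_mean = df.dropna()["egfr_exam3"].mean()

# 2) Mean of serial measures: fill a participant's missing exam with the mean
#    of that participant's observed exams
serial_fill = df["egfr_exam3"].fillna(df[["egfr_exam1", "egfr_exam2"]].mean(axis=1))
serial_mean = serial_fill.mean()

# 3) Multiple imputation: draw M completed data sets and average the estimates
M = 20
mi_means = []
for m in range(M):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(df)
    mi_means.append(completed[:, 2].mean())
mi_mean = float(np.mean(mi_means))

print(f"listwise: {listwise_mean:.1f}  serial-mean: {serial_mean:.1f}  MI: {mi_mean:.1f}")
print(f"true (pre-deletion) mean: {egfr3.mean():.1f}")
```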

    Multi-messenger observations of a binary neutron star merger

    On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg^2 at a luminosity distance of 40^{+8}_{-8} Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two-Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.

    Dark Energy Survey Year 1 results: Weak lensing mass calibration of redMaPPer galaxy clusters

    We constrain the mass-richness scaling relation of redMaPPer galaxy clusters identified in the Dark Energy Survey Year 1 data using weak gravitational lensing. We split clusters into 4 × 3 bins of richness λ and redshift z for λ ≥ 20 and 0.2 ≤ z ≤ 0.65 and measure the mean masses of these bins using their stacked weak lensing signal. By modelling the scaling relation as ⟨M_200m | λ, z⟩ = M_0 (λ/40)^F ((1+z)/1.35)^G, we constrain the normalization of the scaling relation at the 5.0 per cent level, finding M_0 = [3.081 ± 0.075 (stat) ± 0.133 (sys)] × 10^14 M⊙ at λ = 40 and z = 0.35. The recovered richness scaling index is F = 1.356 ± 0.051 (stat) ± 0.008 (sys) and the redshift scaling index is G = -0.30 ± 0.30 (stat) ± 0.06 (sys). These are the tightest measurements of the normalization and richness scaling index made to date from a weak lensing experiment. We use a semi-analytic covariance matrix to characterize the statistical errors in the recovered weak lensing profiles. Our analysis accounts for the following sources of systematic error: shear and photometric redshift errors, cluster miscentring, cluster member dilution of the source sample, systematic uncertainties in the modelling of the halo-mass correlation function, halo triaxiality, and projection effects. We discuss prospects for reducing our systematic error budget, which dominates the uncertainty on M_0. Our result is in excellent agreement with, but has significantly smaller uncertainties than, previous measurements in the literature, and augurs well for the power of the DES cluster survey as a tool for precision cosmology and upcoming galaxy surveys such as LSST, Euclid, and WFIRST.
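The quoted best-fit relation can be evaluated directly. The sketch below simply plugs the central values reported above into ⟨M_200m | λ, z⟩ = M_0 (λ/40)^F ((1+z)/1.35)^G; it does not propagate the stated statistical or systematic uncertainties, and the example richness and redshift are arbitrary.

```python
# Evaluate the best-fit redMaPPer mass-richness relation quoted above
# (central values only; no propagation of the stated uncertainties).
M0 = 3.081e14   # M_sun, normalization at lambda = 40, z = 0.35
F = 1.356       # richness scaling index
G = -0.30       # redshift scaling index

def mean_mass(richness: float, z: float) -> float:
    """Mean M_200m (in solar masses) for a cluster bin of given richness and redshift."""
    return M0 * (richness / 40.0) ** F * ((1.0 + z) / 1.35) ** G

# Example: a richness-80 cluster bin at z = 0.5 (illustrative inputs)
print(f"{mean_mass(80, 0.5):.2e} M_sun")
```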

    Sensitivity analysis after multiple imputation under missing at random: a weighting approach.

    Multiple imputation (MI) is now well established as a flexible, general method for the analysis of data sets with missing values. Most implementations assume the missing data are 'missing at random' (MAR), that is, given the observed data, the reason for the missing data does not depend on the unseen data. However, although this is a helpful and simplifying working assumption, it is unlikely to be true in practice. Assessing the sensitivity of the analysis to the MAR assumption is therefore important. However, there is very limited MI software for this. Further, analysis of a data set with missing values that are not missing at random (NMAR) is complicated by the need to extend the MAR imputation model to include a model for the reason for dropout. Here, we propose a simple alternative. We first impute under MAR and obtain parameter estimates for each imputed data set. The overall NMAR parameter estimate is a weighted average of these parameter estimates, where the weights depend on the assumed degree of departure from MAR. In some settings, this approach gives results that closely agree with joint modelling as the number of imputations increases. In others, it provides ball-park estimates of the results of full NMAR modelling, indicating the extent to which it is necessary and providing a check on its results. We illustrate our approach with a small simulation study, and the analysis of data from a trial of interventions to improve the quality of peer review.
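To make the weighting idea concrete, the sketch below combines per-imputation estimates into a single NMAR-adjusted estimate using weights of the form w_m ∝ exp(δ·s_m), where s_m is the sum of the imputed values in imputation m and δ encodes the assumed departure from MAR. This particular weight form, and the sign and scale of δ, are assumptions chosen for illustration; the abstract only specifies that the weights depend on the assumed degree of departure from MAR, so this should not be read as the paper's exact estimator.

```python
# Schematic sketch of the weighted-average sensitivity analysis described above.
# The weight form w_m ∝ exp(delta * s_m), with s_m the sum of imputed values in
# imputation m, is an assumption for illustration, not necessarily the paper's
# exact formulation. delta = 0 recovers the usual MI (MAR) average.
import numpy as np

def nmar_weighted_estimate(estimates, imputed_sums, delta):
    """Combine per-imputation estimates into an NMAR-adjusted estimate.

    estimates    : parameter estimates, one per MAR-imputed data set
    imputed_sums : sums of the imputed (previously missing) values per data set
    delta        : assumed departure from MAR
    """
    estimates = np.asarray(estimates, dtype=float)
    s = np.asarray(imputed_sums, dtype=float)
    logw = delta * (s - s.mean())          # centre for numerical stability
    w = np.exp(logw)
    w /= w.sum()
    return float(np.sum(w * estimates))

# Example with hypothetical numbers from M = 5 imputations
theta_hat = [1.20, 1.35, 1.10, 1.28, 1.22]
sums = [410.0, 455.0, 380.0, 440.0, 425.0]
for delta in (0.0, 0.05, 0.10):
    print(delta, round(nmar_weighted_estimate(theta_hat, sums, delta), 3))
```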

    A first step towards best practice recommendations for the design and statistical analyses of pragmatic clinical trials: a modified Delphi approach

    Aim: Pragmatic clinical trials (PCTs) are randomized trials implemented through routine clinical practice, where design parameters of traditional randomized controlled trials are modified to increase generalizability. However, this may introduce statistical challenges. We aimed to identify these challenges and discuss possible solutions, leading to best practice recommendations for the design and analysis of PCTs.
    Methods: A modified Delphi method was used to reach consensus among a panel of 11 experts in clinical trials and statistics. Statistical issues were identified in a focused literature review and aggregated with insights and possible solutions from the experts, collected through a series of survey iterations. Issues were ranked according to their importance.
    Results: Twenty-seven articles were included and combined with the experts' insights to generate a list of issues categorized into participants, recruiting sites, randomization, blinding and intervention, outcome (selection and measurement), and data analysis. Consensus was reached on the most important issues: risk of participant attrition, heterogeneity of "usual care" across sites, absence of blinding, use of a subjective endpoint, and data analysis aligned with the trial estimand. Potential issues should be anticipated and preferably addressed in the trial protocol. The experts provided solutions regarding data collection and data analysis, which were considered of equal importance.
    Discussion: A set of important statistical issues in PCTs was identified, and approaches were suggested to anticipate and/or minimize these through data analysis. Any impact of choosing a pragmatic design feature should be gauged in the light of the trial estimand.