
    Ongoing monitoring of data clustering in multicenter studies

    Background: Multicenter study designs have several advantages, but the possibility of non-random measurement error resulting from procedural differences between the centers is a special concern. While it is possible to address and correct for some measurement error through statistical analysis, proactive data monitoring is essential to ensure high-quality data collection. Methods: In this article, we describe quality assurance efforts aimed at reducing the effect of measurement error in a recent follow-up of a large cluster-randomized controlled trial through periodic evaluation of intraclass correlation coefficients (ICCs) for continuous measurements. An ICC of 0 indicates that none of the variance in the data is due to variation between the centers, and thus that the data are not clustered by center. Results: Through our review of early data downloads, we identified several outcomes (including sitting height, waist circumference, and systolic blood pressure) with higher-than-expected ICC values. Further investigation revealed variations in the procedures used by pediatricians to measure these outcomes. We addressed these procedural inconsistencies through written clarification of the protocol and refresher training workshops with the pediatricians. Further data monitoring at subsequent downloads showed that these efforts had a beneficial effect on data quality (the sitting height ICC decreased from 0.92 to 0.03, waist circumference from 0.10 to 0.07, and systolic blood pressure from 0.16 to 0.12). Conclusions: We describe a simple but formal mechanism for identifying ongoing problems during data collection. The calculation of the ICC can easily be programmed, and the mechanism has wide applicability, not just to cluster randomized controlled trials but to any study with multiple centers or multiple observers.
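    The ICC monitoring the authors describe can indeed be programmed in a few lines. Below is a minimal sketch of a one-way ANOVA ICC check run at each data download; the function, the centre data, and the alert threshold are illustrative inventions, not values or code from the study.

    ```python
    def icc_oneway(groups):
        """One-way ANOVA intraclass correlation for per-centre lists of values."""
        k = len(groups)                                  # number of centres
        n = sum(len(g) for g in groups)                  # total observations
        grand = sum(sum(g) for g in groups) / n
        ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
        ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
        msb = ssb / (k - 1)                              # between-centre mean square
        msw = ssw / (n - k)                              # within-centre mean square
        m = n / k                                        # average centre size
        return (msb - msw) / (msb + (m - 1) * msw)

    # Illustrative download: sitting height (cm) at three centres, one of which
    # uses a different measurement procedure and is systematically offset.
    centres = {
        "A": [78.1, 79.0, 77.6, 78.4],
        "B": [78.5, 77.9, 78.8, 78.2],
        "C": [88.3, 89.1, 87.7, 88.6],
    }
    icc = icc_oneway(list(centres.values()))
    if icc > 0.10:  # illustrative alert threshold, not the study's criterion
        print(f"ICC = {icc:.2f}: investigate procedural differences between centres")
    ```

    Run at each periodic download, a value near 0 suggests no centre-level clustering, while an unexpectedly high value (as with the offset centre above) flags a procedural difference worth investigating.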

    Design effect in multicenter studies: gain or loss of power?

    Background: In a multicenter trial, responses for subjects belonging to a common center are correlated. Such clustering is usually assessed through the design effect, defined as a ratio of two variances. The aim of this work was to describe and understand situations where the design effect involves a gain or a loss of power. Methods: We developed a design effect formula for a multicenter study aimed at testing the effect of a binary factor (which thus defines two groups) on a continuous outcome, and explored this design effect for several designs (from individually stratified randomized trials to cluster randomized trials, and for other designs such as matched-pair designs or observational multicenter studies). Results: The design effect depends on the intraclass correlation coefficient (ICC) (which assesses the correlation between data for two subjects from the same center), but also on a statistic S, which quantifies the heterogeneity of the group distributions among centers (and thus the level of association between the binary factor and the center), and on the degree of global imbalance (the numbers of subjects then being different) between the two groups. This design effect may induce either a loss or a gain in power, depending on whether the S statistic is higher or lower than 1, respectively. Conclusion: We provide a global design effect formula that applies to any multicenter study and allows identification of the factors – the ICC and the distribution of the group proportions among centers – that are associated with a gain or a loss of power in such studies.
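    The abstract does not reproduce the general formula or the S statistic itself, but the two limiting designs it contrasts have standard design effects, sketched below (function names and parameter values are illustrative; the paper's general formula interpolates between these cases):

    ```python
    def design_effect_cluster(m, icc):
        """Cluster randomised trial: each centre allocated wholly to one group,
        so group and centre are maximally associated (a loss of power)."""
        return 1 + (m - 1) * icc

    def design_effect_stratified(icc):
        """Individually randomised, groups balanced within every centre, so
        between-centre variation cancels out of the comparison (a gain)."""
        return 1 - icc

    m, icc = 30, 0.05                       # illustrative centre size and ICC
    print(design_effect_cluster(m, icc))    # > 1: variance inflated, power lost
    print(design_effect_stratified(icc))    # < 1: variance deflated, power gained
    ```

    A design effect above 1 means more subjects are needed than in a simple randomised trial; below 1, fewer.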

    A systematic review of the use of an expertise-based randomised controlled trial design

    Acknowledgements: JAC held a Medical Research Council UK methodology fellowship (G1002292), which supported this research. The Health Services Research Unit, Institute of Applied Health Sciences (University of Aberdeen), is core-funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. Views expressed are those of the authors and do not necessarily reflect the views of the funders. Peer reviewed. Publisher PDF.

    Protocol for a randomised controlled trial for Reducing Arthritis Fatigue by clinical Teams (RAFT) using cognitive-behavioural approaches

    Introduction: Rheumatoid arthritis (RA) fatigue is distressing, leading to unmanageable physical and cognitive exhaustion that impacts on health, leisure and work. Group cognitive-behavioural (CB) therapy delivered by a clinical psychologist demonstrated large improvements in fatigue impact. However, few rheumatology teams include a clinical psychologist; this study therefore aims to examine whether conventional rheumatology teams can reproduce similar results, potentially widening intervention availability. Methods and analysis: This is a multicentre, randomised, controlled trial of a group CB intervention for RA fatigue self-management, delivered by local rheumatology clinical teams. Seven centres will each recruit 4 consecutive cohorts of 10-16 patients with RA (fatigue severity ≥6/10). After consenting, patients will have baseline assessments, then usual care (a fatigue self-management booklet, discussed for 5-6 min), then be randomised into control (no action) or intervention arms. The intervention, Reducing Arthritis Fatigue by clinical Teams (RAFT), will be co-facilitated by two local rheumatology clinicians (eg, nurse/occupational therapist), who will have received brief training in CB approaches, a RAFT manual and materials, and will have delivered an observed practice course. Groups of 5-8 patients will attend 6×2 h sessions (weeks 1-6) and a 1 h consolidation session (week 14) addressing different self-management topics and behaviours. The primary outcome is fatigue impact (26 weeks); secondary outcomes are fatigue severity, coping and multidimensional impact, quality of life, and clinical and mood status (to week 104). Statistical and health economic analyses will follow a predetermined plan to establish whether the intervention is clinically effective and cost-effective. Effects of teaching CB skills to clinicians will be evaluated qualitatively. Ethics and dissemination: Approval was given by an NHS Research Ethics Committee, and participants will provide written informed consent.
The copyrighted RAFT package will be freely available. Findings will be submitted to the National Institute for Health and Care Excellence, Clinical Commissioning Groups and all UK rheumatology departments.

    Reproducibility of preclinical animal research improves with heterogeneity of study samples

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research.
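    A minimal Monte Carlo sketch of this comparison (all parameters invented for illustration, not estimated from the 440-study dataset): when laboratories differ in the effect each would observe, a single-laboratory confidence interval is centred on that lab's idiosyncratic effect and misses the population-average effect far more often than the nominal 5%, while pooling even four labs restores coverage.

    ```python
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT = 1.0   # population-average treatment effect across labs
    LAB_SD = 0.5        # between-lab heterogeneity of the true effect
    NOISE_SD = 1.0      # within-lab residual noise
    N = 20              # animals per arm per lab
    T975 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776}  # t critical values, df 1-4

    def study_ci(n_labs):
        """Effect estimate and 95% CI half-width for a study over n_labs labs."""
        ests = []
        for _ in range(n_labs):
            lab_true = random.gauss(TRUE_EFFECT, LAB_SD)  # this lab's own effect
            treat = [random.gauss(lab_true, NOISE_SD) for _ in range(N)]
            ctrl = [random.gauss(0.0, NOISE_SD) for _ in range(N)]
            ests.append(statistics.mean(treat) - statistics.mean(ctrl))
        if n_labs == 1:
            # A single lab sees only within-lab noise; its CI ignores LAB_SD.
            return ests[0], 1.96 * (2 * NOISE_SD ** 2 / N) ** 0.5
        se = statistics.stdev(ests) / n_labs ** 0.5  # lab-to-lab variation included
        return statistics.mean(ests), T975[n_labs - 1] * se

    def coverage(n_labs, reps=2000):
        """Fraction of replicate studies whose CI contains TRUE_EFFECT."""
        hits = 0
        for _ in range(reps):
            est, hw = study_ci(n_labs)
            hits += abs(est - TRUE_EFFECT) <= hw
        return hits / reps

    print(f"single lab: {coverage(1):.2f}")  # well below the nominal 0.95
    print(f"4 labs:     {coverage(4):.2f}")  # close to the nominal 0.95
    ```

    The single-lab design undercovers because its standard error omits the between-lab variance component; the multi-lab design estimates its standard error from the spread of lab-level effects, which includes it.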

    Using PET with 18F-AV-45 (florbetapir) to quantify brain amyloid load in a clinical environment

    PURPOSE: Positron emission tomography (PET) imaging of brain amyloid load has been suggested as a core biomarker for Alzheimer's disease (AD). The aim of this study was to test the feasibility of using PET imaging with ¹⁸F-AV-45 (florbetapir) in a routine clinical environment to differentiate patients with mild to moderate AD and mild cognitive impairment (MCI) from normal healthy controls (HC). METHODS: In this study, 46 subjects (20 men and 26 women, mean age of 69.0 ± 7.6 years), including 13 with AD, 12 with MCI and 21 HC subjects, were enrolled from three academic memory clinics. PET images were acquired over a 10-min period 50 min after injection of florbetapir (mean ± SD of radioactivity injected, 259 ± 57 MBq). PET images were assessed visually by two individuals blinded to any clinical information and quantitatively via the standard uptake value ratio (SUVr) in specific regions of interest, which were defined in relation to the cerebellum as the reference region. RESULTS: The mean values of SUVr were higher in AD patients (median 1.20, Q1-Q3 1.16-1.30) than in HC subjects (median 1.05, Q1-Q3 1.04-1.08; p = 0.0001) in the overall cortex and all cortical regions (precuneus, anterior and posterior cingulate, and frontal median, temporal, parietal and occipital cortex). The MCI subjects also showed a higher uptake of florbetapir in the posterior cingulate cortex (median 1.06, Q1-Q3 0.97-1.28) compared with HC subjects (median 0.95, Q1-Q3 0.82-1.02; p = 0.03). Qualitative visual assessment of the PET scans showed a sensitivity of 84.6% (95% CI 0.55-0.98) and a specificity of 38.1% (95% CI 0.18-0.62) for discriminating AD patients from HC subjects; however, the quantitative assessment of the global cortex SUVr showed a sensitivity of 92.3% and specificity of 90.5% with a cut-off value of 1.122 (area under the curve 0.894).
CONCLUSION: These preliminary results suggest that PET with florbetapir is a safe and suitable biomarker for AD that can be used routinely in a clinical environment. However, the low specificity of the visual PET scan assessment could be improved by the use of specific training and automatic or semiautomatic quantification tools.
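    The quantitative read-out described above can be sketched as follows: the SUVr divides mean uptake in the cortical target region by mean cerebellar uptake, and scans at or above the reported cut-off of 1.122 are read as amyloid-positive. The function names and the SUVr values below are illustrative inventions, not the study's data:

    ```python
    CUTOFF = 1.122  # global-cortex SUVr cut-off reported above

    def suvr(target_uptake, cerebellum_uptake):
        """Standard uptake value ratio relative to the cerebellar reference."""
        return target_uptake / cerebellum_uptake

    def amyloid_positive(target_uptake, cerebellum_uptake):
        """Read a scan as positive when its SUVr reaches the cut-off."""
        return suvr(target_uptake, cerebellum_uptake) >= CUTOFF

    def sensitivity_specificity(ad_suvrs, hc_suvrs, cutoff=CUTOFF):
        """Fraction of AD scans called positive; fraction of HC scans called negative."""
        sens = sum(s >= cutoff for s in ad_suvrs) / len(ad_suvrs)
        spec = sum(s < cutoff for s in hc_suvrs) / len(hc_suvrs)
        return sens, spec

    # Invented SUVr values, roughly in the range of the reported medians/IQRs.
    ad = [1.16, 1.20, 1.25, 1.30]
    hc = [1.04, 1.05, 1.07, 1.16]
    print(sensitivity_specificity(ad, hc))  # (1.0, 0.75)
    ```

    As with the study's HC subject above the cut-off, a borderline control produces the false positives that the specificity figure captures.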

    Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial

    Background: In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or ‘cluster’, of individuals. Each cluster receives each intervention in a separate period of time, forming ‘cluster-periods’. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Methods: Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society – Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). Results: The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period. Equivalently, it can be understood in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage of a CRXO over a parallel-group cluster randomised trial.
Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters. Conclusions: By illustrating how the parameters required for sample size calculations arise from the CRXO design, and by providing guidance both on choosing values for the parameters and on performing the sample size calculations, the implementation of the sample size formulae for CRXO trials may improve.
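    As a worked sketch (illustrative numbers, not the ANZICS-APD estimates), one commonly used form of the two-period, cross-sectional CRXO design effect is DE = 1 + (m − 1)·WPC − m·BPC, where m is the number of subjects per cluster-period. This is consistent with the abstract's special cases: BPC = 0 recovers the parallel cluster-trial design effect 1 + (m − 1)·WPC, and BPC = WPC gives DE ≈ 1.

    ```python
    from math import ceil
    from statistics import NormalDist

    def crxo_clusters(delta, sd, m, wpc, bpc, alpha=0.05, power=0.80):
        """Total clusters for a two-period, two-intervention cross-sectional
        CRXO trial with m subjects per cluster-period (continuous outcome)."""
        z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
        n_arm = 2 * (z * sd / delta) ** 2    # per-arm n, individually randomised
        de = 1 + (m - 1) * wpc - m * bpc     # CRXO design effect
        # every cluster contributes one cluster-period (m subjects) to each arm
        return ceil(n_arm * de / m)

    # Sensitivity to the BPC (illustrative effect size of 0.5 SD, m = 20):
    for bpc in (0.04, 0.02):
        print(crxo_clusters(delta=0.5, sd=1.0, m=20, wpc=0.05, bpc=bpc))
    # a smaller BPC gives a larger design effect and thus more clusters (4 -> 5)
    ```

    This illustrates the abstract's point: moving the BPC from 0.04 to 0.02, with everything else fixed, already increases the required number of clusters.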