
    A comparison of methods for analysing multiple outcome measures in randomised controlled trials using a simulation study

    Multiple primary outcomes are sometimes collected and analysed in randomised controlled trials (RCTs) in favour of a single outcome. By collecting multiple primary outcomes, it is possible to fully evaluate the effect that an intervention has on a given disease process. A simple approach to analysing multiple outcomes is to consider each outcome separately; however, this approach does not account for any pairwise correlations between the outcomes, and any cases with missing values must be ignored unless an additional imputation step is performed. Alternatively, multivariate methods that explicitly model the pairwise correlations between the outcomes may be more efficient when some of the outcomes have missing values. In this paper, we present an overview of relevant methods that can be used to analyse multiple outcome measures in RCTs, including methods based on multivariate multilevel (MM) models. We perform simulation studies to evaluate the bias in the estimates of the intervention effects and the power to detect true intervention effects when using the selected methods. Different simulation scenarios were constructed by varying the number of outcomes, the type of outcomes, the degree of correlation between the outcomes, and the proportions and mechanisms of missing data. We compare multivariate methods to univariate methods with and without multiple imputation. When there are strong correlations between the outcome measures (ρ > 0.4), our simulation studies suggest that there are small power gains when using the MM model compared with analysing the outcome measures separately. In contrast, when there are weak correlations (ρ < 0.4), the power is reduced when using univariate methods with multiple imputation compared with analysing the outcome measures separately.
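As a hypothetical illustration of the trade-off described above, the sketch below simulates a two-arm trial with two correlated continuous outcomes and compares the disjunctive power of separate Bonferroni-adjusted t-tests with a joint Hotelling's T² test. Hotelling's T² is used only as a simple stand-in for a multivariate analysis; the paper's MM models are more general, and all parameter values (effect 0.4 SD, ρ = 0.6, 50 patients per arm) are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_trial(n_per_arm, effect, rho, n_outcomes=2):
    """Simulate one two-arm trial with correlated continuous outcomes."""
    cov = np.full((n_outcomes, n_outcomes), rho)
    np.fill_diagonal(cov, 1.0)
    control = rng.multivariate_normal(np.zeros(n_outcomes), cov, size=n_per_arm)
    treated = rng.multivariate_normal(np.full(n_outcomes, effect), cov, size=n_per_arm)
    return control, treated

def univariate_reject(control, treated, alpha=0.05):
    """Separate two-sample t-tests per outcome, Bonferroni-adjusted."""
    m = control.shape[1]
    pvals = [stats.ttest_ind(treated[:, j], control[:, j]).pvalue for j in range(m)]
    return min(pvals) < alpha / m  # disjunctive: any outcome significant

def hotelling_reject(control, treated, alpha=0.05):
    """Joint Hotelling's T^2 test using the pooled covariance matrix."""
    n1, m = control.shape
    n2 = treated.shape[0]
    diff = treated.mean(axis=0) - control.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(control.T) + (n2 - 1) * np.cov(treated.T)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    f_stat = (n1 + n2 - m - 1) / (m * (n1 + n2 - 2)) * t2
    return f_stat > stats.f.ppf(1 - alpha, m, n1 + n2 - m - 1)

reps = 500
trials = [simulate_trial(n_per_arm=50, effect=0.4, rho=0.6) for _ in range(reps)]
power_uni = np.mean([univariate_reject(c, t) for c, t in trials])
power_mv = np.mean([hotelling_reject(c, t) for c, t in trials])
print(f"univariate (Bonferroni): {power_uni:.2f}, Hotelling T^2: {power_mv:.2f}")
```

Re-running with different values of rho shows how the relative performance of the joint and separate analyses depends on the strength of the correlation, which is the pattern the simulation study above investigates.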

    Risk prediction in multicentre studies when there is confounding by cluster or informative cluster size

    BACKGROUND: Clustered data arise in research when patients are clustered within larger units. Generalised Estimating Equations (GEE) and Generalised Linear Mixed Models (GLMM) can be used to provide marginal and cluster-specific inference and predictions, respectively. METHODS: Confounding by cluster (CBC) and informative cluster size (ICS) are two complications that may arise when modelling clustered data. CBC can arise when the distribution of a predictor variable (termed ‘exposure’) varies between clusters, causing confounding of the exposure-outcome relationship. ICS means that the cluster size, conditional on covariates, is not independent of the outcome. In both situations, standard GEE and GLMM may provide biased or misleading inference, and modifications have been proposed. However, both CBC and ICS are routinely overlooked in the context of risk prediction, and their impact on the predictive ability of the models has been little explored. We study the effect of CBC and ICS on the predictive ability of risk models for binary outcomes when GEE and GLMM are used. We examine whether two simple approaches to handle CBC and ICS, which involve adjusting for the cluster mean of the exposure and the cluster size, respectively, can improve the accuracy of predictions. RESULTS: Both CBC and ICS can be viewed as violations of the assumptions of the standard GLMM: the random effects are correlated with the exposure under CBC and with the cluster size under ICS. Based on these principles, we simulated data subject to CBC/ICS. The simulation studies suggested that the predictive ability of models derived using standard GLMM and GEE while ignoring CBC/ICS was affected: marginal predictions were found to be miscalibrated. Adjusting for the cluster mean of the exposure or the cluster size improved the calibration, discrimination and overall predictive accuracy of marginal predictions by explaining part of the between-cluster variability. The presence of CBC/ICS did not affect the accuracy of conditional predictions. We illustrate these concepts using real data from a multicentre study with potential CBC. CONCLUSION: Ignoring CBC and ICS when developing prediction models for clustered data can affect the accuracy of marginal predictions. Adjusting for the cluster mean of the exposure or the cluster size can improve the predictive accuracy of marginal predictions.
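The cluster-mean adjustment described above amounts to splitting each patient's exposure into a between-cluster component (the cluster mean) and a within-cluster deviation, and entering both in the model. A minimal sketch of that decomposition on simulated data (cluster counts, sizes and distributions are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clustered data: 10 centres of varying size, one exposure x.
n_clusters = 10
sizes = rng.integers(20, 60, size=n_clusters)
cluster_id = np.repeat(np.arange(n_clusters), sizes)
cluster_shift = rng.normal(0, 1, size=n_clusters)  # induces confounding by cluster
x = rng.normal(0, 1, size=cluster_id.size) + cluster_shift[cluster_id]

# Adjustment from the abstract: include the cluster mean of the exposure as an
# extra covariate, splitting x into between- and within-cluster components.
cluster_mean = np.array([x[cluster_id == c].mean() for c in range(n_clusters)])
x_between = cluster_mean[cluster_id]  # cluster mean of exposure (between part)
x_within = x - x_between              # deviation from own-cluster mean (within part)

# For ICS, the analogous adjustment would add the cluster size as a covariate:
cluster_size_covariate = sizes[cluster_id]

# By construction x_within sums to ~0 within every cluster.
print(max(abs(x_within[cluster_id == c].sum()) for c in range(n_clusters)))
```

A regression model would then include `x_within`, `x_between` (and, for ICS, `cluster_size_covariate`) as predictors; the between-cluster term absorbs the part of the exposure that is confounded with cluster membership.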

    Are multiple primary outcomes analysed appropriately in randomised controlled trials? A review

    We review how multiple primary outcomes are currently considered in the analysis of randomised controlled trials. We briefly describe the methods available to safeguard the inferences and raise awareness of the potential problems caused by multiple outcomes.

    Monetary costs of agitation in older adults with Alzheimer's disease in the UK: prospective cohort study

    While nearly half of all people with Alzheimer's disease (AD) have agitation symptoms every month, little is known about the costs of agitation in AD. We calculated the monetary costs associated with agitation in older adults with AD in the UK from a National Health Service and personal social services perspective.

    Generic, simple risk stratification model for heart valve surgery

    BACKGROUND: Heart valve surgery has an associated in-hospital mortality rate of 4% to 8%. This study aims to develop a simple risk model to predict the risk of in-hospital mortality for patients undergoing heart valve surgery, to provide information to patients and clinicians and to facilitate institutional comparisons. METHODS AND RESULTS: Data on 32 839 patients were obtained from the Society of Cardiothoracic Surgeons of Great Britain and Ireland on patients who underwent heart valve surgery between April 1995 and March 2003. Data from the first 5 years (n=16 679) were used to develop the model; its performance was evaluated on the remaining data (n=16 160). The risk model presented here is based on the combined data. The overall in-hospital mortality was 6.4%. The risk model included, in order of importance (all P < 0.01), operative priority, age, renal failure, operation sequence, ejection fraction, concomitant tricuspid valve surgery, type of valve operation, concomitant CABG surgery, body mass index, preoperative arrhythmias, diabetes, gender, and hypertension. The risk model exhibited good predictive ability (Hosmer-Lemeshow test, P=0.78) and discriminated between high- and low-risk patients reasonably well (receiver-operating characteristic curve area, 0.77). CONCLUSIONS: This is the first risk model that predicts in-hospital mortality for aortic and/or mitral heart valve patients with or without concomitant CABG. Based on a large national database of heart valve patients, this model has been evaluated successfully on patients who had valve surgery during a subsequent time period. It is simple to use, includes routinely collected variables, and provides a useful tool for patient advice and institutional comparisons.
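The discrimination measure quoted above (receiver-operating characteristic curve area) is the C-statistic. A small, self-contained sketch of how it can be computed from predicted risks and observed outcomes; the toy data are hypothetical, not the registry data used in the paper:

```python
import numpy as np

def c_statistic(risk, outcome):
    """C-statistic (area under the ROC curve): the probability that a randomly
    chosen patient who died was assigned a higher predicted risk than a
    randomly chosen survivor; tied risks count one half."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=int)
    events = risk[outcome == 1]      # predicted risks of patients who died
    nonevents = risk[outcome == 0]   # predicted risks of survivors
    higher = (events[:, None] > nonevents[None, :]).sum()
    ties = (events[:, None] == nonevents[None, :]).sum()
    return (higher + 0.5 * ties) / (events.size * nonevents.size)

# Hypothetical predicted risks and outcomes (1 = in-hospital death):
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 0, 1, 0]))  # 3 of 4 pairs concordant
```

A value of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination; the 0.77 reported above sits in the "reasonably good" range the authors describe.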

    Methods to adjust for multiple comparisons in the analysis and sample size calculation of randomised controlled trials with multiple primary outcomes

    BACKGROUND: Multiple primary outcomes may be specified in randomised controlled trials (RCTs). When analysing multiple outcomes, it is important to control the family-wise error rate (FWER). A popular approach is to adjust the p-values corresponding to each statistical test used to investigate the intervention effects by using the Bonferroni correction. It is also important to consider the power of the trial to detect true intervention effects. In the context of multiple outcomes, depending on the clinical objective, the power can be defined as 'disjunctive power', the probability of detecting at least one true intervention effect across all the outcomes, or 'marginal power', the probability of finding a true intervention effect on a nominated outcome. We provide practical recommendations on which method may be used to adjust for multiple comparisons in the sample size calculation and the analysis of RCTs with multiple primary outcomes. We also discuss the implications for the sample size of obtaining 90% disjunctive power and 90% marginal power. METHODS: We use simulation studies to investigate the disjunctive power, marginal power and FWER obtained after applying the Bonferroni, Holm, Hochberg, Dubey/Armitage-Parmar and Stepdown-minP adjustment methods. Different simulation scenarios were constructed by varying the number of outcomes, degree of correlation between the outcomes, intervention effect sizes and proportion of missing data. RESULTS: The Bonferroni and Holm methods provide the same disjunctive power. The Hochberg and Hommel methods provide power gains for the analysis, albeit small, in comparison to the Bonferroni method. The Stepdown-minP procedure performs well for complete data. However, it removes participants with missing values prior to the analysis, resulting in a loss of power when there are missing data. The sample size requirement to achieve the desired disjunctive power may be smaller than that required to achieve the desired marginal power. The choice between specifying a disjunctive or marginal power should depend on the clinical objective.
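For readers unfamiliar with the adjustment methods compared above, the sketch below implements adjusted p-values for the Bonferroni, Holm (step-down) and Hochberg (step-up) procedures. It is a minimal illustration; production code would normally use an existing routine such as `statsmodels.stats.multitest.multipletests`.

```python
import numpy as np

def adjust_pvalues(pvals, method="holm"):
    """Return Bonferroni, Holm or Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    if method == "bonferroni":
        return np.minimum(m * p, 1.0)
    order = np.argsort(p)          # indices of p sorted ascending
    adj = np.empty(m)
    if method == "holm":
        # step-down: multiplier m, m-1, ...; enforce monotonicity with a running max
        running = 0.0
        for rank, idx in enumerate(order):
            running = max(running, (m - rank) * p[idx])
            adj[idx] = min(running, 1.0)
    elif method == "hochberg":
        # step-up: start from the largest p; enforce monotonicity with a running min
        running = 1.0
        for rank, idx in zip(range(m - 1, -1, -1), order[::-1]):
            running = min(running, (m - rank) * p[idx])
            adj[idx] = running
    else:
        raise ValueError(f"unknown method: {method}")
    return adj

print(adjust_pvalues([0.01, 0.02, 0.03], "bonferroni"))  # [0.03 0.06 0.09]
print(adjust_pvalues([0.01, 0.02, 0.03], "holm"))        # [0.03 0.04 0.04]
print(adjust_pvalues([0.01, 0.02, 0.03], "hochberg"))    # [0.03 0.03 0.03]
```

The example output shows why Hochberg can be slightly more powerful than Holm, and Holm than Bonferroni: each produces adjusted p-values no larger than the previous method's, while all three control the FWER (Hochberg under an additional dependence assumption).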

    Seroprevalence of SARS-CoV-2 antibodies in people with an acute loss in their sense of smell and/or taste in a community-based population in London, UK: An observational cohort study

    BACKGROUND: Loss of smell and taste are commonly reported symptoms associated with coronavirus disease 2019 (COVID-19); however, the seroprevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies in people with acute loss of smell and/or taste is unknown. The study aimed to determine the seroprevalence of SARS-CoV-2 antibodies in a community-based population with acute loss of smell and/or taste and to compare the frequency of COVID-19-associated symptoms in participants with and without SARS-CoV-2 antibodies. It also evaluated whether smell or taste loss is indicative of COVID-19 infection. METHODS AND FINDINGS: Text messages, sent via primary care centers in London, United Kingdom, invited people with loss of smell and/or taste in the preceding month to participate. Recruitment took place between 23 April 2020 and 14 May 2020. A total of 590 participants enrolled via a web-based platform and responded to questions about loss of smell and taste and other COVID-19-related symptoms. Mean age was 39.4 years (SD 12.0) and 69.1% (n = 392) of participants were female. A total of 567 (96.1%) had a telemedicine consultation during which their COVID-19-related symptoms were verified and a lateral flow immunoassay test that detected SARS-CoV-2 immunoglobulin G (IgG) and immunoglobulin M (IgM) antibodies was undertaken under medical supervision. A total of 77.6% of the 567 participants with acute smell and/or taste loss had SARS-CoV-2 antibodies; of these, 39.8% (n = 175) had neither cough nor fever. New loss of smell was more prevalent in participants with SARS-CoV-2 antibodies than in those without antibodies (93.4% versus 78.7%, p < 0.001), whereas taste loss was equally prevalent (90.2% versus 89.0%, p = 0.738). Seropositivity for SARS-CoV-2 was about 3 times more likely in participants with smell loss (OR 2.86; 95% CI 1.27-6.36; p < 0.001) than in those with taste loss.
The limitations of this study are the lack of a general population control group, the self-reported nature of the smell and taste changes, and the fact that our methodology does not take into account the possibility that a population subset may not seroconvert to develop SARS-CoV-2 antibodies post-COVID-19. CONCLUSIONS: Our findings suggest that recent loss of smell is a highly specific COVID-19 symptom and should be considered more generally in guiding case isolation, testing, and treatment of COVID-19. TRIAL REGISTRATION: ClinicalTrials.gov NCT04377815.
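An odds ratio and confidence interval of the kind reported above can be obtained from a 2x2 table with the standard Woolf (log) interval. The counts below are hypothetical, chosen only to show the arithmetic; the study's reported OR of 2.86 (95% CI 1.27-6.36) comes from its own data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% confidence interval.

    2x2 table: a/b = exposed with/without the outcome,
               c/d = unexposed with/without the outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/10 seropositive/seronegative with the symptom,
# 20/30 without it.
or_, lo, hi = odds_ratio_ci(40, 10, 20, 30)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```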

    Estimation of required sample size for external validation of risk models for binary outcomes

    Risk-prediction models for health outcomes are used in practice as part of clinical decision-making, and it is essential that their performance be externally validated. An important aspect of the design of a validation study is choosing an adequate sample size. In this paper, we investigate the sample size requirements for validation studies with binary outcomes to estimate measures of predictive performance (the C-statistic for discrimination, and the calibration slope and calibration in the large). We aim for sufficient precision in the estimated measures. In addition, we investigate the sample size needed to achieve sufficient power to detect a difference from a target value. Under normality assumptions on the distribution of the linear predictor, we obtain simple estimators for sample size calculations based on the measures above. Simulation studies show that the estimators perform well for common values of the C-statistic and outcome prevalence when the linear predictor is marginally normal. Their performance deteriorates only slightly when the normality assumptions are violated. We also propose estimators which do not require normality assumptions but do require specification of the marginal distribution of the linear predictor and the use of numerical integration. These estimators were also seen to perform very well under marginal normality. Our sample size equations require a specified standard error (SE) and the anticipated C-statistic and outcome prevalence. The sample size requirement varies according to the prognostic strength of the model, the outcome prevalence, the choice of performance measure and the study objective. For example, to achieve an SE < 0.025 for the C-statistic, 60-170 events are required if the true C-statistic and outcome prevalence are between 0.64-0.85 and 0.05-0.3, respectively. For the calibration slope and calibration in the large, achieving an SE < 0.15 would require 40-280 and 50-100 events, respectively. Our estimators may also be used for survival outcomes when the proportion of censored observations is high.
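As a rough cross-check of the event counts quoted above, the sketch below uses the Hanley-McNeil (1982) approximation to the SE of the C-statistic, an alternative to the paper's normality-based estimators used here purely for illustration, to find the smallest number of events keeping the SE below a target at a fixed prevalence.

```python
import math

def hanley_mcneil_se(auc, n_events, n_nonevents):
    """Approximate SE of the C-statistic (Hanley & McNeil, 1982)."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    var = (auc * (1.0 - auc)
           + (n_events - 1.0) * (q1 - auc ** 2)
           + (n_nonevents - 1.0) * (q2 - auc ** 2)) / (n_events * n_nonevents)
    return math.sqrt(var)

def events_for_target_se(auc, prevalence, target_se):
    """Smallest number of events giving SE below the target at a fixed prevalence."""
    events = 2
    while True:
        n_total = events / prevalence
        if hanley_mcneil_se(auc, events, n_total - events) < target_se:
            return events
        events += 1

# Anticipated C-statistic 0.75, prevalence 0.1, target SE 0.025 (illustrative values).
print(events_for_target_se(auc=0.75, prevalence=0.1, target_se=0.025))
```

Under this approximation, the required number of events for these illustrative inputs falls within the 60-170 range quoted in the abstract, although the paper's own estimators should be used for an actual study design.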

    A brief intervention for weight control based on habit-formation theory delivered through primary care: results from a randomised controlled trial

    Background: Primary care is the 'first port of call' for weight control advice, creating a need for simple, effective interventions that can be delivered without specialist skills. Ten Top Tips (10TT) is a leaflet based on habit-formation theory that could fill this gap. The aim of the current study was to test the hypothesis that 10TT can achieve significantly greater weight loss over 3 months than 'usual care'. Methods: A two-arm, individually randomised, controlled trial in primary care. Adults with obesity were identified from 14 primary care providers across England. Patients were randomised to either 10TT or 'usual care' and followed up at 3, 6, 12, 18 and 24 months. The primary outcome was weight loss at 3 months, assessed by a health professional blinded to group allocation. The difference between arms was assessed using a mixed-effects linear model taking into account the health professionals delivering 10TT and adjusting for baseline weight. Secondary outcomes included body mass index, waist circumference, the number achieving a 5% weight reduction, clinical markers for potential comorbidities, weight loss over 24 months and basic costs. Results: Five hundred and thirty-seven participants were randomised to 10TT (n=267) or to 'usual care' (n=270). Data were available for 389 (72%) participants at 3 months and for 312 (58%) at 24 months. Participants receiving 10TT lost significantly more weight over 3 months than those receiving usual care (mean difference −0.87 kg; 95% confidence interval −1.47 to −0.27; P=0.004). At 24 months, the 10TT group had maintained their weight loss, and the 'usual care' group had lost a similar amount. The basic cost of 10TT was low, at around £23 ($32) per participant. Conclusions: The 10TT leaflet delivered through primary care is effective in the short term and a low-cost option over the longer term. It is the first habit-based intervention to be used in a health service setting and offers a low-intensity alternative to 'usual care'.

    Non-pharmacological interventions for agitation in dementia: systematic review of randomised controlled trials.

    Background: Agitation in dementia is common, persistent and distressing and can lead to care breakdown. Medication is often ineffective and harmful. Aims: To systematically review randomised controlled trial evidence regarding non-pharmacological interventions. Method: We reviewed 33 studies fitting predetermined criteria, assessed their validity and calculated standardised effect sizes (SES). Results: Person-centred care, communication skills training and adapted dementia care mapping decreased symptomatic and severe agitation in care homes immediately (SES range 0.3-1.8) and for up to 6 months afterwards (SES range 0.2-2.2). Activities and music therapy by protocol (SES range 0.5-0.6) decreased overall agitation, and sensory intervention decreased clinically significant agitation immediately. Aromatherapy and light therapy did not demonstrate efficacy. Conclusions: There are evidence-based strategies for care homes. Future interventions should focus on consistent and long-term implementation through staff training. Further research is needed for people living in their own homes.