
    Statistical machines for trauma hospital outcomes research: Application to the PRospective, Observational, Multi-center Major trauma Transfusion (PROMMTT) study

    Improving the treatment of trauma, a leading cause of death worldwide, is of great clinical and public health interest. This analysis introduces flexible statistical methods for estimating center-level effects on individual outcomes in the context of highly variable patient populations, such as those of the PRospective, Observational, Multi-center Major trauma Transfusion (PROMMTT) study. Ten US level I trauma centers enrolled a total of 1,245 trauma patients who survived at least 30 minutes after admission and received at least one unit of red blood cells. Outcomes included death, multiple organ failure, substantial bleeding, and transfusion of blood products. The centers involved were classified as either large- or small-volume based on the number of massive transfusion patients enrolled during the study period. We focused on estimation of parameters inspired by causal inference, specifically estimated impacts on patient outcomes related to the volume of the trauma hospital that treated them. We defined this association as the change in mean outcomes of interest that would be observed if, contrary to fact, subjects from large-volume sites were treated at small-volume sites (the effect of treatment among the treated). We estimated this parameter using three different methods, some of which use data-adaptive machine learning tools to derive the outcome models, minimizing residual confounding by reducing model misspecification. Unadjusted and adjusted estimators sometimes differed dramatically, demonstrating the need to account for differences in patient characteristics in center comparisons. In addition, the estimators based on robust adjustment methods showed potential impacts of hospital volume. For instance, we estimated a survival benefit for patients who were treated at large-volume sites, which was not apparent in simpler, unadjusted comparisons. By removing arbitrary modeling decisions from the estimation process and concentrating on parameters that have more direct policy implications, these potentially automated approaches allow methodological standardization across similar comparative effectiveness studies.
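    A minimal sketch of a plug-in ("G-computation") estimator of the effect of treatment among the treated described above: fit an outcome model among small-volume patients, predict counterfactual risk for large-volume patients, and contrast it with their observed risk. The column names (`death_30d`, `large_volume`, `age`, `iss`) and the synthetic data are hypothetical, and the gradient-boosting outcome model merely stands in for the data-adaptive machine learning tools mentioned in the abstract; this is not the authors' exact estimator.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def att_plugin(df, covars, treatment="large_volume", outcome="death_30d"):
    """Plug-in estimate of the effect of treatment among the treated (ATT)."""
    treated = df[df[treatment] == 1]
    control = df[df[treatment] == 0]

    # Outcome model fitted among small-volume (control) patients only.
    model = GradientBoostingClassifier().fit(control[covars], control[outcome])

    # Counterfactual risk for large-volume patients had they, contrary to fact,
    # been treated at small-volume sites, contrasted with their observed risk.
    counterfactual = model.predict_proba(treated[covars])[:, 1].mean()
    return treated[outcome].mean() - counterfactual

# Illustrative synthetic data (not PROMMTT): age and injury severity as covariates.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.uniform(18, 90, n),
    "iss": rng.uniform(1, 75, n),            # injury severity score
    "large_volume": rng.integers(0, 2, n),
})
df["death_30d"] = rng.binomial(1, 0.1 + 0.002 * df["iss"])
print(att_plugin(df, ["age", "iss"]))
```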

    Can Research Assessments Themselves Cause Bias in Behaviour Change Trials? A Systematic Review of Evidence from Solomon 4-Group Studies

    BACKGROUND: The possible effects of research assessments on participant behaviour have attracted research interest, especially in studies with behavioural interventions and/or outcomes. Assessments may introduce bias in randomised controlled trials by altering receptivity to intervention in experimental groups and differentially impacting on the behaviour of control groups. In a Solomon 4-group design, participants are randomly allocated to one of four arms: (1) assessed experimental group; (2) unassessed experimental group; (3) assessed control group; or (4) unassessed control group. This design provides a test of the internal validity of effect sizes obtained in conventional two-group trials by controlling for the effects of baseline assessment and by assessing interactions between the intervention and baseline assessment. The aim of this systematic review is to evaluate evidence from Solomon 4-group studies with behavioural outcomes that baseline research assessments themselves can introduce bias into trials. METHODOLOGY/PRINCIPAL FINDINGS: Electronic databases were searched, supplemented by citation searching. Studies were eligible if they reported appropriately analysed results in peer-reviewed journals and used Solomon 4-group designs in non-laboratory settings with behavioural outcome measures and sample sizes of 20 per group or greater. Ten studies from a range of applied areas were included. There was inconsistent evidence of main effects of assessment, sparse evidence of interactions with behavioural interventions, and a lack of convincing data in relation to the research question for this review. CONCLUSIONS/SIGNIFICANCE: There were too few high quality completed studies to infer conclusively that biases stemming from baseline research assessments do or do not exist. There is, therefore, a need for new rigorous Solomon 4-group studies that are purposively designed to evaluate the potential for research assessments to cause bias in behaviour change trials.
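    A minimal sketch of the 2 x 2 factorial analysis implied by the Solomon 4-group design, in which the main effect of baseline assessment and the assessment-by-intervention interaction are the quantities this review is concerned with. The synthetic data and column names (`assessed`, `intervention`, `outcome`) are illustrative assumptions, not drawn from any of the included studies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative synthetic data: the four arms of a Solomon 4-group design.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "assessed": np.repeat([1, 0, 1, 0], n),       # baseline assessment given?
    "intervention": np.repeat([1, 1, 0, 0], n),   # experimental vs control arm
})
df["outcome"] = 5 + 2 * df["intervention"] + rng.normal(size=4 * n)

# The coefficient on `assessed` estimates the main effect of assessment;
# `assessed:intervention` estimates its interaction with the intervention.
model = smf.ols("outcome ~ assessed * intervention", data=df).fit()
print(model.summary())
```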

    Can Simply Answering Research Questions Change Behaviour? Systematic Review and Meta Analyses of Brief Alcohol Intervention Trials

    BACKGROUND: Participant reports of their own behaviour are critical for the provision and evaluation of behavioural interventions. Recent developments in brief alcohol intervention trials provide an opportunity to evaluate longstanding concerns that answering questions on behaviour as part of research assessments may inadvertently influence it and produce bias. The study objective was to evaluate the size and nature of effects observed in randomized manipulations of the effects of answering questions on drinking behaviour in brief intervention trials. METHODOLOGY/PRINCIPAL FINDINGS: Multiple methods were used to identify primary studies. Between-group differences in total weekly alcohol consumption, quantity per drinking day and AUDIT scores were evaluated in random effects meta-analyses. Ten trials were included in this review, two of which did not provide findings suitable for quantitative study; three outcomes were evaluated. Between-group differences were of the magnitude of 13.7 (-0.17 to 27.6) grams of alcohol per week (approximately 1.5 U.K. units or 1 standard U.S. drink) and 1 point (0.1 to 1.9) in AUDIT score. There was no difference in quantity per drinking day. CONCLUSIONS/SIGNIFICANCE: Answering questions on drinking in brief intervention trials appears to alter subsequent self-reported behaviour. This potentially generates bias by exposing non-intervention control groups to an integral component of the intervention. The effects of brief alcohol interventions may thus have been consistently under-estimated. These findings are relevant to evaluations of any interventions to alter behaviours which involve participant self-report.
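    A minimal sketch of DerSimonian-Laird random-effects pooling of between-group mean differences (e.g. grams of alcohol per week), the kind of model used in these meta-analyses. The per-trial effects and standard errors in the example call are illustrative placeholders, not the review's data.

```python
import numpy as np

def random_effects_pool(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                                   # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)             # Cochran's Q
    dof = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - dof) / c)                     # between-trial variance
    w_star = 1.0 / (ses**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical per-trial differences in weekly consumption (grams) and SEs.
print(random_effects_pool([10.0, 18.5, 12.0], [6.0, 7.5, 5.5]))
```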

    Paternal exposure to Agent Orange and spina bifida: A meta-analysis

    The objective of this study is to conduct a meta-analysis of published and unpublished studies that examine the association between Agent Orange (AO) exposure and the risk of spina bifida. Relevant studies were identified through a computerized literature search of Medline and Embase from 1966 to 2008; a review of the reference lists of retrieved articles and conference proceedings; and by contacting researchers for unpublished studies. Both fixed-effects and random-effects models were used to pool the results of individual studies. The Cochrane Q test and index of heterogeneity (I²) were used to evaluate heterogeneity, and a funnel plot and Egger's test were used to evaluate publication bias. Seven studies, including two Vietnamese and five non-Vietnamese studies, involving 330 cases and 134,884 non-cases were included in the meta-analysis. The overall relative risk (RR) for spina bifida associated with paternal exposure to AO was 2.02 (95% confidence interval [CI]: 1.48-2.74), with no statistical evidence of heterogeneity across studies. Non-Vietnamese studies showed a slightly higher summary RR (RR = 2.22; 95% CI: 1.38-3.56) than Vietnamese studies (RR = 1.92; 95% CI: 1.29-2.86). When analyzed separately, the overall association was statistically significant for the three case-control studies (summary odds ratio = 2.25; 95% CI: 1.31-3.86) and the cross-sectional study (RR = 1.97; 95% CI: 1.31-2.96), but not for the three cohort studies (RR = 2.11; 95% CI: 0.78-5.73). Paternal exposure to AO appears to be associated with a statistically significant increase in the risk of spina bifida.
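    A minimal sketch of fixed-effect pooling of log relative risks together with Cochran's Q, the I² heterogeneity index, and Egger's regression test for funnel-plot asymmetry, mirroring the methods named above. The study-level relative risks and standard errors are illustrative placeholders, not the seven studies analysed here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level relative risks and standard errors of log(RR).
rrs = np.array([2.3, 1.8, 2.6, 1.5, 2.1])
ses = np.array([0.45, 0.30, 0.55, 0.40, 0.35])

log_rr = np.log(rrs)
w = 1.0 / ses**2
pooled = np.sum(w * log_rr) / np.sum(w)            # fixed-effect pooled log RR
se_pooled = np.sqrt(1.0 / np.sum(w))
print("pooled RR:", np.exp(pooled),
      "95% CI:", np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))

q = np.sum(w * (log_rr - pooled) ** 2)             # Cochran's Q
i2 = max(0.0, (q - (len(rrs) - 1)) / q) * 100      # I² as a percentage
print("Q:", q, "I² (%):", i2)

# Egger's test: regress the standardised effect on precision; a non-zero
# intercept suggests small-study (publication) bias.
precision = 1.0 / ses
egger = sm.OLS(log_rr / ses, sm.add_constant(precision)).fit()
print("Egger intercept p-value:", egger.pvalues[0])
```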