
    Temporal Trends and Clinical Trial Characteristics Associated with the Inclusion of Women in Heart Failure Trial Steering Committees: A Systematic Review

    Background: Trial steering committees (TSCs) steer the conduct of randomized controlled trials (RCTs). We examined the gender composition of TSCs in impactful heart failure RCTs and explored whether trial leadership by a woman was independently associated with the inclusion of women in TSCs. Methods: We systematically searched MEDLINE, EMBASE, and CINAHL for heart failure RCTs published in journals with an impact factor ≥10 between January 2000 and May 2019. We used the Jonckheere-Terpstra test to assess temporal trends and multivariable logistic regression to explore trial characteristics associated with the inclusion of women in TSCs. Results: Of 403 RCTs that met the inclusion criteria, 127 (31.5%) reported having a TSC, but 20 of these (15.7%) did not identify members. Among the 107 TSCs that listed members, 56 (52.3%) included women, and 6 of these (10.7%) restricted women members to the RCT leaders. Of 1213 TSC members, 11.1% (95% CI, 9.4%-13.0%) were women, with no significant temporal trend (P=0.55). Women had greater odds of TSC inclusion in RCTs led by women (adjusted odds ratio, 2.48 [95% CI, 1.05-8.72]; P=0.042); this association was nonsignificant when the analysis excluded TSCs that restricted women to the RCT leaders (adjusted odds ratio, 1.46 [95% CI, 0.43-4.91]; P=0.36). Conclusions: Women were included in 52.3% of TSCs and represented 11.1% of TSC members in 107 heart failure RCTs, with no change over time since 2000. RCTs led by women had higher adjusted odds of including women in TSCs, partly because RCT leaders included themselves in their TSCs.
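
    To make the two analyses above concrete, the following is a minimal sketch, assuming toy data, of a Jonckheere-Terpstra trend test computed by permutation and a multivariable logistic regression yielding adjusted odds ratios. This is not the authors' code; every variable name and number below is a placeholder.

    # Minimal sketch (not the authors' code): Jonckheere-Terpstra trend test
    # and logistic regression for TSC inclusion of women. All data are toys.
    import numpy as np
    import statsmodels.api as sm

    def jonckheere_terpstra(groups, n_perm=2000, seed=0):
        """JT statistic: Mann-Whitney counts summed over ordered group pairs,
        with a one-sided permutation p-value for an increasing trend (the
        published analysis may have used a different implementation)."""
        def jt_stat(gs):
            total = 0.0
            for i in range(len(gs)):
                for j in range(i + 1, len(gs)):
                    for x in gs[i]:
                        total += np.sum(gs[j] > x) + 0.5 * np.sum(gs[j] == x)
            return total
        observed = jt_stat(groups)
        pooled, sizes = np.concatenate(groups), [len(g) for g in groups]
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            splits = np.split(perm, np.cumsum(sizes)[:-1])
            if jt_stat(splits) >= observed:
                hits += 1
        return observed, (hits + 1) / (n_perm + 1)

    # Toy data: percentage of women per TSC, grouped by ordered time period.
    periods = [np.array([0.0, 10.0, 8.0]),    # earliest period
               np.array([5.0, 12.0, 9.0]),
               np.array([11.0, 7.0, 14.0])]   # latest period
    stat, p = jonckheere_terpstra(periods)
    print(f"JT = {stat:.1f}, permutation P = {p:.3f}")

    # Logistic regression: TSC includes >=1 woman ~ woman-led trial + trial size.
    rng = np.random.default_rng(1)
    woman_led = rng.integers(0, 2, 107)          # hypothetical covariate
    log_size = rng.normal(6.0, 1.0, 107)         # hypothetical covariate
    includes_women = rng.integers(0, 2, 107)     # hypothetical outcome
    X = sm.add_constant(np.column_stack([woman_led, log_size]))
    fit = sm.Logit(includes_women, X).fit(disp=False)
    print(np.exp(fit.params))  # adjusted odds ratios (const, woman_led, log_size)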

    A systematic review of clinical health conditions predicted by machine learning diagnostic and prognostic models trained or validated using real-world primary health care data.

    With advances in technology and data science, machine learning (ML) is being rapidly adopted by the health care sector. However, no literature to date has addressed the health conditions targeted by ML prediction models within primary health care (PHC). To fill this gap, we conducted a systematic review following the PRISMA guidelines to identify health conditions targeted by ML in PHC. We searched the Cochrane Library, Web of Science, PubMed, Elsevier, BioRxiv, Association for Computing Machinery (ACM), and IEEE Xplore databases for studies published from January 1990 to January 2022. We included primary studies addressing ML diagnostic or prognostic predictive models that were supplied completely or partially by real-world PHC data. Study selection, data extraction, and risk of bias assessment using the prediction model risk of bias assessment tool (PROBAST) were performed by two investigators. Health conditions were categorized according to the International Classification of Diseases, 10th revision (ICD-10). Extracted data were analyzed quantitatively. We identified 106 studies investigating 42 health conditions. These studies included 207 ML prediction models supplied with PHC data from 24.2 million participants in 19 countries. We found that 92.4% of the studies were retrospective and that 77.3% reported diagnostic predictive ML models. A majority (76.4%) of the studies developed models without conducting external validation. Risk of bias assessment revealed that 90.8% of the studies were at high or unclear risk of bias. The most frequently studied health conditions were diabetes mellitus (19.8%) and Alzheimer's disease (11.3%). Our study provides a summary of the ML prediction models presently available within PHC. We draw the attention of digital health policy makers, ML model developers, and health care professionals to the need for further interdisciplinary research collaboration in this area.
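
    To illustrate the development-versus-external-validation distinction the review highlights, here is a minimal sketch assuming synthetic data: a diagnostic model is fit on one primary-care cohort and then evaluated, without refitting, on a second cohort with a shifted case mix. It is not code from any included study.

    # Sketch of external validation on synthetic cohorts (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)

    def make_cohort(n, shift=0.0):
        # Synthetic features (age, BMI, HbA1c) and a diabetes-like label.
        X = rng.normal([55, 28, 5.8], [10, 4, 0.8], size=(n, 3)) + shift
        logit = 0.03 * (X[:, 0] - 55) + 0.1 * (X[:, 1] - 28) + 1.5 * (X[:, 2] - 5.8)
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
        return X, y

    X_dev, y_dev = make_cohort(2000)            # development cohort
    X_ext, y_ext = make_cohort(800, shift=0.3)  # external cohort, shifted case mix

    model = LogisticRegression().fit(X_dev, y_dev)
    print("apparent AUC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
    print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))

    Development-only studies report only the first number; the gap between the two is what external validation is designed to expose.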

    Identifying and addressing conflicting results across multiple discordant systematic reviews on the same question: protocol for a replication study of the Jadad algorithm

    INTRODUCTION: The rapid growth of systematic reviews (SRs) presents notable challenges for decision-makers seeking to answer clinical questions. In 1997, Jadad created an algorithm to assess discordance in results across SRs addressing the same question. Our study aims to (1) replicate assessments done in a sample of studies using the Jadad algorithm to determine whether the same SR would have been chosen, (2) evaluate the Jadad algorithm in terms of utility, efficiency and comprehensiveness, and (3) describe how authors address discordance in results across multiple SRs. METHODS AND ANALYSIS: We will use a database of 1218 overviews (2000–2020) created from a bibliometric study as the basis of our search for studies assessing discordance (called discordant reviews). This bibliometric study searched MEDLINE (Ovid), Epistemonikos and the Cochrane Database of Systematic Reviews for overviews. We will include any study using the Jadad (1997) algorithm or another method to assess discordance. The first 30 studies screened at the full-text stage by two independent reviewers will be included. We will replicate the authors' Jadad assessments, compare our outcomes qualitatively, and evaluate the differences between our Jadad assessment of discordance and the authors' assessment. ETHICS AND DISSEMINATION: No ethics approval was required, as no human subjects were involved. In addition to publishing in an open-access journal, we will disseminate evidence summaries through formal and informal conferences, academic websites, and social media platforms. This is the first study to comprehensively evaluate and replicate Jadad algorithm assessments of discordance across multiple SRs.
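
    For orientation, the branching logic of the Jadad algorithm can be paraphrased roughly as in the sketch below. This is a simplified reading for illustration only, not the replication instrument from the protocol, and the field names are hypothetical.

    # Rough, simplified paraphrase of the Jadad (1997) decision logic.
    from dataclasses import dataclass

    @dataclass
    class ReviewPair:
        same_question: bool
        same_trials: bool
        same_eligibility_criteria: bool

    def jadad_branch(pair: ReviewPair) -> str:
        if not pair.same_question:
            return ("Different questions: discordance is expected; "
                    "compare the questions before the results.")
        if pair.same_trials:
            return ("Same trials: examine data extraction, quality "
                    "assessment and analysis; prefer the more rigorous review.")
        if pair.same_eligibility_criteria:
            return ("Same criteria, different trials: examine the search "
                    "strategies and how the criteria were applied.")
        return ("Different eligibility criteria: judge which criteria best "
                "fit the clinical question, then prefer that review.")

    print(jadad_branch(ReviewPair(True, False, True)))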

    WISEST (WhIch Systematic Evidence Synthesis is besT) Survey

    The purpose of this survey is to understand how you, as a decision maker (e.g. student, clinician, researcher or policymaker), use systematic reviews in your decision making or learning. Specifically, when there are multiple systematic reviews on a particular question, do you pick one or more of them to use or read? Would you use a supporting tool with artificial intelligence (AI) capability, if one were available, to help you assess the strengths and weaknesses of the systematic reviews on your topic of interest?

    Protocol and plan for the development of the automated algorithm for choosing the best systematic review

    We aim to develop an automated algorithm to help clinicians and decision makers choose between multiple SRs on the same clinical, public health or policy question. The algorithm is intended to be applicable worldwide, across every field of health research. We, an academic group of methodologists and clinicians, would love to meet with anyone interested in partnering on or funding our multi-year project. Please contact carole dot lunny at ubc dot ca.
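
    As a purely hypothetical illustration of what such an algorithm might do, the sketch below scores candidate SRs on a few machine-extractable quality signals and ranks them; the features, weights and data are our assumptions, not the project's actual design.

    # Hypothetical ranking of systematic reviews by simple quality signals.
    from typing import NamedTuple

    class SRFeatures(NamedTuple):
        name: str
        search_age_years: float      # years since the literature search
        n_databases: int             # breadth of the search
        risk_of_bias_assessed: bool  # e.g. an AMSTAR-2-style check
        protocol_registered: bool

    def score(sr: SRFeatures) -> float:
        s = max(0.0, 5.0 - sr.search_age_years)  # fresher searches score higher
        s += min(sr.n_databases, 5)              # diminishing returns after 5
        s += 3.0 if sr.risk_of_bias_assessed else 0.0
        s += 2.0 if sr.protocol_registered else 0.0
        return s

    candidates = [
        SRFeatures("Review A", 1.0, 4, True, True),
        SRFeatures("Review B", 6.0, 2, False, True),
    ]
    for sr in sorted(candidates, key=score, reverse=True):
        print(f"{sr.name}: score {score(sr):.1f}")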