18 research outputs found

    Identifying and addressing conflicting results across multiple discordant systematic reviews on the same question: protocol for a replication study of the Jadad algorithm

    INTRODUCTION: The rapid growth in the number of systematic reviews (SRs) presents notable challenges for decision-makers seeking to answer clinical questions. In 1997, Jadad created an algorithm to assess discordance in results across SRs addressing the same question. Our study aims to (1) replicate assessments done in a sample of studies using the Jadad algorithm to determine whether the same SR would have been chosen, (2) evaluate the Jadad algorithm in terms of utility, efficiency and comprehensiveness, and (3) describe how authors address discordance in results across multiple SRs. METHODS AND ANALYSIS: We will use a database of 1218 overviews (2000–2020), created for a bibliometric study, as the basis of our search for studies assessing discordance (called discordant reviews). That bibliometric study searched MEDLINE (Ovid), Epistemonikos and the Cochrane Database of Systematic Reviews for overviews. We will include any study using the Jadad (1997) algorithm or another method to assess discordance. The first 30 studies screened at the full-text stage by two independent reviewers will be included. We will replicate the authors’ Jadad assessments, compare outcomes qualitatively, and evaluate the differences between our Jadad assessment of discordance and the authors’ assessment. ETHICS AND DISSEMINATION: No ethics approval was required as no human subjects were involved. In addition to publishing in an open-access journal, we will disseminate evidence summaries through formal and informal conferences, academic websites, and social media platforms. This is the first study to comprehensively evaluate and replicate Jadad algorithm assessments of discordance across multiple SRs.

    WISEST (WhIch Systematic Evidence Synthesis is besT) Survey

    The purpose of this survey is to understand how you, as a decision maker (e.g. a student, clinician, researcher or policymaker), use systematic reviews in your decision making or learning. Specifically, when there are multiple systematic reviews on a particular question, do you pick one or more systematic reviews to use or read? Would you use a supporting tool with artificial intelligence (AI) capability, if one were available, to help you assess the strengths and weaknesses of the systematic reviews on your topic of interest?

    Protocol and plan for the development of the automated algorithm for choosing the best systematic review

    We aim to develop an automated algorithm that will help clinicians and decision makers choose between multiple SRs on the same clinical, public health or policy question. Our automated algorithm is intended to have broad application across every field of health research worldwide. We, as an academic group of methodologists and clinicians, would welcome anyone interested in partnering on or funding our multi-year project. Please contact carole dot lunny at ubc dot c