305 research outputs found

    Systematic Reviews of Genetic Association Studies

    Gurdeep S. Sagoo and colleagues describe key components of the methodology for undertaking systematic reviews and meta-analyses of genetic association studies.

    Data mining in clinical trial text: transformers for classification and question answering tasks

    This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacity for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture’s use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, and highlights how annotations for training named entity recognition systems can be used to train a high-performing, yet flexible, architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient training annotations for PICO entity extraction is tackled by augmentation. All models in this paper were created with the aim of supporting systematic review (semi)automation. They achieve high F1 scores and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
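    The augmentation idea mentioned in the abstract can be illustrated with a minimal sketch: given a small set of PICO entity annotations, new training examples are generated by swapping entity mentions of the same type between sentences. The sentences, labels, and function names below are hypothetical illustrations, not the paper's actual pipeline.

```python
import random

# Toy annotated sentences: (text, list of (start, end, label)).
# The sentences and spans are invented examples, not real training data.
EXAMPLES = [
    ("120 adults with asthma received salbutamol.",
     [(0, 22, "POPULATION"), (32, 42, "INTERVENTION")]),
    ("60 children with eczema received placebo.",
     [(0, 23, "POPULATION"), (33, 40, "INTERVENTION")]),
]

def mentions_by_label(examples):
    """Collect all annotated entity strings, grouped by PICO label."""
    pool = {}
    for text, spans in examples:
        for start, end, label in spans:
            pool.setdefault(label, []).append(text[start:end])
    return pool

def augment(example, pool, rng):
    """Create one new example by replacing each entity with a randomly
    chosen mention of the same label, keeping span offsets consistent."""
    text, spans = example
    new_text, new_spans, offset = text, [], 0
    for start, end, label in spans:
        replacement = rng.choice(pool[label])
        s, e = start + offset, end + offset
        new_text = new_text[:s] + replacement + new_text[e:]
        new_spans.append((s, s + len(replacement), label))
        offset += len(replacement) - (end - start)
    return new_text, new_spans
```

    Each augmented sentence keeps valid character offsets, so it can be fed to a named entity recognition trainer exactly like a hand-annotated one.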

    Systematic Reviews and Meta-Analyses in Neurosurgery Part I: Interpreting and Critically Appraising as a Guide for Clinical Practice

    Neurosurgeons face the Herculean task of keeping abreast of the rapid pace at which clinical research is proliferating. Systematic reviews and meta-analyses (SRMAs) have consequently surged in popularity because, when executed properly, they constitute the highest level of evidence and may save busy neurosurgeons many hours of combing the literature. Well-executed SRMAs may prove instructive for clinical practice, but poorly conducted reviews sow confusion and may potentially cause harm. Unfortunately, many SRMAs within neurosurgery are relatively lackluster in methodological rigor. When neurosurgeons apply the results of a systematic review or meta-analysis to patient care, they should start by evaluating the extent to which the employed methods have likely protected against misleading results. The present article aims to educate the reader about how to interpret an SRMA and to assess the potential relevance of its results in the special context of the neurosurgical patient population.

    Machine learning to assist risk of bias assessments in systematic reviews

    Background: Risk-of-bias assessments are now a standard component of systematic reviews. At present, reviewers need to manually identify relevant parts of research articles for a set of methodological elements that affect the risk of bias, in order to make a risk-of-bias judgement for each of these elements. We investigate the use of text mining methods to automate risk-of-bias assessments in systematic reviews. We aim to identify relevant sentences within the text of included articles, to rank articles by risk of bias, and to reduce the number of risk-of-bias assessments that the reviewers need to perform by hand. Methods: We use supervised machine learning to train two types of models, for each of the three risk-of-bias properties of sequence generation, allocation concealment, and blinding. The first model predicts whether a sentence in a research article contains relevant information. The second model predicts a risk-of-bias value for each research article. We use logistic regression, where each independent variable is the frequency of a word in a sentence or article, respectively. Results: We found that sentences can be successfully ranked by relevance with area under the receiver operating characteristic (ROC) curve (AUC) > 0.98. Articles can be ranked by risk of bias with AUC > 0.72. We estimate that more than 33% of articles can be assessed by just one reviewer, where two reviewers are normally required. Conclusions: We show that text mining can be used to assist risk-of-bias assessments.
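    The model described above is a logistic regression whose features are word frequencies. A minimal sketch of that setup for the sentence-level task, using invented toy sentences and a hand-rolled gradient-descent trainer (the study's actual features, data, and implementation are far larger):

```python
import math
import re

def bag_of_words(text, vocab):
    """Word-frequency feature vector over a fixed vocabulary."""
    words = re.findall(r"[a-z]+", text.lower())
    return [words.count(w) for w in vocab]

def train_logistic(features, labels, lr=0.5, epochs=200):
    """Plain stochastic-gradient-descent logistic regression."""
    n = len(features[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(text, vocab, w, b):
    """Probability that a sentence is relevant to the bias property."""
    x = bag_of_words(text, vocab)
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training sentences: 1 = relevant to sequence generation.
SENTENCES = [
    ("patients were randomised using a computer generated sequence", 1),
    ("allocation used a random number table", 1),
    ("baseline characteristics were similar between groups", 0),
    ("the primary outcome was pain at six weeks", 0),
]
VOCAB = sorted({w for s, _ in SENTENCES for w in re.findall(r"[a-z]+", s)})
X = [bag_of_words(s, VOCAB) for s, _ in SENTENCES]
Y = [y for _, y in SENTENCES]
W, B = train_logistic(X, Y)
```

    Ranking sentences by the predicted probability is what yields the ROC curve reported in the abstract; a reviewer would then read only the top-ranked sentences.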

    CINeMA: Software for semi-automated assessment of the Confidence In the results of Network Meta-Analysis

    Network meta‐analysis (NMA) compares several interventions that are linked in a network of comparative studies and estimates the relative treatment effects between all treatments, using both direct and indirect evidence. NMA is increasingly used for decision making in health care; however, a user‐friendly system to evaluate the confidence that can be placed in the results of NMA is currently lacking. This paper is a tutorial describing the Confidence In Network Meta‐Analysis (CINeMA) web application, which is based on the framework developed by Salanti et al (2014, PLOS One, 9, e99682) and refined by Nikolakopoulou et al (2019, bioRxiv). Six domains that affect the level of confidence in the NMA results are considered: (a) within‐study bias, (b) reporting bias, (c) indirectness, (d) imprecision, (e) heterogeneity, and (f) incoherence. CINeMA is freely available and open source, and no login is required. In the configuration step, users upload their data, produce network plots, and define the analysis and effect measure. The dataset should include assessments of study‐level risk of bias and judgments on indirectness. CINeMA calls the netmeta routine in R to estimate relative effects and heterogeneity. Users are then guided through a systematic evaluation of the six domains. In this way, reviewers rate each relative treatment effect from the NMA as giving rise to “no concerns,” “some concerns,” or “major concerns” in each of the six domains, and these ratings are graphically summarized on the report page for all effect estimates. Finally, judgments across the domains are summarized into a single confidence rating (“high,” “moderate,” “low,” or “very low”). In conclusion, the user‐friendly web‐based CINeMA platform provides a transparent framework to evaluate evidence from systematic reviews with multiple interventions.
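    The final step, collapsing six domain judgements into one rating, can be sketched as follows. The downgrading rule here (each “major concerns” costs one level, every two “some concerns” cost one level) is an illustrative assumption chosen for the sketch, not CINeMA's actual algorithm, which leaves the joint judgement to the reviewers.

```python
LEVELS = ["very low", "low", "moderate", "high"]
DOMAINS = ["within-study bias", "reporting bias", "indirectness",
           "imprecision", "heterogeneity", "incoherence"]

def overall_confidence(judgements):
    """Collapse per-domain judgements into a single confidence rating.

    judgements: dict mapping each domain to 'no concerns',
    'some concerns', or 'major concerns'.
    Illustrative rule only: start at 'high'; each 'major concerns'
    downgrades one level, each 'some concerns' half a level.
    """
    penalty = 0.0
    for domain in DOMAINS:
        judgement = judgements[domain]
        if judgement == "major concerns":
            penalty += 1.0
        elif judgement == "some concerns":
            penalty += 0.5
    index = max(0, len(LEVELS) - 1 - int(penalty))
    return LEVELS[index]
```

    For example, an effect estimate with “major concerns” in imprecision alone would come out “moderate” under this rule, while one with no concerns in any domain stays “high.”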