
    A Corpus for Evidence Based Medicine Summarisation

    Background Automated text summarisers that find the best clinical evidence reported in collections of medical literature are of potential benefit for the practice of Evidence Based Medicine (EBM). Research and development of text summarisers for EBM, however, is impeded by the lack of corpora to train and test such systems. Aims To produce a corpus for research in EBM summarisation. Method We sourced the “Clinical Inquiries” section of the Journal of Family Practice (JFP) and obtained a sizeable sample of questions and evidence based summaries. We further processed the summaries by combining automated techniques, human annotations, and crowdsourcing techniques to identify the PubMed IDs of the references. Results The corpus has 456 questions, 1,396 answer components, 3,036 answer justifications, and 2,908 references. Conclusion The corpus is now available for the research community at http://sourceforge.net/projects/ebmsumcorpus

    Correcting crowdsourced annotations to improve detection of outcome types in evidence based medicine

    The validity and authenticity of annotations in datasets strongly influence the performance of Natural Language Processing (NLP) systems. In other words, poorly annotated datasets are likely to degrade performance on most NLP tasks, misleading consumers of these models, systems, or applications. This is a bottleneck in most domains, especially in healthcare, where crowdsourcing is a popular strategy for obtaining annotations. In this paper, we present a framework that automatically corrects incorrectly captured annotations of outcomes, thereby improving the quality of the crowdsourced annotations. We investigate a publicly available dataset called EBM-NLP, built to power NLP tasks in support of Evidence based Medicine (EBM), primarily focusing on health outcomes

    A Span-based Model for Extracting Overlapping PICO Entities from RCT Publications

    Objectives Extraction of PICO (Populations, Interventions, Comparison, and Outcomes) entities is fundamental to evidence retrieval. We present a novel method, PICOX, to extract overlapping PICO entities. Materials and Methods PICOX first identifies entities by assessing whether a word marks the beginning or end of an entity. It then uses a multi-label classifier to assign one or more PICO labels to each span candidate. PICOX was evaluated against one of the best-performing baselines on the EBM-NLP benchmark and three additional datasets, i.e., the PICO-Corpus and RCT publications on Alzheimer's Disease or COVID-19, using entity-level precision, recall, and F1 scores. Results PICOX achieved superior precision, recall, and F1 scores across the board, with the micro F1 score improving from 45.05 to 50.87 (p ≪ 0.01). On the PICO-Corpus, PICOX obtained higher recall and F1 scores than the baseline and improved the micro recall score from 56.66 to 67.33. On the COVID-19 dataset, PICOX also outperformed the baseline and improved the micro F1 score from 77.10 to 80.32. On the AD dataset, PICOX demonstrated comparable F1 scores with higher precision when compared to the baseline. Conclusion PICOX excels in identifying overlapping entities and consistently surpasses a leading baseline across multiple datasets. Ablation studies reveal that its data augmentation strategy effectively minimizes false positives and improves precision
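    The two-stage decoding described above — a boundary detector proposing span candidates, then a multi-label classifier assigning PICO labels — can be sketched as follows. This is an illustration of the general technique, not the authors' implementation; the token probabilities, classifier scores, and thresholds are hypothetical stand-ins for model outputs.

    ```python
    # Sketch of span-based, multi-label PICO decoding: because each span
    # receives labels independently, spans may overlap in the final output.
    from itertools import product

    LABELS = ("Population", "Intervention", "Comparison", "Outcome")

    def propose_spans(begin_probs, end_probs, threshold=0.5, max_len=6):
        """Pair every likely begin token with every likely end token at or after it."""
        begins = [i for i, p in enumerate(begin_probs) if p >= threshold]
        ends = [j for j, p in enumerate(end_probs) if p >= threshold]
        return [(i, j) for i, j in product(begins, ends) if 0 <= j - i < max_len]

    def label_spans(spans, span_scores, threshold=0.5):
        """Multi-label step: each span keeps every label scoring above threshold."""
        results = []
        for span in spans:
            labels = [lab for lab in LABELS
                      if span_scores.get((span, lab), 0.0) >= threshold]
            if labels:
                results.append((span, labels))
        return results

    # Toy per-token boundary probabilities for:
    # "aspirin reduced mortality in elderly patients"
    tokens = ["aspirin", "reduced", "mortality", "in", "elderly", "patients"]
    begin_p = [0.9, 0.1, 0.8, 0.0, 0.9, 0.1]
    end_p   = [0.9, 0.1, 0.8, 0.0, 0.2, 0.9]

    spans = propose_spans(begin_p, end_p)
    # Hypothetical classifier scores for a few of the candidate spans.
    scores = {((0, 0), "Intervention"): 0.95,
              ((2, 2), "Outcome"): 0.90,
              ((4, 5), "Population"): 0.92}
    for (i, j), labels in label_spans(spans, scores):
        print(" ".join(tokens[i:j + 1]), "->", labels)
    ```

    Decoupling span proposal from labeling is what allows overlap: a token can sit inside several proposed spans, and each span is labeled on its own.
    
    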

    Automation tools to support undertaking scoping reviews.

    This paper describes several automation tools and software that can be considered during evidence synthesis projects and provides guidance for their integration into the conduct of scoping reviews. The guidance presented in this work is adapted from the results of a scoping review and consultations with the JBI Scoping Review Methodology group. This paper describes several reliable, validated automation tools and software that can be used to enhance the conduct of scoping reviews. Developments in the automation of systematic reviews, and more recently scoping reviews, are continuously evolving. We detail several helpful tools in order of the key steps recommended by the JBI methodological guidance for undertaking scoping reviews, including team establishment, protocol development, searching, de-duplication, screening titles and abstracts, data extraction, data charting, and report writing. While we include several reliable tools and software that can be used for the automation of scoping reviews, the tools mentioned have some limitations. For example, some are available in English only, and their lack of integration with other tools results in limited interoperability. This paper highlights several useful automation tools and software programs for undertaking each step of a scoping review. This guidance has the potential to inform collaborative efforts aimed at the development of evidence-informed, integrated automation tools and software packages for enhancing the conduct of high-quality scoping reviews

    Cloud-based Meta-analysis to Bridge Science and Practice: Welcome to metaBUS

    Although volumes have been written on spanning the science-practice gap in applied psychology, surprisingly few tangible components of that bridge have actually been constructed. We describe the metaBUS platform, which addresses three challenges of one gap contributor: information overload. In particular, we describe challenges stemming from: (1) lack of access to research findings, (2) lack of an organizing map of topics studied, and (3) lack of interpretation guidelines for research findings. For each challenge, we show how metaBUS, which provides an advanced search and synthesis engine currently covering more than 780,000 findings from 9,000 studies, can provide the building blocks needed to move beyond the engineering design phase and toward construction, generating rapid, first-pass meta-analyses on virtually any topic to inform both research and practice. We provide an Internet link to access a preliminary version of the metaBUS interface and provide two brief demonstrations illustrating its functionality
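    The "rapid, first-pass meta-analysis" mentioned above can be illustrated with a minimal sketch (not metaBUS's actual code): a sample-size-weighted mean correlation and the observed variance of correlations across studies, in the spirit of a bare-bones Hunter-Schmidt meta-analysis. The example correlations and sample sizes are hypothetical.

    ```python
    # Bare-bones first-pass meta-analysis: weight each study's correlation
    # by its sample size, then report the weighted mean and the weighted
    # variance of observed correlations around that mean.
    def bare_bones_meta(studies):
        """studies: list of (r, n) pairs, one correlation per primary study."""
        total_n = sum(n for _, n in studies)
        mean_r = sum(r * n for r, n in studies) / total_n
        # Sample-size-weighted variance of the observed correlations.
        var_r = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n
        return mean_r, var_r

    # Hypothetical findings relating, say, job satisfaction to performance.
    findings = [(0.30, 120), (0.25, 80), (0.40, 200)]
    mean_r, var_r = bare_bones_meta(findings)
    print(f"weighted mean r = {mean_r:.3f}, observed variance = {var_r:.4f}")
    ```

    A full meta-analysis would additionally correct for sampling error and artifacts such as measurement unreliability; the point here is only the shape of the aggregation step.
    
    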

    Mapping Design Contributions in Information Systems Research: The Design Research Activity Framework

    Despite growing interest in design science research in information systems, our understanding about what constitutes a design contribution and the range of research activities that can produce design contributions remains limited. We propose the design research activity (DRA) framework for classifying design contributions based on the type of statements researchers use to express knowledge contributions and the researcher role with respect to the artifact. These dimensions combine to produce a DRA framework that contains four quadrants: construction, manipulation, deployment, and elucidation. We use the framework in two ways. First, we classify design contributions that the Journal of the Association for Information Systems (JAIS) published from 2007 to 2019 and show that the journal published a broad range of design research across all four quadrants. Second, we show how one can use our framework to analyze the maturity of design-oriented knowledge in a specific field as reflected in the degree of activity across the different quadrants. The DRA framework contributes by showing that design research encompasses both design science research and design-oriented behavioral research. The framework can help authors and reviewers assess research with design implications and help researchers position and understand design research as a journey through the four quadrants