
    Evidence Inference 2.0: More Data, Better Models

    How do we most effectively treat a disease or condition? Ideally, we could consult a database of evidence gleaned from clinical trials to answer such questions. Unfortunately, no such database exists; clinical trial results are instead disseminated primarily via lengthy natural language articles. Perusing all such articles would be prohibitively time-consuming for healthcare practitioners; they instead tend to depend on manually compiled systematic reviews of the medical literature to inform care. NLP may speed this process up, and eventually facilitate immediate consultation of published evidence. The Evidence Inference dataset was recently released to facilitate research toward this end. The task entails inferring the comparative performance of two treatments, with respect to a given outcome, from a particular article (describing a clinical trial), and identifying supporting evidence. For instance: Does this article report that chemotherapy performed better than surgery for five-year survival rates of operable cancers? In this paper, we collect additional annotations to expand the Evidence Inference dataset by 25%, provide stronger baseline models, systematically inspect the errors that these make, and probe dataset quality. We also release an abstract-only (as opposed to full-text) version of the task for rapid model prototyping. The updated corpus, documentation, and code for new baselines and evaluations are available at http://evidence-inference.ebm-nlp.com/. Comment: Accepted as a workshop paper at BioNLP; results updated from SciBERT to Biomed RoBERTa.
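    The inference task this abstract describes can be framed as three-way classification over (intervention, comparator, outcome) prompts paired with an article. A minimal sketch of that framing, with hypothetical names, using a toy confidence-interval rule as a stand-in for a learned model:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Prompt:
        """One evidence-inference query against a single trial article."""
        intervention: str
        comparator: str
        outcome: str

    def label_from_ci(lower: float, upper: float) -> str:
        """Toy stand-in for a learned model: derive the comparative label
        from a 95% CI on the intervention-vs-comparator effect."""
        if lower > 0:
            return "significantly increased"
        if upper < 0:
            return "significantly decreased"
        return "no significant difference"

    p = Prompt("chemotherapy", "surgery", "five-year survival")
    print(p.intervention, "vs", p.comparator, "->", label_from_ci(0.02, 0.11))
    # -> significantly increased
    ```

    A real system would replace `label_from_ci` with a model conditioned on the article text and the prompt; the sketch only illustrates the label space.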

    Understanding Clinical Trial Reports: Extracting Medical Entities and Their Relations

    The best evidence concerning comparative treatment effectiveness comes from clinical trials, the results of which are reported in unstructured articles. Medical experts must manually extract information from articles to inform decision-making, which is time-consuming and expensive. Here we consider the end-to-end task of both (a) extracting treatments and outcomes from full-text articles describing clinical trials (entity identification) and (b) inferring the reported results for the former with respect to the latter (relation extraction). We introduce new data for this task, and evaluate models that have recently achieved state-of-the-art results on similar tasks in Natural Language Processing. We then propose a new method, motivated by how trial results are typically presented, that outperforms these purely data-driven baselines. Finally, we run a fielded evaluation of the model with a non-profit seeking to identify existing drugs that might be re-purposed for cancer, showing the potential utility of end-to-end evidence extraction systems.
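    The two-stage pipeline described above (entity identification, then relation extraction) can be sketched with deliberately simplistic components; the gazetteers and cue-word rules below are illustrative assumptions, not the paper's method, which uses learned models:

    ```python
    import re

    # Hypothetical gazetteers; real systems use trained sequence taggers.
    TREATMENTS = {"aspirin", "placebo", "warfarin"}
    OUTCOMES = {"stroke", "mortality", "bleeding"}

    def extract_entities(sentence: str):
        """Stage (a): identify treatment and outcome mentions."""
        tokens = re.findall(r"[a-z]+", sentence.lower())
        treatments = [t for t in tokens if t in TREATMENTS]
        outcomes = [t for t in tokens if t in OUTCOMES]
        return treatments, outcomes

    def infer_relation(sentence: str) -> str:
        """Stage (b): infer the reported effect direction from cue words."""
        s = sentence.lower()
        if re.search(r"\b(reduced|decreased|lowered)\b", s):
            return "significantly decreased"
        if re.search(r"\b(increased|raised|worsened)\b", s):
            return "significantly increased"
        return "no significant difference"

    sent = "Aspirin significantly reduced stroke risk compared with placebo."
    print(extract_entities(sent), infer_relation(sent))
    ```

    The value of the end-to-end framing is that errors in stage (a) propagate to stage (b), which is exactly what evaluations of such pipelines must measure.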

    Systematic review and network meta-analysis with individual participant data on cord management at preterm birth (iCOMP): study protocol

    Introduction: Timing of cord clamping and other cord management strategies may improve outcomes at preterm birth. However, it is unclear whether benefits apply to all preterm subgroups. Previous and current trials compare various policies, including time-based or physiology-based deferred cord clamping, and cord milking. Individual participant data (IPD) enable exploration of different strategies within subgroups. Network meta-analysis (NMA) enables comparison and ranking of all available interventions using a combination of direct and indirect comparisons.
    Objectives: (1) To evaluate the effectiveness of cord management strategies for preterm infants on neonatal mortality and morbidity, overall and for different participant characteristics, using IPD meta-analysis. (2) To evaluate and rank the effect of different cord management strategies for preterm births on mortality and other key outcomes using NMA.
    Methods and analysis: Systematic searches of Medline, Embase, clinical trial registries, and other sources for all ongoing and completed randomised controlled trials comparing cord management strategies at preterm birth (before 37 weeks’ gestation) have been completed up to 13 February 2019, and will be updated regularly to include additional trials. IPD will be sought for all trials; aggregate summary data will be included where IPD are unavailable. First, deferred clamping and cord milking will be compared with immediate clamping in pairwise IPD meta-analyses. The primary outcome will be death prior to hospital discharge. Effect differences will be explored for prespecified participant subgroups. Second, all identified cord management strategies will be compared and ranked in an IPD NMA for the primary outcome and the key secondary outcomes. Treatment effect differences by participant characteristics will be identified. Inconsistency and heterogeneity will be explored.
    Ethics and dissemination: Ethics approval for this project has been granted by the University of Sydney Human Research Ethics Committee (2018/886). Results will be relevant to clinicians, guideline developers and policy-makers, and will be disseminated via publications, presentations and media releases.
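    The pairwise meta-analyses the protocol describes pool per-trial effect estimates into a single summary. A minimal fixed-effect inverse-variance sketch, with made-up effect sizes (not data from the iCOMP trials):

    ```python
    import math

    def pooled_fixed_effect(effects, ses):
        """Inverse-variance fixed-effect pooling of per-trial effects
        (e.g. log risk ratios) given their standard errors."""
        weights = [1.0 / se ** 2 for se in ses]          # w_i = 1 / SE_i^2
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Illustrative (fabricated) log risk ratios, deferred vs immediate clamping
    effects = [-0.22, -0.05, -0.15]
    ses = [0.10, 0.08, 0.12]
    est, se = pooled_fixed_effect(effects, ses)
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"pooled log RR {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
    ```

    An IPD analysis would instead model participant-level data, and an NMA would add indirect comparisons across the treatment network, but the inverse-variance weighting above is the basic building block both extend.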

    Mechanisms and the Evidence Hierarchy

    Evidence-based medicine (EBM) makes use of explicit procedures for grading evidence for causal claims. Normally, these procedures categorise evidence of correlation produced by statistical trials as better evidence for a causal claim than evidence of mechanisms produced by other methods. We argue, in contrast, that evidence of mechanisms needs to be viewed as complementary to, rather than inferior to, evidence of correlation. In this paper we first set out the case for treating evidence of mechanisms alongside evidence of correlation in explicit protocols for evaluating evidence. Next we provide case studies which exemplify the ways in which evidence of mechanisms complements evidence of correlation in practice. Finally, we put forward some general considerations as to how the two sorts of evidence can be more closely integrated by EBM.

    The problem of evaluating automated large-scale evidence aggregators

    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and implicit reasoning involved, for instance, in the packaging of input evidence? In short: what is the optimal degree of ‘automation’? On the positive side, we propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if this would in fact reduce overall accuracy.

    Hermeneutic single-case efficacy design

    In this article, I outline hermeneutic single-case efficacy design (HSCED), an interpretive approach to evaluating treatment causality in single therapy cases. This approach uses a mixture of quantitative and qualitative methods to create a network of evidence that first identifies direct demonstrations of causal links between therapy process and outcome, and then evaluates plausible nontherapy explanations for apparent change in therapy. I illustrate the method with data from a depressed client who presented with unresolved loss and anger issues.

    Knowledge Extraction and Prediction from Behavior Science Randomized Controlled Trials: A Case Study in Smoking Cessation

    Due to the fast pace at which randomized controlled trials (RCTs) are published in the health domain, researchers, consultants and policymakers would benefit from more automatic ways to process them, by both extracting relevant information and automating the meta-analysis process. In this paper, we present a novel methodology based on natural language processing and reasoning models to (1) extract relevant information from RCTs and (2) predict potential outcome values in novel scenarios, given the extracted knowledge, in the domain of behavior change for smoking cessation.