7,494 research outputs found

    Mapping Unparalleled Clinical Professional and Consumer Languages with Embedding Alignment

    Mapping and translating professional but arcane clinical jargon into consumer language is essential for improving patient-clinician communication. Researchers have used existing biomedical ontologies and consumer health vocabulary dictionaries to translate between the two languages. However, such approaches are limited by the expert effort needed to build the dictionaries manually, which is hard to generalize and scale. In this work, we used an embedding-alignment method to map words between unparalleled clinical professional and consumer language embeddings. To map semantically similar words across two different word-embedding spaces, we first independently trained word embeddings on a corpus rich in clinical professional terms and on another consisting mainly of healthcare consumer terms. We then aligned the embeddings with the Procrustes algorithm, and we also investigated adversarial training with refinement. We evaluated the quality of the alignment through similar-word retrieval, both by computing model precision and through qualitative human judgment. We show that the Procrustes algorithm performs well for aligning professional and consumer language embeddings, whereas adversarial training with refinement may find some relations between the two languages. Comment: Accepted at the 2018 KDD Workshop on Machine Learning for Medicine and Healthcare.
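
    The abstract does not include an implementation; the following is a minimal Python sketch of the orthogonal Procrustes step on two toy embedding matrices, assuming a seed set of anchor words shared by both corpora (the dimensions, anchor count, and variable names are illustrative, not taken from the paper).

    import numpy as np

    def procrustes_align(X, Y):
        """Orthogonal matrix W minimising ||X W - Y||_F (Schoenemann, 1966).

        X : (n, d) source-space embeddings of n anchor words
        Y : (n, d) target-space embeddings of the same anchor words
        """
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt

    # Toy example: two random 50-dimensional spaces related by a rotation.
    rng = np.random.default_rng(0)
    n, d = 200, 50
    src = rng.normal(size=(n, d))                          # stand-in "professional" space
    rotation, _ = np.linalg.qr(rng.normal(size=(d, d)))
    tgt = src @ rotation + 0.01 * rng.normal(size=(n, d))  # stand-in "consumer" space

    W = procrustes_align(src, tgt)
    mapped = src @ W
    cos = np.sum(mapped * tgt, axis=1) / (
        np.linalg.norm(mapped, axis=1) * np.linalg.norm(tgt, axis=1))
    print("mean cosine similarity after alignment:", cos.mean())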

    Word-Level Loss Extensions for Neural Temporal Relation Classification

    Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are used either as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model. This ensures that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features, learned from the unsupervised loss component. We evaluate our approach on the task of temporal relation extraction, in particular narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective yields better task-specific embeddings and an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as a linguistic resource. Comment: Accepted at the 27th International Conference on Computational Linguistics (COLING 2018).
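
    To illustrate how one embedding table can feed both a supervised task loss and an unsupervised auxiliary loss, here is a hedged PyTorch sketch; the skip-gram-with-negative-sampling auxiliary term, the mean-pooling classifier, and all sizes are placeholder choices, not the paper's architecture or exact loss.

    import torch
    import torch.nn as nn

    vocab_size, emb_dim, n_classes = 1000, 64, 3
    embedding = nn.Embedding(vocab_size, emb_dim)          # shared by both objectives
    context_emb = nn.Embedding(vocab_size, emb_dim)        # skip-gram output vectors
    classifier = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(),
                               nn.Linear(128, n_classes))
    task_criterion = nn.CrossEntropyLoss()
    lam = 0.1                                              # weight of the auxiliary term

    def training_step(tokens, labels, centers, contexts, negatives):
        # Supervised loss: mean-pool token embeddings, then classify.
        pooled = embedding(tokens).mean(dim=1)
        task_loss = task_criterion(classifier(pooled), labels)
        # Unsupervised loss: skip-gram with negative sampling on the same table.
        pos = (embedding(centers) * context_emb(contexts)).sum(-1)
        neg = (embedding(centers).unsqueeze(1) * context_emb(negatives)).sum(-1)
        aux_loss = -(nn.functional.logsigmoid(pos).mean()
                     + nn.functional.logsigmoid(-neg).mean())
        return task_loss + lam * aux_loss

    # Dummy batch, just to show the shapes involved.
    tokens = torch.randint(0, vocab_size, (8, 20))         # 8 sentences, 20 tokens each
    labels = torch.randint(0, n_classes, (8,))
    centers = torch.randint(0, vocab_size, (32,))
    contexts = torch.randint(0, vocab_size, (32,))
    negatives = torch.randint(0, vocab_size, (32, 5))
    training_step(tokens, labels, centers, contexts, negatives).backward()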

    Towards Language Agnostic Universal Representations

    When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problems in both languages they are fluent in, even if the math lessons were taught in only one language. However, current representations in machine learning are language-dependent. In this work, we present a method to decouple the language from the problem by learning language-agnostic representations, thereby allowing a model trained in one language to be applied to a different one in a zero-shot fashion. We learn these representations by taking inspiration from linguistics and formalizing Universal Grammar as an optimization process (Chomsky, 2014; Montague, 1970). We demonstrate the capabilities of these representations by showing that models trained on a single language using language-agnostic representations achieve very similar accuracies in other languages.

    Clinically Accurate Chest X-Ray Report Generation

    The automatic generation of radiology reports from medical radiographs has significant potential to operationally improve clinical patient care. A number of prior works have focused on this problem, employing advanced methods from computer vision and natural language generation to produce readable reports. However, these works often fail to account for the particular nuances of the radiology domain, and, in particular, the critical importance of clinical accuracy in the generated reports. In this work, we present a domain-aware automatic chest X-ray radiology report generation system which first predicts what topics will be discussed in the report and then conditionally generates sentences corresponding to these topics. The resulting system is fine-tuned using reinforcement learning, considering both readability and clinical accuracy, as assessed by the proposed Clinically Coherent Reward. We verify this system on two datasets, Open-I and MIMIC-CXR, and demonstrate that our model offers marked improvements on both language-generation metrics and CheXpert-assessed accuracy over a variety of competitive baselines.
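
    The Clinically Coherent Reward and the topic-conditioned decoder are specific to the paper and are not reproduced here; the snippet below is only a generic REINFORCE-style sketch of fine-tuning a generator on a weighted combination of a readability reward and a clinical-accuracy reward, with placeholder reward values.

    import torch

    def reinforce_loss(log_probs, readability_reward, clinical_reward, alpha=0.5):
        """Generic policy-gradient loss for one sampled report.

        log_probs          : (T,) log-probabilities of the sampled tokens
        readability_reward : scalar, e.g. an NLG metric against the reference
        clinical_reward    : scalar, e.g. agreement of extracted findings
        alpha              : trade-off between the two reward terms
        """
        reward = alpha * readability_reward + (1.0 - alpha) * clinical_reward
        # REINFORCE: increase the likelihood of samples that earn high reward.
        return -reward * log_probs.sum()

    # Placeholder values standing in for a sampled report and its rewards.
    log_probs = torch.log(torch.rand(30))          # 30 generated tokens
    loss = reinforce_loss(log_probs, readability_reward=0.4, clinical_reward=0.7)
    print(float(loss))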

    Medical Image Generation using Generative Adversarial Networks

    Generative adversarial networks (GANs) are an unsupervised deep learning approach from the computer vision community that has gained significant attention over the last few years for identifying the internal structure of multimodal medical imaging data. The adversarial network simultaneously generates realistic medical images and corresponding annotations, which has proven useful in many cases such as image augmentation, image registration, medical image generation, image reconstruction, and image-to-image translation. These properties have attracted researchers in the field of medical image analysis, and we are witnessing rapid adoption in many novel and traditional applications. This chapter reviews state-of-the-art progress in GAN-based clinical applications in medical image generation and cross-modality synthesis. The GAN frameworks that have gained popularity in the interpretation of medical images, such as Deep Convolutional GAN (DCGAN), Laplacian GAN (LAPGAN), pix2pix, CycleGAN, and the unsupervised image-to-image translation model (UNIT), and that continue to improve their performance by incorporating additional hybrid architectures, are discussed. Further, some recent applications of these frameworks to image reconstruction and synthesis, as well as future research directions in the area, are covered. Comment: 19 pages, 3 figures, 5 tables.

    The Use of Autoencoders for Discovering Patient Phenotypes

    We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes, which we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed-length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC-III dataset from Beth Israel Deaconess Hospital.
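
    As a rough illustration of the recurrent variant, here is a minimal PyTorch sequence-to-sequence autoencoder whose bottleneck vector plays the role of the patient embedding; the feature count, sequence length, and decoding scheme are assumptions for the sketch, not the paper's exact setup.

    import torch
    import torch.nn as nn

    class Seq2SeqAutoencoder(nn.Module):
        """LSTM encoder-decoder whose bottleneck serves as a patient embedding."""
        def __init__(self, n_features, emb_dim):
            super().__init__()
            self.encoder = nn.LSTM(n_features, emb_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, emb_dim, batch_first=True)
            self.out = nn.Linear(emb_dim, n_features)

        def forward(self, x):                      # x: (batch, timesteps, features)
            _, (h, _) = self.encoder(x)
            z = h[-1]                              # low-dimensional patient embedding
            # Feed the embedding in at every step to reconstruct the sequence.
            repeated = z.unsqueeze(1).repeat(1, x.size(1), 1)
            decoded, _ = self.decoder(repeated)
            return self.out(decoded), z

    # Toy batch: 4 patients, 48 hourly timesteps, 20 clinical variables each.
    x = torch.randn(4, 48, 20)
    model = Seq2SeqAutoencoder(n_features=20, emb_dim=16)
    recon, embedding = model(x)
    loss = nn.functional.mse_loss(recon, x)        # reconstruction objective
    loss.backward()
    print(recon.shape, embedding.shape)            # (4, 48, 20) and (4, 16)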

    Generative Adversarial Network in Medical Imaging: A Review

    Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher-order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue, and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme, with the hope of benefiting researchers interested in this technique. Comment: 24 pages; v4; added missing references from before Jan 1st 2019; accepted to MedI

    Application of Deep Learning on Predicting Prognosis of Acute Myeloid Leukemia with Cytogenetics, Age, and Mutations

    We explore how Deep Learning (DL) can be utilized to predict the prognosis of acute myeloid leukemia (AML). From the TCGA (The Cancer Genome Atlas) database, 94 AML cases are used in this study. Input data include age, 10 common cytogenetic results, and the 23 most common mutation results; the output is the prognosis (diagnosis to death, DTD). In our DL network, autoencoders are stacked to form a hierarchical DL model from which raw data are compressed and organized and high-level features are extracted. The network is written in the R language and is designed to predict the prognosis of AML for a given case (DTD of more or less than 730 days). The DL network achieves an accuracy of 83% in predicting prognosis. As a proof-of-concept study, our preliminary results demonstrate a practical application of DL to the future practice of prognostic prediction using next-generation sequencing (NGS) data. Comment: 11 pages, 1 table, 1 figure. arXiv admin note: substantial text overlap with arXiv:1801.0101
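
    The study's network is written in R and stacks several autoencoders; the Python sketch below is a simplified stand-in in which a single small autoencoder is pretrained on the tabular inputs and a binary DTD classifier is then trained on the learned features (the 34-feature layout, layer sizes, and random labels are assumptions based on the abstract, not the paper's code).

    import torch
    import torch.nn as nn

    n_inputs = 34                                   # 1 age + 10 cytogenetic + 23 mutation features
    x = torch.randn(94, n_inputs)                   # placeholder cohort of 94 cases
    y = torch.randint(0, 2, (94,)).float()          # 1 = DTD > 730 days (illustrative)

    # Stage 1: unsupervised autoencoder compresses the raw inputs.
    encoder = nn.Sequential(nn.Linear(n_inputs, 16), nn.Tanh(), nn.Linear(16, 8))
    decoder = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, n_inputs))
    ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(200):
        ae_opt.zero_grad()
        nn.functional.mse_loss(decoder(encoder(x)), x).backward()
        ae_opt.step()

    # Stage 2: a small classifier on top of the learned 8-dimensional features.
    clf = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
    clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    for _ in range(200):
        clf_opt.zero_grad()
        logits = clf(encoder(x).detach()).squeeze(1)   # encoder kept fixed in this stage
        nn.functional.binary_cross_entropy_with_logits(logits, y).backward()
        clf_opt.step()

    pred = (torch.sigmoid(clf(encoder(x)).squeeze(1)) > 0.5).float()
    print("training accuracy:", (pred == y).float().mean().item())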

    Boosting Deep Learning Risk Prediction with Generative Adversarial Networks for Electronic Health Records

    The rapid growth of Electronic Health Records (EHRs), as well as the accompanying opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interest and attention. Recent progress in the design and application of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model uses a modified generative adversarial network, ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several state-of-the-art baselines. Comment: To appear in ICDM 2017. This is the full version of the paper, with 8 pages.
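
    The PyTorch sketch below only illustrates the augmentation idea, mixing real labeled records with generator-produced, pseudo-labeled records when training a convolutional predictor; the toy generator, the pseudo-labeling rule, and the visit/code dimensions are assumptions and do not reproduce ehrGAN or the paper's semi-supervised scheme.

    import torch
    import torch.nn as nn

    noise_dim, seq_len, n_codes = 32, 24, 100       # 24 visits x 100 medical codes (illustrative)
    generator = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                              nn.Linear(256, seq_len * n_codes), nn.Sigmoid())

    class CNNPredictor(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv1d(n_codes, 32, kernel_size=3, padding=1)
            self.head = nn.Linear(32, 1)
        def forward(self, x):                            # x: (batch, seq_len, n_codes)
            h = torch.relu(self.conv(x.transpose(1, 2))) # (batch, 32, seq_len)
            return self.head(h.mean(dim=2)).squeeze(1)   # one risk logit per patient

    predictor = CNNPredictor()
    real_x = torch.rand(16, seq_len, n_codes)            # real labeled EHR matrices
    real_y = torch.randint(0, 2, (16,)).float()

    # Augmentation step: label generated records with the current predictor
    # (a crude stand-in for the semi-supervised scheme) and train on both sets.
    with torch.no_grad():
        fake_x = generator(torch.randn(16, noise_dim)).view(16, seq_len, n_codes)
        fake_y = (torch.sigmoid(predictor(fake_x)) > 0.5).float()    # pseudo-labels

    x = torch.cat([real_x, fake_x]); y = torch.cat([real_y, fake_y])
    loss = nn.functional.binary_cross_entropy_with_logits(predictor(x), y)
    loss.backward()
    print(float(loss))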

    Large-scale Hierarchical Alignment for Data-driven Text Rewriting

    We propose a simple unsupervised method for extracting pseudo-parallel monolingual sentence pairs from comparable corpora representative of two different text styles, such as news articles and scientific papers. Our approach does not require a seed parallel corpus, but instead relies solely on hierarchical search over pre-trained embeddings of documents and sentences. We demonstrate the effectiveness of our method through automatic and extrinsic evaluation on text simplification from normal Wikipedia to Simple Wikipedia. We show that pseudo-parallel sentences extracted with our method not only supplement existing parallel data, but can even lead to competitive performance on their own. Comment: RANLP 201
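
    A hedged sketch of the two-level idea, matching documents by embedding similarity first and then extracting sentence pairs within matched documents, is given below; the mean-of-sentence-embeddings document representation, the greedy nearest-neighbour search, and the similarity threshold are simplifications for illustration rather than the paper's actual procedure.

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def extract_pseudo_parallel(docs_a, docs_b, sent_threshold=0.7):
        """Two-level alignment over precomputed embeddings.

        docs_a, docs_b : lists of documents; each document is a list of
                         (sentence_text, sentence_embedding) pairs, and its
                         document embedding is the mean of those vectors.
        Returns a list of (sentence_a, sentence_b, similarity) tuples.
        """
        pairs = []
        doc_embs_b = [np.mean([e for _, e in d], axis=0) for d in docs_b]
        for doc_a in docs_a:
            emb_a = np.mean([e for _, e in doc_a], axis=0)
            # Level 1: find the most similar document in the other corpus.
            best = max(range(len(docs_b)), key=lambda j: cosine(emb_a, doc_embs_b[j]))
            # Level 2: within that document, keep sentence pairs above a threshold.
            for sent_a, e_a in doc_a:
                sent_b, e_b = max(docs_b[best], key=lambda se: cosine(e_a, se[1]))
                sim = cosine(e_a, e_b)
                if sim >= sent_threshold:
                    pairs.append((sent_a, sent_b, sim))
        return pairs

    # Toy corpora with random 16-dimensional "embeddings".
    rng = np.random.default_rng(1)
    make_doc = lambda n: [(f"sentence {i}", rng.normal(size=16)) for i in range(n)]
    print(extract_pseudo_parallel([make_doc(3)], [make_doc(4)], sent_threshold=-1.0))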