
    Extraction of Information Related to Adverse Drug Events from Electronic Health Record Notes: Design of an End-to-End Model Based on Deep Learning

    BACKGROUND: Pharmacovigilance and drug-safety surveillance are crucial for monitoring adverse drug events (ADEs), but the main ADE-reporting systems, such as the Food and Drug Administration Adverse Event Reporting System, face challenges such as underreporting. Therefore, as complementary surveillance, data on ADEs are extracted from electronic health record (EHR) notes via natural language processing (NLP). As NLP develops, many up-to-date machine-learning techniques have been introduced in this field, such as deep learning and multi-task learning (MTL). However, only a few studies have focused on employing such techniques to extract ADEs. OBJECTIVE: We aimed to design a deep learning model for extracting ADEs and related information such as medications and indications. Since extraction of ADE-related information includes two steps, named entity recognition and relation extraction, our second objective was to improve the deep learning model using multi-task learning between the two steps. METHODS: We employed the dataset from the Medication, Indication and Adverse Drug Events (MADE) 1.0 challenge to train and test our models. This dataset consists of 1089 EHR notes of cancer patients and includes 9 entity types, such as Medication, Indication, and ADE, and 7 types of relations between these entities. To extract information from the dataset, we proposed a deep learning model that uses a bidirectional long short-term memory conditional random field (BiLSTM-CRF) network to recognize entities and a BiLSTM-Attention network to extract relations. To further improve the deep learning model, we employed three typical MTL methods, namely, hard parameter sharing, parameter regularization, and task relation learning, to build three MTL models, called HardMTL, RegMTL, and LearnMTL, respectively. RESULTS: Since extraction of ADE-related information is a two-step task, the result of the second step (ie, relation extraction) was used to compare all models. We used micro-averaged precision, recall, and F1 as evaluation metrics. Our deep learning model achieved state-of-the-art results (F1=65.9%), significantly higher than the best system in the MADE 1.0 challenge (F1=61.7%). HardMTL further improved the F1 by 0.8%, boosting it to 66.7%, whereas RegMTL and LearnMTL failed to boost the performance. CONCLUSIONS: Deep learning models can significantly improve the performance of ADE-related information extraction. MTL may be effective for named entity recognition and relation extraction, but its benefit depends on the methods, data, and other factors. Our results can facilitate research on ADE detection, NLP, and machine learning.
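The micro-averaged metrics used above pool true-positive, predicted, and gold counts across all documents before computing precision, recall, and F1, rather than averaging per-document scores. A minimal sketch in Python (the relation tuples below are hypothetical illustrations, not MADE 1.0 data):

```python
# Micro-averaged precision/recall/F1 over extracted relations.
# A relation is a (head_entity, relation_type, tail_entity) tuple;
# all tuples shown here are hypothetical examples.

def micro_prf1(gold, pred):
    """Pool counts over all documents, then compute P, R, F1."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # exact-match hits
    n_pred = sum(len(p) for p in pred)
    n_gold = sum(len(g) for g in gold)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [
    {("naproxen", "adverse", "rash"), ("naproxen", "reason", "pain")},
    {("cisplatin", "adverse", "nausea")},
]
pred = [
    {("naproxen", "adverse", "rash")},
    {("cisplatin", "adverse", "nausea"), ("cisplatin", "reason", "nausea")},
]
p, r, f1 = micro_prf1(gold, pred)
# p, r, f1 are each 2/3: 2 true positives, 3 predictions, 3 gold relations
```

Micro-averaging weights every relation instance equally, so documents with many relations dominate the score; this matches the pooled counting the challenge evaluation describes.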

    Linking social media, medical literature, and clinical notes using deep learning.

    Researchers analyze data, information, and knowledge through many sources, formats, and methods. The dominant data formats include text and images. In the healthcare industry, professionals generate a large quantity of unstructured data, and the complexity of this data combined with a lack of computational power causes delays in analysis. However, with emerging deep learning algorithms and access to computational resources such as graphics processing units (GPUs) and tensor processing units (TPUs), processing text and images is becoming more accessible. Deep learning algorithms achieve remarkable results in natural language processing (NLP) and computer vision. In this study, we focus on NLP in the healthcare industry and collect data not only from electronic medical records (EMRs) but also from medical literature and social media. We propose a framework for linking social media, medical literature, and EMR clinical notes using deep learning algorithms. Connecting data sources requires defining a link between them, and our key is finding concepts in the medical text. The National Library of Medicine (NLM) provides the Unified Medical Language System (UMLS), and we use this system as the foundation of our own. We recognize social media’s dynamic nature and apply supervised and semi-supervised methodologies to generate concepts. Named entity recognition (NER) allows efficient extraction of information, or entities, from medical literature, and we extend the model to process the EMRs’ clinical notes via transfer learning. The results include an integrated, end-to-end, web-based system solution that unifies social media, literature, and clinical notes, and improves access to medical knowledge for the public and experts.
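NER models of the kind described above typically emit one BIO tag per token, which must be decoded into entity spans before any concept linking can happen. A minimal decoding sketch (the tokens, tags, and entity types below are hypothetical illustrations, not output of the system in the abstract):

```python
# Decoding BIO tags from an NER model into (entity_text, entity_type)
# spans. The example sentence and tags are hypothetical.

def bio_to_spans(tokens, tags):
    """Return (entity_text, entity_type) pairs from BIO-tagged tokens."""
    spans, current, cur_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity starts here
            if current:
                spans.append((" ".join(current), cur_type))
            current, cur_type = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == cur_type:
            current.append(tok)           # continue the open entity
        else:                             # "O" tag or inconsistent "I-"
            if current:
                spans.append((" ".join(current), cur_type))
            current, cur_type = [], None
    if current:                           # flush an entity at sentence end
        spans.append((" ".join(current), cur_type))
    return spans

tokens = ["Patient", "developed", "skin", "rash", "after", "naproxen"]
tags   = ["O", "O", "B-ADE", "I-ADE", "O", "B-Drug"]
spans = bio_to_spans(tokens, tags)
# spans == [("skin rash", "ADE"), ("naproxen", "Drug")]
```

The extracted span texts are then candidates for normalization against a vocabulary such as the UMLS; the decoding step itself is the same regardless of whether the tags come from a model trained on literature or one transferred to clinical notes.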

    Enhanced Neurologic Concept Recognition using a Named Entity Recognition Model based on Transformers

    Although deep learning has been applied to the recognition of diseases and drugs in electronic health records and the biomedical literature, relatively little study has been devoted to the utility of deep learning for the recognition of signs and symptoms. The recognition of signs and symptoms is critical to the success of deep phenotyping and precision medicine. We have developed a named entity recognition model that uses deep learning to identify text spans containing neurological signs and symptoms and then maps these text spans to the clinical concepts of a neuro-ontology. We compared a model based on convolutional neural networks to one based on Bidirectional Encoder Representations from Transformers. Models were evaluated for accuracy of text span identification on three text corpora: physician notes from an electronic health record, case histories from neurologic textbooks, and clinical synopses from an online database of genetic diseases. Both models performed best on the professionally written clinical synopses and worst on the physician-written clinical notes. Both models performed better when signs and symptoms were represented as shorter text spans. Consistent with prior studies that examined the recognition of diseases and drugs, the model based on Bidirectional Encoder Representations from Transformers outperformed the model based on convolutional neural networks for recognizing signs and symptoms. Recall for signs and symptoms ranged from 59.5% to 82.0%, and precision ranged from 61.7% to 80.4%. With further advances in NLP, fully automated recognition of signs and symptoms in electronic health records and the medical literature should be feasible.
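The second step described above, mapping recognized text spans to clinical concepts in an ontology, can be as simple as a normalized string lookup when span texts match ontology labels closely. A minimal sketch (the concept IDs and ontology entries below are hypothetical placeholders, not the actual neuro-ontology from the study):

```python
# Mapping recognized text spans to ontology concepts by normalized
# string lookup. The concept IDs and entries are hypothetical.

ONTOLOGY = {
    "hemiparesis": "NEURO:0001",
    "ataxic gait": "NEURO:0002",
    "dysarthria":  "NEURO:0003",
}

def normalize(span):
    """Lowercase and collapse whitespace so surface variants match."""
    return " ".join(span.lower().split())

def map_spans(spans):
    """Return {span: concept_id or None} for each recognized span."""
    return {s: ONTOLOGY.get(normalize(s)) for s in spans}

result = map_spans(["Hemiparesis", "ataxic  gait", "photophobia"])
# "photophobia" maps to None: it is absent from this toy ontology
```

Real concept normalization usually adds synonym tables and fuzzy or embedding-based matching on top of exact lookup, but the exact-match dictionary is the baseline the fancier methods fall back to.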