
    Semi-Supervised Recurrent Neural Network for Adverse Drug Reaction Mention Extraction

    Social media is a useful platform for sharing health-related information due to its vast reach. This makes it a good candidate for public-health monitoring tasks, specifically for pharmacovigilance. We study the problem of extracting Adverse Drug Reaction (ADR) mentions from social media, particularly from Twitter. Medical information extraction from social media is challenging, mainly due to the short and highly informal nature of the text compared to more technical and formal medical reports. Current methods for ADR mention extraction rely on supervised learning, which suffers from the scarcity of labeled data. The state-of-the-art method uses deep neural networks, specifically a class of Recurrent Neural Networks (RNNs) known as Long Short-Term Memory networks (LSTMs) \cite{hochreiter1997long}. Deep neural networks, due to their large number of free parameters, rely heavily on large annotated corpora for learning the end task. In the real world, however, large labeled datasets are hard to obtain, mainly due to the heavy cost of manual annotation. Towards this end, we propose a novel semi-supervised RNN model that can also leverage the unlabeled data available in abundance on social media. Through experiments we demonstrate the effectiveness of our method, achieving state-of-the-art performance in ADR mention extraction. Comment: Accepted at the DTMBIO workshop, CIKM 2017. To appear in BMC Bioinformatics; please cite that version.
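    As a hedged illustration of the kind of semi-supervised setup this abstract describes, the sketch below (plain PyTorch; all class names, heads, and dimensions are assumptions, not the paper's implementation) shows a BiLSTM tagger whose encoder exposes both an unsupervised word-prediction head for pretraining on unlabeled tweets and a supervised tagging head for the smaller ADR-annotated corpus.

```python
# Hypothetical sketch: a BiLSTM tagger with a shared encoder that can first be
# trained on unlabeled tweets (word-prediction head), then fine-tuned on
# ADR-labeled data (tagging head). Not the paper's exact objective.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Auxiliary unsupervised head (a real setup would mask the direction
        # that sees the target word); supervised head predicts ADR tags.
        self.lm_head = nn.Linear(2 * hidden_dim, vocab_size)
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)

    def encode(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return out                                    # (batch, seq_len, 2*hidden)

    def lm_logits(self, token_ids):
        return self.lm_head(self.encode(token_ids))   # used on unlabeled tweets

    def tag_logits(self, token_ids):
        return self.tag_head(self.encode(token_ids))  # used on labeled ADR spans

# Usage sketch: pretrain with lm_logits on unlabeled data, then fine-tune
# with tag_logits + cross-entropy on the annotated corpus.
model = BiLSTMTagger(vocab_size=10000, num_tags=3)
tokens = torch.randint(1, 10000, (2, 20))             # two dummy tweets
print(model.tag_logits(tokens).shape)                 # torch.Size([2, 20, 3])
```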

    Effective Feature Representation for Clinical Text Concept Extraction

    Crucial information about the practice of healthcare is recorded only in free-form text, which creates an enormous opportunity for high-impact NLP. However, annotated healthcare datasets tend to be small and expensive to obtain, which raises the question of how to make maximally efficient use of the available data. To this end, we develop an LSTM-CRF model that combines unsupervised word representations with hand-built feature representations derived from publicly available healthcare ontologies. We show that this combined model yields superior performance on five datasets covering diverse kinds of healthcare text (clinical, social, scientific, commercial). Each involves the labeling of complex, multi-word spans that pick out different healthcare concepts. We also introduce a new labeled dataset for identifying the treatment relations between drugs and diseases.
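    A minimal sketch of the feature-combination idea, assuming PyTorch and invented dimensions: pretrained word embeddings are concatenated with hand-built ontology indicator features before a BiLSTM that produces per-token scores. The paper's full model additionally decodes these scores with a CRF layer, which is omitted here.

```python
# Hypothetical sketch: concatenating unsupervised word embeddings with
# hand-built ontology features before a BiLSTM encoder; the per-token scores
# would serve as emissions for a CRF layer in the full model.
import torch
import torch.nn as nn

class FeatureAugmentedEncoder(nn.Module):
    def __init__(self, vocab_size, feat_dim, num_tags, emb_dim=200, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim + feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.scores = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids, ontology_feats):
        # ontology_feats: (batch, seq_len, feat_dim) indicator features, e.g.
        # "token matches an ontology concept of type X" (assumed encoding).
        x = torch.cat([self.embed(token_ids), ontology_feats], dim=-1)
        out, _ = self.lstm(x)
        return self.scores(out)            # (batch, seq_len, num_tags)

encoder = FeatureAugmentedEncoder(vocab_size=5000, feat_dim=12, num_tags=5)
ids = torch.randint(1, 5000, (2, 15))
feats = torch.zeros(2, 15, 12)
print(encoder(ids, feats).shape)           # torch.Size([2, 15, 5])
```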

    Named Entity Recognition using Neural Networks for Clinical Notes

    Currently, the best performance for Named Entity Recognition in medical notes is obtained by systems based on neural networks. These supervised systems require precise features in order to learn well-fitted models from the training data, for the purpose of recognizing medical entities such as medications and Adverse Drug Events (ADEs). Because this is an important step before training the neural network, we focus our work on building comprehensive word representations (the input of the neural network), combining character-based word representations with word-level representations. The proposed representation improves the performance of the baseline LSTM. However, it does not reach the performance of the top contenders in the challenge on detecting medical entities in clinical notes.
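    A minimal sketch, assuming PyTorch, of the comprehensive word representation the abstract describes: a character-level BiLSTM encoding of each word concatenated with a word-level embedding. Module names and dimensions are illustrative assumptions.

```python
# Hypothetical sketch: building a word representation by concatenating a
# character-level BiLSTM encoding of each word with a word-level embedding.
import torch
import torch.nn as nn

class CharWordRepresentation(nn.Module):
    def __init__(self, word_vocab, char_vocab, word_dim=100, char_dim=25, char_hidden=25):
        super().__init__()
        self.word_embed = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_embed = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True,
                                 bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, s, c = char_ids.shape
        chars = self.char_embed(char_ids).view(b * s, c, -1)
        _, (h, _) = self.char_lstm(chars)                # (2, b*s, char_hidden)
        char_repr = h.transpose(0, 1).reshape(b, s, -1)  # (b, s, 2*char_hidden)
        return torch.cat([self.word_embed(word_ids), char_repr], dim=-1)

rep = CharWordRepresentation(word_vocab=8000, char_vocab=80)
words = torch.randint(1, 8000, (2, 10))
chars = torch.randint(1, 80, (2, 10, 12))
print(rep(words, chars).shape)  # torch.Size([2, 10, 150])
```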

    MR-GNN: Multi-Resolution and Dual Graph Neural Network for Predicting Structured Entity Interactions

    Predicting interactions between structured entities lies at the core of numerous tasks such as drug regimen and new material design. In recent years, graph neural networks have become attractive for this purpose. They represent structured entities as graphs and then extract features from each individual graph using graph convolution operations. However, these methods have some limitations: i) their networks only extract features from a fixed-size subgraph structure (i.e., a fixed-size receptive field) around each node and ignore features in substructures of different sizes, and ii) features are extracted by considering each entity independently, which may not effectively reflect the interaction between two entities. To resolve these problems, we present MR-GNN, an end-to-end graph neural network with the following features: i) it uses a multi-resolution architecture to extract node features from different-sized neighborhoods of each node, and ii) it uses dual graph-state long short-term memory networks (LSTMs) to summarize the local features of each graph and extract the interaction features between pairwise graphs. Experiments conducted on real-world datasets show that MR-GNN improves prediction over state-of-the-art methods. Comment: Accepted by IJCAI 2019.
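    The sketch below is a loose, hedged approximation of the multi-resolution idea in plain PyTorch with a dense adjacency matrix: graph summaries are read out after convolutions of increasing depth for each of the two entity graphs and concatenated to score their interaction. The paper's dual graph-state LSTM summarizer is replaced here by simple mean pooling, and all names and dimensions are assumptions.

```python
# Hypothetical sketch: multi-resolution readout over a dense normalized
# adjacency; summaries from each convolution depth are kept, and the two
# graph vectors are concatenated to score the interaction.
import torch
import torch.nn as nn

class MultiResolutionEncoder(nn.Module):
    def __init__(self, in_dim, hidden_dim=64, num_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1])
                                     for i in range(num_layers)])

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) normalized adjacency.
        summaries = []
        for layer in self.layers:
            x = torch.relu(layer(adj @ x))   # one more hop of neighborhood
            summaries.append(x.mean(dim=0))  # graph summary at this resolution
        return torch.cat(summaries)          # multi-resolution graph vector

class InteractionScorer(nn.Module):
    def __init__(self, in_dim, hidden_dim=64, num_layers=3):
        super().__init__()
        self.encoder = MultiResolutionEncoder(in_dim, hidden_dim, num_layers)
        self.head = nn.Linear(2 * hidden_dim * num_layers, 1)

    def forward(self, g1, g2):
        # g1, g2: (node_features, adjacency) for the two structured entities.
        z = torch.cat([self.encoder(*g1), self.encoder(*g2)])
        return torch.sigmoid(self.head(z))   # interaction probability

scorer = InteractionScorer(in_dim=16)
g = (torch.randn(5, 16), torch.eye(5))
print(scorer(g, g).item())
```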