Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks
Motivated by the need to automate medical information extraction from
free-text radiological reports, we present a bi-directional long short-term
memory (BiLSTM) neural network architecture for modelling radiological
language. The model has been used to address two NLP tasks: medical
named-entity recognition (NER) and negation detection. We investigate whether
learning several types of word embeddings improves BiLSTM's performance on
those tasks. Using a large dataset of chest x-ray reports, we compare the
proposed model to a baseline dictionary-based NER system and a negation
detection system that leverages the hand-crafted rules of the NegEx algorithm
and the grammatical relations obtained from the Stanford Dependency Parser.
Compared to these more traditional rule-based systems, we argue that the
BiLSTM offers a strong alternative for both of our tasks.

Comment: LOUHI 2016 conference proceedings