Graph Convolutional Networks for Text Classification
Text classification is an important and classical problem in natural language
processing. There have been a number of studies that applied convolutional
neural networks (convolution on regular grid, e.g., sequence) to
classification. However, only a limited number of studies have explored the
more flexible graph convolutional neural networks (convolution on non-grid,
e.g., arbitrary graph) for the task. In this work, we propose to use graph
convolutional networks for text classification. We build a single text graph
for a corpus based on word co-occurrence and document word relations, then
learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text
GCN is initialized with one-hot representations for words and documents; it then
jointly learns the embeddings for both words and documents, as supervised by
the known class labels for documents. Our experimental results on multiple
benchmark datasets demonstrate that a vanilla Text GCN without any external
word embeddings or knowledge outperforms state-of-the-art methods for text
classification. On the other hand, Text GCN also learns predictive word and
document embeddings. In addition, experimental results show that the
improvement of Text GCN over state-of-the-art comparison methods becomes more
prominent as we lower the percentage of training data, suggesting the
robustness of Text GCN to limited training data in text classification.
Comment: Accepted by the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019)
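The construction described above, a single heterogeneous graph over document and word nodes with one-hot initialization followed by graph convolution, can be sketched on a toy corpus. This is a minimal illustration, not the authors' implementation: the two-document corpus is invented, edges are weighted by raw counts rather than the paper's co-occurrence statistics, and the projection matrix W is random instead of learned.

```python
import numpy as np

# Hypothetical toy corpus: 2 documents over a 3-word vocabulary.
docs = [["graph", "network"], ["text", "graph"]]
vocab = sorted({w for d in docs for w in d})
n_docs, n_words = len(docs), len(vocab)
n = n_docs + n_words  # graph nodes: documents first, then words

# Adjacency with self-loops; document-word and word-word edges are
# weighted by raw counts here (a simplification for the sketch).
A = np.eye(n)
for i, d in enumerate(docs):
    for w in d:
        j = n_docs + vocab.index(w)
        A[i, j] += 1
        A[j, i] += 1
for d in docs:  # word-word co-occurrence within a document
    for a in d:
        for b in d:
            if a != b:
                A[n_docs + vocab.index(a), n_docs + vocab.index(b)] += 1

# Symmetric normalization: D^{-1/2} A D^{-1/2}
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# One GCN layer on one-hot node features: H = ReLU(A_hat X W).
X = np.eye(n)                      # one-hot features for every node
rng = np.random.default_rng(0)
W = rng.standard_normal((n, 4))    # random stand-in for a learned weight
H = np.maximum(A_hat @ X @ W, 0)
print(H.shape)  # (5, 4): one 4-dim embedding per document/word node
```

In the full model, W would be trained with a cross-entropy loss on the labeled document nodes, so word and document embeddings are learned jointly as the abstract describes.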
Implementing a Portable Clinical NLP System with a Common Data Model - a Lisp Perspective
This paper presents a Lisp architecture for a portable NLP system, termed
LAPNLP, for processing clinical notes. LAPNLP integrates multiple standard,
customized and in-house developed NLP tools. Our system facilitates portability
across different institutions and data systems by incorporating an enriched
Common Data Model (CDM) to standardize necessary data elements. It utilizes
UMLS to perform domain adaptation when integrating generic domain NLP tools. It
also features stand-off annotations that are specified by positional reference
to the original document. We built an interval tree based search engine to
efficiently query and retrieve the stand-off annotations by specifying
positional requirements. We also developed a utility to convert an inline
annotation format to stand-off annotations to enable the reuse of clinical text
datasets with inline annotations. We experimented with our system on several
NLP facilitated tasks including computational phenotyping for lymphoma patients
and semantic relation extraction for clinical notes. These experiments
showcased the broader applicability and utility of LAPNLP.
Comment: 6 pages, accepted by IEEE BIBM 2018 as a regular paper
REflex: Flexible Framework for Relation Extraction in Multiple Domains
Systematic comparison of methods for relation extraction (RE) is difficult
because many experiments in the field are not described precisely enough to be
completely reproducible and many papers fail to report ablation studies that
would highlight the relative contributions of their various combined
techniques. In this work, we build a unifying framework for RE, applying this
on three highly used datasets (from the general, biomedical and clinical
domains) with the ability to be extendable to new datasets. By performing a
systematic exploration of modeling, pre-processing and training methodologies,
we find that choices of pre-processing are a large contributor to performance and
that omission of such information can further hinder fair comparison. Other
insights from our exploration allow us to provide recommendations for future
research in this area.
Comment: accepted by BioNLP 2019 at the Association for Computational Linguistics
2019