The interaction of knowledge sources in word sense disambiguation
Word sense disambiguation (WSD) is a computational linguistics task likely to benefit from the tradition of combining different knowledge sources in artificial intelligence research. An important step in exploring this hypothesis is to determine which linguistic knowledge sources are most useful and whether their combination leads to improved results.
We present a sense tagger that uses several knowledge sources. Tested accuracy exceeds 94% on our evaluation corpus. Our system attempts to disambiguate all content words in running text rather than limiting itself to a restricted vocabulary of words. We argue that this approach is more likely to assist the creation of practical systems.
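The combination idea above can be illustrated with a minimal sketch. This is a hypothetical weighted-voting combiner, not the paper's tagger; the source names, senses, and weights are illustrative assumptions.

```python
# Hypothetical sketch: combining independent knowledge sources for WSD
# by weighted voting. Source names and weights are illustrative only.
from collections import defaultdict


def combine_votes(candidate_senses, source_votes, weights):
    """Score each candidate sense by summing the weighted votes
    cast by the individual knowledge sources; return the best sense."""
    scores = defaultdict(float)
    for source, sense in source_votes.items():
        if sense in candidate_senses:
            scores[sense] += weights.get(source, 1.0)
    # When no source votes, this falls back to the first listed sense.
    return max(candidate_senses, key=lambda s: scores[s])


votes = {"part_of_speech": "bank/finance",
         "collocations": "bank/finance",
         "selectional_prefs": "bank/river"}
weights = {"part_of_speech": 1.0, "collocations": 2.0,
           "selectional_prefs": 1.5}
best = combine_votes(["bank/finance", "bank/river"], votes, weights)
# "bank/finance" wins with a combined score of 3.0 versus 1.5.
```

A real system would replace the hand-set weights with weights learned from a sense-tagged corpus.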
Multilingual Language Processing From Bytes
We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads
text as bytes and outputs span annotations of the form [start, length, label]
where start positions, lengths, and labels are separate entries in our
vocabulary. Because we operate directly on unicode bytes rather than
language-specific words or characters, we can analyze text in many languages
with a single model. Due to the small vocabulary size, these multilingual
models are very compact, but produce results similar to or better than the
state-of-the-art in Part-of-Speech tagging and Named Entity Recognition that
use only the provided training datasets (no external data sources). Our models
are learning "from scratch" in that they do not rely on any elements of the
standard pipeline in Natural Language Processing (including tokenization), and
thus can run in a standalone fashion on raw text.
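The byte-level span representation described above can be sketched as follows. This is an illustrative example of the [start, length, label] annotation format over UTF-8 bytes, not the authors' implementation; the helper names are assumptions.

```python
# Illustrative sketch (not the BTS model itself): text is represented as
# UTF-8 bytes, and a span annotation is (start, length, label), where
# start and length count bytes rather than characters.

def to_bytes(text):
    """Return the UTF-8 byte values the model would read as input."""
    return list(text.encode("utf-8"))


def annotate(text, start, length, label):
    """Build a span annotation and recover its surface form by
    slicing on byte offsets."""
    raw = text.encode("utf-8")
    surface = raw[start:start + length].decode("utf-8")
    return (start, length, label), surface


text = "Zürich is in Switzerland"
byte_ids = to_bytes(text)
span, surface = annotate(text, 0, 7, "LOC")
# "Zürich" spans 7 bytes, not 6 characters, because "ü" is two bytes
# in UTF-8 -- byte offsets make the scheme language-independent.
```

Because offsets are in bytes, the same output vocabulary works unchanged for any language or script.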
Spanish named entity recognition in the biomedical domain
Named Entity Recognition in the clinical domain, and in languages other than English, is made difficult by the absence of complete dictionaries, the informality of the texts, the polysemy of terms, the lack of agreement on entity boundaries, and the scarcity of corpora and other available resources. We present a Named Entity Recognition method for poorly resourced languages. The method was tested on Spanish radiology reports and compared with a conditional random fields system.
Joint Entity Extraction and Assertion Detection for Clinical Text
Negative medical findings are prevalent in clinical reports, yet
discriminating them from positive findings remains a challenging task for
information extraction. Most of the existing systems treat this task as a
pipeline of two separate tasks, i.e., named entity recognition (NER) and
rule-based negation detection. We consider this as a multi-task problem and
present a novel end-to-end neural model to jointly extract entities and
negations. We extend a standard hierarchical encoder-decoder NER model and
first adopt a shared encoder followed by separate decoders for the two tasks.
This architecture performs considerably better than the previous rule-based and
machine learning-based systems. To overcome the problem of increased parameter
size especially for low-resource settings, we propose the Conditional Softmax
Shared Decoder architecture which achieves state-of-art results for NER and
negation detection on the 2010 i2b2/VA challenge dataset and a proprietary
de-identified clinical dataset.
Accepted at the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).
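The shared-encoder/conditional-decoder idea can be sketched numerically. This is a minimal NumPy illustration under stated assumptions (random weights, made-up layer sizes), not the authors' model: the negation decoder's softmax is conditioned on the predicted entity label, which is the core of the conditional shared-decoder design.

```python
# Minimal sketch (assumptions, not the paper's code): one shared encoder
# state feeds two decoders; the negation softmax additionally sees a
# one-hot encoding of the predicted entity label.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, N_ENT, N_NEG = 8, 5, 3  # illustrative sizes

W_ent = rng.normal(size=(HIDDEN, N_ENT))
# The negation decoder consumes the encoder state plus the entity label,
# so it needs only a small extra weight matrix -- this is what keeps the
# parameter count low in low-resource settings.
W_neg = rng.normal(size=(HIDDEN + N_ENT, N_NEG))


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


def decode(h):
    """h: shared encoder state for one token; returns (entity, negation)."""
    p_ent = softmax(h @ W_ent)
    ent = int(p_ent.argmax())
    one_hot = np.eye(N_ENT)[ent]
    p_neg = softmax(np.concatenate([h, one_hot]) @ W_neg)
    return ent, int(p_neg.argmax())


ent_label, neg_label = decode(rng.normal(size=HIDDEN))
```

Conditioning the second task on the first task's prediction, rather than training a full second decoder, is what distinguishes this from an ordinary two-decoder multi-task setup.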