Boosting Named Entity Recognition with Neural Character Embeddings
Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive set of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shed light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independent NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes).
Comment: 9 pages
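The abstract describes the core of CharWNN: each token is scored from a word embedding concatenated with a character-level feature vector obtained by convolving and max-pooling over the word's characters. The paper's exact architecture and hyperparameters are not reproduced here, so the PyTorch sketch below is only a rough illustration of such a word-plus-character encoder; the class name, dimensions, and the use of PyTorch itself are assumptions, not the authors' configuration.

```python
# Illustrative sketch of a word + character-level encoder (assumptions, not
# the paper's exact CharWNN architecture or hyperparameters).
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Per-token scores from a word embedding concatenated with a
    character-level feature vector (convolution + max-pool over characters)."""

    def __init__(self, word_vocab, char_vocab, word_dim=100, char_dim=10,
                 char_filters=50, kernel=3, num_classes=10):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # Convolve over each word's character sequence; padding keeps the width.
        self.char_conv = nn.Conv1d(char_dim, char_filters, kernel,
                                   padding=kernel // 2)
        self.classifier = nn.Linear(word_dim + char_filters, num_classes)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, s, w = char_ids.shape
        words = self.word_emb(word_ids)                   # (b, s, word_dim)
        chars = self.char_emb(char_ids.view(b * s, w))    # (b*s, w, char_dim)
        chars = self.char_conv(chars.transpose(1, 2))     # (b*s, filters, w)
        chars = chars.max(dim=2).values.view(b, s, -1)    # max-pool over chars
        return self.classifier(torch.cat([words, chars], dim=-1))  # (b, s, classes)

# Example shapes: 2 sentences of 7 tokens, up to 12 characters per word.
model = CharWordEncoder(word_vocab=20000, char_vocab=100)
scores = model(torch.zeros(2, 7, dtype=torch.long),
               torch.zeros(2, 7, 12, dtype=torch.long))  # -> (2, 7, 10)
```

In the paper these per-token scores feed a sequential classification step over the whole sentence; the sketch stops at the scores.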
Expert Gate: Lifelong Learning with a Network of Experts
In this paper we introduce a model of lifelong learning, based on a Network
of Experts. New tasks / experts are learned and added to the model
sequentially, building on what was learned before. To ensure scalability of
this process, data from previous tasks cannot be stored and hence is not
available when learning a new task. A critical issue in this setting, not
addressed in the literature so far, is deciding which expert to
deploy at test time. We introduce a set of gating autoencoders that learn a
representation for the task at hand, and, at test time, automatically forward
the test sample to the relevant expert. This also brings memory efficiency as
only one expert network has to be loaded into memory at any given time.
Further, the autoencoders inherently capture the relatedness of one task to
another, which is used to select the most relevant prior model for training a
new expert, with fine-tuning or learning-without-forgetting. We
evaluate our method on image classification and video prediction problems.
Comment: CVPR 2017 paper
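The gating mechanism described in the abstract lends itself to a compact illustration: train one undercomplete autoencoder per task, and at test time forward the sample to the expert whose autoencoder reconstructs it with the lowest error. The sketch below assumes PyTorch and a fixed-size feature vector as input; the module names, dimensions, and the plain min-error routing rule are illustrative assumptions rather than the authors' exact design.

```python
# Illustrative sketch of autoencoder-based expert gating (assumptions, not the
# paper's exact design): one undercomplete autoencoder per task, routing by
# lowest reconstruction error.
import torch
import torch.nn as nn

class TaskAutoencoder(nn.Module):
    """Undercomplete autoencoder trained on features from one task."""

    def __init__(self, in_dim=4096, code_dim=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def route_to_expert(feature, autoencoders):
    """Return the index of the task whose autoencoder reconstructs the
    input feature with the lowest mean squared error."""
    with torch.no_grad():
        errors = [((ae(feature) - feature) ** 2).mean().item()
                  for ae in autoencoders]
    return min(range(len(errors)), key=errors.__getitem__)

# At test time only the chosen expert needs to be loaded, e.g.:
# idx = route_to_expert(feat, gates); expert = load_expert(idx)  # hypothetical loader
```

Because only the selected expert network has to be loaded, test-time memory stays constant in the number of tasks, which is the efficiency argument made in the abstract.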