Universal Lemmatizer: A sequence-to-sequence model for lemmatizing Universal Dependencies treebanks
In this paper, we present a novel lemmatization method based on a sequence-to-sequence neural network architecture and a morphosyntactic context representation. In the proposed method, our context-sensitive lemmatizer generates the lemma one character at a time based on the surface form characters and the morphosyntactic features obtained from a morphological tagger. We argue that a sliding window context representation suffers from sparseness, while in the majority of cases the morphosyntactic features of a word carry enough information to resolve lemma ambiguities, keeping the context representation dense and more practical for machine learning systems. Additionally, we study two different data augmentation methods, utilizing autoencoder training and morphological transducers, which are especially beneficial for low-resource languages. We evaluate our lemmatizer on 52 different languages and 76 different treebanks, showing that our system outperforms all of the latest baseline systems. Compared to the best overall baseline, UDPipe Future, our system is better on 62 out of 76 treebanks, reducing errors by 19% relative on average. The lemmatizer together with all trained models is made available as a part of the Turku-neural-parsing-pipeline under the Apache 2.0 license.
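The abstract's key design choice is to replace a sliding window of neighboring words with the word's own morphosyntactic features as the context signal. A minimal sketch of what such a dense input sequence might look like, with illustrative (hypothetical) helper and tag names, not the paper's actual encoding:

```python
# Hypothetical sketch: the input a context-sensitive seq2seq lemmatizer
# might consume. Instead of a sparse sliding window of surrounding words,
# the sequence is the surface characters followed by the word's
# morphosyntactic tags from a tagger. Names here are illustrative.

def build_input(surface, upos, feats):
    """Return one input sequence: characters, then tag symbols."""
    chars = list(surface)
    tags = [f"UPOS={upos}"] + [f"{k}={v}" for k, v in sorted(feats.items())]
    return chars + tags

seq = build_input("walked", "VERB", {"Tense": "Past", "VerbForm": "Fin"})
print(seq)
# ['w', 'a', 'l', 'k', 'e', 'd', 'UPOS=VERB', 'Tense=Past', 'VerbForm=Fin']
```

The decoder would then emit the lemma ("walk") one character at a time, conditioned on this sequence; the tag symbols stay fixed in size no matter how large the surrounding sentence context is, which is the density argument the abstract makes.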
Enhancing Sequence-to-Sequence Neural Lemmatization with External Resources
We propose a novel hybrid approach to lemmatization that enhances the seq2seq
neural model with additional lemmas extracted from an external lexicon or a
rule-based system. During training, the enhanced lemmatizer learns both to
generate lemmas via a sequential decoder and to copy lemma characters from the
external candidates supplied at run time. Our lemmatizer, enhanced with
candidates extracted from the Apertium morphological analyzer, achieves
statistically significant improvements over baseline models that do not
use additional lemma information, and reaches an average accuracy of 97.25%
on a set of 23 UD languages, 0.55% higher than that obtained with the
Stanford Stanza model on the same set of languages. We also compare with other
methods of integrating external data into lemmatization and show that our
enhanced system performs considerably better than a simple lexicon extension
method based on the Stanza system, and that it achieves complementary improvements
with respect to the data augmentation method.
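The mechanism described above gives the decoder two kinds of actions at every step: generate a character from the vocabulary, or copy the next character of an externally supplied candidate lemma. A toy sketch of that combined action space, with stand-in scores in place of the network's learned probabilities (all names are illustrative, not the paper's API):

```python
# Hypothetical sketch of the enhanced decoder's action space: at each
# step the model may GENERATE a character from the vocabulary or COPY
# the next character of an external candidate lemma (e.g. one proposed
# by a morphological analyzer such as Apertium). The scores are
# stand-ins for learned probabilities.

def decode_step(gen_scores, copy_scores, candidates, positions):
    """Pick the single highest-scoring action across both action types."""
    actions = [("GEN", ch, s) for ch, s in gen_scores.items()]
    for i, cand in enumerate(candidates):
        if positions[i] < len(cand):  # candidate i not yet exhausted
            actions.append(("COPY", cand[positions[i]], copy_scores[i]))
    return max(actions, key=lambda a: a[2])

# With a confident copy score, the decoder copies 'w' from "walk":
act = decode_step({"a": 0.2, "b": 0.1}, [0.6], ["walk"], [0])
print(act)  # ('COPY', 'w', 0.6)
```

In training, the model sees both action types and learns when to trust a candidate; at test time a reliable analyzer candidate lets the decoder fall back to copying instead of spelling the lemma out character by character.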
Imitation Learning for Neural Morphological String Transduction
We employ imitation learning to train a neural transition-based string
transducer for morphological tasks such as inflection generation and
lemmatization. Previous approaches to training this type of model either rely
on an external character aligner for the production of gold action sequences,
which results in a suboptimal model due to the unwarranted dependence on a
single gold action sequence despite spurious ambiguity, or require warm
starting with an MLE model. Our approach only requires a simple expert policy,
eliminating the need for a character aligner or warm start. It also addresses
familiar MLE training biases and leads to strong and state-of-the-art
performance on several benchmarks.
Comment: 6 pages; accepted to EMNLP 2018
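The "simple expert policy" that replaces the character aligner is the core of this approach: given the current state, the expert names a good next edit action, and the learner is trained to imitate (and eventually explore around) it. A greedy toy version of such an expert, purely illustrative of the interface rather than the paper's actual policy:

```python
# Hypothetical sketch of a simple expert policy for transition-based
# string transduction: given the remaining input and the remaining
# target, the expert returns one cheap next action (COPY, SUB, DELETE,
# INSERT, or STOP). A real expert would derive this from edit-distance
# computations; this greedy version only illustrates the interface.

def expert_action(remaining_input, remaining_target):
    if not remaining_target:
        return ("DELETE",) if remaining_input else ("STOP",)
    if remaining_input and remaining_input[0] == remaining_target[0]:
        return ("COPY",)
    if remaining_input:
        return ("SUB", remaining_target[0])
    return ("INS", remaining_target[0])

# Lemmatizing "walked" -> "walk": copy w, a, l, k, then delete e, d.
print(expert_action("walked", "walk"))  # ('COPY',)
print(expert_action("ed", ""))          # ('DELETE',)
```

Because the expert can score any state, not just states along one gold action sequence, training need not commit to a single aligner-produced edit script, which is how this setup sidesteps the spurious-ambiguity problem the abstract describes.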