Language-Independent Tokenisation Rivals Language-Specific Tokenisation for Word Similarity Prediction.
Language-independent tokenisation (LIT) methods that do not require labelled
language resources or lexicons have recently gained popularity because of their
applicability in resource-poor languages. Moreover, they compactly represent a
language using a fixed size vocabulary and can efficiently handle unseen or
rare words. On the other hand, language-specific tokenisation (LST) methods
have a long and established history, and are developed using carefully created
lexicons and training resources. Unlike subtokens produced by LIT methods, LST
methods produce valid morphological subwords. Despite the contrasting
trade-offs between LIT and LST methods, their performance on downstream NLP
tasks remains unclear. In this paper, we empirically compare the two approaches
using semantic similarity measurement as an evaluation task across a diverse
set of languages. Our experimental results covering eight languages show that
LST consistently outperforms LIT when the vocabulary size is large, but LIT can
produce comparable or better results than LST in many languages with
comparatively smaller (i.e. less than 100K words) vocabulary sizes, encouraging
the use of LIT when language-specific resources are unavailable, incomplete or
a smaller model is required. Moreover, we find smoothed inverse frequency
(SIF) to be an accurate method for creating word embeddings from subword
embeddings for multilingual semantic similarity prediction tasks. Further
analysis of the nearest neighbours of tokens shows that semantically and
syntactically related tokens are closely embedded in subword embedding spaces.
Comment: To appear in the 12th Language Resources and Evaluation Conference (LREC 2020).
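
The abstract does not spell out how SIF is applied to subword embeddings; the following is a minimal sketch of SIF-style weighted averaging, assuming subword vectors and corpus frequencies are available. The function name, the input layout and the smoothing constant a are chosen here for illustration and are not taken from the paper.

```python
import numpy as np

def sif_embedding(subtokens, embeddings, frequencies, total_count, a=1e-3):
    """Combine subword vectors into one word vector using SIF-style weights.

    Each subword vector is weighted by a / (a + p(subtoken)), where
    p(subtoken) is its relative corpus frequency, so rarer subwords
    contribute more to the resulting word embedding.
    """
    vecs, weights = [], []
    for tok in subtokens:
        if tok in embeddings:
            p = frequencies.get(tok, 0) / total_count
            vecs.append(embeddings[tok])
            weights.append(a / (a + p))
    if not vecs:
        return None
    return np.average(np.stack(vecs), axis=0, weights=weights)

# Hypothetical toy inputs.
emb = {"play": np.ones(3), "##ing": np.zeros(3)}
freq = {"play": 50, "##ing": 500}
word_vec = sif_embedding(["play", "##ing"], emb, freq, total_count=10_000)
```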
A Case Study of Algorithms for Morphosyntactic Tagging of Polish Language
The paper presents an evaluation of several part-of-speech taggers, representing the main tagging algorithms, applied to the corpus of the frequency dictionary of contemporary Polish. We report our results for two tagging schemes: the IPI PAN positional tagset and its simplified version. Tagging accuracy is calculated for different training sets and broken down into several subcategories (accuracy on known and unknown tokens, word segments, sentences, etc.). We also compare our results with those reported for other inflecting and analytic languages, and discuss performance aspects (time demands) of the tagging tools used.
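
As an illustration of one of the reported subcategories, the sketch below computes token-level accuracy split into known and unknown tokens; the data layout and names are assumed here and are not taken from the paper.

```python
def tagging_accuracy(gold, predicted, training_vocab):
    """Token-level accuracy overall and on known vs. unknown tokens.

    `gold` and `predicted` are parallel lists of (token, tag) pairs;
    `training_vocab` is the set of word forms seen during training.
    """
    counts = {"all": [0, 0], "known": [0, 0], "unknown": [0, 0]}
    for (tok, gold_tag), (_, pred_tag) in zip(gold, predicted):
        group = "known" if tok in training_vocab else "unknown"
        for key in ("all", group):
            counts[key][0] += int(gold_tag == pred_tag)
            counts[key][1] += 1
    return {k: (correct / n if n else 0.0) for k, (correct, n) in counts.items()}
```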
Modeling Target-Side Inflection in Neural Machine Translation
NMT systems have problems with large vocabulary sizes. Byte-pair encoding
(BPE) is a popular approach to solving this problem, but while BPE allows the
system to generate any target-side word, it does not enable effective
generalization over the rich vocabulary in morphologically rich languages with
strong inflectional phenomena. We introduce a simple approach to overcome this
problem by training a system to produce the lemma of a word and its
morphologically rich POS tag, which is then followed by a deterministic
generation step. We apply this strategy for English-Czech and English-German
translation scenarios, obtaining improvements in both settings. We furthermore
show that the improvement is not due only to adding explicit morphological
information. Comment: Accepted as a research paper at WMT17. (Updated version with
corrected references.)
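
The abstract only names the deterministic generation step; the sketch below shows one plausible realisation, in which predicted (lemma, morphological tag) pairs are looked up in a morphological dictionary to recover surface forms. The dictionary, the tag name and the example entry are invented for illustration and do not reproduce the paper's actual resources.

```python
def reinflect(lemma_tag_pairs, morph_dict):
    """Deterministic generation: map (lemma, tag) pairs back to surface forms.

    `morph_dict` is assumed to map (lemma, tag) to an inflected form, e.g.
    built from a morphological analyser; unknown pairs fall back to the lemma.
    """
    return [morph_dict.get((lemma, tag), lemma) for lemma, tag in lemma_tag_pairs]

# Hypothetical Czech example with an invented tag label.
morph_dict = {("žena", "NounFemSgInstr"): "ženou"}
print(reinflect([("žena", "NounFemSgInstr")], morph_dict))  # ['ženou']
```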
Data sparsity in highly inflected languages: the case of morphosyntactic tagging in Polish
In morphologically complex languages, many high-level tasks in natural language processing rely on accurate morphosyntactic analyses of the input. However, in light of the risk of error propagation in present-day pipeline architectures for basic linguistic pre-processing, the state of the art for morphosyntactic tagging is still not satisfactory. The main obstacle here is data sparsity inherent to natural language in general and highly inflected languages in particular.
In this work, we investigate whether semi-supervised systems may alleviate the data sparsity problem. Our approach uses word clusters obtained from large amounts of unlabelled text in an unsupervised manner in order to provide a supervised probabilistic tagger with morphologically informed features. Our evaluations on a number of datasets for the Polish language suggest that this simple technique improves tagging accuracy, especially with regard to out-of-vocabulary words. This may prove useful to increase cross-domain performance of taggers, and to alleviate the dependency on large amounts of supervised training data, which is especially important from the perspective of less-resourced languages
- …
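
The abstract does not list the cluster-based features themselves; the following is a minimal sketch of how word-cluster identifiers (for example, Brown-cluster bit strings learned from unlabelled text) might be added to a tagger's feature extraction. The feature templates, prefix lengths and names are assumptions for illustration, not the paper's actual feature set.

```python
def token_features(sentence, i, clusters):
    """Feature extraction for one token, extended with word-cluster features.

    `sentence` is a list of word forms; `clusters` maps lowercased word forms
    to cluster identifiers (e.g. Brown-cluster bit strings). Cluster-prefix
    features let the tagger generalise to out-of-vocabulary words that share
    a cluster with words seen in training.
    """
    word = sentence[i]
    feats = {
        "word": word.lower(),
        "suffix3": word[-3:],
        "is_upper": word[0].isupper(),
    }
    cluster = clusters.get(word.lower())
    if cluster is not None:
        # Use prefixes of the cluster bit string at several granularities.
        for length in (4, 6, 10):
            feats[f"cluster_{length}"] = cluster[:length]
    return feats
```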