Lightweight diacritics restoration for V4 languages
Diacritics restoration has become a ubiquitous task in the Latin-alphabet-based, English-dominated Internet language environment. In this article, we describe a small-footprint approach based on 1D convolutions that operates at the character level. The model even runs locally in a web browser, and it surpasses the performance of similarly sized models. We evaluate our model on the languages of the Visegrád Group, with emphasis on Hungarian.
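The character-level framing described above treats restoration as per-character sequence labeling, where a 1D convolution reads a fixed window of surrounding characters at each position. A minimal sketch of that windowed view, with an illustrative window size and a toy Hungarian example (not the paper's actual model or data):

```python
# Sketch of the receptive field of a character-level 1D convolution:
# for each position, the model sees a zero-padded window of neighbors.
# Window size and the example word are illustrative assumptions.

def conv1d_windows(chars, window=5):
    """Return, for each position, the character window a 1D conv would see."""
    pad = window // 2
    padded = ["<pad>"] * pad + list(chars) + ["<pad>"] * pad
    return [padded[i:i + window] for i in range(len(chars))]

# "kerek" (round) vs. "kerék" (wheel): the final character is ambiguous
# without context, which is exactly what the conv window supplies.
windows = conv1d_windows("kerek")
print(windows[4])  # context seen when labeling the last character
```

A trained model would map each such window (after embedding) to the diacritized form of the center character.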
Efficient Convolutional Neural Networks for Diacritic Restoration
Diacritic restoration has gained importance with the growing need for
machines to understand written texts. The task is typically modeled as a
sequence labeling problem and currently Bidirectional Long Short Term Memory
(BiLSTM) models provide state-of-the-art results. Recently, Bai et al. (2018)
show the advantages of Temporal Convolutional Neural Networks (TCN) over
Recurrent Neural Networks (RNN) for sequence modeling in terms of performance
and computational resources. As diacritic restoration benefits from both
previous as well as subsequent timesteps, we further apply and evaluate a
variant of TCN, Acausal TCN (A-TCN), which incorporates context from both
directions (previous and future) rather than strictly incorporating previous
context as in the case of TCN. A-TCN yields significant improvement over TCN
for diacritization in three different languages: Arabic, Yoruba, and
Vietnamese. Furthermore, A-TCN and BiLSTM have comparable performance, making
A-TCN an efficient alternative over BiLSTM since convolutions can be trained in
parallel. A-TCN is significantly faster than BiLSTM at inference time
(270%-334% improvement in the amount of text diacritized per minute).Comment: accepted in EMNLP 201
Take the Hint: Improving Arabic Diacritization with Partially-Diacritized Text
Automatic Arabic diacritization is useful in many applications, ranging from
reading support for language learners to accurate pronunciation prediction for
downstream tasks like speech synthesis. While most of the previous works
focused on models that operate on raw non-diacritized text, production systems
can gain accuracy by first letting humans partly annotate ambiguous words. In
this paper, we propose 2SDiac, a multi-source model that can effectively
support optional diacritics in input to inform all predictions. We also
introduce Guided Learning, a training scheme to leverage given diacritics in
input with different levels of random masking. We show that hints provided at
test time influence the output beyond the annotated positions. Moreover,
experiments on two common benchmarks show that our approach i) greatly
outperforms the baseline also when evaluated on non-diacritized text; and ii)
achieves state-of-the-art results while reducing the parameter count by over
60%. Comment: Arabic text diacritization, partially-diacritized text, Arabic
natural language processing.
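The Guided Learning scheme described in the abstract trains the model on inputs where only a random subset of the reference diacritics is kept as hints. A minimal sketch of that masking step; the masking scheme, keep rate, and diacritic encoding are illustrative assumptions, not the paper's exact recipe:

```python
# Sketch of hint masking for Guided Learning: each gold diacritic is
# kept as an input hint with probability keep_rate, otherwise masked.
# The keep rate and the toy diacritic sequence are assumptions.
import random

def mask_hints(diacritics, keep_rate, rng):
    """Randomly mask gold diacritics, keeping each with prob. keep_rate."""
    return [d if rng.random() < keep_rate else None for d in diacritics]

rng = random.Random(0)
gold = ["a", "u", None, "i"]        # per-character diacritics (None = bare)
hints = mask_hints(gold, 0.5, rng)  # partially-diacritized training input
# Masking can only remove hints, never add them:
print(sum(h is not None for h in hints) <= sum(d is not None for d in gold))
```

Varying `keep_rate` during training exposes the model to everything from fully bare to fully diacritized input, which is what lets it exploit optional hints at test time.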