Exploiting Arabic Diacritization for High Quality Automatic Annotation
We present a novel technique for Arabic morphological annotation. The technique utilizes diacritization to produce morphological annotations of quality comparable to human annotators. Although Arabic text is generally written without diacritics, diacritization is already available for large corpora of Arabic text in several genres. Furthermore, diacritization can be generated at a low cost for new text as it does not require specialized training beyond what educated Arabic typists know. The basic approach is to enrich the input to a state-of-the-art Arabic morphological analyzer with word diacritics (full or partial) to enhance its performance. When applied to fully diacritized text, our approach produces annotations with an accuracy of over 97% on lemma, part-of-speech, and tokenization combined.
Homograph Disambiguation Through Selective Diacritic Restoration
Lexical ambiguity, a challenging phenomenon in all natural languages, is
particularly prevalent for languages with diacritics that tend to be omitted in
writing, such as Arabic. Omitting diacritics leads to an increase in the number
of homographs: different words with the same spelling. Diacritic restoration
could theoretically help disambiguate these words, but in practice, the
increase in overall sparsity leads to performance degradation in NLP
applications. In this paper, we propose approaches for automatically marking a
subset of words for diacritic restoration, which leads to selective homograph
disambiguation. Compared to full or no diacritic restoration, these approaches
yield selectively-diacritized datasets that balance sparsity and lexical
disambiguation. We evaluate the various selection strategies extrinsically on
several downstream applications: neural machine translation, part-of-speech
tagging, and semantic textual similarity. Our experiments on Arabic show
promising results, where our devised strategies on selective diacritization
lead to a more balanced and consistent performance in downstream applications.Comment: accepted in WANLP 201
Diacritic Recognition Performance in Arabic ASR
We present an analysis of diacritic recognition performance in Arabic
Automatic Speech Recognition (ASR) systems. As most existing Arabic speech
corpora do not contain all diacritical marks, which represent short vowels and
other phonetic information in Arabic script, current state-of-the-art ASR
models do not produce full diacritization in their output. Automatic text-based
diacritization has previously been employed both as a pre-processing step to
train diacritized ASR, or as a post-processing step to diacritize the resulting
ASR hypotheses. It is generally believed that input diacritization degrades ASR
performance, but no systematic evaluation of ASR diacritization performance,
independent of ASR performance, has been conducted to date. In this paper, we
attempt to experimentally clarify whether input diacritization indeed degrades
ASR quality, and to compare the diacritic recognition performance against
text-based diacritization as a post-processing step. We start with pre-trained
Arabic ASR models and fine-tune them on transcribed speech data with different
diacritization conditions: manual, automatic, and no diacritization. We isolate
diacritic recognition performance from the overall ASR performance using
coverage and precision metrics. We find that ASR diacritization significantly
outperforms text-based diacritization in post-processing, particularly when the
ASR model is fine-tuned with manually diacritized transcripts.
Morphological, syntactic and diacritics rules for automatic diacritization of Arabic sentences
The diacritical marks of the Arabic language are characters other than letters and are, in most cases, absent from Arabic writing. This paper presents a hybrid system for automatic diacritization of Arabic sentences combining linguistic rules and statistical treatments. The approach is based on four stages. The first phase consists of a morphological analysis using the second version of the morphological analyzer Alkhalil Morpho Sys. Morphosyntactic outputs from this step are used in the second phase to eliminate invalid word transitions according to the syntactic rules. Then, the system used in the third stage is a discrete hidden Markov model with the Viterbi algorithm to determine the most probable diacritized sentence. Transitions unseen in the training corpus are processed using smoothing techniques. Finally, the last step deals with words not analyzed by the Alkhalil analyzer, for which we use statistical treatments based on the letters. The word error rate of our system is around 2.58% if we ignore the diacritic of the last letter of the word, and around 6.28% when this diacritic is taken into account.
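The third stage (HMM decoding with the Viterbi algorithm) can be sketched as follows. This is a minimal illustration assuming per-word candidate lists from the morphological analysis and smoothed start/transition probabilities from the training corpus; the data structures and the 1e-9 smoothing floor are placeholders, not the paper's exact model.

```python
import math

def viterbi(candidates, trans, start):
    """
    candidates: list of lists; candidates[i] holds the possible diacritized
                forms of word i (as supplied by the morphological analysis).
    trans[(a, b)]: smoothed probability of form b following form a.
    start[a]: probability of form a starting the sentence.
    Returns the most probable diacritized sentence.
    """
    # best[i][form] = (log-prob of best path ending in `form`, backpointer)
    best = [{f: (math.log(start.get(f, 1e-9)), None) for f in candidates[0]}]
    for i in range(1, len(candidates)):
        layer = {}
        for f in candidates[i]:
            prev, score = max(
                ((p, best[i - 1][p][0] + math.log(trans.get((p, f), 1e-9)))
                 for p in candidates[i - 1]),
                key=lambda x: x[1])
            layer[f] = (score, prev)
        best.append(layer)
    # Backtrack from the best final state.
    last = max(best[-1], key=lambda f: best[-1][f][0])
    path = [last]
    for i in range(len(candidates) - 1, 0, -1):
        last = best[i][last][1]
        path.append(last)
    return list(reversed(path))
```

In the real system the transition table comes from a training corpus, with smoothing handling unseen transitions, as the abstract describes.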
Analytical phonetic study of three areas of Al-Farahidiy's legacy
It is the purpose of the present thesis to present an
analytical phonetic study of three areas of al-Farahidiy's linguistic
legacy in a general phonetic perspective in such a way as to
preserve a proper balance between the analytical and historical
sides of our subject, Phonetics. Only three areas have been
decided upon because a comprehensive, analytical study
of al-Farahidiy's linguistic legacy would be a lifetime's work. The thesis is presented in four major sections: an
introduction and an analytical phonetic study of three areas. The
introduction deals in general terms with al-Farahidiy's biography and
his contributions to fields pertinent to Phonetics, though they are
not primarily phonetic. The three areas deal respectively with his
approach to verse structure, the time-substratum underlying his
system, and his restoration of the principles which lie hidden
underneath what I have called (since no other term exists) the 'phoniconic
symbols' of the East Mediterranean scripts. Each
analytical section includes either a theoretical, phonetic discussion
against which al-Farahidiy's contribution is projected in
terms of its relation to the general phonetic spectrum, or
empirical evidence in support of a hypothesis discovered in the construction of his prosodic system. Towards this end, the
first area, following a more or less Stetsonian line, includes a
theoretical view of the articulatory actualization of the 'respiratory
potential' and rhythmicality in Arabic; the second section
is focused on the empirical authentication of the time-units which
underlie his prosodic system, whilst the third section starts with
an analytico-phonetic approach to the East Mediterranean scripts.
The thesis is concluded with a general bibliography of
works that have been cited or consulted, with a special section
allocated to works by or about al-Farahidiy. The author is convinced that the soundest basis for an
understanding of certain phonological phenomena (particularly, the
superimposed stretches, quantity and rhythm) of a living language
with a long history behind it, would be an illumination of the path
of development it has pursued. Such a path, in normal conditions,
is provided by phoneticians or writers on phonetics. It is also
the conviction of the author that for an enlightened attitude
towards the history of phonetics, especially in olden times when
phonetics was a practice not a discipline, an analytical, phonetic
approach to the pertinent writing system constitutes a proper
springboard. For this reason, equal attention has been paid to
the development of the 'pure' iconic and phoniconic writing
systems in Mesopotamia and the East Mediterranean in the prelude to al-Farahidiy's restoration of certain scriptological, phoniconic
principles which lie in the background of the Ugaritic script in
his prosodization of the Arabic script.
Multi-dialect Arabic broadcast speech recognition
Dialectal Arabic speech research suffers from the lack of labelled resources and
standardised orthography. There are three main challenges in dialectal Arabic
speech recognition: (i) finding labelled dialectal Arabic speech data, (ii) training
robust dialectal speech recognition models from limited labelled data and (iii)
evaluating speech recognition for dialects with no orthographic rules. This thesis
is concerned with the following three contributions:
Arabic Dialect Identification: We are mainly dealing with Arabic speech
without prior knowledge of the spoken dialect. Arabic dialects can be sufficiently
diverse that one can argue they are different languages
rather than dialects of the same language. We have two contributions:
First, we use crowdsourcing to annotate a multi-dialectal speech corpus collected
from the Al Jazeera TV channel. We obtained utterance-level dialect labels for 57
hours of high-quality speech, selected from almost 1,000 hours, covering four major
varieties of dialectal Arabic (DA): Egyptian, Levantine, Gulf (Arabian Peninsula),
and North African (Moroccan). Second, we build an Arabic dialect identification
(ADI) system. We explored two main groups of features, namely acoustic
features and linguistic features. For the linguistic features, we look at a wide
range of features, addressing words, characters and phonemes. With respect to
acoustic features, we look at raw features such as mel-frequency cepstral coefficients
combined with shifted delta cepstra (MFCC-SDC), bottleneck features and
the i-vector as a latent variable. We studied both generative and discriminative
classifiers, in addition to deep learning approaches, namely deep neural network
(DNN) and convolutional neural network (CNN). In our work, we propose a
five-class Arabic dialect challenge comprising the four dialects mentioned
above as well as Modern Standard Arabic.
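As a toy illustration of the character-level linguistic features mentioned above, the sketch below classifies an utterance transcript by cosine similarity of character-trigram counts against per-dialect profiles. This simplification is mine; the thesis's actual systems use richer features (MFCC-SDC, bottleneck features, i-vectors) and trained generative, discriminative, and neural classifiers.

```python
# Illustrative nearest-profile dialect identification over character trigrams.
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Count character n-grams (default trigrams) in a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def identify_dialect(utterance_text, profiles):
    """profiles: dialect label -> aggregated n-gram Counter built from
    labelled training text. Returns the closest dialect label."""
    v = char_ngrams(utterance_text)
    return max(profiles, key=lambda d: cosine(v, profiles[d]))
```

In practice the profiles would be aggregated from the labelled corpus described above, and character features would be combined with word, phoneme, and acoustic features.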
Arabic Speech Recognition: We introduce our effort in building Arabic automatic
speech recognition (ASR) and we create an open research community
to advance it. This section has two main goals: First, creating a framework for
Arabic ASR that is publicly available for research. We address our effort in building
two multi-genre broadcast (MGB) challenges. MGB-2 focuses on broadcast
news using more than 1,200 hours of speech and 130M words of text collected
from the broadcast domain. MGB-3, however, focuses on dialectal multi-genre
data with limited non-orthographic speech collected from YouTube, with special
attention paid to transfer learning. Second, building a robust Arabic ASR system
and reporting a competitive word error rate (WER) to use it as a potential
benchmark to advance the state of the art in Arabic ASR. Our overall system is
a combination of five acoustic models (AM): unidirectional long short term memory
(LSTM), bidirectional LSTM (BLSTM), time delay neural network (TDNN),
TDNN layers along with LSTM layers (TDNN-LSTM) and finally TDNN layers
followed by BLSTM layers (TDNN-BLSTM). The AMs are purely
sequence-trained neural networks using the lattice-free maximum mutual
information (LF-MMI) objective. The generated lattices are rescored using a
four-gram language model (LM) and a recurrent neural network with maximum
entropy (RNNME) LM. Our official WER is 13%, the lowest WER reported on this task.
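For reference, the reported WER is word-level edit distance (substitutions, insertions, and deletions) divided by the reference length; a minimal implementation:

```python
def wer(reference, hypothesis):
    """Word error rate: edit distance over word tokens / reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j          # inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)
```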
Evaluation: The third part of the thesis addresses our effort in evaluating dialectal
speech with no orthographic rules. Our methods learn from multiple
transcribers and align the speech hypothesis to overcome the non-orthographic
aspects. Our multi-reference WER (MR-WER) approach is similar to the BLEU
score used in machine translation (MT). We have also automated this process
by learning different spelling variants from Twitter data. We automatically mine
a huge collection of tweets in an unsupervised fashion to build more than
11M n-to-m lexical pairs, and we propose a new evaluation metric: dialectal
WER (WERd). Finally, we estimate the word error rate (e-WER) with
no reference transcription, using decoding and language features. We show that
our word error rate estimation is robust in many scenarios, with and without the
decoding features.
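The WERd idea can be sketched as follows: map dialectal spelling variants on both sides to a canonical form using the learned lexicon, then score with the usual word-level edit distance, so legitimate spelling variants are not counted as errors. The tiny variant map below is illustrative, not mined from Twitter as in the thesis.

```python
def edit_distance(r, h):
    """Standard word-level Levenshtein distance between two token lists."""
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)]

def werd(reference, hypothesis, variants):
    """Dialectal WER: normalize spelling variants to a canonical form, then score."""
    norm = lambda tokens: [variants.get(t, t) for t in tokens]
    r, h = norm(reference.split()), norm(hypothesis.split())
    return edit_distance(r, h) / len(r)
```

With a variant pair such as عايز/عاوز ('want', two common spellings), a hypothesis differing only in that spelling scores 0 under WERd while plain WER would count an error.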
A Computational Lexicon and Representational Model for Arabic Multiword Expressions
The phenomenon of multiword expressions (MWEs) is increasingly recognised as a serious and challenging issue that has attracted the attention of researchers in various language-related disciplines. Research in these many areas has emphasised the primary role of MWEs in the process of analysing and understanding language, particularly in the computational treatment of natural languages. Ignoring MWE knowledge in any NLP system reduces the possibility of achieving high precision outputs. However, despite the enormous wealth of MWE research and language resources available for English and some other languages, research on Arabic MWEs (AMWEs) still faces multiple challenges, particularly in key computational tasks such as extraction, identification, evaluation, language resource building, and lexical representations.
This research aims to remedy this deficiency by extending knowledge of AMWEs and making noteworthy contributions to the existing literature in three related research areas on the way towards building a computational lexicon of AMWEs. First, this study develops a general understanding of AMWEs by establishing a detailed conceptual framework that includes a description of the adopted AMWE concept and its distinctive properties at multiple linguistic levels. Second, for the AMWE extraction and discovery tasks, the study employs a hybrid approach that combines knowledge-based and data-driven computational methods for discovering multiple types of AMWEs. Third, this thesis presents a representational system for AMWEs which consists of multilayer encoding of extensive linguistic descriptions.
This project also paves the way for further in-depth AMWE-aware studies in NLP and linguistics to gain new insights into this complicated phenomenon in standard Arabic. The implications of this research relate to the vital role of the AMWE lexicon, as a new lexical resource, in improving various ANLP tasks, and to the opportunities this lexicon provides for linguists to analyse and explore AMWE phenomena.
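As a small illustration of how such a lexicon could serve downstream ANLP tasks, here is a greedy longest-match MWE identifier over a token stream. The toy lexicon and English example are for readability only; this is my sketch, not the thesis's representational model.

```python
def identify_mwes(tokens, mwe_lexicon, max_len=4):
    """Greedy longest-match identification of multiword expressions.
    Matched spans are joined with underscores into single units."""
    out, i = [], 0
    while i < len(tokens):
        # Try the longest candidate span first, down to two tokens.
        for n in range(min(max_len, len(tokens) - i), 1, -1):
            cand = " ".join(tokens[i:i + n])
            if cand in mwe_lexicon:
                out.append(cand.replace(" ", "_"))
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out
```

A richer lexicon entry (with morphosyntactic variants, as the thesis's multilayer encoding suggests) would let the matcher recognize inflected forms as well.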
Machine Translation of Arabic Dialects
This thesis discusses different approaches to machine translation (MT) from Dialectal Arabic (DA) to English. These approaches handle the varying stages of Arabic dialects in terms of types of available resources and amounts of training data. The overall theme of this work revolves around building dialectal resources and MT systems or enriching existing ones using the currently available resources (dialectal or standard) in order to quickly and cheaply scale to more dialects without the need to spend years and millions of dollars to create such resources for every dialect.
Unlike for Modern Standard Arabic (MSA), DA-English parallel corpora are scarce and available for only a few dialects. Dialects differ from each other and from MSA in orthography, morphology, phonology, and, to a lesser degree, syntax. This means that combining all available parallel data, from dialects and MSA, to train DA-to-English statistical machine translation (SMT) systems might not provide the desired results. Similarly, translating dialectal sentences with an SMT system trained on that dialect only is also challenging due to different factors that affect the sentence word choices against those of the SMT training data. Such factors include the level of dialectness (e.g., code switching to MSA versus dialectal training data), topic (sports versus politics), genre (tweets versus newspaper), script (Arabizi versus Arabic), and the timespan of the test data relative to the training data. The work we present utilizes any available Arabic resource, such as a preprocessing tool or a parallel corpus, whether MSA or DA, to improve DA-to-English translation and expand to more dialects and sub-dialects.
The majority of Arabic dialects have no parallel data to English or to any other foreign language. They also have no preprocessing tools such as normalizers, morphological analyzers, or tokenizers. For such dialects, we present an MSA-pivoting approach where DA sentences are translated to MSA first, then the MSA output is translated to English using the wealth of MSA-English parallel data. Since there is virtually no DA-MSA parallel data to train an SMT system, we build a rule-based DA-to-MSA MT system, ELISSA, that uses morpho-syntactic translation rules along with dialect identification and language modeling components. We also present a rule-based approach to quickly and cheaply build a dialectal morphological analyzer, ADAM, which provides ELISSA with dialectal word analyses.
Other Arabic dialects have relatively small DA-English parallel corpora, amounting to a few million words on the DA side. Some of these dialects have dialect-dependent preprocessing tools that can be used to prepare the DA data for SMT systems. We present techniques to generate synthetic parallel data from the available DA-English and MSA-English data. We use this synthetic data to build statistical and hybrid versions of ELISSA as well as improve our rule-based ELISSA-based MSA-pivoting approach. We evaluate our best MSA-pivoting MT pipeline against three direct SMT baselines trained on these three parallel corpora: DA-English data only, MSA-English data only, and the combination of DA-English and MSA-English data. Furthermore, we leverage these four MT systems (the three baselines along with our MSA-pivoting system) in two system combination approaches that benefit from their strengths while avoiding their weaknesses.
Finally, we propose an approach to model dialects from monolingual data and limited DA-English parallel data without the need for any language-dependent preprocessing tools. We learn DA preprocessing rules using word embeddings and expectation maximization. We test this approach by building a morphological segmentation system, and we evaluate its performance on MT against the state-of-the-art dialectal tokenization tool.
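A rule set of the kind learned by such a procedure (clitic prefixes and suffixes) might be applied as below. The affix lists and the minimum-stem threshold are illustrative placeholders, not the rules actually learned by the EM procedure.

```python
# Illustrative clitic lists; a learned system would supply these per dialect.
PREFIXES = ["وال", "بال", "ال", "و", "ب"]
SUFFIXES = ["هم", "ها", "ه", "ك"]

def segment(word, prefixes=PREFIXES, suffixes=SUFFIXES, min_stem=2):
    """Split off at most one prefix and one suffix, longest match first,
    keeping the remaining stem at least min_stem characters long."""
    parts = []
    for p in sorted(prefixes, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            parts.append(p + "+")
            word = word[len(p):]
            break
    suffix = None
    for s in sorted(suffixes, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            suffix = "+" + s
            word = word[:-len(s)]
            break
    parts.append(word)
    if suffix:
        parts.append(suffix)
    return parts
```

For example, والكتاب ('and the book') splits into the conjunction-plus-article prefix and the stem, while a bare stem passes through unchanged.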