Joint morphological-lexical language modeling for processing morphologically rich languages with application to dialectal Arabic
Language modeling for an inflected language
such as Arabic poses new challenges for speech recognition and
machine translation due to its rich morphology. Rich morphology
results in large increases in out-of-vocabulary (OOV) rate and
poor language model parameter estimation in the absence of large
quantities of data. In this study, we present a joint
morphological-lexical language model (JMLLM) that takes
advantage of Arabic morphology. JMLLM combines
morphological segments with the underlying lexical items,
together with additional available information sources about
both morphological segments and lexical items, in a single joint model.
Joint representation and modeling of morphological and lexical
items reduces the OOV rate and provides smooth probability
estimates while keeping the predictive power of whole words.
Speech recognition and machine translation experiments on
dialectal Arabic show improvements over word-based and
morpheme-based trigram language models. We also show that as the
tightness of integration between the different information sources
increases, both speech recognition and machine translation
performance improve.
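The OOV reduction from morpheme-level modeling can be made concrete with a minimal sketch. This is not the paper's JMLLM; the segmenter below is a toy prefix/suffix splitter and the vocabulary is invented purely for illustration.

```python
# Toy illustration (not the paper's model): segmenting words into morphemes
# lets unseen surface forms be covered by seen sub-word units, lowering OOV.

def segment(word, prefixes=("al",), suffixes=("at", "in")):
    """Toy morphological segmenter: strip at most one known prefix/suffix."""
    parts = []
    for p in prefixes:
        if word.startswith(p) and len(word) > len(p) + 1:
            parts.append(p + "+")
            word = word[len(p):]
            break
    for s in suffixes:
        if word.endswith(s) and len(word) > len(s) + 1:
            parts.append(word[:-len(s)])
            parts.append("+" + s)
            return parts
    parts.append(word)
    return parts

def oov_rate(train_units, test_units):
    vocab = set(train_units)
    return sum(1 for u in test_units if u not in vocab) / len(test_units)

train = ["kitab", "alkitab", "kitabat"]   # invented training forms
test = ["alkitabat"]                      # unseen surface form

# Whole-word model: the test word was never seen, so it is OOV.
word_oov = oov_rate(train, test)

# Morpheme model: al+ kitab +at are all seen after segmentation.
train_m = [m for w in train for m in segment(w)]
test_m = [m for w in test for m in segment(w)]
morph_oov = oov_rate(train_m, test_m)

print(word_oov, morph_oov)  # morpheme-level OOV rate is lower
```

The same effect is what smooths the probability estimates: sub-word units recur far more often than whole words, so their n-gram counts are better estimated from limited data.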
Tuning syntactically enhanced word alignment for statistical machine translation
We introduce a syntactically enhanced word alignment model that is more flexible than state-of-the-art generative word
alignment models and can be tuned according to different end tasks. First, this model takes advantage of
both unsupervised and supervised word alignment approaches by obtaining anchor alignments from unsupervised generative
models and seeding those anchor alignments into a supervised discriminative model. Second, this model offers the flexibility of tuning the alignment according to different
optimisation criteria. Our experiments show that using our word alignment in a Phrase-Based Statistical Machine Translation system yields a 5.38% relative increase
in BLEU score on the IWSLT 2007 task.
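The anchor-seeding idea can be sketched as follows. This is an illustrative outline, not the paper's model: the anchor heuristic (intersection of two directional alignments) and the stand-in scoring function are assumptions.

```python
# Hedged sketch of anchor-seeded alignment: high-precision anchor links come
# from intersecting two directional unsupervised alignments, and a
# discriminative scorer (here a stand-in function) decides only the rest.

def anchors(src2tgt, tgt2src):
    """Anchor links: present in both directional alignments (high precision).
    src2tgt holds (src_i, tgt_j) pairs; tgt2src holds (tgt_j, src_i) pairs."""
    return set(src2tgt) & {(i, j) for (j, i) in tgt2src}

def align(src, tgt, src2tgt, tgt2src, score, threshold=0.5):
    fixed = anchors(src2tgt, tgt2src)
    links = set(fixed)
    for i in range(len(src)):
        for j in range(len(tgt)):
            if (i, j) in fixed:
                continue  # anchors are kept as-is
            # `score` stands in for a trained discriminative classifier
            # over alignment features; only non-anchor candidates compete.
            if score(src[i], tgt[j]) > threshold:
                links.add((i, j))
    return links

# Toy example with an invented scoring function.
src = ["la", "maison", "bleue"]
tgt = ["the", "blue", "house"]
s2t = {(0, 0), (1, 2)}
t2s = {(0, 0), (2, 1), (1, 2)}
links = align(src, tgt, s2t, t2s,
              lambda a, b: 1.0 if (a, b) == ("bleue", "blue") else 0.0)
```

Tuning to an end task would amount to choosing `threshold` (or the scorer's training objective) against that task's metric, which is the flexibility the abstract describes.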
A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena
Word reordering is one of the most difficult aspects of statistical machine
translation (SMT), and an important factor of its quality and efficiency.
Despite the vast amount of research published to date, the interest of the
community in this problem has not decreased, and no single method appears to be
strongly dominant across language pairs. Instead, the choice of the optimal
approach for a new translation task still seems to be mostly driven by
empirical trials. To orientate the reader in this vast and complex research
area, we present a comprehensive survey of word reordering viewed as a
statistical modeling challenge and as a natural language phenomenon. The survey
describes in detail how word reordering is modeled within different
string-based and tree-based SMT frameworks and as a stand-alone task, including
systematic overviews of the literature in advanced reordering modeling. We then
question why some approaches are more successful than others in different
language pairs. We argue that, besides measuring the amount of reordering, it
is important to understand which kinds of reordering occur in a given language
pair. To this end, we conduct a qualitative analysis of word reordering
phenomena in a diverse sample of language pairs, based on a large collection of
linguistic knowledge. Empirical results in the SMT literature are shown to
support the hypothesis that a few linguistic facts can be very useful to
anticipate the reordering characteristics of a language pair and to select the
SMT framework that best suits them.

Comment: 44 pages, to appear in Computational Linguistics
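One simple way to "measure the amount of reordering" in a language pair is to count crossing links in its word alignments. The metric below is a standard illustrative statistic, not the survey's specific analysis.

```python
# Hedged sketch: quantify reordering as the fraction of alignment-link pairs
# that cross. A monotone alignment scores 0; a fully inverted one scores 1.
from itertools import combinations

def crossing_rate(alignment):
    """Fraction of link pairs (i,j),(i',j') with i<i' but j>j'.
    `alignment` is an iterable of (src_index, tgt_index) pairs."""
    pairs = list(combinations(sorted(alignment), 2))
    if not pairs:
        return 0.0
    crossings = sum(1 for (i1, j1), (i2, j2) in pairs
                    if (i1 - i2) * (j1 - j2) < 0)
    return crossings / len(pairs)

# Monotone alignment (e.g. a closely related language pair): no crossings.
mono = [(0, 0), (1, 1), (2, 2)]
# Fully inverted alignment (e.g. strong head-final reordering): all cross.
inv = [(0, 2), (1, 1), (2, 0)]
print(crossing_rate(mono), crossing_rate(inv))  # 0.0 1.0
```

As the survey argues, such an aggregate number tells only part of the story: two language pairs with the same crossing rate can exhibit very different *kinds* of reordering (short local swaps versus long-range movement), which is why the qualitative analysis matters.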
Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation
Most Neural Machine Translation (NMT) models are based on the
sequence-to-sequence (Seq2Seq) encoder-decoder framework equipped
with an attention mechanism. However, the conventional attention mechanism
treats decoding at every time step equally, with the same matrix, which is
problematic because the softness of the attention should differ for different
types of words (e.g. content words and function words). Therefore, we propose a
new model with a mechanism called Self-Adaptive Control of Temperature (SACT)
to control the softness of attention by means of an attention temperature.
Experimental results on Chinese-English and English-Vietnamese
translation demonstrate that our model outperforms the baseline models, and the
analysis and case study show that our model can attend to the most relevant
elements in the source-side context and generate high-quality
translations.

Comment: To appear in EMNLP 201
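The effect of an attention temperature on softness can be shown in a few lines. In SACT the temperature is predicted self-adaptively from the decoder state; here it is passed in explicitly so the sketch stays self-contained, and the network details are omitted as assumptions.

```python
# Hedged sketch of temperature-controlled attention softness (SACT learns
# the temperature per decoding step; here it is a fixed argument).
import math

def attention_weights(scores, temperature=1.0):
    """Softmax over scores / temperature. Higher temperature -> softer,
    more uniform attention; lower temperature -> more concentrated."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

scores = [2.0, 1.0, 0.1]  # invented alignment scores over source positions
sharp = attention_weights(scores, temperature=0.5)  # concentrate (content word)
soft = attention_weights(scores, temperature=5.0)   # divert (function word)
```

With a low temperature the probability mass piles onto the best-scoring source position; with a high temperature it spreads out, which is the "concentrate or divert" behaviour the title refers to.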