Arabic machine transliteration using an attention-based encoder-decoder model
Transliteration is the process of converting words from a source-language alphabet to a target-language alphabet in a way
that best preserves the phonetic and orthographic properties of the transliterated words. Although significant effort has been
devoted to improving this process for many languages such as English, French, and Chinese, little research has addressed
the Arabic language. In this work, an attention-based encoder-decoder system is proposed for the task of machine
transliteration between Arabic and English. Our experiments demonstrate the effectiveness of the proposed approach in
comparison with previous work in this area.
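The abstract names an attention-based encoder-decoder but does not spell out the architecture, so the sketch below is only a minimal, illustrative character-level implementation in PyTorch: a bidirectional GRU encoder, an additive-style attention over source characters, and a GRU decoder. The class name Seq2SeqAttn, the layer sizes, and the toy usage at the end are assumptions for illustration, not the authors' actual configuration.

# Minimal character-level attention-based encoder-decoder (illustrative sketch).
import torch
import torch.nn as nn

class Seq2SeqAttn(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.decoder = nn.GRUCell(emb + 2 * hid, hid)
        self.attn = nn.Linear(hid + 2 * hid, 1)   # additive-style attention scoring
        self.bridge = nn.Linear(2 * hid, hid)     # initialise the decoder state
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt):
        enc_out, _ = self.encoder(self.src_emb(src))        # (B, S, 2*hid)
        h = torch.tanh(self.bridge(enc_out.mean(dim=1)))    # (B, hid)
        logits = []
        for t in range(tgt.size(1)):                        # teacher forcing over target chars
            # attention weights over source positions, conditioned on the decoder state
            scores = self.attn(torch.cat(
                [h.unsqueeze(1).expand(-1, enc_out.size(1), -1), enc_out], dim=-1))
            ctx = (torch.softmax(scores, dim=1) * enc_out).sum(dim=1)   # context vector
            h = self.decoder(torch.cat([self.tgt_emb(tgt[:, t]), ctx], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                    # (B, T, tgt_vocab)

# Toy usage with random character ids (hypothetical vocabulary sizes).
model = Seq2SeqAttn(src_vocab=40, tgt_vocab=30)
logits = model(torch.randint(0, 40, (2, 7)), torch.randint(0, 30, (2, 9)))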
A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models
Word representation has long been an important research area in the history
of natural language processing (NLP). Understanding such complex text data is
imperative, given that it is rich in information and can be used widely across
various applications. In this survey, we explore different word representation
models and their expressive power, from classical approaches to modern
state-of-the-art word representation language models (LMs). We describe the
variety of text representation methods and model designs that have blossomed in
the context of NLP, including SOTA LMs. These models can transform large volumes
of text into effective vector representations that capture their semantic
information. Such representations can then be used by various machine
learning (ML) algorithms for a variety of NLP-related tasks. Finally, this
survey briefly discusses the commonly used ML and deep learning (DL) based
classifiers, evaluation metrics, and the applications of these word embeddings
in different NLP tasks.
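As an illustration of the pipeline this survey describes, from raw text to a vector representation consumed by an ML classifier, here is a minimal sketch at the classical end of the spectrum using TF-IDF with scikit-learn. The toy corpus, the sentiment labels, and the choice of logistic regression are illustrative assumptions, not anything prescribed by the survey.

# Classical text-to-vector pipeline: TF-IDF features feeding an ML classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["the film was great", "terrible plot and acting",
         "an enjoyable watch", "not worth the time"]
labels = [1, 0, 1, 0]  # toy sentiment labels

vectors = TfidfVectorizer().fit_transform(texts)   # sparse document vectors
clf = LogisticRegression().fit(vectors, labels)    # any ML classifier could be used here
print(clf.predict(vectors[:1]))                    # predict the label of the first document

Modern LM-based representations slot into the same pipeline by replacing the TF-IDF step with dense, contextual embeddings.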
- …