A Sentence Meaning Based Alignment Method for Parallel Text Corpora Preparation
Text alignment is crucial to the accuracy of Machine Translation (MT)
systems, some NLP tools, and any other text processing task requiring
bilingual data. This research proposes a language-independent sentence
alignment approach, developed through Polish (a language that is not
position-sensitive) to English experiments. The alignment approach was
developed on the TED Talks corpus but can be used for any text domain or
language pair. The proposed approach implements various heuristics for
sentence recognition, some of which leverage synonyms and semantic analysis
of text structure as additional information. Minimization of data loss was
ensured. The solution is compared to other sentence alignment
implementations, and an improvement in the score of an MT system trained on
text processed with the described tool is shown.
Comment: corpora filtration, text alignment, corpora improvement. arXiv admin
note: text overlap with arXiv:1509.0888
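The synonym-aware overlap heuristics described above can be sketched as a simple sentence-pair scorer. This is only an illustrative sketch, not the paper's actual method: the `SYNONYMS` table, the equal 0.5/0.5 weighting of length ratio and overlap, and the assumption that both sentences are in (or have been machine-translated into) the same language are all assumptions introduced here.

```python
# Hypothetical synonym table; a real system would use a thesaurus resource.
SYNONYMS = {"big": {"large"}, "fast": {"quick"}}

def expand(tokens):
    """Expand a token list with synonyms so paraphrases still overlap."""
    out = set(tokens)
    for t in tokens:
        out |= SYNONYMS.get(t, set())
    return out

def pair_score(src, tgt):
    """Score a candidate sentence pair by combining a length-ratio
    penalty with synonym-aware token overlap (illustrative weights)."""
    s, t = src.lower().split(), tgt.lower().split()
    length_ratio = min(len(s), len(t)) / max(len(s), len(t))
    overlap = len(expand(s) & expand(t)) / max(len(set(s) | set(t)), 1)
    return 0.5 * length_ratio + 0.5 * overlap
```

A paraphrased pair such as "the big cat" / "the large cat" scores higher than an unrelated pair, which is the property an alignment search needs when ranking candidate matches.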
Noisy-parallel and comparable corpora filtering methodology for the extraction of bi-lingual equivalent data at sentence level
Text alignment and text quality are critical to the accuracy of Machine
Translation (MT) systems, some NLP tools, and any other text processing task
requiring bilingual data. This research proposes a language-independent
bi-sentence filtering approach, developed through Polish (a language that is
not position-sensitive) to English experiments. The cleaning approach was
developed on the TED Talks corpus and initially tested on the Wikipedia
comparable corpus, but it can be used for any text domain or language pair.
The proposed approach implements various heuristics for sentence comparison,
some of which leverage synonyms and semantic and structural analysis of text
as additional information. Minimization of data loss was ensured. An
improvement in the score of an MT system trained on text processed with the
tool is discussed.
Comment: arXiv admin note: text overlap with arXiv:1509.09093,
arXiv:1509.0888
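A bi-sentence filter of the kind described rejects pairs that are unlikely to be translations of each other. The sketch below uses two common noisy-parallel heuristics, a length-ratio bound and agreement on embedded numbers; the `max_len_ratio=2.0` threshold is an illustrative assumption, not a value from the paper.

```python
import re

def keep_pair(src, tgt, max_len_ratio=2.0):
    """Reject pairs with an implausible length ratio or mismatched
    numbers; numbers usually survive translation verbatim, so a
    mismatch signals a noisy pair."""
    ns, nt = len(src.split()), len(tgt.split())
    if ns == 0 or nt == 0:
        return False
    if max(ns, nt) / min(ns, nt) > max_len_ratio:
        return False
    return sorted(re.findall(r"\d+", src)) == sorted(re.findall(r"\d+", tgt))

def filter_corpus(pairs):
    """Keep only the (src, tgt) pairs that pass the filter."""
    return [(s, t) for s, t in pairs if keep_pair(s, t)]
```

In practice such cheap filters run first, and more expensive semantic comparisons are applied only to the pairs that survive.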
Natural Language Processing Based Generator of Testing Instruments
Natural Language Processing (NLP) is the field of study that focuses on the interactions between human language and computers. By "natural language" we mean a language used for everyday communication by humans. Unlike programming languages, natural languages are hard to define with precise rules. NLP is developing rapidly and has been widely adopted across industries; technologies based on it are becoming increasingly widespread. For example, intelligent personal assistants such as Siri and Alexa use NLP algorithms to communicate with people. "Natural Language Processing Based Generator of Testing Instruments" is a stand-alone program that generates "plausible" multiple-choice selections by performing word sense disambiguation and computing semantic similarity between two natural-language entities. At its core is Word Sense Disambiguation (WSD): identifying which sense of a word is used in a sentence when that word has multiple meanings. WSD is considered an AI-hard problem. The project presents several algorithms for resolving the WSD problem and computing semantic similarity, along with experimental results demonstrating their effectiveness.
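The gloss-overlap idea behind classic WSD can be sketched with a simplified Lesk algorithm. The two-sense `SENSES` inventory for "bank" below is entirely hypothetical; a real system would draw glosses from a lexical resource such as WordNet and would preprocess the text (stopword removal, lemmatization) before comparing.

```python
# Hypothetical gloss inventory for one ambiguous word.
SENSES = {
    "bank": {
        "finance": "institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water",
    }
}

def lesk(word, context):
    """Simplified Lesk: pick the sense whose gloss shares the most
    words with the sentence containing the ambiguous word."""
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(ctx & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best
```

The same overlap count doubles as a crude semantic-similarity signal, which is how a distractor generator can rank candidate wrong answers by how plausibly they fit the question's sense.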