74 research outputs found
Noisy-parallel and comparable corpora filtering methodology for the extraction of bi-lingual equivalent data at sentence level
Text alignment and text quality are critical to the accuracy of Machine
Translation (MT) systems, many NLP tools, and any other text-processing task
that requires bilingual data. This research proposes a language-independent
bi-sentence filtering approach, evaluated on Polish-to-English experiments
(Polish is not a position-sensitive language). The cleaning approach was
developed on the TED Talks corpus and initially tested on the Wikipedia
comparable corpus, but it can be applied to any text domain or language pair.
The proposed approach implements various heuristics for sentence comparison,
some of which leverage synonyms as well as semantic and structural analysis of
the text as additional information. The method was designed to minimize data
loss. An improvement in MT scores on text processed with the tool is discussed.
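The abstract does not spell out the individual heuristics, so the following is only a hypothetical sketch of the simplest kind of bi-sentence filter (a token-length-ratio check); the paper's actual heuristics additionally use synonyms and semantic and structural analysis. The function names and the threshold value are illustrative assumptions, not taken from the paper.

```python
def length_ratio_ok(src: str, tgt: str, max_ratio: float = 2.0) -> bool:
    """Reject a sentence pair whose token counts differ by more than
    max_ratio -- a common first-pass heuristic for noisy parallel data."""
    ns, nt = len(src.split()), len(tgt.split())
    if min(ns, nt) == 0:
        return False
    return max(ns, nt) / min(ns, nt) <= max_ratio

def filter_pairs(pairs, max_ratio=2.0):
    """Keep only the sentence pairs that pass the length-ratio check."""
    return [(s, t) for s, t in pairs if length_ratio_ok(s, t, max_ratio)]

# Toy Polish-English example: the second pair is implausibly unbalanced.
pairs = [
    ("To jest krótkie zdanie", "This is a short sentence"),
    ("Tak", "This translation is far too long to be a plausible match"),
]
kept = filter_pairs(pairs)
```

A real filter would combine several such signals (length, lexical overlap via a dictionary or synonym list, structural similarity) and score each pair rather than applying a single hard cutoff.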
Real-Time Statistical Speech Translation
This research investigates Statistical Machine Translation (SMT) approaches to
translating speech automatically in real time. Such systems can be used in a
pipeline with speech recognition and synthesis software to produce a real-time
voice communication system between speakers of different languages. We
obtained three main data sets from spoken proceedings that represent three
different types of human speech. The TED, Europarl, and OPUS parallel text
corpora were used as the basis for training language models and for
development tuning and testing of the translation system. We also conducted
experiments involving part-of-speech tagging, compound splitting, linear
language-model interpolation, truecasing, and morphosyntactic analysis. We
evaluated the effects of a variety of data preparation methods on the
translation results using the BLEU, NIST, METEOR, and TER metrics, and
assessed which metric is most suitable for the Polish-English (PL-EN)
language pair.
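As a minimal illustration of what metrics such as BLEU measure, the sketch below computes clipped (modified) n-gram precision, the core quantity inside BLEU; the full metric combines several n-gram orders with a brevity penalty, and NIST, METEOR, and TER each use different formulations. The toy sentences are illustrative, not from the paper's data.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram is credited at most
    as many times as it appears in the reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
p1 = modified_precision(cand, ref, 1)  # 5 of 6 unigrams are matched
```

Clipping is what prevents a degenerate candidate like "the the the the" from scoring high unigram precision against a reference containing "the" only twice.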
Multilingual Models for Compositional Distributed Semantics
We present a novel technique for learning semantic representations, which
extends the distributional hypothesis to multilingual data and joint-space
embeddings. Our models leverage parallel data and learn to strongly align the
embeddings of semantically equivalent sentences, while maintaining sufficient
distance between those of dissimilar sentences. The models do not rely on word
alignments or any syntactic information and are successfully applied to a
number of diverse languages. We extend our approach to learn semantic
representations at the document level, too. We evaluate these models on two
cross-lingual document classification tasks, outperforming the prior state of
the art. Through qualitative analysis and the study of pivoting effects we
demonstrate that our representations are semantically plausible and can capture
semantic relationships across languages without parallel data.
Comment: Proceedings of ACL 2014 (Long Papers)
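The "strongly align equivalent sentences while maintaining distance from dissimilar ones" objective can be sketched as a margin-based hinge loss over sentence embeddings. This is a generic sketch of that family of objectives, not the paper's exact formulation; the distance function, margin value, and vectors are illustrative assumptions.

```python
def hinge_loss(a, b, n, margin=1.0):
    """Margin loss over sentence embeddings: pull the parallel pair (a, b)
    together and push a noise sentence n at least `margin` farther away,
    using squared L2 distance. Zero loss once the margin is satisfied."""
    d_pos = sum((x - y) ** 2 for x, y in zip(a, b))
    d_neg = sum((x - y) ** 2 for x, y in zip(a, n))
    return max(0.0, margin + d_pos - d_neg)

# A matched pair with a distant noise vector incurs no loss:
loss_easy = hinge_loss([0.0, 0.0], [0.0, 0.0], [2.0, 0.0])
# A noise vector as close as the true translation incurs the full margin:
loss_hard = hinge_loss([0.0, 0.0], [0.0, 0.0], [0.0, 0.0])
```

Training would sum this loss over many (parallel pair, sampled noise sentence) triples and backpropagate into the sentence composition model; no word alignments are needed, since the loss operates on whole-sentence representations.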
- …