BLEU is Not Suitable for the Evaluation of Text Simplification
BLEU is widely considered to be an informative metric for text-to-text
generation, including Text Simplification (TS). TS includes both lexical and
structural aspects. In this paper we show that BLEU is not suitable for the
evaluation of sentence splitting, the major structural simplification
operation. We manually compiled a sentence splitting gold standard corpus
containing multiple structural paraphrases, and performed a correlation
analysis with human judgments. We find low or no correlation between BLEU and
the grammaticality and meaning preservation parameters where sentence splitting
is involved. Moreover, BLEU often negatively correlates with simplicity,
essentially penalizing simpler sentences.
Comment: Accepted to EMNLP 2018 (Short Papers)
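The failure mode described in this abstract is easy to reproduce on a toy pair: below is a minimal, self-contained single-reference BLEU sketch (the paper's own evaluation uses a manually compiled gold-standard corpus, not this toy data), showing that a meaning-preserving split sentence scores far below 1.0 against a single-sentence reference because n-gram overlap breaks at the split point.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Minimal single-reference sentence BLEU (no smoothing): brevity
    penalty times the geometric mean of clipped n-gram precisions."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        precisions.append(clipped / max(sum(hyp_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(log_mean)

# Hypothetical example: a sentence-splitting simplification that fully
# preserves meaning still loses most higher-order n-gram matches.
ref = "the cat that sat on the mat was black".split()
split = "the cat sat on the mat . it was black".split()
print(round(bleu(ref, split), 3))  # well below 1.0 despite meaning preservation
```

An identical hypothesis scores exactly 1.0, so the gap is attributable entirely to the split, which is the kind of structural operation the paper argues BLEU mishandles.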
Automatic Text Simplification for People with Intellectual Disabilities
Text simplification (TS) aims to reduce the lexical and structural complexity of a text while retaining its semantic meaning. Current automatic TS techniques are limited either to lexical-level substitutions or to manually defining large sets of rules. In this paper, we propose to simplify text at both the lexical and sentence levels. Preliminary experiments show that our approach yields promising results.
Classifier-Based Text Simplification for Improved Machine Translation
Machine Translation (MT) is one of the research fields of Computational
Linguistics. The objective of many MT researchers is to develop an MT system
that produces high-quality, accurate translations and covers as many language
pairs as possible. As internet use and globalization increase day by day, we
need ways to improve translation quality. For this reason, we have developed a
classifier-based text simplification model for English-Hindi machine
translation systems. We built this model using Support Vector Machine and
Naïve Bayes classifiers, and we evaluated the performance of both.
Comment: In Proceedings of the International Conference on Advances in
Computer Engineering and Applications 201
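The classifier step this abstract describes can be sketched as a routing decision: classify whether a source sentence should pass through simplification before translation. Below is a hand-rolled Bernoulli Naïve Bayes over hypothetical binary surface cues (the paper's actual feature set and training data are not given here, so both the features and the toy examples are invented for illustration).

```python
import math

# Toy training rows: (features, label), label 1 = "simplify before MT".
# Features are hypothetical cues: [long_sentence, has_subordinate_clause,
# has_rare_word] -- NOT the paper's real feature set.
train = [
    ([1, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 0], 1),
    ([0, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 1], 0),
]

def fit_nb(data, n_features, alpha=1.0):
    """Bernoulli Naive Bayes with Laplace (add-alpha) smoothing."""
    priors, likelihoods = {}, {}
    for label in (0, 1):
        rows = [f for f, y in data if y == label]
        priors[label] = len(rows) / len(data)
        likelihoods[label] = [
            (sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
            for j in range(n_features)
        ]
    return priors, likelihoods

def predict(x, priors, likelihoods):
    """Pick the label maximizing log P(label) + sum of feature log-odds."""
    scores = {}
    for label in (0, 1):
        s = math.log(priors[label])
        for j, v in enumerate(x):
            p = likelihoods[label][j]
            s += math.log(p if v else 1 - p)
        scores[label] = s
    return max(scores, key=scores.get)

priors, lik = fit_nb(train, 3)
print(predict([1, 1, 1], priors, lik))  # 1: route through simplification first
```

The paper's SVM variant would replace `fit_nb`/`predict` with a margin-based classifier over the same kind of features; the surrounding pipeline (classify, optionally simplify, then translate) stays the same.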
Query and Output: Generating Words by Querying Distributed Word Representations for Paraphrase Generation
Most recent approaches to paraphrase generation use the sequence-to-sequence
model. The existing sequence-to-sequence model tends to memorize the words and
patterns in the training dataset instead of learning the meaning of the words.
Therefore, the generated sentences are often grammatically correct but
semantically improper. In this work, we introduce a novel model based on the
encoder-decoder framework, called the Word Embedding Attention Network (WEAN).
Our proposed model generates words by querying distributed word
representations (i.e., neural word embeddings), aiming to capture the meaning
of the corresponding words. Following previous work, we evaluate our model on
two paraphrase-oriented tasks, namely text simplification and short-text
abstractive summarization. Experimental results show that our model
outperforms the sequence-to-sequence baseline by BLEU scores of 6.3 and 5.5 on
two English text simplification datasets, and by a ROUGE-2 F1 score of 5.7 on
a Chinese summarization dataset. Moreover, our model achieves state-of-the-art
performance on all three benchmark datasets.
Comment: arXiv admin note: text overlap with arXiv:1710.0231
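The generation step this abstract describes, picking an output word by matching a decoder query against the embedding table rather than a softmax over a vocabulary, can be sketched as follows. This is a toy illustration with invented 3-d vectors and cosine similarity standing in for WEAN's learned attention scorer; the real model learns the embeddings and the query jointly within the encoder-decoder.

```python
import math

# Hypothetical embedding table: tiny hand-set 3-d vectors, not learned
# embeddings. "big" and "large" are placed close together; "cat" is far away.
embeddings = {
    "big":   [0.9, 0.1, 0.0],
    "large": [0.8, 0.2, 0.1],
    "cat":   [0.0, 0.9, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def query_word(query):
    """Emit the word whose embedding best matches the decoder's query
    vector -- the core of WEAN's generation step, with cosine similarity
    standing in for the learned attention score."""
    return max(embeddings, key=lambda w: cosine(embeddings[w], query))

# A query near the "big"/"large" region retrieves a size word, not "cat":
print(query_word([0.85, 0.15, 0.05]))
```

Because the output is tied to embedding geometry rather than memorized training patterns, semantically close words compete for the same query, which is the intuition behind the claimed gain over a plain sequence-to-sequence softmax.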