
    Improving Statistical Machine Translation Accuracy Using Bilingual Lexicon Extraction with Paraphrases

    Statistical machine translation (SMT) suffers from an accuracy problem: the translation pairs and their feature scores in the translation model can be inaccurate. This problem is caused by the quality of the unsupervised methods used for translation model learning. Previous studies propose estimating comparable features for the translation pairs in the translation model from comparable corpora, in order to improve the accuracy of the translation model. Comparable feature estimation is based on bilingual lexicon extraction (BLE) technology. However, BLE suffers from the data sparseness problem, which makes the comparable features inaccurate. In this paper, we propose using paraphrases to address this problem. Paraphrases are used to smooth the vectors used in comparable feature estimation with BLE. In this way, we improve the quality of the comparable features, which improves the accuracy of the translation model and thus SMT performance. Experiments conducted on Chinese-English phrase-based SMT (PBSMT) verify the effectiveness of our proposed method.
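    To make the paraphrase-smoothing idea concrete, the minimal Python sketch below interpolates a phrase's context vector with the context vectors of its paraphrases before the vectors are compared for comparable feature estimation. The data structures, the interpolation weight alpha, and the probability weighting are illustrative assumptions, not the paper's exact formulation.

    from collections import Counter

    def smooth_context_vector(phrase, context_vectors, paraphrases, alpha=0.5):
        """Smooth a phrase's context vector with the vectors of its paraphrases.

        context_vectors: dict mapping phrase -> Counter of co-occurring words
        paraphrases:     dict mapping phrase -> list of (paraphrase, probability)
        alpha:           interpolation weight for the original vector (assumed)
        """
        original = context_vectors.get(phrase, Counter())
        smoothed = Counter()
        for word, count in original.items():
            smoothed[word] += alpha * count

        # Add the paraphrases' context vectors, weighted by paraphrase probability,
        # so sparse phrases inherit context counts from their paraphrases.
        for para, prob in paraphrases.get(phrase, []):
            for word, count in context_vectors.get(para, Counter()).items():
                smoothed[word] += (1.0 - alpha) * prob * count
        return smoothed

    # Toy usage: "car" is sparse, but its paraphrase "automobile" fills in context.
    vectors = {
        "car": Counter({"drive": 1}),
        "automobile": Counter({"drive": 3, "engine": 2, "road": 1}),
    }
    paras = {"car": [("automobile", 0.8)]}
    print(smooth_context_vector("car", vectors, paras))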

    Contextual bitext-derived paraphrases in automatic MT evaluation

    In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target-language paraphrases produced through word and phrase alignment, a number of alternative reference sentences are constructed automatically for each candidate translation. The method produces lexical and low-level syntactic paraphrases that are relevant to the domain at hand, does not use external knowledge resources, and can be combined with a variety of automatic MT evaluation systems.
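    The sketch below illustrates the core step of generating alternative references by paraphrase substitution: whenever a reference phrase has a paraphrase that actually occurs in the candidate translation, a new reference is emitted with that phrase replaced. The data structures and the single-substitution strategy are assumptions for illustration, not the paper's exact pipeline.

    def expand_references(reference, candidate, paraphrase_pairs):
        """Generate alternative reference sentences by paraphrase substitution.

        reference, candidate: token lists
        paraphrase_pairs: list of (phrase_in_reference, paraphrase) tuples,
                          e.g. extracted via word/phrase alignment of a bitext.
        """
        ref_text = " ".join(reference)
        cand_text = " ".join(candidate)
        alternatives = [reference]

        # Only substitute paraphrases that the candidate actually contains,
        # so the alternative references stay relevant to this translation.
        for phrase, paraphrase in paraphrase_pairs:
            if phrase in ref_text and paraphrase in cand_text:
                new_ref = ref_text.replace(phrase, paraphrase, 1)
                alternatives.append(new_ref.split())
        return alternatives

    # Toy usage: the candidate says "purchased", the reference says "bought".
    ref = "he bought a new car".split()
    cand = "he purchased a new automobile".split()
    pairs = [("bought", "purchased"), ("car", "automobile")]
    for alt in expand_references(ref, cand, pairs):
        print(" ".join(alt))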

    Incorporating source-language paraphrases into phrase-based SMT with confusion networks

    To increase model coverage, source-language paraphrases have been utilized to boost SMT system performance. Previous work showed that word lattices constructed from paraphrases are able to reduce out-of-vocabulary words and to express inputs in different ways for better translation quality. However, such a word-lattice-based method suffers from two problems: 1) path duplications in word lattices decrease the capacity for potential paraphrases; 2) lattice decoding in SMT dramatically increases the search space and results in poor time efficiency. Therefore, in this paper, we adopt word confusion networks as the input structure to carry source-language paraphrase information. Similar to previous work, we use word lattices to build word confusion networks, which merge duplicated paths and allow faster decoding. Experiments are carried out on small-, medium- and large-scale English–Chinese translation tasks, and we show that, compared with the word-lattice-based method, the decoding time on the three tasks is reduced significantly (by up to 79%) while comparable translation quality is obtained on the large-scale task.
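    The sketch below shows the merging step in a simplified form: a paraphrase word lattice is collapsed into a confusion network whose slots hold alternative words with renormalized weights, so duplicated paths proposing the same word are merged. It assumes the lattice was built over a linear input sentence and that every arc spans adjacent states; general lattice-to-confusion-network conversion requires an arc-clustering step that is omitted here, so this is an illustration rather than the paper's algorithm.

    from collections import defaultdict

    def lattice_to_confusion_network(arcs, num_states):
        """Collapse a paraphrase word lattice into a confusion network.

        arcs: list of (start_state, end_state, word, prob) tuples, where states
        correspond to input word positions (assumption of this sketch).
        Returns one slot per position, mapping each word to a normalized weight.
        """
        slots = [defaultdict(float) for _ in range(num_states - 1)]
        for start, end, word, prob in arcs:
            # Sketch handles single-position arcs only; duplicated paths that
            # propose the same word in the same slot are merged by summing.
            assert end == start + 1, "sketch handles single-position arcs only"
            slots[start][word] += prob

        # Renormalize each slot so the alternatives form a distribution.
        normalized = []
        for slot in slots:
            total = sum(slot.values())
            normalized.append({w: p / total for w, p in slot.items()})
        return normalized

    # Toy usage: input "buy the car" with paraphrase arcs for "buy" and "car".
    arcs = [
        (0, 1, "buy", 1.0), (0, 1, "purchase", 0.4), (0, 1, "buy", 0.2),
        (1, 2, "the", 1.0),
        (2, 3, "car", 1.0), (2, 3, "automobile", 0.5),
    ]
    for i, slot in enumerate(lattice_to_confusion_network(arcs, 4)):
        print(i, slot)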