
    Machine Translation for Multilingual Summary Content Evaluation

    The multilingual summarization pilot task at TAC’11 raised a number of problems that arise when evaluating summary quality in different languages. The additional language dimension greatly increases annotation costs. For the TAC pilot task, English articles were first translated into six other languages, model summaries were written, and the submitted system summaries were evaluated. We begin by discussing whether ROUGE can produce system rankings similar to those obtained from manual summary scoring, by measuring the correlation between the two. We then study three ways of projecting summaries into a different language: projection through sentence alignment in the case of parallel corpora, direct translation of the summaries, and summarizing machine-translated articles. Building such summaries provides the opportunity to run additional experiments and to reinforce the evaluation. Finally, we investigate whether an evaluation based on machine-translated model summaries can perform close to an evaluation based on the original models.
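
    As a rough illustration of the ranking comparison described above, the sketch below correlates ROUGE-based system scores with manual scores using Kendall's tau and Spearman's rho. The score values and the SciPy-based setup are illustrative assumptions, not the paper's actual data or tooling.

    ```python
    # Sketch: compare system rankings from ROUGE against manual evaluation
    # using rank correlation coefficients (hypothetical scores for illustration).
    from scipy.stats import kendalltau, spearmanr

    # Hypothetical per-system scores: one ROUGE score and one manual score per system.
    rouge_scores  = [0.42, 0.38, 0.35, 0.31, 0.29]
    manual_scores = [3.9,  3.6,  3.7,  3.1,  2.8]

    tau, tau_p = kendalltau(rouge_scores, manual_scores)
    rho, rho_p = spearmanr(rouge_scores, manual_scores)

    print(f"Kendall tau   = {tau:.3f} (p = {tau_p:.3f})")
    print(f"Spearman rho  = {rho:.3f} (p = {rho_p:.3f})")
    ```

    A high rank correlation would suggest that ROUGE orders the systems similarly to manual scoring, which is the question the first part of the study addresses.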