Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT

Abstract

Generative large language models (LLMs), e.g., ChatGPT, have demonstrated remarkable proficiency across several NLP tasks, such as machine translation and text summarization. Recent research (Kocmi and Federmann, 2023) has shown that using ChatGPT to assess machine translation (MT) quality achieves state-of-the-art performance at the system level but performs poorly at the segment level. To further improve the performance of LLMs on MT quality assessment, we investigate several prompting methods and propose a new one, Error Analysis Prompting (EAPrompt), which combines Chain-of-Thought (Wei et al., 2022) and Error Analysis (Lu et al., 2022). Our results on WMT22 indicate that prompting LLMs like ChatGPT with error analysis can generate human-like MT evaluations at both the system and segment levels. Additionally, we are the first to identify several limitations of ChatGPT as an MT evaluator; for example, when multiple translations are provided in a single query, changing their order can significantly influence the judgment. This work offers a preliminary exploration of prompting LLMs as evaluators to improve the reliability of translation evaluation metrics under the error analysis paradigm.
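To make the idea concrete, below is a minimal sketch of an error-analysis-style evaluation prompt. The exact prompt wording, the model name, the response format, and the MQM-style error weights (minor = 1, major = 5) are illustrative assumptions, not the paper's verbatim EAPrompt template; the OpenAI chat completions API is used as a stand-in for any LLM backend.

```python
# Sketch of Error-Analysis-style prompting for MT evaluation.
# Assumptions (not taken from the abstract): the prompt wording, the model
# name, the two-line response format, and the MQM-style weights (minor=1,
# major=5). Requires OPENAI_API_KEY to be set in the environment.
import re
from openai import OpenAI

client = OpenAI()

# Hypothetical EAPrompt-style template: first elicit step-by-step error
# identification (Chain-of-Thought), then structured error counts.
EA_PROMPT = """You are evaluating a machine translation.

Source: {src}
Translation: {hyp}

Step 1: Identify all translation errors and classify each as major or minor.
Step 2: Report the counts on two lines, exactly as:
Major errors: <number>
Minor errors: <number>"""


def ea_prompt_score(src: str, hyp: str, model: str = "gpt-3.5-turbo") -> float:
    """Query the LLM with an error-analysis prompt and map error counts to a score."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": EA_PROMPT.format(src=src, hyp=hyp)}],
        temperature=0,  # keep judgments as deterministic as possible
    )
    text = resp.choices[0].message.content
    # Parsing assumes the model followed the requested format.
    major = int(re.search(r"Major errors:\s*(\d+)", text).group(1))
    minor = int(re.search(r"Minor errors:\s*(\d+)", text).group(1))
    # MQM-style penalty: 0 means no errors; more negative means worse.
    return -(5 * major + 1 * minor)


if __name__ == "__main__":
    print(ea_prompt_score("Der Hund bellt.", "The dog is barking."))
```

Segment-level scores of this form can then be averaged per system and correlated with human judgments, which is how metrics are compared at the system and segment levels in WMT-style evaluations.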
