Large language models (LLMs) have demonstrated impressive capabilities in
general scenarios, exhibiting aptitude that approaches, and in some aspects
even surpasses, human-level intelligence. Among their numerous skills,
the translation abilities of LLMs have received considerable attention. In
contrast to traditional machine translation that focuses solely on
source-target mapping, LLM-based translation can potentially mimic the human
translation process that takes many preparatory steps to ensure high-quality
translation. This work aims to explore this possibility by proposing the MAPS
framework, which stands for Multi-Aspect Prompting and Selection. Specifically,
we enable LLMs to first analyze the given source text and extract three aspects
of translation-related knowledge: keywords, topics, and relevant demonstrations
to guide the translation process. To filter out noisy and unhelpful
knowledge, we employ a selection mechanism based on quality estimation.
Experiments suggest that MAPS brings significant and consistent improvements
over text-davinci-003 and Alpaca on eight translation directions from the
latest WMT22 test sets. Our further analysis shows that the extracted knowledge
is critical in resolving up to 59% of hallucination mistakes in translation.
Code is available at https://github.com/zwhe99/MAPS-mt.
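The pipeline described above (multi-aspect knowledge extraction, one candidate translation per aspect, then quality-estimation-based selection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, prompt wording, and the `llm`/`qe_score` callables are all hypothetical stand-ins; see the repository above for the real code.

```python
from typing import Callable, Dict, List


def maps_translate(
    src: str,
    llm: Callable[[str], str],
    qe_score: Callable[[str, str], float],
) -> str:
    """Sketch of Multi-Aspect Prompting and Selection (MAPS).

    1) Ask the LLM for three kinds of translation-related knowledge.
    2) Generate one candidate per knowledge aspect, plus a plain baseline.
    3) Keep the candidate that the reference-free QE scorer ranks highest.
    """
    # Step 1: multi-aspect knowledge extraction (prompt wording is illustrative).
    aspect_prompts: Dict[str, str] = {
        "keywords": f"Extract keywords and their translations for: {src}",
        "topics": f"Describe the topics of: {src}",
        "demonstration": f"Write a related example sentence pair for: {src}",
    }
    knowledge = {name: llm(prompt) for name, prompt in aspect_prompts.items()}

    # Step 2: knowledge-free baseline plus one knowledge-guided candidate per aspect.
    candidates: List[str] = [llm(f"Translate: {src}")]
    for name, know in knowledge.items():
        candidates.append(llm(f"Given {name}: {know}\nTranslate: {src}"))

    # Step 3: selection via quality estimation filters out noisy/unhelpful knowledge.
    return max(candidates, key=lambda hyp: qe_score(src, hyp))
```

In practice `llm` would wrap a model such as text-davinci-003 or Alpaca, and `qe_score` a reference-free quality-estimation metric; the selection step is what discards candidates guided by unhelpful knowledge.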