Human-targeted metrics provide a compromise between human evaluation of machine translation, where high inter-annotator agreement is difficult to achieve, and fully automatic metrics,
such as BLEU or TER, which lack the validity of human assessment. Human-targeted translation
edit rate (HTER) is by far the most widely employed human-targeted metric in machine translation, commonly used, for example, as a gold standard in the evaluation of quality estimation.
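As a rough illustration (not part of the original evaluation), the sketch below computes HTER and HBLEU for a toy pair of MT outputs, assuming sacrebleu (2.x) as the TER/BLEU implementation and treating the human post-edits as the human-targeted references; the example sentences and values are invented.

```python
import sacrebleu  # assumes sacrebleu 2.x, which exposes corpus_ter and corpus_bleu

# Invented toy data: MT outputs and their minimal human post-edits,
# which serve as the human-targeted references.
mt_outputs = ["the cat sat in the mat", "he go to school yesterday"]
post_edits = ["the cat sat on the mat", "he went to school yesterday"]

# HTER: TER of each MT output computed against its post-edited version.
hter = sacrebleu.corpus_ter(mt_outputs, [post_edits])

# HBLEU: BLEU of the MT output computed against the same human-targeted reference.
hbleu = sacrebleu.corpus_bleu(mt_outputs, [post_edits])

print(f"HTER:  {hter.score:.2f}")
print(f"HBLEU: {hbleu.score:.2f}")
```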
The original experiments justifying the design of HTER, as opposed to other possible formulations,
were, however, limited to a small sample of translations and a single language pair; this motivates our re-evaluation of a range of human-targeted metrics on a substantially larger scale.
Results show a significantly stronger correlation with human judgment for HBLEU than for HTER
for two of the nine language pairs we include, and no significant difference between the correlations
achieved by HTER and HBLEU for the remaining language pairs. Finally, we evaluate a range of
quality estimation systems employing HTER and direct assessment (DA) of translation adequacy
as gold labels, which results in a divergence in system rankings, and we propose the use of DA for
future quality estimation evaluations.
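The kind of ranking divergence meant here can be sketched as follows, with entirely hypothetical quality estimation systems, predictions, and gold labels (numpy and scipy are assumed): each system is scored by the Pearson correlation of its predictions with one gold-label set, and the resulting rankings under HTER and DA gold labels need not agree.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical example only: predictions from two QE systems for the same
# segments, alongside two alternative gold-label sets (HTER and DA).
rng = np.random.default_rng(0)
n = 200
hter_gold = rng.uniform(0.0, 1.0, n)    # HTER gold labels (edit rates)
da_gold = rng.uniform(0.0, 100.0, n)    # DA gold labels (adequacy scores)

systems = {
    "sys_a": rng.uniform(0.0, 1.0, n),  # hypothetical QE predictions
    "sys_b": rng.uniform(0.0, 1.0, n),
}

# Rank systems by the absolute Pearson correlation of their predictions with
# each gold-label set (absolute, since HTER decreases with quality while DA
# increases); divergent rankings indicate the choice of gold standard matters.
for gold_name, gold in [("HTER", hter_gold), ("DA", da_gold)]:
    ranking = sorted(
        systems,
        key=lambda name: abs(pearsonr(systems[name], gold)[0]),
        reverse=True,
    )
    print(gold_name, "ranking:", ranking)
```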