Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation
Neural Machine Translation (NMT) currently exhibits biases such as producing translations that are too short and overgenerating frequent words, and shows poor robustness to copy noise in training data or domain shift. Recent work has tied these shortcomings to beam search – the de facto standard inference algorithm in NMT – and Eikema & Aziz (2020) propose to use Minimum Bayes Risk (MBR) decoding on unbiased samples instead. In this paper, we empirically investigate the properties of MBR decoding on a number of previously reported biases and failure cases of beam search. We find that MBR still exhibits a length and token frequency bias, owing to the MT metrics used as utility functions, but that MBR also increases robustness against copy noise in the training data and domain shift.
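The selection rule behind sample-based MBR decoding is compact enough to spell out. Below is a minimal sketch in Python, assuming a generic sentence-level utility function; the mbr_decode helper and the toy unigram_f1 metric are illustrative names, not artifacts of the paper, which uses standard MT metrics as utilities.

    def mbr_decode(samples, utility):
        """Sample-based MBR: return the candidate with the highest expected
        utility, where every sample doubles as a pseudo-reference.

        samples : list of translations drawn i.i.d. from the model
        utility : callable(hypothesis, reference) -> float, e.g. a
                  sentence-level MT metric (assumed here, not prescribed)
        """
        best, best_score = None, float("-inf")
        for hyp in samples:
            # Expected utility of `hyp`, approximated by averaging over all samples.
            score = sum(utility(hyp, ref) for ref in samples) / len(samples)
            if score > best_score:
                best, best_score = hyp, score
        return best

    # Toy utility: unigram F1 overlap (real setups would use chrF, BLEU, etc.).
    def unigram_f1(hyp, ref):
        h, r = set(hyp.split()), set(ref.split())
        if not h or not r:
            return 0.0
        p = len(h & r) / len(h)
        rec = len(h & r) / len(r)
        return 2 * p * rec / (p + rec) if p + rec else 0.0

    print(mbr_decode(["the cat sat", "a cat sat", "the dog ran"], unigram_f1))

Because the utility function defines what counts as a "good" consensus translation, metric-specific preferences (e.g., for shorter outputs or frequent tokens) carry over into the selected hypothesis, which is the length and frequency bias the abstract refers to.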
Natural Language to Code Translation with Execution
Generative models of code, pretrained on large corpora of programs, have shown great success in translating natural language to code (Chen et al., 2021; Austin et al., 2021; Li et al., 2022, inter alia). While these models do not explicitly incorporate program semantics (i.e., execution results) during training, they are able to generate correct solutions for many problems. However, choosing a single correct program from a generated set for each problem remains challenging. In this work, we introduce execution-result-based minimum Bayes risk decoding (MBR-EXEC) for program selection and show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks. We select output programs from a generated candidate set by marginalizing over program implementations that share the same semantics. Because exact equivalence is intractable, we execute each program on a small number of test inputs to approximate semantic equivalence. Across datasets, execution or simulated execution significantly outperforms the methods that do not involve program semantics. We find that MBR-EXEC consistently improves over all execution-unaware selection methods, suggesting it as an effective approach for natural language to code translation. We open-source our code at github.com/facebookresearch/mbr-exec and data at dl.fbaipublicfiles.com/mbr-exec/mbr-exec-release.zip
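As a rough illustration of the selection step described above, here is a minimal sketch of execution-based MBR under an assumed exact-match utility over execution results: programs whose outputs agree on a few test inputs are grouped into the same semantic equivalence class, and a program from the largest class is returned. The mbr_exec and run helpers are hypothetical and not the released implementation.

    from collections import Counter

    def mbr_exec(programs, test_inputs, run):
        """Execution-based MBR selection (sketch). `run(program, x)` is assumed
        to execute a candidate program on input x and return a hashable output.
        """
        # Signature of each program: its outputs on the small set of test inputs.
        signatures = [tuple(run(p, x) for x in test_inputs) for p in programs]
        # Size of each semantic equivalence class (programs sharing a signature).
        class_size = Counter(signatures)
        # Choose a program whose execution results agree with the most other
        # candidates, i.e. the highest expected execution-result agreement.
        best_idx = max(range(len(programs)), key=lambda i: class_size[signatures[i]])
        return programs[best_idx]

    # Toy usage: candidate Python snippets defining a function f, checked on two inputs.
    candidates = [
        "def f(x): return x * 2",
        "def f(x): return x + x",
        "def f(x): return x ** 2",
    ]

    def run(program, x):
        env = {}
        exec(program, env)  # execute the candidate definition (no sandboxing here)
        try:
            return env["f"](x)
        except Exception:
            return "ERROR"

    print(mbr_exec(candidates, test_inputs=[2, 3], run=run))

In this toy example the first two candidates produce identical outputs on both test inputs, so they form the largest equivalence class and one of them is selected; the released code and data at the URLs above cover the full few-shot evaluation setup.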