3 research outputs found
Training Language Models with Language Feedback at Scale
Pretrained language models often generate outputs that are not in line with
human preferences, such as harmful text or factually incorrect summaries.
Recent work approaches the above issues by learning from a simple form of human
feedback: comparisons between pairs of model-generated outputs. However,
comparison feedback only conveys limited information about human preferences.
In this paper, we introduce Imitation learning from Language Feedback (ILF), a
new approach that utilizes more informative language feedback. ILF consists of
three steps that are applied iteratively: first, conditioning the language
model on the input, an initial LM output, and the feedback to generate
refinements; second, selecting the refinement that incorporates the most
feedback; third, finetuning the language model to maximize the likelihood of
the chosen refinement given the input. We show theoretically that ILF can be
viewed as Bayesian inference, similar to reinforcement learning from human feedback. We
evaluate ILF's effectiveness on a carefully-controlled toy task and a realistic
summarization task. Our experiments demonstrate that large language models
accurately incorporate feedback and that finetuning with ILF scales well with
the dataset size, even outperforming finetuning on human summaries. Learning
from both language and comparison feedback outperforms learning from each
alone, achieving human-level summarization performance.
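
The refine-select-finetune loop described in the abstract above can be made concrete with a short sketch. Below is a minimal, hypothetical Python rendering of one ILF round; the model interface (model.generate, model.finetune) and the keyword-overlap selection heuristic are illustrative assumptions, not the authors' implementation, which uses a stronger signal for choosing the best refinement.

```python
# Minimal sketch of one ILF round, under assumed interfaces.

def ilf_round(model, examples, num_samples=5):
    pairs = []
    for ex in examples:
        prompt = (
            f"Input: {ex['input']}\n"
            f"Initial output: {ex['initial_output']}\n"
            f"Feedback: {ex['feedback']}\n"
            "Refined output:"
        )
        # Step 1: condition the LM on the input, an initial output, and the
        # language feedback to sample several candidate refinements.
        refinements = [model.generate(prompt) for _ in range(num_samples)]

        # Step 2: select the refinement that best incorporates the feedback.
        # Trivial stand-in heuristic: overlap with feedback keywords.
        fb_words = set(ex["feedback"].lower().split())
        best = max(refinements,
                   key=lambda r: len(fb_words & set(r.lower().split())))

        # Collect supervised pairs (input -> chosen refinement); note that
        # feedback itself is not needed at test time.
        pairs.append((ex["input"], best))

    # Step 3: finetune to maximize the likelihood of the chosen refinements
    # given the inputs alone.
    model.finetune(pairs)
    return model
```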
Improving Code Generation by Training with Natural Language Feedback
The potential for pre-trained large language models (LLMs) to use natural
language feedback at inference time has been an exciting recent development. We
build upon this observation by formalizing an algorithm for learning from
natural language feedback at training time instead, which we call Imitation
learning from Language Feedback (ILF). ILF requires only a small amount of
human-written feedback during training and does not require the same feedback
at test time, making it both user-friendly and sample-efficient. We further
show that ILF can be seen as a form of minimizing the KL divergence to the
ground truth distribution and demonstrate a proof-of-concept on a neural
program synthesis task. We use ILF to improve a Codegen-Mono 6.1B model's
pass@1 rate by 38% relative (and 10% absolute) on the Mostly Basic Python
Problems (MBPP) benchmark, outperforming both fine-tuning on MBPP and
fine-tuning on repaired programs written by humans. Overall, our results
suggest that learning from human-written natural language feedback is both more
effective and sample-efficient than training exclusively on demonstrations for
improving an LLM's performance on code generation tasks.
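
As a back-of-the-envelope reading of the reported gains, the 38% relative and 10% (absolute percentage-point) pass@1 improvements jointly imply an approximate baseline; the snippet below only rearranges the two reported numbers, and the implied baseline is an inference, not a figure stated in the abstract.

```python
# Relating the reported relative and absolute pass@1 improvements.
relative_gain = 0.38   # 38% relative improvement
absolute_gain = 0.10   # 10 percentage points absolute improvement

implied_baseline = absolute_gain / relative_gain   # ~0.26 pass@1 before ILF
implied_after = implied_baseline + absolute_gain   # ~0.36 pass@1 after ILF
print(f"baseline ~{implied_baseline:.1%}, after ILF ~{implied_after:.1%}")
```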
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
Artificial agents have traditionally been trained to maximize reward, which
may incentivize power-seeking and deception, analogous to how next-token
prediction in language models (LMs) may incentivize toxicity. So do agents
naturally learn to be Machiavellian? And how do we measure these behaviors in
general-purpose models such as GPT-4? Towards answering these questions, we
introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games
containing over half a million rich, diverse scenarios that center on social
decision-making. Scenario labeling is automated with LMs, which outperform
human annotators on this task. We mathematize dozens of harmful behaviors
and use our annotations to evaluate agents' tendencies to be power-seeking,
cause disutility, and commit ethical violations. We observe some tension
between maximizing reward and behaving ethically. To improve this trade-off, we
investigate LM-based methods to steer agents towards less harmful behaviors.
Our results show that agents can act both competently and morally, so concrete
progress can currently be made in machine ethics: designing agents that are
Pareto improvements in both safety and capabilities.
Comment: ICML 2023 Oral; 31 pages, 5 figures
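
The "Pareto improvement" framing above has a simple operational reading: an agent improves on another if it is at least as good on both reward and harm avoidance and strictly better on at least one. The sketch below illustrates that check; the score fields and numbers are hypothetical, not values from the benchmark.

```python
# Illustrative Pareto-improvement check over (reward, harm) scores.

def pareto_improves(a, b):
    """True if agent `a` Pareto-improves on agent `b`:
    higher reward is better, lower harm is better."""
    at_least_as_good = a["reward"] >= b["reward"] and a["harm"] <= b["harm"]
    strictly_better = a["reward"] > b["reward"] or a["harm"] < b["harm"]
    return at_least_as_good and strictly_better

baseline = {"reward": 0.60, "harm": 0.40}   # made-up numbers for illustration
steered  = {"reward": 0.62, "harm": 0.25}
print(pareto_improves(steered, baseline))   # True: safer and more capable
```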