While large language models have proven effective across a wide range of
downstream applications, they often generate text that is problematic or lacks
a desired attribute. In this paper, we introduce Reward-Augmented Decoding
(RAD), a text generation procedure that uses a small unidirectional reward
model to encourage a language model to generate text that has certain
properties. Specifically, RAD uses the reward model to score generations as
they are produced and rescales sampling probabilities to favor high-reward
tokens. By using a unidirectional reward model, RAD can cache activations from
prior generation steps to decrease computational overhead. Through experiments
on generating non-toxic and sentiment-controlled text, we demonstrate that RAD
performs best among methods that change only the generation procedure and
matches the performance of state-of-the-art methods that involve re-training
the language model. We further validate that RAD is effective on very large
language models while incurring a minimal computational overhead
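
To make the idea concrete, the sketch below shows one reward-augmented sampling step in the spirit described above: the language model's top-k candidate tokens are rescored by adding a reward term to their logits before renormalizing and sampling. This is an illustrative simplification, not the paper's exact algorithm; the function name `rad_step`, the specific rescaling rule, and the assumption that `candidate_rewards` is precomputed and aligned with the top-k tokens are all ours.

```python
import numpy as np

def rad_step(lm_logits, candidate_rewards, beta=1.0, top_k=20):
    """One illustrative reward-augmented sampling step.

    lm_logits: vocabulary logits from the language model at the current step.
    candidate_rewards: reward-model scores for appending each of the top_k
        candidate tokens to the current prefix, ordered to match the
        top-k indices computed below.
    beta: strength of the reward adjustment.
    """
    # Restrict sampling to the k most likely tokens under the language model.
    top_ids = np.argsort(lm_logits)[-top_k:]
    top_logits = lm_logits[top_ids]

    # Shift each candidate's logit by beta times its reward, then renormalize,
    # so higher-reward continuations become more likely to be sampled.
    adjusted = top_logits + beta * candidate_rewards
    probs = np.exp(adjusted - adjusted.max())
    probs /= probs.sum()

    # Sample the next token from the reweighted distribution.
    return int(np.random.choice(top_ids, p=probs))
```

Scoring only the top-k candidates keeps the reward model's per-step cost small, and because the reward model is unidirectional, activations for the shared prefix can be cached and reused across steps rather than recomputed.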