Pretrained language models have demonstrated extraordinary capabilities in
language generation. However, real-world tasks often require controlling the
distribution of generated text in order to mitigate bias, promote fairness, and
achieve personalization. Existing techniques for controlling the distribution
of generated text work only with quantified distributions, which require
pre-defined categories, known proportions for each category, or an existing
corpus that follows the desired distribution. However, many important distributions,
such as personal preferences, are unquantified. In this work, we tackle the
problem of generating text following arbitrary distributions (quantified and
unquantified) by proposing Nano, a few-shot human-in-the-loop training
algorithm that continuously learns from human feedback. Nano achieves
state-of-the-art results on single-topic/attribute control as well as quantified
distribution control compared to previous work. We also show that Nano is able
to learn unquantified distributions, achieve personalization, and capture
differences between individuals' personal preferences with high
sample efficiency.