Natural language (NL) feedback contains rich information about the user
experience. Existing studies focus on an instance-level approach, where
feedback is used to refine specific examples, disregarding its system-wide
application. This paper proposes a general framework for unlocking the
system-level use of NL feedback. We show how to use feedback to formalize
system-level design decisions in a human-in-the-loop process, in order to
produce better models. In particular, this is done through: (i) metric design
for tasks; and (ii) language model prompt design for refining model responses.
We conduct two case studies of this approach, improving search query
generation and dialog response generation, and demonstrate the effectiveness
of system-level feedback. We show that combining system-level and
instance-level feedback brings further gains, and that human-written
instance-level feedback results in more grounded refinements than
GPT-3.5-written ones, underscoring the importance of human feedback for building
systems.