Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References
The aim of this paper is to mitigate the shortcomings of automatic evaluation
of open-domain dialog systems through multi-reference evaluation. Existing
metrics have been shown to correlate poorly with human judgement, particularly
in open-domain dialog. One alternative is to collect human annotations for
evaluation, which can be expensive and time-consuming. To demonstrate the
effectiveness of multi-reference evaluation, we augment the test set of
DailyDialog with multiple references. A series of experiments show that the use
of multiple references results in improved correlation between several
automatic metrics and human judgement for both the quality and the diversity of
system output.

Comment: SIGDIAL 2019
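As a rough illustration of the idea (not the authors' implementation), the sketch below scores a single dialog response against several human-written references using NLTK's sentence-level BLEU, which accepts multiple references directly; the example sentences and the helper name multi_ref_bleu are hypothetical.

```python
# Minimal sketch (not the paper's code): multi-reference evaluation of one
# system response with BLEU via NLTK, which matches n-grams against the best
# of several references rather than a single gold response.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def multi_ref_bleu(hypothesis: str, references: list[str]) -> float:
    """Sentence-level BLEU of one response against multiple references."""
    hyp_tokens = hypothesis.split()
    ref_tokens = [ref.split() for ref in references]
    smoothing = SmoothingFunction().method1  # avoid zero scores on short responses
    return sentence_bleu(ref_tokens, hyp_tokens, smoothing_function=smoothing)

# Hypothetical DailyDialog-style example: one response, several valid references.
references = [
    "sure , i would love to grab coffee later .",
    "sounds good , let 's meet at the cafe .",
    "yes , coffee later works for me .",
]
response = "sure , let 's get coffee later ."
print(f"multi-reference BLEU: {multi_ref_bleu(response, references):.3f}")
```

With a single reference, a response that is valid but phrased differently gets an unfairly low score; scoring against several references gives overlap-based metrics more chances to match, which is the mechanism behind the improved correlation with human judgement reported in the abstract.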