122 research outputs found

    Findings of the E2E NLG Challenge

    This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems. Recent end-to-end generation systems are promising since they reduce the need for data annotation. However, they are currently limited to small, delexicalised datasets. The E2E NLG shared task aims to assess whether these novel approaches can generate better-quality output by learning from a dataset containing higher lexical richness, syntactic complexity and diverse discourse phenomena. We compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine-learning architectures, the majority implementing sequence-to-sequence (seq2seq) models, as well as systems based on grammatical rules and templates. Comment: Accepted to INLG 2018.
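
    A brief illustration of the task's input format: each dataset entry pairs a meaning representation (MR) of attribute[value] slots with human-written reference texts. The following minimal Python sketch shows the template-based end of the system spectrum the abstract mentions (the MR string, parsing regex and template are illustrative, not taken from any submitted system):

```python
import re

def parse_mr(mr: str) -> dict:
    """Parse an E2E-style MR such as
    'name[The Eagle], eatType[coffee shop], area[riverside]'
    into an attribute -> value dict."""
    return dict(re.findall(r"(\w+)\[([^\]]*)\]", mr))

def realise(slots: dict) -> str:
    """Fill a single hand-written template; learned systems
    (e.g. seq2seq) replace this mapping with a trained model."""
    parts = [slots.get("name", "The venue")]
    if "eatType" in slots:
        parts.append(f"is a {slots['eatType']}")
    if "area" in slots:
        parts.append(f"in the {slots['area']} area")
    return " ".join(parts) + "."

print(realise(parse_mr("name[The Eagle], eatType[coffee shop], area[riverside]")))
# -> The Eagle is a coffee shop in the riverside area.
```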

    Cost-based attribute selection for GRE (GRAPH-SC/GRAPH-FP)

    In this paper we discuss several approaches to the problem of content determination for the generation of referring expressions (GRE) using the graph-based framework of Krahmer et al. (2003). This work was carried out in the context of the First NLG Shared Task and Evaluation Challenge on Attribute Selection for Referring Expression Generation.
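
    In Krahmer et al.'s framework, the scene is a labelled graph and a referring expression corresponds to a cheapest distinguishing subgraph found by cost-driven search. The simplified sketch below conveys the cost-based intuition using greedy selection over flat attribute-value pairs rather than subgraph search (the scene and cost values are illustrative):

```python
# Greedy cost-based attribute selection: add the cheapest attribute-value
# pairs of the target until every distractor is ruled out. Returns None if
# the target cannot be distinguished.

def select_attributes(target, distractors, costs):
    chosen, remaining = {}, list(distractors)
    for attr in sorted(target, key=lambda a: costs.get(a, 1.0)):
        if not remaining:
            break
        survivors = [d for d in remaining if d.get(attr) == target[attr]]
        if len(survivors) < len(remaining):  # attribute rules out distractors
            chosen[attr] = target[attr]
            remaining = survivors
    return chosen if not remaining else None

target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [{"type": "chair", "colour": "blue", "size": "large"},
               {"type": "table", "colour": "red", "size": "small"}]
print(select_attributes(target, distractors, {"type": 0.0, "colour": 0.5, "size": 1.0}))
# -> {'type': 'chair', 'colour': 'red'}
```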

    RankME: Reliable Human Ratings for Natural Language Generation

    Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems. Comment: Accepted to NAACL 2018 (The 2018 Conference of the North American Chapter of the Association for Computational Linguistics).
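
    Magnitude estimation lets each judge score all systems' outputs side by side on an open-ended scale, so scores are comparable within a judge but not across judges. The sketch below shows one simple way to aggregate such ratings, per-judge z-normalisation followed by averaging; it stands in for, and is not, the paper's Bayesian estimation of system quality (the ratings are invented):

```python
from statistics import mean, pstdev

ratings = {  # judge -> {system: magnitude estimate on the judge's own scale}
    "judge1": {"A": 80.0, "B": 60.0, "C": 20.0},
    "judge2": {"A": 9.0,  "B": 7.0,  "C": 3.0},
}

def znorm(scores):
    """Normalise one judge's scores to zero mean and unit variance."""
    mu, sd = mean(scores.values()), pstdev(scores.values())
    return {s: (v - mu) / sd for s, v in scores.items()}

normalised = [znorm(s) for s in ratings.values()]
systems = normalised[0].keys()
ranking = sorted(systems, key=lambda s: -mean(n[s] for n in normalised))
print(ranking)  # -> ['A', 'B', 'C']
```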

    Preface


    Empirical Methods in Natural Language Generation


    Modality Choice for Generation of Referring Acts: Pointing versus Describing

    The main aim of this paper is to challenge two commonly held assumptions regarding modality selection in the generation of referring acts: the assumption that non-verbal means of referring are secondary to verbal ones, and the assumption that there is a single strategy that speakers follow when generating referring acts. Our evidence is drawn from a corpus of task-oriented dialogues obtained through an observational study. We propose two alternative strategies for modality selection, based on correlation data from the observational study. Speakers who follow the first strategy simply abstain from pointing. Speakers who follow the other strategy base the decision of whether to point on whether the intended referent is in focus and/or important; this decision precedes the selection of verbal means (i.e., words) for referring.
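
    The two proposed strategies reduce to simple decision rules, rendered below as a toy Python sketch (the Referent record and its fields are illustrative, not the paper's corpus annotation scheme):

```python
from dataclasses import dataclass

@dataclass
class Referent:
    in_focus: bool
    important: bool

def strategy_abstainer(ref: Referent) -> bool:
    """First strategy: the speaker simply never points."""
    return False

def strategy_focus_based(ref: Referent) -> bool:
    """Second strategy: point when the referent is in focus and/or
    important; only afterwards are the verbal means (words) selected."""
    return ref.in_focus or ref.important

ref = Referent(in_focus=True, important=False)
print("abstainer points:", strategy_abstainer(ref))      # -> False
print("focus-based points:", strategy_focus_based(ref))  # -> True
```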

    Cluster-based prediction of user ratings for stylistic surface realisation


    A hearer-oriented evaluation of referring expression generation

    This work is supported by a University of Aberdeen Sixth Century Studentship and EPSRC grant EP/E011764/1. This paper discusses the evaluation of a Generation of Referring Expressions algorithm that takes structural ambiguity into account. We describe an ongoing study with human readers.