
    Referenceless Quality Estimation for Natural Language Generation

    Traditional automatic evaluation measures for natural language generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation results by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.
    Comment: Accepted as a regular paper to 1st Workshop on Learning to Generate Natural Language (LGNL), Sydney, 10 August 201
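    The core idea above — scoring an NLG output against the source meaning representation (MR) alone, with no human reference — can be illustrated by a minimal sketch. This is not the paper's actual model: the real system is a trained RNN-based QE regressor, whereas the toy below just encodes both sequences with an untrained Elman RNN and squashes their cosine similarity into a (0, 1) score. All tokens, dimensions, and weights here are hypothetical.

    ```python
    import numpy as np

    # Hypothetical setup; the paper's vocabulary, features, and training
    # procedure are not reproduced here.
    rng = np.random.default_rng(0)
    VOCAB = {w: i for i, w in enumerate(
        ["name[Loch_Fyne]", "food[Japanese]",
         "loch", "fyne", "serves", "japanese", "food"])}
    DIM = 16
    emb = rng.normal(size=(len(VOCAB), DIM))       # token embeddings
    Wh = rng.normal(scale=0.1, size=(DIM, DIM))    # recurrent weights
    Wx = rng.normal(scale=0.1, size=(DIM, DIM))    # input weights

    def encode(tokens):
        """Encode a token sequence with a plain (untrained) Elman RNN."""
        h = np.zeros(DIM)
        for tok in tokens:
            h = np.tanh(Wh @ h + Wx @ emb[VOCAB[tok]])
        return h

    def quality(mr_tokens, output_tokens):
        """Map cosine similarity of the two encodings to a (0, 1) score."""
        a, b = encode(mr_tokens), encode(output_tokens)
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return 1.0 / (1.0 + np.exp(-cos))  # sigmoid

    mr = ["name[Loch_Fyne]", "food[Japanese]"]
    output = ["loch", "fyne", "serves", "japanese", "food"]
    print(quality(mr, output))
    ```

    In the actual approach the encoder and scoring layer would be trained on human quality ratings (augmented with synthetic data), rather than relying on random weights as this sketch does.
    
    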

    Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge

    This paper provides a comprehensive analysis of the first shared task on End-to-End Natural Language Generation (NLG) and identifies avenues for future research based on the results. This shared task aimed to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena. Introducing novel automatic and human metrics, we compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning architectures -- with the majority implementing sequence-to-sequence models (seq2seq) -- as well as systems based on grammatical rules and templates. Seq2seq-based systems have demonstrated great potential for NLG in the challenge. We find that seq2seq systems generally score high in terms of word-overlap metrics and human evaluations of naturalness -- with the winning SLUG system (Juraska et al., 2018) being seq2seq-based. However, vanilla seq2seq models often fail to correctly express a given meaning representation if they lack a strong semantic control mechanism applied during decoding. Moreover, seq2seq models can be outperformed by hand-engineered systems in terms of overall quality, as well as complexity, length and diversity of outputs. This research has influenced, inspired and motivated a number of recent studies outwith the original competition, which we also summarise as part of this paper.
    Comment: Computer Speech and Language, final accepted manuscript (in press)

    Neural natural language generation with unstructured contextual information

    In this work, we present a novel task for automatic natural language generation based on the exploitation of unstructured contextual information. The main aim of the task is to enable the evaluation of a system's capability to generate coherent text from previously unseen and unstructured information. A new corpus was prepared specifically for the task, based on the Amazon Review corpus, with product descriptions used as input for the generation of user reviews. Different deep learning generation models were implemented and compared under the proposed task, showing significant differences in their ability to exploit unstructured contextual information.