
    NeuralREG: An end-to-end approach to referring expression generation

    Traditionally, Referring Expression Generation (REG) models first decide on the form and then on the content of references to discourse entities in text, typically relying on features such as salience and grammatical function. In this paper, we present a new approach (NeuralREG), relying on deep neural networks, which makes decisions about form and content in one go, without explicit feature extraction. Using a delexicalized version of the WebNLG corpus, we show that the neural model substantially improves over two strong baselines. Data and models are publicly available. Comment: Accepted for presentation at ACL 2018.
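    As a rough illustration of the preprocessing the abstract refers to, the sketch below delexicalizes referring expressions in a WebNLG-style text by replacing entity mentions with placeholder tags. The tag scheme (AGENT-1, PATIENT-1) and the delexicalize helper are illustrative assumptions for this sketch, not the NeuralREG authors' actual pipeline.

        # Minimal, self-contained sketch (Python) of WebNLG-style delexicalization.
        # The tag names and this helper are assumptions, not the paper's code.
        def delexicalize(text: str, entity_tags: dict) -> str:
            """Replace each entity mention with its placeholder tag,
            longest mention first so overlapping mentions resolve sanely."""
            for mention in sorted(entity_tags, key=len, reverse=True):
                text = text.replace(mention, entity_tags[mention])
            return text

        if __name__ == "__main__":
            sentence = "John Doe was born in London. He lives in London."
            tags = {"John Doe": "AGENT-1", "He": "AGENT-1", "London": "PATIENT-1"}
            print(delexicalize(sentence, tags))
            # AGENT-1 was born in PATIENT-1. AGENT-1 lives in PATIENT-1.

    The REG task is then the inverse mapping: given a text containing such tags, generate an appropriate referring expression (a name, a description, or a pronoun) for each tag in context.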

    Natural Language Generation and Fuzzy Sets: An Exploratory Study on Geographical Referring Expression Generation

    This work was supported by the Spanish Ministry for Economy and Competitiveness (grant TIN2014-56633-C3-1-R) and by the European Regional Development Fund (ERDF/FEDER) and the Galician Ministry of Education (grants GRC2014/030 and CN2012/151). Alejandro Ramos-Soto is supported by the Spanish Ministry for Economy and Competitiveness (FPI Fellowship Program) under grant BES-2012-051878. Postprint

    Domain transfer for deep natural language generation from abstract meaning representations

    Stochastic natural language generation systems that are trained from labelled datasets are often domain-specific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As a key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory (LSTM) recurrent neural network encoder-decoder, in which one recurrent neural network learns a latent representation of a semantic input, and a second recurrent neural network learns to decode it to a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new, unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains and achieve up to 75-100% of the performance of in-domain training, as measured by objective metrics such as BLEU and semantic error rate as well as a subjective human rating study. Training a policy from prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.
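    To make the architecture concrete, here is a minimal sketch of such an LSTM encoder-decoder in PyTorch: one LSTM reads a linearized semantic input (e.g. an AMR graph rendered as a token sequence) into a latent state, and a second LSTM decodes that state into words. All sizes, names, and the choice of PyTorch are assumptions for illustration, not the paper's implementation.

        # Minimal LSTM encoder-decoder for NLG (illustrative sketch, not the paper's code).
        import torch
        import torch.nn as nn

        class Seq2SeqNLG(nn.Module):
            def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
                super().__init__()
                self.src_embed = nn.Embedding(src_vocab, emb)
                self.tgt_embed = nn.Embedding(tgt_vocab, emb)
                self.encoder = nn.LSTM(emb, hid, batch_first=True)  # reads the semantic input
                self.decoder = nn.LSTM(emb, hid, batch_first=True)  # generates the word sequence
                self.out = nn.Linear(hid, tgt_vocab)                # projects to word logits

            def forward(self, src, tgt):
                # The encoder's final (hidden, cell) state is the latent representation
                # that initializes the decoder.
                _, latent = self.encoder(self.src_embed(src))
                dec_out, _ = self.decoder(self.tgt_embed(tgt), latent)
                return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

        model = Seq2SeqNLG(src_vocab=100, tgt_vocab=200)
        src = torch.randint(0, 100, (2, 7))  # toy batch of linearized AMR token ids
        tgt = torch.randint(0, 200, (2, 5))  # toy batch of target word ids
        print(model(src, tgt).shape)         # torch.Size([2, 5, 200])

    Under this framing, domain transfer amounts to training the same network on data reused or pooled across domains; because the AMR-based input representation is shared, no architectural change is needed.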

    String Lessons for Higher-Spin Interactions

    String Theory includes a plethora of higher-spin excitations, which clearly lie behind its most spectacular properties, but whose detailed behavior is largely unknown. Conversely, string interactions contain much useful information on higher-spin couplings, which can be very valuable in current attempts to characterize their systematics. We present a simplified form for the three-point (and four-point) amplitudes of the symmetric tensors belonging to the first Regge trajectory of the open bosonic string and relate them to local couplings and currents. These include the cases first discussed, from a field theory perspective, by Berends, Burgers and van Dam, and generalize their results in a suggestive fashion along lines recently explored by Boulanger, Metsaev and others. We also comment on the recovery of gauge symmetry in the low-tension limit, on the current-exchange amplitudes that can be built from these couplings and on the extension to mixed-symmetry states. Comment: 68 pages, LaTeX. Appendix on off-shell vertices and conserved (Bose and Fermi) currents added, typos corrected, references added. Final version to appear in Nucl. Phys.
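    For orientation, the states in question obey the textbook mass-spin relation of the open bosonic string's first Regge trajectory (a standard fact, not a result of this paper): a symmetric tensor of spin $s$ sits at level $N = s$ with

        \alpha' M^2 = s - 1,

    so only the $s = 1$ gauge boson is massless at finite tension, while in the low-tension limit $\alpha' \to \infty$ the whole trajectory becomes effectively massless, which is where the abstract's remark on the recovery of gauge symmetry enters.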