    Domain transfer for deep natural language generation from abstract meaning representations

    Stochastic natural language generation systems that are trained from labelled datasets are often domain-specific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As the key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory recurrent neural network encoder-decoder, in which one recurrent neural network learns a latent representation of a semantic input and a second recurrent neural network learns to decode it into a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new, unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains, achieving up to 75-100% of the performance of in-domain training, as measured by objective metrics such as BLEU and semantic error rate and by a subjective human rating study. Training a policy with prior knowledge from a different domain consistently outperforms pure in-domain training by up to 10%.
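    A minimal sketch of the encoder-decoder setup the abstract describes: one LSTM encodes a linearized semantic input into a latent state, a second LSTM decodes it into words, and domain transfer amounts to pretraining on the source domain and fine-tuning the same parameters on the target domain. Vocabulary sizes, dimensions, and the linearized-AMR input format here are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the (linearized) semantic input into a latent state ...
        _, state = self.encoder(self.src_emb(src_ids))
        # ... and decode it into a word sequence, teacher-forced.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)

# Transfer recipe (sketch): train on source-domain pairs first, then
# continue training the same model on the smaller target-domain data.
model = Seq2Seq(src_vocab=5000, tgt_vocab=5000)
loss_fn = nn.CrossEntropyLoss()
src = torch.randint(0, 5000, (2, 12))   # toy linearized-AMR token ids
tgt = torch.randint(0, 5000, (2, 10))   # toy reference sentences
logits = model(src, tgt[:, :-1])        # predict next word at each step
loss = loss_fn(logits.reshape(-1, 5000), tgt[:, 1:].reshape(-1))
loss.backward()
```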

    Information and Experience in Metaphor: A Perspective From Computer Analysis

    Novel linguistic metaphor can be seen as the assignment of attributes to a topic through a vehicle belonging to another domain. The experience evoked by the vehicle is a significant aspect of the meaning of the metaphor, especially for abstract metaphor, which involves more than mere physical similarity. In this article I indicate, through description of a specific model, some possibilities as well as limitations of computer processing directed toward both informative and experiential/affective aspects of metaphor. A background to the discussion is given by other computational treatments of metaphor analysis, as well as by some questions about metaphor originating in other disciplines. The approach on which the present metaphor analysis model is based is consistent with a theory of language comprehension that includes both the intent of the originator and the effect on the recipient of the metaphor. The model addresses the dual problem of (a) determining potentially salient properties of the vehicle concept, and (b) defining extensible symbolic representations of such properties, including affective and other connotations. The nature of the linguistic analysis underlying the model suggests how metaphoric expression of experiential components in abstract metaphor is dependent on the nominalization of actions and attributes. The inverse process of undoing such nominalizations in computer analysis of metaphor constitutes a translation of a metaphor to a more literal expression within the metaphor/non-metaphor dichotomy.
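    A toy sketch of the two steps the abstract names: undoing a nominalization (mapping a vehicle noun back to its underlying action or attribute) and looking up salient, connotative properties of that predicate. The lexicon entries below are illustrative assumptions, not the article's actual knowledge base.

```python
# Hypothetical mini-lexicon: nominalization -> (predicate, kind).
DENOMINALIZE = {
    "explosion": ("explode", "action"),
    "warmth":    ("warm",    "attribute"),
    "flood":     ("flood",   "action"),
}

# Hypothetical salient/affective properties per predicate.
SALIENT_PROPERTIES = {
    "explode": ["sudden", "violent", "releases energy"],
    "warm":    ["comforting", "positive affect"],
    "flood":   ["overwhelming", "uncontrolled quantity"],
}

def analyze_vehicle(noun: str):
    """Undo the nominalization of a metaphor vehicle and return the
    underlying predicate with its salient properties, or None."""
    entry = DENOMINALIZE.get(noun)
    if entry is None:
        return None
    predicate, kind = entry
    return {"predicate": predicate, "kind": kind,
            "properties": SALIENT_PROPERTIES.get(predicate, [])}

# "a flood of emails": the vehicle 'flood' contributes 'overwhelming,
# uncontrolled quantity' to the topic 'emails'.
print(analyze_vehicle("flood"))
```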

    Metaphoric coherence: Distinguishing verbal metaphor from 'anomaly'

    Theories and computational models of metaphor comprehension generally circumvent the question of metaphor versus “anomaly” in favor of a treatment of metaphor versus literal language. Making the distinction between metaphoric and “anomalous” expressions is subject to wide variation in judgment, yet humans agree that some potentially metaphoric expressions are much more comprehensible than others. In the context of a program which interprets simple isolated sentences that are potential instances of cross-modal and other verbal metaphor, I consider some possible coherence criteria which must be satisfied for an expression to be “conceivable” metaphorically. Metaphoric constraints on object nominals are represented as abstracted or extended along with the invariant structural components of the verb meaning in a metaphor. This approach distinguishes what is preserved in metaphoric extension from that which is “violated”, thus referring to both “similarity” and “dissimilarity” views of metaphor. The role and potential limits of represented abstracted properties and constraints are discussed as they relate to the recognition of incoherent semantic combinations and the rejection or adjustment of metaphoric interpretations.
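    A toy sketch of the coherence test described above: a verb meaning is split into invariant structural components (preserved under metaphoric extension) and literal selectional constraints (which a metaphor may violate). An expression violating only the literal constraints is a conceivable metaphor; one violating the invariant structure is anomalous. The verb entry and type sets are illustrative assumptions.

```python
# Hypothetical verb entry: invariant structure vs. literal constraints.
VERBS = {
    "drink": {
        "invariant": {"patient": "substance"},   # something is taken in
        "literal":   {"agent": "animate", "patient": "liquid"},
    },
}

# Hypothetical semantic types for a few nominals.
TYPES = {
    "man":      {"animate", "entity"},
    "car":      {"machine", "entity"},
    "gasoline": {"liquid", "substance", "entity"},
    "idea":     {"abstraction", "entity"},
}

def classify(verb, agent, patient):
    v = VERBS[verb]
    fillers = {"agent": TYPES[agent], "patient": TYPES[patient]}
    # Anomaly: the invariant structure of the verb meaning is violated.
    for role, req in v["invariant"].items():
        if req not in fillers[role]:
            return "anomaly"
    # Literal if all literal constraints hold; metaphor otherwise.
    for role, req in v["literal"].items():
        if req not in fillers[role]:
            return "metaphor"
    return "literal"

print(classify("drink", "man", "gasoline"))  # literal (type-coherent)
print(classify("drink", "car", "gasoline"))  # metaphor: "the car drinks gasoline"
print(classify("drink", "car", "idea"))      # anomaly: nothing ingestible
```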

    Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation

    Question Generation (QG) is fundamentally a simple syntactic transformation; however, many aspects of semantics influence what questions are good to form. We implement this observation by developing Syn-QG, a set of transparent syntactic rules leveraging universal dependencies, shallow semantic parsing, lexical resources, and custom rules which transform declarative sentences into question-answer pairs. We utilize PropBank argument descriptions and VerbNet state predicates to incorporate shallow semantic content, which helps generate questions of a descriptive nature and produce inferential and semantically richer questions than existing systems. In order to improve syntactic fluency and eliminate grammatically incorrect questions, we employ back-translation over the output of these syntactic rules. A set of crowd-sourced evaluations shows that our system can generate a larger number of highly grammatical and relevant questions than previous QG systems and that back-translation drastically improves grammaticality at a slight cost of generating irrelevant questions.
    Comment: Some of the results in the paper were incorrect.
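    A minimal sketch of one Syn-QG-style rule: a subject wh-question is formed from a declarative sentence by replacing the nsubj subtree with "Who"/"What", using universal dependencies from spaCy. This is a single illustrative rule under assumed inputs, not the paper's actual rule set (which also draws on PropBank, VerbNet, and back-translation).

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def subject_question(sentence: str):
    """Turn 'X did Y.' into ('Who/What did Y?', 'X') via one rule."""
    doc = nlp(sentence)
    for tok in doc:
        if tok.dep_ == "nsubj" and tok.head.dep_ == "ROOT":
            sub = {t.i for t in tok.subtree}
            # NER-based wh-word choice is a simplifying assumption.
            wh = "Who" if tok.ent_type_ == "PERSON" else "What"
            answer = "".join(t.text_with_ws for t in tok.subtree).strip()
            rest = "".join(t.text_with_ws for t in doc if t.i not in sub)
            return f"{wh} {rest.strip().rstrip('.')}?", answer
    return None

print(subject_question("Marie Curie discovered radium."))
# expected: ('Who discovered radium?', 'Marie Curie')
```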