14 research outputs found

    Learning an Executable Neural Semantic Parser

    This paper describes a neural semantic parser that maps natural language utterances onto logical forms which can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The parser generates tree-structured logical forms with a transition-based approach that combines a generic tree-generation algorithm with domain-general operations defined by the logical language. The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. To tackle mismatches between natural language and logical form tokens, various attention mechanisms are explored. Finally, we consider different training settings for the neural semantic parser: fully supervised training, where annotated logical forms are given; weakly supervised training, where denotations are provided; and distant supervision, where only unlabeled sentences and a knowledge base are available. Experiments across a wide range of datasets demonstrate the effectiveness of our parser.
    Comment: In Journal of Computational Linguistics
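    As an illustration of the transition-based approach the abstract describes, the following minimal Python sketch replays a sequence of tree-generation transitions into a nested logical form. The action names (NT, TER, RED) and the toy logical form are illustrative assumptions, not the authors' exact transition system; in the paper's parser, each action would be scored by recurrent neural networks over the sentence and the generation history.

    # Minimal sketch of transition-based tree generation for logical forms.
    # NT opens a subtree (e.g., a predicate), TER emits a terminal (e.g., an
    # entity), and RED closes the current subtree. Illustrative only.
    def generate(actions):
        stack = []
        for act, arg in actions:
            if act == "NT":            # open a nonterminal
                stack.append([arg])
            elif act == "TER":         # emit a terminal symbol
                stack[-1].append(arg)
            elif act == "RED":         # reduce: close the current subtree
                subtree = stack.pop()
                if not stack:
                    return subtree     # root closed: generation complete
                stack[-1].append(subtree)
        raise ValueError("ill-formed action sequence")

    # Build the logical form (capital (country "Germany")):
    actions = [("NT", "capital"), ("NT", "country"),
               ("TER", '"Germany"'), ("RED", None), ("RED", None)]
    print(generate(actions))  # ['capital', ['country', '"Germany"']]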

    Assessment of text coherence using an ontology-based relatedness measurement method

    This paper proposes a novel method for assessing text coherence. Central to this approach is an ontology-based representation of text, which captures the level of relatedness between consecutive sentences via ontologies. Our method encompasses annotating text with ontological concepts and assessing text coherence based on relatedness measurements among these concepts. The ontology-based relatedness measurement method used in this study considers various types of relationships in ontologies, along with relationships derived via an inference engine, for computing relatedness. We hypothesized that the rich variety of relationships and inferred facts in ontologies would improve the success of text coherence assessment. Our results demonstrate that the use of ontologies yields coherence values that have a higher correlation with human ratings.
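    A minimal sketch of the general idea follows: score text coherence as the average ontology-based relatedness between the concepts of consecutive sentences. The toy ontology, the path-based relatedness formula, and the pre-annotated concepts are illustrative assumptions, not the paper's exact method, which also exploits typed relationships and inferred facts.

    from collections import deque

    # Toy ontology as an undirected graph over concepts.
    ontology = {
        "dog": ["mammal"], "cat": ["mammal"],
        "mammal": ["dog", "cat", "animal"], "animal": ["mammal"],
        "car": ["vehicle"], "vehicle": ["car"],
    }

    def relatedness(a, b):
        # 1 / (1 + shortest path length); 0 if no path connects the concepts.
        if a == b:
            return 1.0
        seen, frontier = {a}, deque([(a, 0)])
        while frontier:
            node, d = frontier.popleft()
            for nxt in ontology.get(node, []):
                if nxt == b:
                    return 1.0 / (2 + d)   # path length is d + 1
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return 0.0

    def coherence(sentence_concepts):
        # Average best relatedness between concepts of consecutive sentences.
        scores = [max(relatedness(p, c) for p in prev for c in curr)
                  for prev, curr in zip(sentence_concepts, sentence_concepts[1:])]
        return sum(scores) / len(scores)

    # Sentences already annotated with ontological concepts (annotation assumed).
    print(coherence([["dog"], ["cat"], ["car"]]))  # related pair scores high, unrelated pair 0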

    What plausibly affects plausibility? Concept coherence and distributional word coherence as factors influencing plausibility judgments

    Our goal was to investigate the basis of human plausibility judgments. Previous research had suggested that plausibility is affected by two factors: concept coherence (the inferences made between parts of a discourse) and word coherence (the distributional properties of the words used). In two experiments, participants were asked to rate the plausibility of sentence pairs describing events. In the first, we manipulated concept coherence by using different inference types to link the sentences in a pair (e.g., causal or temporal). In the second, we manipulated word coherence by using latent semantic analysis, so that two sentence pairs describing the same event had different distributional properties. The results showed that inference type affects plausibility: sentence pairs linked by causal inferences were rated highest, followed by attributal, temporal, and unrelated inferences. The distributional manipulations had no reliable effect on plausibility ratings. We conclude that the processes involved in rating plausibility are based on evaluating concept coherence, not word coherence.
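    To make the notion of distributional word coherence concrete, the sketch below compares sentence pairs by cosine similarity over bag-of-words count vectors, a crude stand-in for latent semantic analysis (LSA itself applies singular value decomposition to a large word-by-document co-occurrence matrix). The example sentences are invented for illustration and are not the experimental stimuli.

    import math
    from collections import Counter

    def cosine(s1, s2):
        # Cosine similarity between two sentences' word-count vectors.
        v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
        dot = sum(v1[w] * v2[w] for w in v1)
        norm = (math.sqrt(sum(c * c for c in v1.values()))
                * math.sqrt(sum(c * c for c in v2.values())))
        return dot / norm if norm else 0.0

    # Two openings for the same event with different distributional overlap
    # relative to the follow-on sentence.
    print(cosine("the glass fell off the table", "the glass broke on the floor"))    # 0.625
    print(cosine("the tumbler toppled from the desk", "the glass broke on the floor"))  # 0.5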