2,636 research outputs found

    Input effects on the acquisition of a novel phrasal construction in five year olds

    The present experiments demonstrate that children as young as five years old (M = 5;2) generalize beyond their input on the basis of minimal exposure to a novel argument structure construction. The novel construction involved a non-English phrasal pattern, VN1N2, paired with a novel abstract meaning: N2 approaches N1. At the same time, we find that children are keenly sensitive to the input: they show knowledge of the construction after a single day of exposure, but this knowledge grows stronger after three days; children also generalize more readily to new verbs when the input contains more than one verb.

    Who controls who (or what)

    Language can be used to bridge the gap between expert knowledge and the ability to act. I argue that this function is grammaticalized in imperatives (and, in some languages, larger paradigms of directives), and that this becomes evident in restrictions on the (co-)reference of their subjects. I develop an account of the conventional semantics of imperatives, and of directives in general, that associates the prohibited constellations with conflicting discourse requirements.

    Opposition theory and computational semiotics

    Opposition theory suggests that binary oppositions (e.g., high vs. low) underlie basic cognitive and linguistic processes. However, opposition theory has never been implemented in a computational cognitive-semiotics model. In this paper, we present a simple model of metaphor identification that relies on opposition theory. An algorithm instantiating the model has been tested on a data set of 100 phrases comprising adjective-noun pairs, in which approximately half represent metaphorical language use (e.g., dark thoughts) and the rest literal language use (e.g., dark hair). The algorithm achieved 89% accuracy in metaphor identification and illustrates the relevance of opposition theory for modelling metaphor processing.
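    The abstract does not spell out the algorithm, but the core idea of opposition-based metaphor identification for adjective-noun pairs can be sketched as follows. The lexicon, the domain labels, and the decision rule below are illustrative assumptions, not the authors' actual model.

    ```python
    # Hypothetical lexicon: for each adjective, the semantic domains in which
    # it applies literally (e.g. "dark" literally describes visual appearance).
    LITERAL_DOMAINS = {
        "dark": {"visual"},
        "bright": {"visual"},
        "high": {"spatial"},
    }

    # Hypothetical noun typing: the domain each noun belongs to.
    NOUN_DOMAIN = {
        "hair": "visual",
        "thoughts": "mental",
        "mood": "mental",
        "shelf": "spatial",
    }

    def is_metaphorical(adjective, noun):
        """Flag a pair as metaphorical when the noun's domain falls outside
        the adjective's literal domains (i.e., the domains stand in opposition)."""
        literal = LITERAL_DOMAINS.get(adjective, set())
        domain = NOUN_DOMAIN.get(noun)
        return domain is not None and domain not in literal

    print(is_metaphorical("dark", "thoughts"))  # metaphorical use
    print(is_metaphorical("dark", "hair"))      # literal use
    ```

    A real system would need wide-coverage domain information (e.g., from a lexical resource) rather than hand-coded tables, but the opposition check itself stays this simple.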

    Sense and preference

    Semantic networks have shown considerable utility as a knowledge representation for Natural Language Processing (NLP). This paper describes a system for automatically deriving network structures from machine-readable dictionary text. This strategy helps to solve the problem of vocabulary acquisition for large-scale parsing systems, but also introduces an extra level of difficulty in terms of word-sense ambiguity. A Preference Semantics parsing system that operates over this network is discussed, in particular as regards its mechanism for using the network for lexical selection.
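    The kernel of deriving a network from dictionary text is extracting the genus term (the head noun) of each definition and linking the headword to it. A minimal sketch, with a toy dictionary and a crude pattern standing in for real definition parsing:

    ```python
    import re

    # Toy machine-readable dictionary entries (headword -> definition).
    DICTIONARY = {
        "spaniel": "a dog with long ears and a silky coat",
        "dog": "a domesticated mammal that barks",
        "mammal": "an animal that feeds its young on milk",
    }

    def genus_term(definition):
        """Extract the genus (head noun) of a definition of the form
        'a/an/the ... X that/with/which ...'."""
        m = re.match(r"(?:a|an|the)\s+(?:\w+\s+)*?(\w+)\s+(?:that|with|which)\b",
                     definition)
        return m.group(1) if m else None

    def build_network(dictionary):
        """Link each headword to its genus term, yielding ISA edges."""
        edges = {}
        for headword, definition in dictionary.items():
            genus = genus_term(definition)
            if genus:
                edges[headword] = genus
        return edges

    print(build_network(DICTIONARY))
    # e.g. spaniel -> dog -> mammal -> animal
    ```

    The word-sense ambiguity the abstract mentions arises exactly here: the extracted genus is a word form, and choosing which sense of "dog" it denotes is what the Preference Semantics machinery must resolve.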

    Metaphoric coherence: Distinguishing verbal metaphor from 'anomaly'

    Theories and computational models of metaphor comprehension generally circumvent the question of metaphor versus "anomaly" in favor of a treatment of metaphor versus literal language. Making the distinction between metaphoric and "anomalous" expressions is subject to wide variation in judgment, yet humans agree that some potentially metaphoric expressions are much more comprehensible than others. In the context of a program which interprets simple isolated sentences that are potential instances of cross-modal and other verbal metaphor, I consider some possible coherence criteria which must be satisfied for an expression to be "conceivable" metaphorically. Metaphoric constraints on object nominals are represented as abstracted or extended along with the invariant structural components of the verb meaning in a metaphor. This approach distinguishes what is preserved in metaphoric extension from that which is "violated", thus accommodating both "similarity" and "dissimilarity" views of metaphor. The role and potential limits of represented abstracted properties and constraints are discussed as they relate to the recognition of incoherent semantic combinations and the rejection or adjustment of metaphoric interpretations.

    Dealing with Metonymic Readings of Named Entities

    The aim of this paper is to propose a method for tagging named entities (NE), using natural language processing techniques. Beyond their literal meaning, named entities are frequently subject to metonymy. We show the limits of current NE type hierarchies and detail a new proposal aiming at dynamically capturing the semantics of entities in context. This model can analyze complex linguistic phenomena like metonymy, which are known to be difficult for natural language processing but crucial for most applications. We present an implementation and some tests using the French ESTER corpus, and report significant results.
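    "Dynamically capturing the semantics of entities in context" can be illustrated with the classic place-for-institution metonymy (e.g., "Paris announced..."). The type lexicon and trigger verbs below are invented for illustration; they are not the paper's model, which works over a full type hierarchy.

    ```python
    # Hypothetical literal types for a few named entities.
    LITERAL_TYPE = {"Paris": "LOCATION", "France": "LOCATION"}

    # Verbs whose subject is read as an institution rather than a place
    # (an assumed trigger list, for illustration only).
    ORG_TRIGGER_VERBS = {"announced", "signed", "protested", "voted"}

    def contextual_type(entity, governing_verb):
        """Return the entity type in context: keep the literal type unless
        the governing verb triggers a place-for-institution metonymy."""
        base = LITERAL_TYPE.get(entity, "UNKNOWN")
        if base == "LOCATION" and governing_verb in ORG_TRIGGER_VERBS:
            return "ORGANIZATION"  # metonymic reading
        return base

    print(contextual_type("Paris", "announced"))  # institutional reading
    print(contextual_type("Paris", "visited"))    # literal place reading
    ```

    The point of the sketch is the shape of the decision, not its coverage: a static NE hierarchy assigns "Paris" one type forever, while a contextual model lets the reading shift with the surrounding predicate.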

    EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph Completion

    Commonsense knowledge graph completion is a new challenge for commonsense knowledge graph construction and application. In contrast to factual knowledge graphs such as Freebase and YAGO, commonsense knowledge graphs (CSKGs; e.g., ConceptNet) utilize free-form text to represent named entities, short phrases, and events as their nodes. Such a loose structure results in large and sparse CSKGs, which makes the semantic understanding of these nodes more critical for learning rich commonsense knowledge graph embeddings. While current methods leverage semantic similarities to increase graph density, the semantic plausibility of the nodes and their relations is under-explored. Previous works adopt conceptual abstraction to improve the consistency of modeling (event) plausibility, but they are not scalable enough and still suffer from data sparsity. In this paper, we propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class, which indicates a similar level of plausibility. Each node in the CSKG finds its top entailed nodes using a transformer finetuned on natural language inference (NLI) tasks, which sufficiently captures textual entailment signals. The entailment relations between these nodes are further utilized to: 1) build new connections between source triplets and entailed nodes to densify the sparse CSKGs; 2) enrich the generalization ability of node representations by comparing the node embeddings with a contrastive loss. Experiments on two standard CSKGs demonstrate that our proposed framework EntailE can improve the performance of CSKG completion tasks under both transductive and inductive settings.
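    The densification step described above can be sketched independently of the NLI model: score entailment between node texts, then add edges for each node's top-scoring entailed neighbors. Here `score_entailment` is a stand-in stub (a toy substring rule); in the paper this score comes from a transformer finetuned on NLI. The edge format and parameter names are assumptions for illustration.

    ```python
    def score_entailment(premise, hypothesis):
        # Stub: a real system would query an NLI model here. This toy rule
        # says a phrase entails any strict substring of itself.
        return 1.0 if hypothesis in premise and hypothesis != premise else 0.0

    def densify(nodes, edges, top_k=1, threshold=0.5):
        """Add entailment edges between node texts whose score clears the
        threshold, keeping at most top_k entailed nodes per source node."""
        new_edges = set(edges)
        for src in nodes:
            scored = [(score_entailment(src, tgt), tgt)
                      for tgt in nodes if tgt != src]
            scored.sort(reverse=True)
            for score, tgt in scored[:top_k]:
                if score >= threshold:
                    new_edges.add((src, "Entails", tgt))
        return new_edges

    nodes = ["drink hot coffee", "drink coffee", "coffee"]
    print(sorted(densify(nodes, set())))
    ```

    With the stub, both "drink hot coffee" and "drink coffee" gain an Entails edge to "coffee", mirroring how entailed nodes tie together phrases at a similar level of plausibility before the contrastive training step.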