14,978 research outputs found

    A Proof-Theoretic Approach to Scope Ambiguity in Compositional Vector Space Models

    Full text link
    We investigate the extent to which compositional vector space models can be used to account for scope ambiguity in quantified sentences (of the form "Every man loves some woman"). Such sentences, containing two quantifiers, admit two readings: a direct scope reading and an inverse scope reading. This ambiguity has been treated in a vector space model using bialgebras by Hedges and Sadrzadeh (2016) and Sadrzadeh (2016), though without an explanation of the mechanism by which the ambiguity arises. We combine a polarised focussed sequent calculus for the non-associative Lambek calculus NL, as described in Moortgat and Moot (2011), with the vector-based approach to quantifier scope ambiguity. In particular, we establish a procedure for obtaining a vector space model for quantifier scope ambiguity in a derivational way.
    Comment: This is a preprint of a paper to appear in: Journal of Language Modelling, 201
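    For reference, the two readings mentioned in the abstract can be written out in standard first-order notation (a conventional rendering of the ambiguity, not notation taken from the paper):

        % Direct (surface) scope: the universal quantifier outscopes the existential
        \forall x\, \big( \mathrm{man}(x) \rightarrow \exists y\, ( \mathrm{woman}(y) \wedge \mathrm{loves}(x,y) ) \big)

        % Inverse scope: the existential quantifier outscopes the universal
        \exists y\, \big( \mathrm{woman}(y) \wedge \forall x\, ( \mathrm{man}(x) \rightarrow \mathrm{loves}(x,y) ) \big)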

    Predicting and Explaining Human Semantic Search in a Cognitive Model

    Full text link
    Recent work has attempted to characterize the structure of semantic memory and the search algorithms which, together, best approximate human patterns of search revealed in a semantic fluency task. A number of models seek to capture semantic search processes over networks, but they vary in the cognitive plausibility of their implementation. Existing work has also neglected to consider the constraints that the incremental process of language acquisition must place on the structure of semantic memory. Here we present a model that incrementally updates a semantic network with limited computational steps, and that replicates many patterns found in human semantic fluency using a simple random walk. We also perform thorough analyses showing that a combination of structural and semantic features is correlated with human performance patterns.
    Comment: To appear in proceedings for CMCL 201
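    As an illustration of the kind of search process described (not the paper's actual model), a random walk over a toy semantic network can generate fluency-style response sequences. The network, the start word, and the uniform transition probabilities below are assumptions made for the sketch:

        import random

        # Toy undirected semantic network: each word lists its associates.
        # The structure is illustrative, not a learned network.
        NETWORK = {
            "dog":   ["cat", "wolf", "bone"],
            "cat":   ["dog", "mouse", "lion"],
            "wolf":  ["dog", "lion"],
            "lion":  ["cat", "wolf", "tiger"],
            "tiger": ["lion"],
            "mouse": ["cat"],
            "bone":  ["dog"],
        }

        def fluency_walk(start, steps, seed=0):
            """Random walk that records each node the first time it is visited,
            mimicking responses in a semantic fluency task ("name all animals...")."""
            rng = random.Random(seed)
            current = start
            responses = [start]
            seen = {start}
            for _ in range(steps):
                current = rng.choice(NETWORK[current])
                if current not in seen:   # only novel items count as responses
                    seen.add(current)
                    responses.append(current)
            return responses

        print(fluency_walk("dog", steps=50))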

    Information theory and representation in associative word learning

    Get PDF
    A significant portion of early language learning can be viewed as an associative learning problem. We investigate the use of associative language learning based on the principle that words convey Shannon information about the environment. We discuss the shortcomings in representation used by previous associative word learners and propose a functional representation that not only denotes environmental categories but also serves as the basis for activities and interaction with the environment. We present experimental results with an autonomous agent acquiring language.
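    A rough sketch of the "words carry Shannon information about the environment" idea, using co-occurrence counts and pointwise mutual information; the word/category pairs, the counts, and the PMI criterion are illustrative assumptions, not the agent's actual learning rule:

        import math

        # Hypothetical co-occurrence counts between heard words and perceived
        # environmental categories; the numbers are made up for illustration.
        counts = {
            ("ball", "round-object"): 40, ("ball", "container"): 2,
            ("cup",  "round-object"): 5,  ("cup",  "container"): 35,
        }
        total = sum(counts.values())

        def p(word=None, cat=None):
            """Marginal or joint probability estimated from the counts."""
            return sum(c for (w, k), c in counts.items()
                       if (word is None or w == word) and (cat is None or k == cat)) / total

        def pmi(word, cat):
            """Pointwise mutual information: how much hearing `word` tells the
            learner about category `cat` being present."""
            return math.log2(p(word, cat) / (p(word=word) * p(cat=cat)))

        print(round(pmi("ball", "round-object"), 2))  # strongly associated
        print(round(pmi("ball", "container"), 2))     # weakly / negatively associated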

    Creativity and the Brain

    Get PDF
    A neurocognitive approach to higher cognitive functions, bridging the gap between the psychological and neural levels of description, is introduced. Relevant facts about the brain, working memory, and the representation of symbols in the brain are summarized. Putative brain processes responsible for problem solving, intuition, skill learning, and automatization are described. The role of the non-dominant brain hemisphere in solving problems requiring insight is conjectured. Two factors seem to be essential for creativity: imagination constrained by experience, and filtering that selects the most interesting solutions. Experiments with paired-word association are analyzed in detail, and evidence for stochastic resonance effects is found. Brain activity during the invention of novel words is proposed as the simplest way to understand creativity by experimental and computational means. Perspectives on computational models of creativity are discussed.

    Types and forgetfulness in categorical linguistics and quantum mechanics

    Full text link
    The role of types in categorical models of meaning is investigated. A general scheme for how typed models of meaning may be used to compare sentences, regardless of their grammatical structure, is described, and a toy example is used as an illustration. Taking as a starting point the question of whether the evaluation of such a type system 'loses information', we consider the parametrized typing associated with connectives from this viewpoint. The answer to this question implies that, within full categorical models of meaning, the objects associated with types must exhibit a simple but subtle categorical property known as self-similarity. We investigate the category theory behind this, with explicit reference to typed systems and their monoidal closed structure. We then demonstrate close connections between such self-similar structures and dagger Frobenius algebras. In particular, we demonstrate that the categorical structures implied by the polymorphically typed connectives give rise to a (lax, unitless) form of the special Frobenius algebras known as classical structures, which are used heavily in abstract categorical approaches to quantum mechanics.
    Comment: 37 pages, 4 figures
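    For reference, the standard axioms behind the classical structures mentioned here, in the usual notation for a special commutative dagger Frobenius algebra (the lax, unitless variant discussed in the paper relaxes these conditions; the notation below is not taken from the paper):

        % A Frobenius algebra (A, \mu, \eta, \delta, \epsilon) in a monoidal category
        % satisfies the Frobenius law relating multiplication and comultiplication:
        (\mu \otimes \mathrm{id}_A) \circ (\mathrm{id}_A \otimes \delta)
          \;=\; \delta \circ \mu
          \;=\; (\mathrm{id}_A \otimes \mu) \circ (\delta \otimes \mathrm{id}_A)

        % A classical structure is additionally special and commutative,
        % with the comultiplication given by the dagger of the multiplication:
        \mu \circ \delta = \mathrm{id}_A, \qquad
        \mu \circ \sigma_{A,A} = \mu, \qquad
        \delta = \mu^{\dagger}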