
    Semantic Memory

    How is it that we know what a dog and a tree are, or, for that matter, what knowledge is? Our semantic memory consists of knowledge about the world, including concepts, facts, and beliefs. This knowledge is essential for recognizing entities and objects, and for making inferences and predictions about the world. In essence, our semantic knowledge determines how we understand and interact with the world around us. In this chapter, we examine semantic memory from cognitive, sensorimotor, cognitive neuroscientific, and computational perspectives. We consider the cognitive and neural processes (and biases) that allow people to learn and represent concepts, and discuss how and where in the brain sensory and motor information may be integrated to allow for the perception of a coherent “concept”. We suggest that our understanding of semantic memory can be enriched by considering how semantic knowledge develops across the lifespan within individuals.

    Naturalistic Word-Concept Pair Learning With Semantic Spaces

    We describe a model designed to learn word-concept pairings using a combination of semantic space models. We compare various semantic space models with each other, as well as with extant word-learning models in the literature, and find that not only do semantic space models require fewer underlying assumptions, but they also perform at least on par with existing associative models. We also demonstrate that semantic space models correctly predict word-concept pairings that differ from those predicted by existing models, and that the two approaches can be combined to perform better than either can individually.
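
    To make the pairing mechanism concrete, the sketch below pairs each word with the concept whose vector is closest to it under cosine similarity in a shared semantic space. The toy vectors and the nearest-concept rule are illustrative assumptions, not the specific model or semantic spaces evaluated in the paper.

    import numpy as np

    # Illustrative sketch: pair words with concepts by cosine similarity in a shared space.
    # The vectors below are toy values, not the output of any model from the paper.
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical word and concept vectors living in the same 4-dimensional space.
    word_vectors = {
        "dog":  np.array([0.9, 0.1, 0.0, 0.2]),
        "tree": np.array([0.1, 0.8, 0.3, 0.0]),
    }
    concept_vectors = {
        "ANIMAL": np.array([0.8, 0.2, 0.1, 0.1]),
        "PLANT":  np.array([0.0, 0.9, 0.2, 0.1]),
    }

    # Pair each word with the most similar concept vector.
    for word, wvec in word_vectors.items():
        best = max(concept_vectors, key=lambda c: cosine(wvec, concept_vectors[c]))
        print(word, "->", best)

    In the paper's setting, the word and concept representations would instead come from the semantic space models (and associative models) under comparison.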

    Organizing the space and behavior of semantic models

    Semantic models play an important role in cognitive science. These models use statistical learning to derive word meanings from co-occurrences in text corpora. A wide variety of semantic models have been proposed, and the literature has typically emphasized situations in which one model outperforms another. However, because these models often vary with respect to multiple sub-processes (e.g., their normalization or dimensionality-reduction methods), it can be difficult to determine which of these processes are responsible for observed performance differences. Furthermore, the fact that any two models may vary along multiple dimensions makes it difficult to understand where these models fall within the space of possible psychological theories. In this paper, we propose a general framework for organizing the space of semantic models. We then illustrate how this framework can be used to understand model comparisons in terms of manipulations of individual sub-processes. Using several artificial datasets, we show how both representational structure and dimensionality reduction influence a model’s ability to pick up on different types of word relationships.
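
    As a deliberately minimal illustration of what such sub-processes look like, the sketch below builds a count-based semantic model in three interchangeable steps: co-occurrence counting, normalization (PPMI here), and dimensionality reduction (truncated SVD here). The tiny corpus, the window size, and the choice of PPMI and SVD are assumptions for illustration only, not the framework proposed in the paper; the point is that swapping one step at a time is how the contribution of an individual sub-process can be isolated.

    import numpy as np

    # Minimal count-based semantic model, decomposed into three sub-processes
    # that could each be swapped independently: counting, normalization, reduction.
    corpus = "the dog chased the cat the cat climbed the tall tree".split()
    vocab = sorted(set(corpus))
    index = {w: i for i, w in enumerate(vocab)}

    # (1) Co-occurrence counting within a symmetric window of two words.
    counts = np.zeros((len(vocab), len(vocab)))
    window = 2
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if i != j:
                counts[index[w], index[corpus[j]]] += 1.0

    # (2) Normalization: positive pointwise mutual information (PPMI).
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts * total / (row * col))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

    # (3) Dimensionality reduction: keep the top-k components of a truncated SVD.
    U, S, _ = np.linalg.svd(ppmi)
    k = 2
    embeddings = U[:, :k] * S[:k]

    for w in vocab:
        print(w, np.round(embeddings[index[w]], 2))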

    Multimodal word meaning induction from minimal exposure to natural text

    By the time they reach early adulthood, English speakers are familiar with the meanings of thousands of words. In recent decades, computational simulations known as distributional semantic models (DSMs) have demonstrated that it is possible to induce word meaning representations solely from word co-occurrence statistics extracted from a large amount of text. However, while these models learn in batch mode from large corpora, human word learning proceeds incrementally after minimal exposure to new words. In this study, we run a set of experiments investigating whether minimal distributional evidence from very short passages suffices to trigger successful word learning in subjects, testing their linguistic and visual intuitions about the concepts associated with new words. After confirming that subjects are indeed very efficient distributional learners, even from small amounts of evidence, we test a DSM on the same multimodal task and find that it behaves in a remarkably human-like way. We conclude that DSMs provide a convincing computational account of word learning even at the early stages in which a word is first encountered, and that the way they build meaning representations can offer new insights into human language acquisition.

    We thank the Cognitive Science editor and reviewers for constructive criticism. We also received useful feedback from the audience at *SEM 2015 and the International Meeting of the Psychonomic Society 2016. We acknowledge ERC 2011 Starting Independent Research Grant number 283554 (COMPOSES project). Marco Marelli conducted most of the work reported in this article while employed by the University of Trento. All authors contributed equally to the reported work.
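
    A minimal sketch of the kind of induction-from-minimal-exposure the study probes, assuming a pre-trained distributional space: the vector for a novel word is approximated by averaging the vectors of the known content words in a single short passage, and its nearest neighbors then indicate what the word is likely to mean. The toy background vectors, the passage, and the averaging rule are illustrative assumptions, not the DSM or materials used in the paper.

    import numpy as np

    # Sketch: induce a vector for a novel word from one short passage by averaging the
    # pre-trained vectors of its context words, then inspect its nearest neighbors.
    # The background space below is a toy stand-in for a large pre-trained DSM.
    background = {
        "animal": np.array([0.9, 0.1, 0.1]),
        "small":  np.array([0.7, 0.3, 0.2]),
        "tree":   np.array([0.2, 0.9, 0.1]),
        "branch": np.array([0.3, 0.8, 0.2]),
        "stone":  np.array([0.1, 0.2, 0.9]),
    }

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # One short passage containing the novel word; only known context words are used.
    passage = "we saw a small animal sitting on a tree branch".split()
    context = [background[w] for w in passage if w in background]
    novel_vector = np.mean(context, axis=0)

    # Nearest neighbors of the induced vector approximate the novel word's meaning.
    neighbors = sorted(background, key=lambda w: cosine(novel_vector, background[w]), reverse=True)
    print(neighbors)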