37 research outputs found

    Putting concepts into context

    Published online: 9 June 2016
    At first glance, conceptual representations (e.g., our internal notion of the object "lemon") seem static; we have the impression that there is something that the concept lemon "means" (a sour, yellow, football-shaped citrus fruit) and that this meaning does not vary. Research in semantic memory has traditionally taken this "static" perspective. Consequently, only effects demonstrated across a variety of contexts have typically been considered informative regarding the architecture of the semantic system. In this review, we take the opposite approach: We review instances of context-dependent conceptual activation at many different timescales—from long-term experience, to recent experience, to the current task goals, to the unfolding process of conceptual activation itself—and suggest that the pervasive effects of context across all of these timescales indicate that rather than being static, conceptual representations are constantly changing and are inextricably linked to their contexts.

    Speaker matters: Natural inter-speaker variation affects 4-month-olds’ perception of audio-visual speech

    First Published: September 27, 2019
    In the language development literature, studies often make inferences about infants’ speech perception abilities based on their responses to a single speaker. However, there can be significant natural variability across speakers in how speech is produced (i.e., inter-speaker differences). The current study examined whether inter-speaker differences can affect infants’ ability to detect a mismatch between the auditory and visual components of vowels. Using an eye-tracker, 4.5-month-old infants were tested on auditory-visual (AV) matching for two vowels (/i/ and /u/). Critically, infants were tested with two speakers who naturally differed in how distinctively they articulated the two vowels within and across the categories. Only infants who watched and listened to the speaker whose visual articulations of the two vowels were most distinct from one another were sensitive to AV mismatch. This speaker also produced a visually more distinct /i/ as compared to the other speaker. This finding suggests that infants are sensitive to the distinctiveness of AV information across speakers, and that when making inferences about infants’ perceptual abilities, characteristics of the speaker should be taken into account.
    The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This research was funded by the grant PSI2014-5452-P from the Spanish Ministry of Economy and Competitiveness to M.M. The authors also acknowledge financial support from the ‘Severo Ochoa Program for Centers/Units of Excellence in R&D’ (SEV-2015-490) and from the Basque Government ‘Programa Predoctoral’ to J.P.

    Encoding and inhibition of arbitrary episodic context with abstract concepts

    Published online: 18 August 2021
    Context is critical for conceptual processing, but the mechanism underpinning its encoding and reinstantiation during abstract concept processing is unclear. Context may be especially important for abstract concepts—we investigated whether episodic context is recruited differently when processing abstract compared with concrete concepts. Experiments 1 and 2 presented abstract and concrete words in arbitrary contexts at encoding (Experiment 1: red/green colored frames; Experiment 2: male/female voices). Recognition memory for these contexts was worse for abstract concepts. Again using frame color and voice as arbitrary contexts, respectively, Experiments 3 and 4 presented words from encoding in the same or different context at test to determine whether there was a greater recognition memory benefit for abstract versus concrete concepts when the context was unchanged between encoding and test. Instead, abstract concepts were less likely to be remembered when context was retained. This suggests that at least some types of episodic context—when arbitrary—are attended less, and may even be inhibited, when processing abstract concepts. In Experiment 5, we utilized a context—spatial location—which (as we show) tends to be relevant during real-world processing of abstract concepts. We presented words in different locations, preserving or changing location at test. Location retention conferred a recognition memory advantage for abstract concepts. Thus, episodic context may be encoded with abstract concepts when context is relevant to real-world processing. The systematic contexts necessary for understanding abstract concepts may lead to inhibition of arbitrary contexts, but greater attention to contexts that tend to be more relevant during real-world processing.

    Time as an embodied property of concepts

    Data

    Study 1

    Building semantic memory from embodied and distributional language experience

    Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledge base to recognize and adapt to most situations we encounter. This knowledge base is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails re-instantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic data as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing; further, sensory-perceptual experience is susceptible to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that arise across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences.
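    As an illustrative aside (not taken from the paper), the distributional idea summarized above can be made concrete with a toy model: count which words co-occur within a small window of each other, then compare words by the similarity of their co-occurrence vectors. The corpus, window size, and target words below are assumptions chosen only for this sketch.

```python
# Toy sketch of a distributional semantic model: word meanings approximated
# from co-occurrence counts in a small, made-up corpus (illustrative only).
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "i drink hot coffee every morning",
    "she poured hot tea into the cup",
    "the dog chased the ball in the park",
    "coffee and tea both smell wonderful in the morning",
]

window = 2  # words within +/-2 positions count as context
cooc = defaultdict(Counter)

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[word][tokens[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Words used in similar contexts ("coffee"/"tea") end up with more similar
# vectors than words used in different contexts ("coffee"/"dog").
print(cosine(cooc["coffee"], cooc["tea"]))
print(cosine(cooc["coffee"], cooc["dog"]))
```

    The point of the sketch is only that contextual usage alone, with no sensory input, already yields graded similarity structure; the hybrid and "entangled" accounts discussed in the abstract concern how such regularities relate to embodied experience.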

    Is time an embodied property of concepts?

    A haircut usually lasts under an hour. But how long does it take to recognize that something is an instance of a haircut? And is this “time-to-perceive” a part of the representation of haircuts? Across three experiments testing semantic decision, word recognition, and lexical decision, we show that the amount of time people say it takes to perceive something in the world (e.g., haircut, dandelion, or merit) predicts how long it takes for them to respond to a word referring to that thing, over and above the effects of other lexical-semantic variables (e.g., word frequency, concreteness) and other variables related to conceptual complexity (e.g., how much physical space is required to perceive a concept, or the diversity of the contexts in which a concept appears). These results suggest that our experience of how long it takes to recognize an instance of something can become a part of its representation, and that we simulate this information when we read a word referring to it. Consequently, we suggest that time may be an embodied property of concepts.
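    As a hedged sketch (not the authors' analysis code), the "over and above" claim corresponds to the kind of item-level regression in which response time is modeled from control predictors first and the perception-time rating is added second. All column names and the synthetic data below are assumptions used only so the example runs end to end.

```python
# Illustrative regression sketch: does a "time-to-perceive" rating predict
# response times after word frequency and concreteness are controlled for?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_items = 200

# Synthetic placeholder data (not real norms or results).
items = pd.DataFrame({
    "frequency": rng.normal(size=n_items),        # e.g., z-scored log word frequency
    "concreteness": rng.normal(size=n_items),     # e.g., z-scored concreteness rating
    "perception_time": rng.normal(size=n_items),  # rated time to perceive the referent
})
items["rt"] = (
    650
    - 20 * items["frequency"]
    - 10 * items["concreteness"]
    + 15 * items["perception_time"]
    + rng.normal(scale=30, size=n_items)
)

# Baseline model with the control predictors only...
baseline = smf.ols("rt ~ frequency + concreteness", data=items).fit()
# ...and a model that adds the perception-time rating.
full = smf.ols("rt ~ frequency + concreteness + perception_time", data=items).fit()

# If perception_time explains variance over and above the controls,
# the full model improves significantly on the baseline.
print(full.compare_f_test(baseline))   # (F statistic, p value, df difference)
print(full.params["perception_time"])  # direction and size of the effect
```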

    Do you really want me to call it that? An object’s shape can affect how we produce its name

    During language comprehension, certain labels tend to be associated with certain types of shapes; for instance, “kika” with angular shapes, and “buba” with rounded ones. But can sound-shape correspondences affect how language is produced as well? Here we ask whether labels (e.g., “buba” vs. “kika”) are produced differently based on the shapes of the objects with which they are paired (e.g., rounded vs. angular). Taking as a starting point prior research showing that the production of real words is affected by their contextual predictability, we created conditions where, based on known sound-shape correspondences, object labels were either congruent (i.e., predictable) or incongruent with the shapes of their referents. Across two experiments that differed with respect to whether a listener was physically present in the room, we found that incongruent labels were produced more slowly than congruent ones. Thus, our findings show that sound-shape biases are not limited to language comprehension; instead, expectations about sound-shape correspondences themselves can shape word production.