
    Learning Warps Object Representations in the Ventral Temporal Cortex.

    The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., "made of wood," "floats") and spatial contextual associations (e.g., "found in gardens") with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with shared contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the objects' visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects, and show that object representations can flexibly adapt as a consequence of learning, with the changes reflecting the specific kind of newly acquired information. This project has received funding to LKT from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 669820), from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n° 249640, and from a Guggenheim Fellowship to CR. This is the final version of the article. It first appeared from MIT Press via http://dx.doi.org/10.1162/jocn_a_0095
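
    The pattern-similarity logic described above can be sketched in a few lines. The following is a minimal illustration, not the authors' pipeline: all array names, shapes, and data are hypothetical, and the grouping of objects into shared contexts stands in for the learned spatial associations.

    ```python
    # Minimal sketch of a pattern-similarity analysis (hypothetical data).
    import numpy as np

    rng = np.random.default_rng(0)
    n_objects, n_voxels = 12, 200
    # One voxel pattern per object, before and after learning.
    pre = rng.standard_normal((n_objects, n_voxels))
    post = rng.standard_normal((n_objects, n_voxels))

    # Hypothetical grouping: objects 0-5 share one spatial context, 6-11 another.
    labels = np.repeat([0, 1], n_objects // 2)
    same_context = labels[:, None] == labels[None, :]

    def context_similarity(patterns):
        """Mean pattern correlation for same- vs different-context pairs."""
        sim = np.corrcoef(patterns)               # object-by-object correlations
        iu = np.triu_indices(len(patterns), k=1)  # unique pairs only
        same = same_context[iu]
        return sim[iu][same].mean(), sim[iu][~same].mean()

    # The learning effect reported above would appear as a same-context
    # similarity increase from pre to post that different-context pairs lack.
    print("pre  (same, different):", context_similarity(pre))
    print("post (same, different):", context_similarity(post))
    ```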

    Functional neuroanatomy of contextual acquisition of concrete and abstract words

    The meaning of a novel word can be acquired by extracting it from linguistic context. Here we simulated the learning of new words associated with concrete and abstract concepts in a variant of the human simulation paradigm that provided linguistic context information, in order to characterize the brain systems involved. Native speakers of Spanish read pairs of sentences in order to derive the meaning of a new word that appeared in the terminal position of each sentence. fMRI revealed that learning the meanings associated with concrete and abstract new words was qualitatively different and recruited brain regions similar to those engaged by the processing of real concrete and abstract words. In particular, learning of new concrete words selectively boosted the activation of the ventral anterior fusiform gyrus, a region driven by imageability, which has previously been implicated in the processing of concrete words.

    Neural Responses to Naturalistic Clips of Behaving Animals Under Two Different Task Contexts

    The human brain rapidly deploys semantic information during perception to facilitate our interaction with the world. These semantic representations are encoded in the activity of distributed populations of neurons (Haxby et al., 2001; McClelland and Rogers, 2003; Kriegeskorte et al., 2008b) and command widespread cortical real estate (Binder et al., 2009; Huth et al., 2012). The neural representation of a stimulus can be described as a location (i.e., response vector) in a high-dimensional neural representational space (Kriegeskorte and Kievit, 2013; Haxby et al., 2014). This resonates with behavioral and theoretical work describing mental representations of objects and actions as being organized in a multidimensional psychological space (Attneave, 1950; Shepard, 1958, 1987; Edelman, 1998; Gärdenfors and Warglien, 2012). Current applications of this framework to neural representation (e.g., Kriegeskorte et al., 2008b) often implicitly assume that these neural representational spaces are relatively fixed and context-invariant. In contrast, earlier work emphasized the importance of attention and task demands in actively reshaping representational space (Shepard, 1964; Tversky, 1977; Nosofsky, 1986; Kruschke, 1992). A growing body of work in both electrophysiology (e.g., Sigala and Logothetis, 2002; Sigala, 2004; Cohen and Maunsell, 2009; Reynolds and Heeger, 2009) and human neuroimaging (e.g., Hon et al., 2009; Jehee et al., 2011; Brouwer and Heeger, 2013; Çukur et al., 2013; Sprague and Serences, 2013; Harel et al., 2014; Erez and Duncan, 2015; Nastase et al., 2017) has suggested mechanisms by which behavioral goals dynamically alter neural representation.
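
    The "response vector" framing above can be made concrete with a short sketch: each stimulus is a point in an n-voxel-dimensional space, the pairwise distances between points summarise the representational geometry, and comparing that geometry across task contexts asks whether the space is fixed or reshaped. The shapes and data below are illustrative assumptions, not the paper's analysis.

    ```python
    # Sketch: representational geometry under two hypothetical task contexts.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(1)
    n_stimuli, n_voxels = 20, 500
    responses_task_a = rng.standard_normal((n_stimuli, n_voxels))
    responses_task_b = rng.standard_normal((n_stimuli, n_voxels))

    # Representational dissimilarity matrix: 1 - Pearson r per stimulus pair.
    rdm_a = squareform(pdist(responses_task_a, metric="correlation"))
    rdm_b = squareform(pdist(responses_task_b, metric="correlation"))

    # If the representational space were context-invariant, the two RDMs would
    # correlate highly; task-driven reshaping shows up as a lower correlation.
    iu = np.triu_indices(n_stimuli, k=1)
    print("RDM correlation across tasks:", np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])
    ```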

    A goal direction signal in the human entorhinal/subicular region

    Being able to navigate to a safe place, such as a home or nest, is a fundamental behaviour for all complex animals. Determining the direction to such goals is a crucial first step in navigation. Surprisingly, little is known about how, or where in the brain, this 'goal direction signal' is represented. In mammals 'head-direction cells' are thought to support this process, but despite 30 years of research no evidence for a goal direction representation has been reported [1, 2]. Here we used functional magnetic resonance imaging to record neural activity while participants made goal direction judgments based on a previously learned virtual environment. We applied multivoxel pattern analysis [3-5] to these data, and found that the human entorhinal/subicular region contains a neural representation of intended goal direction. Furthermore, the neural pattern expressed for a given goal direction matched the pattern expressed when simply facing that same direction. This suggests the existence of a shared neural representation of both goal and facing direction. We argue that this reflects a mechanism based on head-direction populations that simulate future goal directions during route planning [6]. Our data further revealed that the strength of direction information predicts performance. Finally, we found a dissociation between this geocentric information in the entorhinal/subicular region and egocentric direction information in the precuneus.
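
    The finding that goal-direction patterns matched facing-direction patterns reflects a cross-condition decoding logic that can be sketched as follows. The data, labels, and classifier choice here are hypothetical stand-ins, not the study's pipeline: a decoder trained on facing-direction trials is tested on goal-direction trials, and above-chance transfer would indicate a shared directional code.

    ```python
    # Sketch of cross-condition MVPA with hypothetical data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n_trials, n_voxels, n_directions = 80, 300, 4
    facing_patterns = rng.standard_normal((n_trials, n_voxels))
    facing_labels = rng.integers(0, n_directions, n_trials)
    goal_patterns = rng.standard_normal((n_trials, n_voxels))
    goal_labels = rng.integers(0, n_directions, n_trials)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(facing_patterns, facing_labels)           # train on facing direction
    transfer = clf.score(goal_patterns, goal_labels)  # test on goal direction
    print(f"cross-decoding accuracy: {transfer:.2f} (chance = {1 / n_directions})")
    ```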

    How may the basal ganglia contribute to auditory categorization and speech perception?

    Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

    Semantic Memory

    How is it that we know what a dog and a tree are, or, for that matter, what knowledge is? Our semantic memory consists of knowledge about the world, including concepts, facts and beliefs. This knowledge is essential for recognizing entities and objects, and for making inferences and predictions about the world. In essence, our semantic knowledge determines how we understand and interact with the world around us. In this chapter, we examine semantic memory from cognitive, sensorimotor, cognitive neuroscientific, and computational perspectives. We consider the cognitive and neural processes (and biases) that allow people to learn and represent concepts, and discuss how and where in the brain sensory and motor information may be integrated to allow for the perception of a coherent “concept”. We suggest that our understanding of semantic memory can be enriched by considering how semantic knowledge develops across the lifespan within individuals.

    The role of language in the experience and perception of emotion: a neuroimaging meta-analysis

    Recent behavioral and neuroimaging studies demonstrate that labeling one’s emotional experiences and perceptions alters those states. Here, we used a comprehensive meta-analysis of the neuroimaging literature to systematically explore whether the presence of emotion words in experimental tasks has an impact on the neural representation of emotional experiences and perceptions across studies. Using a database of 386 studies, we assessed brain activity when emotion words (e.g. ‘anger’, ‘disgust’) and more general affect words (e.g. ‘pleasant’, ‘unpleasant’) were present in experimental tasks versus not present. As predicted, when emotion words were present, we observed more frequent activations in regions related to semantic processing. When emotion words were not present, we observed more frequent activations in the amygdala and parahippocampal gyrus, bilaterally. The presence of affect words did not have the same effect on the neural representation of emotional experiences and perceptions, suggesting that our observed effects are specific to emotion words. These findings are consistent with the psychological constructionist prediction that in the absence of accessible emotion concepts, the meaning of affective experiences and perceptions is ambiguous. Findings are also consistent with the regulatory role of ‘affect labeling’. Implications of the role of language in emotion construction and regulation are discussed.
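
    The study-frequency comparison at the heart of this meta-analysis can be illustrated with a toy contingency test. The counts below are invented for illustration, and the generic chi-square test stands in for, rather than reproduces, the authors' meta-analytic method.

    ```python
    # Toy illustration: does activation frequency in a region differ between
    # studies whose tasks include emotion words and studies whose tasks do not?
    from scipy.stats import chi2_contingency

    #              region reported, region not reported
    contingency = [[45, 55],   # emotion words present (hypothetical counts)
                   [20, 80]]   # emotion words absent  (hypothetical counts)

    chi2, p, dof, expected = chi2_contingency(contingency)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    ```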

    Content-based Dissociation of Hippocampal Involvement in Prediction

    Recent work suggests that a key function of the hippocampus is to predict the future. This is thought to depend on its ability to bind inputs over time and space and to retrieve upcoming or missing inputs based on partial cues. In line with this, previous research has revealed prediction-related signals in the hippocampus for complex visual objects, such as fractals and abstract shapes. Implicit in such accounts is that these computations in the hippocampus reflect domain-general processes that apply across different types and modalities of stimuli. An alternative is that the hippocampus plays a more domain-specific role in predictive processing, with the type of stimuli being predicted determining its involvement. To investigate this, we compared hippocampal responses to auditory cues predicting abstract shapes (Experiment 1) versus oriented gratings (Experiment 2). We measured brain activity in male and female human participants using high-resolution fMRI, in combination with inverted encoding models to reconstruct shape and orientation information. Our results revealed that expectations about shape and orientation evoked distinct representations in the hippocampus. For complex shapes, the hippocampus represented which shape was expected, potentially serving as a source of top-down predictions. In contrast, for simple gratings, the hippocampus represented only unexpected orientations, more reminiscent of a prediction error. We discuss several potential explanations for this content-based dissociation in hippocampal function, concluding that the computational role of the hippocampus in predictive processing may depend on the nature and complexity of stimuli.
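
    A minimal sketch of the inverted encoding model (IEM) approach named above, for orientation: voxel responses are modelled as weighted sums of idealised orientation channels, the weights are estimated by least squares on training data, and the model is then inverted to reconstruct channel responses from test patterns. The channel basis, array shapes, and simulated data are assumptions for illustration; real analyses fit and invert on independent runs.

    ```python
    # Sketch of an inverted encoding model for orientation (simulated data).
    import numpy as np

    rng = np.random.default_rng(3)
    n_trials, n_voxels, n_channels = 120, 250, 8
    orientations = rng.uniform(0, 180, n_trials)   # trial orientations in degrees

    def channel_responses(theta_deg):
        """Idealised tuning of each channel, 180-degree periodic."""
        centers = np.arange(n_channels) * 180 / n_channels
        d = np.deg2rad(theta_deg[:, None] - centers[None, :])
        return np.abs(np.cos(d)) ** 7              # trials x channels

    C_train = channel_responses(orientations)      # design matrix
    W_true = rng.standard_normal((n_channels, n_voxels))  # unknown in practice
    B_train = C_train @ W_true + 0.1 * rng.standard_normal((n_trials, n_voxels))

    # 1) Training: solve B = C W for the channel-to-voxel weights.
    W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

    # 2) Inversion: reconstruct channel responses from a test pattern,
    #    C_hat = B W_hat^T (W_hat W_hat^T)^-1.
    B_test = channel_responses(np.array([45.0])) @ W_true
    C_hat = B_test @ W_hat.T @ np.linalg.inv(W_hat @ W_hat.T)
    print("reconstructed channel profile:", np.round(C_hat, 2))
    ```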