
    Understanding What We See: How We Derive Meaning From Vision.

    Recognising objects goes beyond vision and requires models that incorporate different aspects of meaning. Most models focus on superordinate categories (e.g., animals, tools), which do not capture the richness of conceptual knowledge. We argue that object recognition must be seen as a dynamic process of transformation from low-level visual input, through categorical organisation, to specific conceptual representations. Cognitive models based on large normative datasets are well suited to capture statistical regularities within and between concepts, providing both category structure and basic-level individuation. We highlight recent research showing how such models capture important properties of the ventral visual pathway. This research demonstrates that significant advances in understanding conceptual representations can be made by shifting the focus from studying superordinate categories to basic-level concepts.

    We thank William Marslen-Wilson for his helpful comments on this manuscript. The research leading to these results has received funding to LKT from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement n° 249640. This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.tics.2015.08.00
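    To make concrete how a normative feature-based model can yield both category structure and basic-level individuation, here is a minimal Python sketch. The concepts, features, and similarity measure are all illustrative assumptions; real property norms contain hundreds of concepts and thousands of features.

```python
# A minimal sketch of a feature-norm-based concept model. The feature
# matrix and concept names are invented for illustration only.
import numpy as np
from scipy.spatial.distance import pdist, squareform

concepts = ["dog", "cat", "hammer", "saw"]
# Rows: concepts; columns: semantic features (1 = feature applies).
features = np.array([
    [1, 1, 1, 0, 0, 0],   # dog:    has_legs, has_fur, barks ...
    [1, 1, 0, 1, 0, 0],   # cat:    has_legs, has_fur, meows ...
    [0, 0, 0, 0, 1, 1],   # hammer: has_handle, made_of_metal ...
    [0, 0, 0, 0, 1, 1],   # saw:    has_handle, made_of_metal ...
])

# Between-concept similarity yields category structure: animals
# cluster together, tools cluster together.
similarity = 1 - squareform(pdist(features, metric="cosine"))

# Feature distinctiveness (shared by few concepts) supports basic-level
# individuation: distinctive features tell "dog" apart from "cat".
distinctiveness = 1 / features.sum(axis=0)

print(similarity.round(2))
print(distinctiveness.round(2))
```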

    Object-specific semantic coding in human perirhinal cortex.

    Category specificity has been demonstrated in the human posterior ventral temporal cortex for a variety of object categories. Although object representations within the ventral visual pathway must be sufficiently rich and complex to support the recognition of individual objects, little is known about how specific objects are represented. Here, we used representational similarity analysis to determine which kinds of object information are reflected in fMRI activation patterns and to uncover the relationship between categorical and object-specific semantic representations. Our results show a gradient of informational specificity along the ventral stream, from representations of image-based visual properties in early visual cortex to categorical representations in the posterior ventral stream. A key finding was that object-specific semantic information is uniquely represented in the perirhinal cortex, which was also increasingly engaged for objects that are more semantically confusable. These findings suggest a key role for the perirhinal cortex in representing and processing object-specific semantic information, which is most critical for highly confusable objects. Our findings extend current distributed models by showing coarse dissociations between objects in posterior ventral cortex and fine-grained distinctions between objects supported by the anterior medial temporal lobes, including the perirhinal cortex, which serve to integrate complex object information.

    This work was supported by funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement no. 249640 to L.K.T.
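    For readers unfamiliar with the method, the core representational similarity analysis computation can be sketched in a few lines of Python. All data here are simulated and the model RDMs are placeholders; a real analysis would use activation patterns extracted from anatomically or functionally defined regions.

```python
# A minimal RSA sketch: compare a region's neural dissimilarity
# structure against candidate model RDMs. Shapes and data are invented.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_voxels = 20, 100

# One activation pattern per object in a region of interest.
patterns = rng.normal(size=(n_objects, n_voxels))

# Neural RDM: 1 - correlation between each pair of object patterns.
neural_rdm = pdist(patterns, metric="correlation")

# Placeholder model RDMs (e.g., visual vs. semantic dissimilarity).
visual_model = rng.random(size=neural_rdm.shape)
semantic_model = rng.random(size=neural_rdm.shape)

# Spearman correlation between model and neural RDMs indicates which
# kind of object information the region's patterns reflect.
for name, model in [("visual", visual_model), ("semantic", semantic_model)]:
    rho, p = spearmanr(model, neural_rdm)
    print(f"{name}: rho={rho:.3f}, p={p:.3f}")
```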

    Quantifying contextual contributions to word-recognition processes


    Oscillatory dynamics of perceptual to conceptual transformations in the ventral visual pathway

    Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. While this process depends on the ventral visual pathway (VVP), we lack an incremental account of how low-level inputs are transformed into semantic representations, and of the mechanistic details of these dynamics. Here we combine computational models of vision and semantics, and test the output of the incremental model against patterns of neural oscillations recorded with MEG in humans. Representational similarity analysis showed that visual information was represented in alpha activity throughout the VVP, and that semantic information was represented in theta activity. Furthermore, informational connectivity showed that visual information travels through feedforward connections, while visual information is transformed into semantic representations through feedforward and feedback activity centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
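    A hedged sketch of the time-resolved variant of RSA applied to oscillatory data might look as follows. The band-limited power arrays and model RDMs are simulated stand-ins, not the study's actual pipeline, which would use source-localised MEG power per frequency band.

```python
# Time-resolved RSA on simulated oscillatory power: at each time point,
# correlate the neural RDM (from band-limited power) with a model RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects, n_sensors, n_times = 20, 30, 50

# Alpha- and theta-band power per object, sensor, and time point.
alpha_power = rng.normal(size=(n_objects, n_sensors, n_times))
theta_power = rng.normal(size=(n_objects, n_sensors, n_times))

# Placeholder model RDMs for visual and semantic object structure.
n_pairs = n_objects * (n_objects - 1) // 2
visual_rdm = rng.random(n_pairs)
semantic_rdm = rng.random(n_pairs)

def rsa_timecourse(power, model_rdm):
    """Correlate a model RDM with the neural RDM at each time point."""
    return np.array([
        spearmanr(pdist(power[:, :, t], metric="correlation"), model_rdm)[0]
        for t in range(power.shape[2])
    ])

# The study's claim maps onto: the visual model tracked in alpha power,
# the semantic model tracked in theta power.
alpha_visual = rsa_timecourse(alpha_power, visual_rdm)
theta_semantic = rsa_timecourse(theta_power, semantic_rdm)
print(alpha_visual[:5].round(3), theta_semantic[:5].round(3))
```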

    Age-related functional reorganization, structural changes, and preserved cognition.

    Although healthy aging is associated with general cognitive decline, there is considerable variability in the extent to which cognitive functions decline or are preserved. Preserved cognitive function in the context of age-related neuroanatomical and functional changes has been attributed to compensatory mechanisms. However, the existing sparse evidence is largely focused on functions associated with the frontal cortex, leaving open the question of how wider age-related brain changes relate to compensation. We evaluated relationships between age-related neural and functional changes in the context of preserved cognitive function by combining measures of structure, function, and cognitive performance during spoken language comprehension, using a paradigm that does not involve an explicit task. We used a graph-theoretical approach to derive cognitive activation-related functional magnetic resonance imaging networks. Correlating network properties with age, neuroanatomical variations, and behavioral data, we found that decreased gray matter integrity was associated with decreased connectivity within key language regions but increased overall functional connectivity. However, this network reorganization was less efficient, suggesting that engagement of a more distributed network in aging might be triggered by reduced connectivity within specialized networks.
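    The graph-theoretic measures referred to here can be illustrated with a short Python sketch using networkx. The connectivity matrix, the binarisation threshold, and the "language network" node labels are illustrative assumptions, not the study's definitions.

```python
# A minimal sketch of graph measures on a simulated functional
# connectivity matrix: global efficiency and within-subnetwork density.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_regions = 20

# Symmetric functional connectivity (e.g., correlations between
# regional fMRI time series during speech comprehension).
fc = rng.random((n_regions, n_regions))
fc = (fc + fc.T) / 2
np.fill_diagonal(fc, 0)

# Threshold and binarise to build an undirected graph.
graph = nx.from_numpy_array((fc > 0.6).astype(int))

# Global efficiency: how well-integrated the whole network is.
efficiency = nx.global_efficiency(graph)

# Mean connectivity within a hypothetical "language network" subset.
language_nodes = [0, 1, 2, 3, 4]
within_density = nx.density(nx.subgraph(graph, language_nodes))

print(f"global efficiency: {efficiency:.3f}, "
      f"within-language density: {within_density:.3f}")
```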

    The perirhinal cortex and conceptual processing: Effects of feature-based statistics following damage to the anterior temporal lobes.

    The anterior temporal lobe (ATL) plays a prominent role in models of semantic knowledge, although it remains unclear how specific subregions within the ATL contribute to semantic memory. Patients with neurodegenerative diseases, such as semantic dementia, have widespread damage to the ATL, making inferences about the relationship between anatomy and cognition problematic. Here we take a detailed anatomical approach to ask which substructures within the ATL contribute to conceptual processing, with the prediction that the perirhinal cortex (PRc) plays a critical role for concepts that are more semantically confusable. We tested two patient groups, those with and without damage to the PRc, across two behavioural experiments: picture naming and word-picture matching. For both tasks, we manipulated the degree of semantic confusability of the concepts. By contrasting the performance of the two groups, along with healthy controls, we show that damage to the PRc results in worse performance in processing concepts with higher semantic confusability across both experiments. Further, by correlating the degree of damage across anatomically defined regions of interest with performance, we find that PRc damage specifically predicts performance for concepts with increased semantic confusability. Our results show that the PRc supports a necessary and crucial neurocognitive function that enables fine-grained conceptual processing through the resolution of semantic confusability.

    This work was supported by funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement no. 249640 to LKT. This is the final published version. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.neuropsychologia.2015.01.04
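    The lesion-behaviour logic, correlating the degree of regional damage with performance separately for high- and low-confusability concepts, can be sketched as follows. All numbers are invented for illustration; real values would come from anatomically defined ROI masks and the naming and matching tasks.

```python
# A minimal sketch of a lesion-behaviour correlation, with invented
# damage proportions and accuracies chosen to show the predicted pattern.
import numpy as np
from scipy.stats import spearmanr

# Proportion of perirhinal cortex damaged, one value per patient.
prc_damage = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85])

# Accuracy for high- vs. low-confusability concepts (picture naming).
acc_high_confusable = np.array([0.90, 0.82, 0.70, 0.61, 0.48, 0.40])
acc_low_confusable = np.array([0.93, 0.91, 0.90, 0.88, 0.87, 0.86])

# The prediction: damage tracks performance only for concepts with
# high semantic confusability.
for label, acc in [("high", acc_high_confusable),
                   ("low", acc_low_confusable)]:
    rho, p = spearmanr(prc_damage, acc)
    print(f"{label}-confusability: rho={rho:.2f}, p={p:.3f}")
```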

    Learning Warps Object Representations in the Ventral Temporal Cortex.

    The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., "made of wood," "floats") and spatial contextual associations (e.g., "found in gardens") with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the objects' visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects, and show that object representations can flexibly adapt as a consequence of learning, with the changes tied to the specific kind of newly acquired information.

    This project has received funding to LKT from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 669820) and under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement n° 249640, and from a Guggenheim Fellowship to CR. This is the final version of the article. It first appeared from MIT Press via http://dx.doi.org/10.1162/jocn_a_0095
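    One plausible reading of the pattern-similarity effect, higher similarity among objects sharing a learned context, can be sketched as follows. The simulated patterns and the grouping of objects by shared context are assumptions made for illustration, not the study's actual design parameters.

```python
# A minimal sketch of the pre/post-learning pattern-similarity logic:
# compare within-context pattern similarity before and after learning.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
n_objects, n_voxels = 12, 80

# Each object belongs to one of three learned spatial contexts.
context = np.repeat([0, 1, 2], 4)

def mean_within_context_similarity(patterns):
    """Average pattern correlation for object pairs sharing a context."""
    sim = 1 - squareform(pdist(patterns, metric="correlation"))
    same = context[:, None] == context[None, :]
    off_diag = ~np.eye(n_objects, dtype=bool)
    return sim[same & off_diag].mean()

pre = rng.normal(size=(n_objects, n_voxels))
# After learning, nudge each pattern toward a shared context signal.
context_signal = rng.normal(size=(3, n_voxels))
post = pre + 0.5 * context_signal[context]

# The key effect in this reading: higher within-context pattern
# similarity after learning than before.
print(f"pre:  {mean_within_context_similarity(pre):.3f}")
print(f"post: {mean_within_context_similarity(post):.3f}")
```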

    Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects.

    Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.

    This work was supported by the European Research Council. This is the final version of an article originally published in the Journal of Neuroscience and available online at http://www.jneurosci.org/content/33/48/18906.abstract
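    The clustering step, grouping searchlight locations by the similarity of their dissimilarity matrices, can be sketched in a few lines of Python. The array shapes and the choice of k are illustrative assumptions, not the study's parameters.

```python
# A minimal sketch of k-means over vectorised searchlight RDMs:
# searchlights with similar representational geometry share a cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_searchlights, n_pairs = 500, 190  # 190 pairs for 20 conditions

# One vectorised RDM per searchlight location (simulated here).
searchlight_rdms = rng.random((n_searchlights, n_pairs))

# Group searchlights whose dissimilarity structure is similar.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(searchlight_rdms)

# Each cluster centroid is itself an RDM summarising the shared
# representational structure of that set of searchlights.
centroid_rdms = kmeans.cluster_centers_
print(np.bincount(labels), centroid_rdms.shape)
```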