
    Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects.

    Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects. This work was supported by the European Research Council. This is the final version of an article originally published in the Journal of Neuroscience and available online at http://www.jneurosci.org/content/33/48/18906.abstract
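    As a toy illustration of the searchlight RSA logic this abstract describes, the minimal Python sketch below builds a correlation-distance representational dissimilarity matrix (RDM) from condition-wise activation patterns and rank-correlates it with a hypothesized semantic-category model RDM. This is not the authors' pipeline; the array shapes, two-category labels, and distance metric are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def neural_rdm(patterns):
    """patterns: (n_conditions, n_voxels) activation patterns for one
    searchlight sphere. Returns the correlation-distance RDM."""
    return squareform(pdist(patterns, metric="correlation"))

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu])[0]

# Toy usage: 40 stimuli (say, 20 items shown as words and as pictures),
# 100 voxels, and a hypothetical two-category semantic model RDM.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((40, 100))
category = np.repeat([0, 1], 20)
model_rdm = (category[:, None] != category[None, :]).astype(float)
print(rdm_similarity(neural_rdm(patterns), model_rdm))
```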

    Human midcingulate cortex encodes distributed representations of task progress

    The function of midcingulate cortex (MCC) remains elusive despite decades of investigation and debate. Complicating matters, individual MCC neurons respond to highly diverse task-related events, and MCC activation is reported in most human neuroimaging studies employing a wide variety of task manipulations. Here we investigate this issue by applying a model-based cognitive neuroscience approach involving neural network simulations, functional magnetic resonance imaging, and representational similarity analysis. We demonstrate that human MCC encodes distributed, dynamically evolving representations of extended, goal-directed action sequences. These representations are uniquely sensitive to the stage and identity of each sequence, indicating that MCC sustains contextual information necessary for discriminating between task states. These results suggest that standard univariate approaches for analyzing MCC function overlook the major portion of task-related information encoded by this brain area and point to promising new avenues for investigation.
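    A hedged sketch of the model-based RSA step implied here: construct hypothesis RDMs that are sensitive to sequence identity versus stage (task progress), then ask which better matches a neural RDM. The toy design (three sequences of four stages each) and the stand-in neural RDM are assumptions, not the study's task or data.

```python
import numpy as np
from scipy.stats import spearmanr

n_seq, n_stage = 3, 4
seq_id = np.repeat(np.arange(n_seq), n_stage)   # sequence identity per condition
stage = np.tile(np.arange(n_stage), n_seq)      # progress stage per condition

# Binary hypothesis RDMs: 1 where two conditions differ on that dimension.
identity_rdm = (seq_id[:, None] != seq_id[None, :]).astype(float)
stage_rdm = (stage[:, None] != stage[None, :]).astype(float)

def upper(m):
    """Vectorize the upper triangle of an RDM."""
    return m[np.triu_indices_from(m, k=1)]

# Stand-in neural RDM (random symmetric matrix); in the real analysis this
# would come from MCC activation patterns.
rng = np.random.default_rng(1)
neural = rng.random((n_seq * n_stage,) * 2)
neural = (neural + neural.T) / 2
np.fill_diagonal(neural, 0.0)

for name, model in [("identity", identity_rdm), ("stage", stage_rdm)]:
    rho = spearmanr(upper(model), upper(neural))[0]
    print(f"{name} model vs neural RDM: rho = {rho:.3f}")
```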

    Disentangling Representations of Object and Grasp Properties in the Human Brain

    The properties of objects, such as shape, influence the way we grasp them. To quantify the role of different brain regions during grasping, it is necessary to disentangle the processing of visual dimensions related to object properties from the motor aspects related to the specific hand configuration. We orthogonally varied object properties (shape, size, and elongation) and task (passive viewing, precision grip with two or five digits, or coarse grip with five digits) and used representational similarity analysis of functional magnetic resonance imaging data to infer the representation of object properties and hand configuration in the human brain. We found that object elongation is the most strongly represented object feature during grasping and is coded preferentially in the primary visual cortex as well as the anterior and posterior superior-parieto-occipital cortex. By contrast, primary somatosensory, motor, and ventral premotor cortices coded preferentially the number of digits while ventral-stream and dorsal-stream regions coded a mix of visual and motor dimensions. The representation of object features varied with task modality, as object elongation was less relevant during passive viewing than grasping. To summarize, this study shows that elongation is a particularly relevant property of the object to grasp, which along with the number of digits used, is represented within both ventral-stream and parietal regions, suggesting that communication between the two streams about these specific visual and motor dimensions might be relevant to the execution of efficient grasping actions. SIGNIFICANCE STATEMENT: To grasp something, the visual properties of an object guide preshaping of the hand into the appropriate configuration. Different grips can be used, and different objects require different hand configurations. However, in natural actions, grip and object type are often confounded, and the few experiments that have attempted to separate them have produced conflicting results. As such, it is unclear how visual and motor properties are represented across brain regions during grasping. Here we orthogonally manipulated object properties and grip, and revealed the visual dimension (object elongation) and the motor dimension (number of digits) that are more strongly coded in ventral and dorsal streams. These results suggest that both streams play a role in the visuomotor coding essential for grasping.
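    Because the factors were varied orthogonally, their contributions to a neural RDM can be estimated jointly. The sketch below regresses a toy neural RDM on binary model RDMs for elongation, size, and digits used; the factor codings and design are illustrative assumptions, not the published analysis.

```python
import numpy as np

# Toy 2 x 2 x 3 design: elongation x size x digits (0 = passive viewing).
levels = [(e, s, d) for e in (0, 1) for s in (0, 1) for d in (0, 2, 5)]
factors = np.array(levels)                    # (12, 3) condition labels

def model_rdm(col):
    """Binary RDM: do two conditions differ on this factor?"""
    v = factors[:, col]
    return (v[:, None] != v[None, :]).astype(float)

iu = np.triu_indices(len(levels), k=1)
X = np.column_stack([model_rdm(c)[iu] for c in range(3)])
X = np.column_stack([np.ones(len(X)), X])     # add intercept

# Stand-in neural RDM; real data would come from a searchlight or ROI.
rng = np.random.default_rng(2)
neural = rng.random((len(levels), len(levels)))
neural = (neural + neural.T) / 2
y = neural[iu]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "elongation", "size", "digits"], beta.round(3))))
```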

    Enhanced hyperalignment via spatial prior information

    Functional alignment between subjects is an important assumption of functional magnetic resonance imaging (fMRI) group-level analysis. However, it is often violated in practice, even after alignment to a standard anatomical template. Hyperalignment, based on sequential Procrustes orthogonal transformations, has been proposed as a method of aligning shared functional information into a common high-dimensional space and thereby improving inter-subject analysis. Though successful, current hyperalignment algorithms have a number of shortcomings, including difficulties interpreting the transformations, a lack of uniqueness of the procedure, and difficulties performing whole-brain analysis. To resolve these issues, we propose the ProMises (Procrustes von Mises-Fisher) model. We reformulate functional alignment as a statistical model and impose a prior distribution on the orthogonal parameters (the von Mises-Fisher distribution). This allows for the embedding of anatomical information into the estimation procedure by penalizing the contribution of spatially distant voxels when creating the shared functional high-dimensional space. Importantly, the transformations, aligned images, and related results are all unique. In addition, the proposed method allows for efficient whole-brain functional alignment. In simulations and application to data from four fMRI studies we find that ProMises improves inter-subject classification in terms of between-subject accuracy and interpretability compared to standard hyperalignment algorithms.
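    The following is a toy reading of the idea, not the published ProMises implementation. Plain hyperalignment solves an orthogonal Procrustes problem via an SVD of X^T Y; the spatial prior is folded in here as a distance-based kernel F added inside the SVD, biasing the solution toward mixing nearby voxels. The kernel form, width, and weight k are assumptions.

```python
import numpy as np

def aligned_rotation(X, Y, coords, k=1.0, width=10.0):
    """X, Y: (n_timepoints, n_voxels) subject and reference data.
    coords: (n_voxels, 3) voxel coordinates. Returns an orthogonal R
    such that X @ R approximates Y, regularized toward spatial locality."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    F = np.exp(-d2 / (2 * width ** 2))   # spatial similarity kernel (assumed form)
    U, _, Vt = np.linalg.svd(X.T @ Y + k * F)
    return U @ Vt                        # orthogonal, unique for full-rank input

# Toy check: with k=0 this reduces to plain Procrustes and exactly
# recovers a known rotation.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 30, size=(50, 3))
Y = rng.standard_normal((200, 50))
true_R, _ = np.linalg.qr(rng.standard_normal((50, 50)))
X = Y @ true_R.T
R = aligned_rotation(X, Y, coords, k=0.0)
print(np.allclose(X @ R, Y, atol=1e-6))
```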

    PrAGMATiC: a Probabilistic and Generative Model of Areas Tiling the Cortex

    Much of the human cortex seems to be organized into topographic cortical maps. Yet few quantitative methods exist for characterizing these maps. To address this issue we developed a modeling framework that can reveal group-level cortical maps based on neuroimaging data. PrAGMATiC, a probabilistic and generative model of areas tiling the cortex, is a hierarchical Bayesian generative model of cortical maps. This model assumes that the cortical map in each individual subject is a sample from a single underlying probability distribution. Learning the parameters of this distribution reveals the properties of a cortical map that are common across a group of subjects while avoiding the potentially lossy step of co-registering each subject into a group anatomical space. In this report we give a mathematical description of PrAGMATiC, describe approximations that make it practical to use, show preliminary results from its application to a real dataset, and describe a number of possible future extensions.
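    A deliberately toy sketch of the generative assumption (PrAGMATiC itself is a richer hierarchical Bayesian model; every quantity below is hypothetical): group-level area centers define a distribution over maps, and each subject's map is a sample obtained by jittering the centers and tiling vertices by nearest center.

```python
import numpy as np

rng = np.random.default_rng(4)
group_centers = rng.uniform(0, 100, size=(6, 2))  # 6 areas on a 2-D flatmap
vertices = rng.uniform(0, 100, size=(500, 2))     # mesh vertex positions

def sample_subject_map(jitter_sd=5.0):
    """Draw one subject's map: jitter the group-level area centers, then
    label each vertex with its nearest center (a Voronoi-style tiling)."""
    centers = group_centers + rng.normal(0, jitter_sd, group_centers.shape)
    d = ((vertices[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

maps = np.stack([sample_subject_map() for _ in range(10)])
# Fraction of vertices on which later subjects agree with the first one.
print((maps[1:] == maps[0]).mean())
```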

    Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and, conversely, text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
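    A minimal sketch of the unsupervised, second-order decoding idea: with no voxel-wise training, search for the assignment of model items to brain conditions whose reordered model RDM best matches the neural RDM. Exhaustive permutation search only scales to tiny sets, so the five-item toy below is illustrative, not the paper's algorithm.

```python
import numpy as np
from itertools import permutations
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
model_feats = rng.standard_normal((5, 20))  # image-model feature vectors
true_perm = rng.permutation(5)              # hidden brain-to-model mapping
brain = model_feats[true_perm] + 0.2 * rng.standard_normal((5, 20))

model_rdm = squareform(pdist(model_feats))  # similarity structure, model side
brain_vec = pdist(brain)                    # similarity structure, brain side

def match_score(p):
    """How well the model RDM, reordered by assignment p, matches the brain RDM."""
    p = np.asarray(p)
    return spearmanr(squareform(model_rdm[np.ix_(p, p)]), brain_vec)[0]

best = max(permutations(range(5)), key=match_score)
print(np.array_equal(best, true_perm))      # True: labels decoded without training
```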

    Assessing the representation of seen and unseen contents in human brains and deep artificial networks

    The functional scope of unconscious visual information processing and its implementation in the human brain remains a highly contested issue in cognitive neuroscience. The influential global workspace and higher-order theories predict that unconscious visual processing is restricted to representations in the visual cortex, which are not read out further by frontoparietal areas. The present thesis employs fMRI and computational approaches to develop a high-precision, within-subject framework in order to define the properties of the brain representations of unconscious content associated with null perceptual sensitivity. Machine learning models were used to read out multivariate unconscious content from fMRI signals throughout the ventral visual pathway, and model-based representational similarity analysis examined the properties of both conscious and unconscious representations. Finally, feedforward convolutional neural network (FCNN) models were used to simulate the fMRI results, namely, to probe the existence of informative representations of visual objects with null perceptual sensitivity in artificial networks. The results show that even when human observers display null perceptual sensitivity at a behavioral level, there are neural representations of unconscious content widely distributed throughout the cortex; these are not only contained in visual regions but also extend to higher-order regions in the ventral visual pathway, parietal and even prefrontal areas. The computational simulations with different FCNN models trained to perform the same visual task with noisy images demonstrated that even when the FCNN models failed to classify the category of the noisy images, their hidden layers contained information that allowed the image class to be decoded. The implications of the results for models of visual consciousness are discussed.
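    A toy numerical sketch of that readout logic (the random network, noise level, and least-squares probe are assumptions, not the thesis's FCNN models): even when a network's own readout classifies noisy inputs at chance, a linear probe fit on its hidden layer can still decode the class above chance.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, h = 400, 64, 128
y = (rng.random(n) < 0.5).astype(int)          # binary image class
X = rng.standard_normal((n, d))
X[y == 1] += 1.0                               # class signal
X += 2.0 * rng.standard_normal((n, d))         # heavy noise ("unseen" regime)

W1 = rng.standard_normal((d, h)) / np.sqrt(d)  # untrained hidden layer
H = np.maximum(X @ W1, 0.0)                    # ReLU activations

# The network's own readout: a random output weight -> ~chance accuracy.
w_out = rng.standard_normal(h)
net_acc = ((H @ w_out > 0).astype(int) == y).mean()

# A least-squares linear probe fit on the hidden layer still decodes class.
train, test = slice(0, 300), slice(300, None)
beta, *_ = np.linalg.lstsq(H[train], 2.0 * y[train] - 1.0, rcond=None)
probe_acc = ((H[test] @ beta > 0).astype(int) == y[test]).mean()
print(f"network readout: {net_acc:.2f}, hidden-layer probe: {probe_acc:.2f}")
```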