
    Category selectivity in human visual cortex: beyond visual object recognition

    Human ventral temporal cortex shows a categorical organization, with regions responding selectively to faces, bodies, tools, scenes, words, and other categories. Why is this? Traditional accounts explain category selectivity as arising within a hierarchical system dedicated to visual object recognition. For example, it has been proposed that category selectivity reflects the clustering of category-associated visual feature representations, or that it reflects category-specific computational algorithms needed to achieve view invariance. This visual object recognition framework has gained renewed interest with the success of deep neural network models trained to "recognize" objects: these hierarchical feed-forward networks show similarities to human visual cortex, including categorical separability. We argue that the object recognition framework is unlikely to fully account for category selectivity in visual cortex. Instead, we consider category selectivity in the context of other functions such as navigation, social cognition, tool use, and reading. Category-selective regions are activated during such tasks even in the absence of visual input and even in individuals with no prior visual experience. Further, they maintain close connections with broader domain-specific networks. Considering the diverse functions of these networks, category-selective regions likely encode their preferred stimuli in highly idiosyncratic formats; representations that are useful for navigation, social cognition, or reading are unlikely to be meaningfully similar to each other and to varying degrees may not be entirely visual. The demand for specific types of representations to support category-associated tasks may best account for category selectivity in visual cortex. This broader view invites new experimental and computational approaches.

    Evidencing a place for the hippocampus within the core scene processing network

    Functional neuroimaging studies have identified several “core” brain regions that are preferentially activated by scene stimuli, namely posterior parahippocampal gyrus (PHG), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS). The hippocampus (HC), too, is thought to play a key role in scene processing, although no study has yet investigated scene-sensitivity in the HC relative to these other “core” regions. Here, we characterised the frequency and consistency of individual scene-preferential responses within these regions by analysing a large dataset (n = 51) in which participants performed a one-back working memory task for scenes, objects, and scrambled objects. An unbiased approach was adopted by applying independently-defined anatomical ROIs to individual-level functional data across different voxel-wise thresholds and spatial filters. It was found that the majority of subjects had preferential scene clusters in PHG (max = 100% of participants), RSC (max = 76%), and TOS (max = 94%). A comparable number of individuals also possessed significant scene-related clusters within their individually defined HC ROIs (max = 88%), evidencing a HC contribution to scene processing. While probabilistic overlap maps of individual clusters showed that overlap “peaks” were close to those identified in group-level analyses (particularly for TOS and HC), inter-individual consistency varied across regions and statistical thresholds. The inter-regional and inter-individual variability revealed by these analyses has implications for how scene-sensitive cortex is localised and interrogated in functional neuroimaging studies, particularly in medial temporal lobe regions, such as the HC.
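    The individual-level analysis described above amounts to asking, for each participant, whether a scene-vs-object contrast inside an anatomical ROI survives a given voxel-wise threshold. A minimal sketch of that counting logic, using entirely synthetic t-values (the sizes, thresholds, and the `min_voxels` cluster criterion are illustrative assumptions, not the study's actual parameters):

```python
# Toy illustration: what fraction of subjects show a significant
# scene-preferential cluster inside an anatomical ROI, at different
# voxel-wise thresholds? All data here are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_roi_voxels = 51, 100

# Hypothetical scene-vs-object t-values for each subject's ROI voxels.
t_maps = rng.normal(loc=1.0, scale=1.5, size=(n_subjects, n_roi_voxels))

def fraction_with_cluster(t_maps, threshold, min_voxels=5):
    """Fraction of subjects with at least `min_voxels` supra-threshold voxels."""
    counts = (t_maps > threshold).sum(axis=1)
    return (counts >= min_voxels).mean()

for thr in (2.3, 3.1):
    frac = fraction_with_cluster(t_maps, thr)
    print(f"t > {thr}: {frac:.0%} of subjects show a scene-preferential cluster")
```

    By construction the fraction can only decrease as the threshold rises, which is why the abstract reports "max" percentages across thresholds.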

    Decoding natural sounds in early “visual” cortex of congenitally blind individuals

    Complex natural sounds, such as bird singing, people talking, or traffic noise, induce decodable fMRI activation patterns in early visual cortex of sighted blindfolded participants [1]. That is, early visual cortex receives non-visual and potentially predictive information from audition. However, it is unclear whether the transfer of auditory information to early visual areas is an epiphenomenon of visual imagery or, alternatively, whether it is driven by mechanisms independent of visual experience. Here, we show that we can decode natural sounds from activity patterns in early “visual” areas of congenitally blind individuals who lack visual imagery. Thus, visual imagery is not a prerequisite of auditory feedback to early visual cortex. Furthermore, the spatial pattern of sound decoding accuracy in early visual cortex was remarkably similar in blind and sighted individuals, with a gradient of increasing decoding accuracy from foveal to peripheral regions. This suggests that the typical organization by eccentricity of early visual cortex develops for auditory feedback, even in the lifelong absence of vision. The same feedback to early visual cortex might support visual perception in the sighted [1] and drive the recruitment of this area for non-visual functions in blind individuals [2, 3].
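    "Decoding" in this abstract refers to cross-validated multivoxel pattern classification: a classifier is trained to predict the sound category from voxel activity patterns, and above-chance held-out accuracy indicates that the region carries category information. A minimal sketch on synthetic data (the classifier, fold count, and dimensions are illustrative assumptions, not the paper's pipeline):

```python
# Sketch of cross-validated MVPA decoding of sound category from voxel
# patterns. Data are simulated; a weak category-specific signal is
# embedded in noise so the classifier has something to find.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1, 2, 3], n_trials // 4)   # four sound categories

# Each category gets its own multivoxel "fingerprint" added to noise.
signal = rng.normal(size=(4, n_voxels))
X = rng.normal(size=(n_trials, n_voxels)) + 1.5 * signal[labels]

# 5-fold cross-validated classification accuracy.
scores = cross_val_score(LinearSVC(dual=False), X, labels, cv=5)
mean_accuracy = scores.mean()

# Decoding succeeds if accuracy exceeds chance (0.25 for four classes).
print(f"mean decoding accuracy: {mean_accuracy:.2f} (chance = 0.25)")
```

    The eccentricity gradient reported above would correspond to running this analysis separately in foveal vs. peripheral sub-regions and comparing the resulting accuracies.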

    The Hierarchical Structure of the Face Network Revealed by Its Functional Connectivity Pattern

    A major principle of human brain organization is “integrating” some regions into networks while “segregating” other sets of regions into separate networks. However, little is known about the cognitive function of the integration and segregation of brain networks. Here, we examined the well-studied brain network for face processing, and asked whether the integration and segregation of the face network (FN) are related to face recognition performance. To do so, we used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) and the between-network connectivity (BNC) of the FN. We found that 95.4% of voxels in the FN had a significantly stronger WNC than BNC, suggesting that the FN is a relatively encapsulated network. Importantly, individuals with a stronger WNC (i.e., integration) in the right fusiform face area were better at recognizing faces, whereas individuals with a weaker BNC (i.e., segregation) in the right occipital face area performed better in the face recognition tasks. In short, our study not only demonstrates the behavioral relevance of integration and segregation of the FN but also provides evidence supporting a functional division of labor between the occipital face area and fusiform face area in the hierarchically organized FN.
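    The WNC/BNC distinction above reduces to a simple computation per voxel: average its time-series correlation with voxels inside the network (WNC) versus voxels outside it (BNC). A toy sketch with synthetic resting-state time series (the sizes and the shared-signal construction are illustrative assumptions):

```python
# Illustrative within-network (WNC) vs. between-network (BNC) connectivity
# for one "face network" voxel. FN voxels share a common fluctuation;
# outside voxels are independent noise.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 150
n_fn, n_rest = 20, 60   # voxels inside vs. outside the face network (toy sizes)

shared = rng.normal(size=n_timepoints)                    # common FN signal
fn_ts = shared + 0.5 * rng.normal(size=(n_fn, n_timepoints))
rest_ts = rng.normal(size=(n_rest, n_timepoints))

def mean_corr(seed, others):
    """Mean Pearson correlation between a seed time series and a set of others."""
    return np.mean([np.corrcoef(seed, o)[0, 1] for o in others])

seed = fn_ts[0]
wnc = mean_corr(seed, fn_ts[1:])   # connectivity to other FN voxels
bnc = mean_corr(seed, rest_ts)     # connectivity to voxels outside the FN
print(f"WNC = {wnc:.2f}, BNC = {bnc:.2f}")
```

    The study's "95.4% of voxels" figure corresponds to repeating this comparison for every FN voxel and testing WNC > BNC statistically.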

    Disentangling Representations of Object Shape and Object Category in Human Visual Cortex: The Animate-Inanimate Distinction

    Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake-rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate-inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.
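    The key inferential step above — "categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity" — can be sketched as a regression on the vectorised pairwise dissimilarities, followed by a rank correlation on the residuals. Everything below is synthetic and the single visual predictor is a simplification (the study used three):

```python
# Sketch of the regression step in representational similarity analysis:
# does category still predict neural dissimilarity once the visual
# predictor has been partialled out? Simulated dissimilarity vectors.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_items = 16
n_pairs = n_items * (n_items - 1) // 2   # lower-triangle entries of an RDM

visual = rng.random(n_pairs)                          # e.g. visual-search times
category = (rng.random(n_pairs) > 0.5).astype(float)  # same vs. different category
neural = 0.4 * visual + 0.4 * category + 0.3 * rng.normal(size=n_pairs)

# Regress visual dissimilarity out of the neural dissimilarities...
fit = LinearRegression().fit(visual[:, None], neural)
residual = neural - fit.predict(visual[:, None])

# ...and test whether category predicts what remains.
rho, p = spearmanr(residual, category)
print(f"category ~ residual neural dissimilarity: rho = {rho:.2f}")
```

    A reliably positive residual correlation is what licenses the conclusion that category information is not reducible to the measured visual properties.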

    Linguistic experience acquisition for novel stimuli selectively activates the neural network of the visual word form area

    The human ventral visual cortex is functionally organized into different domains that respond selectively to different categories, such as words and objects. There is heated debate over what principle constrains the locations of those domains. Taking the visual word form area (VWFA) as an example, we tested whether the word preference in this area originates from bottom-up processes related to word shape (the shape hypothesis) or from top-down connectivity with higher-order language regions (the connectivity hypothesis). We trained subjects to associate identical, meaningless, non-word-like figures with high-level features of either words or objects. We found that word-feature learning for the figures elicited changes in neural activation in the VWFA, and that learning performance effectively predicted the activation strength of this area after learning. Word-learning effects were also observed in other language areas (i.e., the left posterior superior temporal gyrus, postcentral gyrus, and supplementary motor area), with increased functional connectivity between the VWFA and the language regions. In contrast, object-feature learning was not associated with obvious activation changes in the language regions. These results indicate that high-level language features of stimuli can modulate the activation of the VWFA, providing supportive evidence for the connectivity hypothesis of word processing in the ventral occipitotemporal cortex.

    Neural Correlates of Repetition Priming: Changes in fMRI Activation and Synchrony

    The neural mechanisms of behavioural priming remain unclear. Recent studies have suggested that category-preferential regions in ventral occipitotemporal cortex (VOTC) play an important role; some have reported increased neural synchrony between prefrontal cortex and temporal cortex associated with stimulus repetition. Based on these results, I hypothesized that increased neural synchrony, as measured by functional connectivity analysis using functional MRI, between category-preferential regions in VOTC and broader category-related networks would underlie behavioural priming. To test this hypothesis, I localized several category-preferential regions in VOTC using an independent functional localizer. Then, Seed Partial Least Squares was used to assess task-related functional connectivity of these regions across repetition of stimuli from multiple categories during an independent semantic classification task. While the results did not show the hypothesized differences in functional connectivity across stimulus repetition, they revealed evidence of category-specificity of neural priming and novel insights about the nature of the category-related organization of VOTC.

    The neural coding of properties shared by faces, bodies and objects

    Previous studies have identified relatively separated regions of the brain that respond strongly when participants view images of either faces, bodies or objects. The aim of this thesis was to investigate how and where in the brain shared properties of faces, bodies and objects are processed. We selected three properties shared by faces and bodies: shared categories (sex and weight), shared identity and shared orientation (i.e. facing direction). We also investigated one property shared by faces and objects: the tendency to process a face or object as a whole rather than by its parts, known as holistic processing. We hypothesized that these shared properties might be encoded separately for faces, bodies and objects in the previously defined domain-specific regions, or alternatively that they might be encoded in an overlapping or shared code in those or other regions. In all of the studies in this thesis, we used fMRI to record the brain activity of participants viewing images of faces and bodies or objects that differed in the shared properties of interest. We then investigated the neural responses these stimuli elicited in a variety of specifically localized brain regions responsive to faces, bodies or objects, as well as across the whole brain. Our results showed evidence for a mix of overlapping coding, shared coding and domain-specific coding, depending on the particular property and the level of abstraction of its neural coding. We found we could decode face and body categories, identities and orientations from both face- and body-responsive regions, showing that these properties are encoded in overlapping brain regions. We also found that non-domain-specific brain regions are involved in holistic face processing.
    We identified shared coding of orientation and weight in the occipital cortex, and shared coding of identity in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, demonstrating that a variety of brain regions combine face and body information into a common code. In contrast to these findings, we found evidence that high-level visual transformations may be processed predominantly in domain-specific regions, as we could most consistently decode body categories across image size and body identity across viewpoint from body-responsive regions. In conclusion, this thesis furthers our understanding of the neural coding of face, body and object properties and gives new insights into the functional organisation of occipitotemporal cortex.
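    "Decoding identity across viewpoint", as used above, means training a classifier on patterns from one viewing condition and testing it on another; above-chance transfer implies a view-invariant identity code. A toy sketch with synthetic patterns (the constant viewpoint shift and all dimensions are illustrative assumptions):

```python
# Toy cross-decoding analysis: train on one viewpoint, test on another.
# Identity signal is shared across views; the second view adds a shift.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_ids, n_reps, n_voxels = 4, 10, 120

id_patterns = rng.normal(size=(n_ids, n_voxels))   # per-identity fingerprints

def simulate(viewpoint_shift):
    """Simulate trial patterns: identity signal + viewpoint shift + noise."""
    labels = np.repeat(np.arange(n_ids), n_reps)
    X = (id_patterns[labels] + viewpoint_shift
         + rng.normal(size=(len(labels), n_voxels)))
    return X, labels

X_front, y_front = simulate(viewpoint_shift=0.0)
X_side, y_side = simulate(viewpoint_shift=0.5)     # a different view

clf = LinearSVC(dual=False).fit(X_front, y_front)
cross_accuracy = clf.score(X_side, y_side)         # train front, test side
print(f"cross-viewpoint decoding accuracy: {cross_accuracy:.2f} (chance = 0.25)")
```

    If the identity code were view-specific rather than shared, transfer accuracy would drop to chance even though within-view decoding succeeded.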

    Mental Imagery Follows Similar Cortical Reorganization as Perception: Intra-Modal and Cross-Modal Plasticity in Congenitally Blind

    Cortical plasticity in congenitally blind individuals leads to cross-modal activation of the visual cortex and may lead to superior perceptual processing in the intact sensory domains. Although mental imagery is often defined as a quasi-perceptual experience, it is unknown whether it follows similar cortical reorganization as perception in blind individuals. In this study, we show that auditory versus tactile perception evokes similar intra-modal discriminative patterns in congenitally blind compared with sighted participants. These results indicate that cortical plasticity following visual deprivation does not influence the broad intra-modal organization of auditory and tactile perception as measured by our task. Furthermore, not only the blind, but also the sighted participants showed cross-modal discriminative patterns for perception modality in the visual cortex. During mental imagery, both groups showed similar decoding accuracies for imagery modality in the intra-modal primary sensory cortices. However, no cross-modal discriminative information for imagery modality was found in early visual cortex of blind participants, in contrast to the sighted participants. We did find evidence of cross-modal activation of higher visual areas in blind participants, including the representation of specific imagined auditory features in visual area V4.

    How the brain grasps tools: fMRI & motion-capture investigations

    Humans’ ability to learn about and use tools is considered a defining feature of our species, with most related neuroimaging investigations involving proxy 2D picture-viewing tasks. Using a novel tool grasping paradigm across three experiments, participants grasped 3D-printed tools (e.g., a knife) in ways that were considered to be typical (i.e., by the handle) or atypical (i.e., by the blade) for subsequent use. As a control, participants also performed grasps in corresponding directions on a series of 3D-printed non-tool objects, matched for properties including elongation and object size. Project 1 paired a powerful fMRI block design with visual localiser Region of Interest (ROI) and searchlight Multivoxel Pattern Analysis (MVPA) approaches. Most remarkably, ROI MVPA revealed that hand-selective, but not anatomically overlapping tool-selective, areas of the left Lateral Occipital Temporal Cortex and Intraparietal Sulcus represented the typicality of tool grasping. Searchlight MVPA found similar evidence within left anterior temporal cortex as well as right parietal and temporal areas. Project 2 measured hand kinematics using motion-capture during a highly similar procedure, finding hallmark grip scaling effects despite the unnatural task demands. Further, slower movements were observed when grasping tools, relative to non-tools, with grip scaling also being poorer for atypical tool, compared to non-tool, grasping. Project 3 used a slow event-related fMRI design to investigate whether representations of typicality were detectable during motor planning, but MVPA was largely unsuccessful, presumably due to a lack of statistical power. Taken together, the representations of typicality identified within areas of the ventral and dorsal, but not ventro-dorsal, pathways have implications for specific predictions made by leading theories about the neural regions supporting human tool-use, including dual visual stream theory and the two-action systems model.
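    Searchlight MVPA, mentioned above, slides a small neighbourhood over the brain and cross-validates a classifier at every location, producing a map of where information (here, grasp typicality) can be decoded. A minimal one-dimensional sketch on synthetic data (the radius, fold count, and signal placement are illustrative assumptions):

```python
# Minimal searchlight-style sketch over a toy 1-D "brain": cross-validate
# a classifier in each small voxel neighbourhood and record its accuracy.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_voxels, radius = 40, 60, 3
labels = np.tile([0, 1], n_trials // 2)   # typical vs. atypical grasp (toy)

X = rng.normal(size=(n_trials, n_voxels))
# Embed a discriminative signal only in voxels 20-29.
X[:, 20:30] += 1.2 * labels[:, None]

accuracy_map = np.zeros(n_voxels)
for centre in range(radius, n_voxels - radius):
    sphere = X[:, centre - radius : centre + radius + 1]
    accuracy_map[centre] = cross_val_score(
        LinearSVC(dual=False), sphere, labels, cv=4
    ).mean()

best = int(np.argmax(accuracy_map))
print(f"peak decoding at voxel {best}: {accuracy_map[best]:.2f}")
```

    In real data the neighbourhood is a 3-D sphere and the resulting accuracy map is thresholded statistically, but the train-test-slide logic is the same.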