
    Laminar fMRI: applications for cognitive neuroscience

    The cortex is a massively recurrent network, characterized by feedforward and feedback connections between brain areas as well as lateral connections within each area. Feedforward, horizontal and feedback responses largely activate separate layers of a cortical unit, meaning they can be dissociated by lamina-resolved neurophysiological techniques. Such techniques are invasive and are therefore rarely used in humans. However, recent developments in high-spatial-resolution fMRI allow non-invasive, in vivo measurements of brain responses specific to individual cortical layers. This provides an important opportunity to dissociate feedforward from feedback brain responses and to investigate communication between brain areas at a finer grain than was previously possible in humans. In this review, we highlight recent studies that successfully used laminar fMRI to isolate layer-specific feedback responses in human sensory cortex. In addition, we review several areas of cognitive neuroscience that stand to benefit from this technological development, highlighting contemporary hypotheses that yield testable predictions for laminar fMRI. We hope to encourage researchers who have the opportunity to embrace this development in fMRI research, as we expect that many future advances in our understanding of human brain function will be gained from measuring lamina-specific brain responses.

    Spatial scale and distribution of neurovascular signals underlying decoding of orientation and eye of origin from fMRI data

    Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex
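
    As a rough illustration of the voxel-binning analysis described above, the sketch below compares cross-validated decoding from voxels averaged within polar-angle bins against decoding from randomly formed bins. The array names (`patterns`, `labels`, `polar_angle`) and the data are hypothetical placeholders, not values from the study.

```python
# Hypothetical sketch of the voxel-binning analysis: decode stimulus orientation
# from responses averaged within polar-angle bins versus random bins.
# Array names, shapes and values are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_bins = 120, 800, 16

patterns = rng.normal(size=(n_trials, n_voxels))        # trial-by-voxel response patterns
labels = rng.integers(0, 2, size=n_trials)              # e.g. two stimulus orientations
polar_angle = rng.uniform(0, 2 * np.pi, size=n_voxels)  # pRF polar angle per voxel

def bin_voxels(assignments):
    """Average voxel responses within each bin to form a reduced feature vector."""
    return np.column_stack([patterns[:, assignments == b].mean(axis=1)
                            for b in range(n_bins)])

angle_bins = np.digitize(polar_angle, np.linspace(0, 2 * np.pi, n_bins + 1)) - 1
random_bins = rng.integers(0, n_bins, size=n_voxels)

for name, assignments in [("polar-angle bins", angle_bins), ("random bins", random_bins)]:
    acc = cross_val_score(LinearSVC(dual=False), bin_voxels(assignments), labels, cv=5).mean()
    print(f"{name}: mean decoding accuracy = {acc:.2f}")
```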

    Cortical depth dependent functional responses in humans at 7T: improved specificity with 3D GRASE

    Ultra-high fields (7T and above) allow functional imaging with high contrast-to-noise ratios and improved spatial resolution. This, along with improved hardware and imaging techniques, allows columnar and laminar functional responses to be investigated. Using gradient-echo (GE) (T2* weighted) based sequences, layer-specific responses have been recorded from human (and animal) primary visual areas. However, their increased sensitivity to large surface veins potentially clouds the detection and interpretation of layer-specific responses. Conversely, spin-echo (SE) (T2 weighted) sequences are less sensitive to large veins and have been used to map cortical columns in humans. T2-weighted 3D GRASE with inner volume selection provides high isotropic resolution over extended volumes, overcoming some of the many technical limitations of conventional 2D SE-EPI and thereby making layer-specific investigations feasible. Further, the demonstration of columnar-level specificity with 3D GRASE, despite contributions from both stimulated echoes and conventional T2 contrast, has made it an attractive alternative to 2D SE-EPI. Here, we assess the spatial specificity of cortical-depth-dependent 3D GRASE functional responses in human V1 and hMT by comparing them to GE responses. In doing so, we demonstrate that 3D GRASE is less sensitive to contributions from large veins in superficial layers, while showing increased specificity (functional tuning) throughout the cortex compared to GE.
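
    A minimal sketch of one simple way to summarize the kind of depth profiles such a comparison looks at: the mean response at each cortical depth and a superficial-to-deep ratio as a crude index of large-vein weighting. The profile values and function names below are invented for illustration and are not taken from the paper.

```python
# Illustrative comparison of cortical-depth response profiles for a GE and a
# 3D GRASE (SE) acquisition. Index 0 is the deepest (white-matter) bin and the
# last index is the most superficial (pial) bin. All numbers are invented.
import numpy as np

ge_profile = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.4, 1.7, 2.1, 2.6, 3.0])    # % signal change
grase_profile = np.array([0.5, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9, 1.0, 1.0, 1.1])

def superficial_deep_ratio(profile, n=3):
    """Mean response in the most superficial bins divided by the mean in the
    deepest bins; a large ratio suggests a profile weighted toward surface veins."""
    return profile[-n:].mean() / profile[:n].mean()

for name, profile in [("GE", ge_profile), ("3D GRASE", grase_profile)]:
    print(f"{name}: superficial/deep ratio = {superficial_deep_ratio(profile):.2f}")
```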

    Second order scattering descriptors predict fMRI activity due to visual textures

    Second-layer scattering descriptors are known to provide good classification performance on natural quasi-stationary processes such as visual textures, due to their sensitivity to higher-order moments and their continuity with respect to small deformations. In a functional magnetic resonance imaging (fMRI) experiment, we present visual textures to subjects and evaluate the predictive power of these descriptors against that of simple contour energy, the first scattering layer. We conclude not only that invariant second-layer scattering coefficients better encode voxel activity, but also that well-predicted voxels need not lie in known retinotopic regions. Comment: 3rd International Workshop on Pattern Recognition in NeuroImaging (2013).
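
    A hedged sketch of this kind of analysis using the kymatio package's Scattering2D (as I understand its NumPy frontend): spatially averaged first- versus second-order scattering coefficients of placeholder texture images are used to predict a placeholder voxel response with ridge regression. All data and variable names are illustrative, not from the study.

```python
# Sketch of predicting voxel responses from scattering coefficients of texture
# images, in the spirit of the analysis described above. Images and voxel
# responses are random placeholders.
import numpy as np
from kymatio.numpy import Scattering2D
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, size = 40, 32
textures = rng.normal(size=(n_images, size, size)).astype(np.float32)  # stand-in texture stimuli
voxel_response = rng.normal(size=n_images)                             # stand-in response of one voxel

def features(max_order):
    """Spatially averaged scattering coefficients up to the given order."""
    scattering = Scattering2D(J=2, shape=(size, size), max_order=max_order)
    coeffs = scattering(textures)        # (n_images, channels, size/4, size/4)
    return coeffs.mean(axis=(-2, -1))    # average over spatial positions

for order in (1, 2):
    r2 = cross_val_score(RidgeCV(), features(order), voxel_response, cv=5).mean()
    print(f"order-{order} scattering features: cross-validated R^2 = {r2:.2f}")
```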

    Attention gates visual coding in the human pulvinar.

    The pulvinar nucleus of the thalamus is suspected to have an important role in visual attention, based on its widespread connectivity with the visual cortex and the fronto-parietal attention network. However, at present there remain many hypotheses about the pulvinar's specific function, with sparse or conflicting evidence for each. Here we characterize how the human pulvinar encodes attended and ignored objects when they appear simultaneously and compete for attentional resources. Using multivoxel pattern analyses on data from two functional magnetic resonance imaging (fMRI) experiments, we show that attention gates both position and orientation information in the pulvinar: attended objects are encoded with high precision, while there is no measurable encoding of ignored objects. These data support a role for the pulvinar in distractor filtering: suppressing information from competing stimuli to isolate behaviourally relevant objects.
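
    A minimal sketch of the comparison implied above: decoding the orientation of the attended versus the ignored object from the same ROI patterns. The arrays and labels below are hypothetical stand-ins, not data from the study.

```python
# Hypothetical sketch: decode the orientation of the attended versus the ignored
# object from the same pulvinar ROI patterns. Data are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 200
roi_patterns = rng.normal(size=(n_trials, n_voxels))
attended_orientation = rng.integers(0, 2, size=n_trials)
ignored_orientation = rng.integers(0, 2, size=n_trials)

for name, y in [("attended", attended_orientation), ("ignored", ignored_orientation)]:
    acc = cross_val_score(LinearSVC(dual=False), roi_patterns, y, cv=5).mean()
    print(f"{name} object: decoding accuracy = {acc:.2f} (chance = 0.50)")
```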

    Studying feature specific mechanisms of the human visual system

    What are the current limits of our knowledge of the brain activity underlying vision, and can I further this knowledge? In this thesis, I explore this basic question. I focus on those aspects of visual input that can be described as basic features of visual perception; examples include orientation, color, direction of motion and spatial frequency. However, understanding how humans visually perceive the external world is closely related to the study of attention. Attention, that is, the selection of some aspects of the environment over others, is one of the most intensively studied areas in experimental psychology, yet its neural mechanisms remain largely elusive. This thesis focuses on three distinct topics at the border of feature-specific visual perception and feature-specific visual attention. First, in a series of studies, I explore the influence of heightened attentional demand in a central task on feature-specific neural processing in the ignored periphery. I find that heightened attentional demand does not influence feature-specific representations in early visual cortices. Second, I investigate the influence of feature-based attention on neural processing in early visual cortices. At the same time, I also probe the influence of a behavioral decision to deploy feature-specific attention in the imminent future. I find that feature-based attention operates independently of other types of attention. Additionally, the results indicate that a behavioral decision to deploy feature-based attention alone, without visual stimulation present, is able to modulate neural activity in early visual cortices. Third, I examine the more complex feature of facial gender and ask where in the brain gender discrimination might be processed. I find that facial gender is represented in nearly all areas of an established network of face-selective brain areas. Finally, I discuss all findings in light of the current state of research, their scientific significance, and future research opportunities.

    Reading the mind's eye: Decoding category information during mental imagery

    Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral–temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom–up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral–temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of “diagnostic voxels” (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral–temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom–up input, cortical back projections can selectively re-activate specific patterns of neural activity
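
    A minimal sketch of the cross-classification scheme described above: a category classifier trained on viewing trials is tested on imagery trials, and vice versa. The arrays and labels below are random stand-ins used purely for illustration.

```python
# Minimal sketch of cross-classification between perception and imagery:
# train a category classifier on viewed trials and test it on imagined trials,
# and vice versa. All arrays are random stand-ins for ventral-temporal patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 500
viewed = rng.normal(size=(n_trials, n_voxels))
imagined = rng.normal(size=(n_trials, n_voxels))
categories = rng.integers(0, 4, size=n_trials)   # e.g. food, tools, faces, buildings

clf = LinearSVC(dual=False)
view_to_imagery = clf.fit(viewed, categories).score(imagined, categories)
imagery_to_view = clf.fit(imagined, categories).score(viewed, categories)
print(f"train viewing / test imagery: {view_to_imagery:.2f}")
print(f"train imagery / test viewing: {imagery_to_view:.2f} (chance = 0.25)")
```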

    Direct evidence for encoding of motion streaks in human visual cortex

    Temporal integration in the visual system causes fast-moving objects to generate static, oriented traces ('motion streaks'), which could be used to help judge direction of motion. While human psychophysics and single-unit studies in non-human primates are consistent with this hypothesis, direct neural evidence from the human cortex is still lacking. First, we provide psychophysical evidence that faster and slower motions are processed by distinct neural mechanisms: faster motion raised human perceptual thresholds for static orientations parallel to the direction of motion, whereas slower motion raised thresholds for orthogonal orientations. We then used functional magnetic resonance imaging to measure brain activity while human observers viewed either fast ('streaky') or slow random dot stimuli moving in different directions, or corresponding static-oriented stimuli. We found that local spatial patterns of brain activity in early retinotopic visual cortex reliably distinguished between static orientations. Critically, a multivariate pattern classifier trained on brain activity evoked by these static stimuli could then successfully distinguish the direction of fast ('streaky') but not slow motion. Thus, signals encoding static-oriented streak information are present in human early visual cortex when viewing fast motion. These experiments show that motion streaks are present in the human visual system for faster motion. This work was supported by the Wellcome Trust (G.R., D.S.S.), the European Union ‘Mindbridge’ project (B.B.), the Australian Federation of Graduate Women Tempe Mann Scholarship (D.A.), the University of Sydney Campbell Perry Travel Fellowship (D.A.) and the Brain Research Trust (C.K.)
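
    A minimal sketch of the cross-decoding logic described above: an orientation classifier trained on responses to static oriented stimuli is tested on responses to fast and slow motion, with each motion direction recoded as the orientation of its streak (direction modulo 180 degrees). All arrays and labels are hypothetical placeholders.

```python
# Sketch: train an orientation classifier on responses to static oriented
# stimuli, then test whether it transfers to responses evoked by fast (and
# slow) motion in the corresponding directions. Data are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_trials, n_voxels = 96, 400
static_patterns = rng.normal(size=(n_trials, n_voxels))
static_orientation = rng.integers(0, 2, size=n_trials)            # 0 = 45 deg, 1 = 135 deg

fast_patterns = rng.normal(size=(n_trials, n_voxels))
slow_patterns = rng.normal(size=(n_trials, n_voxels))
motion_direction = rng.integers(0, 4, size=n_trials) * 90 + 45    # 45, 135, 225, 315 deg
streak_orientation = (motion_direction % 180 == 135).astype(int)  # direction -> streak label

clf = LinearSVC(dual=False).fit(static_patterns, static_orientation)
print("fast motion transfer accuracy:", clf.score(fast_patterns, streak_orientation))
print("slow motion transfer accuracy:", clf.score(slow_patterns, streak_orientation))
```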

    An interplay of feedforward and feedback signals supporting visual cognition

    The vast majority of visual cognitive functions, from low to high level, rely not only on feedforward signals carrying sensory input to downstream brain areas but also on internally generated feedback signals traversing the brain in the opposite direction. Feedback signals underlie our ability to conjure up internal representations regardless of sensory input, whether we are imagining an object or directly perceiving it. Despite the ubiquitous involvement of feedback signals in visual cognition, little is known about their functional organization in the brain. Multiple studies have shown that, within the visual system, the same brain region can concurrently represent feedforward and feedback contents. Given this spatial overlap, (1) how does the visual brain separate feedforward and feedback signals, thus avoiding a mixture of the perceived and the imagined? Confusing the two information streams could have detrimental consequences. Another body of research has demonstrated that feedback connections between two different sensory systems support rapid and effortless signal transmission between them. (2) How do nonvisual signals elicit visual representations? In this work, we aimed to scrutinize the functional organization of directed signal transmission in the visual brain by interrogating these two critical questions. In Studies I and II, we explored the functional segregation of feedforward and feedback signals across the grey-matter depth of early visual area V1 using 7T fMRI. In Study III, we investigated the mechanism of cross-modal generalization using EEG. In Study I, we hypothesized that the functional segregation of external and internally generated visual contents follows the organization of feedforward and feedback anatomical projections revealed in primate tracing studies: feedforward projections terminate in the middle cortical layer of primate area V1, whereas feedback connections project to the superficial and deep layers. We used high-resolution layer-specific fMRI and multivariate pattern analysis to test this hypothesis in a mental rotation task. We found that rotated contents were predominant in the outer cortical depth compartments (superficial and deep), whereas perceived contents were more strongly represented in the middle compartment. These results correspond to the previous neuroanatomical findings and show how, through cortical depth compartmentalization, V1 functionally segregates rather than confuses external and internally generated visual contents. To estimate the signal-by-depth separation revealed in Study I more precisely, we next benchmarked three MR sequences at 7T (gradient-echo, spin-echo, and vascular space occupancy) on their ability to differentiate feedforward and feedback signals in V1. The experiment in Study II consisted of two complementary tasks: a perception task that predominantly evokes feedforward signals and a working memory task that relies on feedback signals. We used multivariate pattern analysis to read out the perceived (feedforward) and memorized (feedback) grating orientation from neural signals across cortical depth. Analyses across all MR sequences revealed perception signals predominantly in the middle cortical compartment of area V1 and working memory signals in the deep compartment.
Despite an overall consistency across sequences, spin-echo was the only sequence in which both feedforward and feedback information were differentially pronounced across cortical depth in a statistically robust way. We therefore suggest that, in the context of a typical cognitive neuroscience experiment manipulating feedforward and feedback signals at 7T fMRI, the spin-echo method may provide a favorable trade-off between spatial specificity and signal sensitivity. In Study III, we focused on the second critical question: how are visual representations activated by signals belonging to another sensory modality? Here we built our hypothesis on studies in the field of object recognition demonstrating that abstract, category-level representations emerge in the brain after brief stimulus presentation, in the absence of any explicit categorization task. Based on these findings, we assumed that two sensory systems can reach a modality-independent representational state providing a universal feature space that can be read out by both. We used EEG and a paradigm in which participants were presented with images and spoken words while performing an unrelated task. We aimed to explore whether categorical object representations in both modalities reflect a convergence towards modality-independent representations. We obtained robust representations of objects and object categories in the visual and auditory modalities; however, we did not find a conceptual representation shared across modalities at the level of patterns extracted from EEG scalp electrodes. Overall, our results show that feedforward and feedback signals are spatially segregated across grey-matter depth, possibly reflecting a general strategy for implementing multiple cognitive functions within the same brain region. This differentiation can be revealed with diverse MR sequences at 7T fMRI, among which the spin-echo sequence may be particularly suitable for establishing cortical depth-specific effects in humans. We did not find the modality-independent representations which, according to our hypothesis, may subserve the activation of visual representations by signals from another sensory system. This pattern of results indicates that identifying the mechanisms bridging different sensory systems is more challenging than exploring within-modality signal circuitry, and that this challenge requires further study. With this, our results contribute to a large body of research interrogating how feedforward and feedback signals give rise to complex visual cognition.
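
    A minimal sketch of the depth-resolved readout described for Studies I and II, assuming hypothetical arrays: voxels are grouped into deep, middle and superficial compartments by an estimated cortical depth, and stimulus orientation is decoded separately per compartment for a perception (feedforward) and a working memory (feedback) condition.

```python
# Illustrative sketch of cortical-depth-resolved decoding: voxels are grouped
# into deep / middle / superficial compartments by estimated depth, and
# orientation is decoded per compartment for a perception and a working memory
# condition. All data are random placeholders, not results from the studies.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_voxels = 120, 900
depth = rng.uniform(0, 1, size=n_voxels)   # 0 = white matter, 1 = pial surface
compartments = {"deep": depth < 1/3,
                "middle": (depth >= 1/3) & (depth < 2/3),
                "superficial": depth >= 2/3}

conditions = {"perception": rng.normal(size=(n_trials, n_voxels)),
              "working memory": rng.normal(size=(n_trials, n_voxels))}
orientation = rng.integers(0, 2, size=n_trials)

for cond, patterns in conditions.items():
    for name, mask in compartments.items():
        acc = cross_val_score(LinearSVC(dual=False), patterns[:, mask], orientation, cv=5).mean()
        print(f"{cond:>14} | {name:>11}: decoding accuracy = {acc:.2f}")
```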