13 research outputs found

    An interplay of feedforward and feedback signals supporting visual cognition

    The vast majority of visual cognitive functions, from low to high level, rely not only on feedforward signals carrying sensory input to downstream brain areas but also on internally generated feedback signals traversing the brain in the opposite direction. Feedback signals underlie our ability to conjure up internal representations regardless of sensory input, whether we are imagining an object or directly perceiving it. Despite the ubiquitous involvement of feedback signals in visual cognition, little is known about their functional organization in the brain. Multiple studies have shown that within the visual system the same brain region can concurrently represent feedforward and feedback contents. Given this spatial overlap, (1) how does the visual brain separate feedforward and feedback signals, thus avoiding a mixture of the perceived and the imagined? Confusing the two information streams could have detrimental consequences. Another body of research has demonstrated that feedback connections between two different sensory systems support rapid and effortless signal transmission across them. (2) How do nonvisual signals elicit visual representations? In this work, we scrutinized the functional organization of directed signal transmission in the visual brain by interrogating these two critical questions. In Studies I and II, we explored the functional segregation of feedforward and feedback signals across the grey-matter depth of early visual area V1 using 7T fMRI. In Study III, we investigated the mechanism of cross-modal generalization using EEG. In Study I, we hypothesized that the functional segregation of external and internally generated visual contents follows the organization of feedforward and feedback anatomical projections revealed in primate tract-tracing studies: feedforward projections terminate in the middle cortical layer of primate area V1, whereas feedback connections project to the superficial and deep layers.
We used high-resolution layer-specific fMRI and multivariate pattern analysis to test this hypothesis in a mental rotation task. We found that rotated contents were predominant in the outer cortical depth compartments (superficial and deep), whereas perceived contents were more strongly represented in the middle compartment. These results correspond to the earlier neuroanatomical findings and show how, through cortical depth compartmentalization, V1 functionally segregates rather than confuses external and internally generated visual contents. To estimate the signal-by-depth separation revealed in Study I more precisely, we next benchmarked three 7T MR sequences (gradient-echo, spin-echo, and vascular space occupancy) on their ability to differentiate feedforward and feedback signals in V1. The experiment in Study II consisted of two complementary tasks: a perception task that predominantly evokes feedforward signals and a working memory task that relies on feedback signals. We used multivariate pattern analysis to read out the perceived (feedforward) and memorized (feedback) grating orientation from neural signals across cortical depth. Analyses across all MR sequences revealed perception signals predominantly in the middle cortical compartment of area V1 and working memory signals in the deep compartment. Despite an overall consistency across sequences, spin-echo was the only sequence in which both feedforward and feedback information were differentially pronounced across cortical depth in a statistically robust way. We therefore suggest that, in the context of a typical cognitive neuroscience experiment manipulating feedforward and feedback signals at 7T fMRI, the spin-echo method may provide a favorable trade-off between spatial specificity and signal sensitivity. In Study III, we focused on the second critical question: how are visual representations activated by signals belonging to another sensory modality?
Here we built our hypothesis on studies in the field of object recognition, which demonstrate that abstract category-level representations emerge in the brain after brief stimulus presentation even in the absence of any explicit categorization task. Based on these findings, we assumed that two sensory systems can reach a modality-independent representational state providing a universal feature space that can be read out by both. We used EEG and a paradigm in which participants were presented with images and spoken words while performing an unrelated task, and we asked whether categorical object representations in both modalities converge towards modality-independent representations. We obtained robust representations of objects and object categories in the visual and auditory modalities; however, we did not find a conceptual representation shared across modalities at the level of patterns extracted from EEG scalp electrodes. Overall, our results show that feedforward and feedback signals are spatially segregated across grey-matter depth, possibly reflecting a general strategy for implementing multiple cognitive functions within the same brain region. This differentiation can be revealed with diverse MR sequences at 7T fMRI, with the spin-echo sequence being particularly suitable for establishing cortical depth-specific effects in humans. We did not find the modality-independent representations that, according to our hypothesis, may subserve the activation of visual representations by signals from another sensory system. This pattern of results indicates that identifying the mechanisms bridging different sensory systems is more challenging than exploring within-modality signal circuitry, and this challenge requires further studies. With this, our results contribute to a large body of research interrogating how feedforward and feedback signals give rise to complex visual cognition.
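The depth-resolved readout described in Studies I and II can be illustrated with a minimal sketch. This is not the authors' pipeline: the data are synthetic, and the per-compartment effect sizes are purely illustrative assumptions, chosen so that the feedforward signal is strongest in the middle compartment. The sketch shows the core idea of training a cross-validated linear classifier on voxel patterns separately within each cortical depth compartment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def simulate_compartment(n_trials=80, n_voxels=50, signal=1.0):
    """Synthetic voxel patterns for two grating orientations in one depth compartment."""
    labels = np.repeat([0, 1], n_trials // 2)
    selectivity = rng.normal(size=n_voxels)       # orientation-selective voxel pattern
    data = rng.normal(size=(n_trials, n_voxels))  # measurement noise
    data += signal * np.outer(labels * 2 - 1, selectivity)
    return data, labels

# Hypothetical effect sizes per compartment (illustrative only): the perception
# (feedforward) signal is assumed strongest in the middle compartment.
accuracy = {}
for depth, strength in [("superficial", 0.2), ("middle", 1.0), ("deep", 0.2)]:
    X, y = simulate_compartment(signal=strength)
    accuracy[depth] = cross_val_score(
        LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{depth:11s} decoding accuracy: {accuracy[depth]:.2f}")
```

In the actual studies the same logic would be applied to voxels binned by cortical depth from 7T fMRI, with separate readouts for the perception and working memory conditions.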

    Resolving the time course of visual and auditory object categorization

    Humans can effortlessly categorize objects, both when they are conveyed through visual images and through spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory and visual modalities. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code. NEW & NOTEWORTHY Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process has mainly been studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and that, similar to vision, auditory representations initially carry information about individual objects, which is followed by a subsequent representation of the objects' category membership.
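The time-resolved multivariate analysis mentioned above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the EEG epochs are synthetic, and the onset time and effect size are invented assumptions. The core idea is that a separate classifier is trained at every time point, yielding a decoding time course for object category.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 60, 32, 40
labels = np.repeat([0, 1], n_trials // 2)  # two object categories

# Synthetic epochs (trials x channels x time): category information is
# injected only from time index 15 onward, mimicking a post-onset effect.
epochs = rng.normal(size=(n_trials, n_channels, n_times))
pattern = rng.normal(size=n_channels)  # category-selective scalp topography
epochs[:, :, 15:] += 0.8 * np.outer(labels * 2 - 1, pattern)[:, :, None]

# Decode category separately at each time point with 5-fold cross-validation.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print(f"pre-onset accuracy ~{accuracy[:15].mean():.2f}, "
      f"post-onset ~{accuracy[15:].mean():.2f}")
```

The resulting accuracy time course hovers at chance before the injected onset and rises above it afterward, which is the signature such analyses look for in real EEG data.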

    Low-frequency oscillations track the contents of visual perception and mental imagery

    No full text

    Essential considerations for exploring visual working memory storage in the human brain

    No full text
    Visual working memory (VWM) relies on a distributed cortical network. Yet, the extent to which individual cortical areas, like early visual cortex and intraparietal sulcus, are essential to VWM storage remains debated. Here, we reanalyze key datasets from two independent labs to address three topics at the forefront of current-day VWM research: Resiliency of mnemonic representations against visual distraction, the role of attentional priority in memory, and brain–behavior relationships. By utilizing different analysis approaches, each designed to test different aspects of mnemonic coding, our results provide a comprehensive perspective on the role of early visual and intraparietal areas. We emphasize the importance of analysis choices, and how a thorough understanding of the principles they test is crucial for unraveling the distributed mechanisms of VWM. Consequently, we caution against the idea of a singular essential storage area, which could limit our comprehension of the VWM system

    Dataset and analysis script: A different kind of pain: affective valence of errors and incongruence

    No full text
    Dataset and analysis script from the paper: Ivanchei et al. (2018). A different kind of pain: affective valence of errors and incongruence.

    Data and script: Blame everyone: Error-related devaluation in Eriksen flanker task

    No full text
    Data and script from the paper below. You can use or modify the data as you wish as long as you cite the original paper: Chetverikov, A., Iamshchinina, P., Begler, A., Ivanchei, I., Filippova, M., & Kuvaldina, M. (2017). Blame everyone: Error-related devaluation in Eriksen flanker task. Acta Psychologica, 180, 155-159. https://doi.org/10.1016/j.actpsy.2017.09.00

    Markers of error, conflict, and inhibition review table

    No full text
    The file "Markers of error, conflict, and inhibition review table.xlsx" is a literature review for the project "Attentional selection in the situation of cognitive conflict: dissociation between markers of error and error awareness". Project page at ResearchGate: https://www.researchgate.net/project/Attentional-selection-in-the-situation-of-cognitive-conflict-dissociation-between-markers-of-error-and-error-awareness. The "DATA" sheet contains a structured review of 128 papers published from 1977 (one paper) to 2016. Descriptions of the data, variables, and license are on the "METADATA" sheet. If you have any questions, please feel free to contact me (contact details can be found on the "METADATA" sheet).
