257 research outputs found

    Alpha-frequency feedback to early visual cortex orchestrates coherent naturalistic vision

    During naturalistic vision, the brain generates coherent percepts by integrating sensory inputs scattered across the visual field. Here, we asked whether this integration process is mediated by rhythmic cortical feedback. In electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) experiments, we experimentally manipulated integrative processing by changing the spatiotemporal coherence of naturalistic videos presented across visual hemifields. Our EEG data revealed that information about incoherent videos is coded in feedforward-related gamma activity, while information about coherent videos is coded in feedback-related alpha activity, indicating that integration is indeed mediated by rhythmic activity. Our fMRI data identified scene-selective cortex and the human middle temporal complex (hMT) as likely sources of this feedback. Analytically combining our EEG and fMRI data further revealed that feedback-related representations in the alpha band shape the earliest stages of visual processing in cortex. Together, our findings indicate that the construction of coherent visual experiences relies on cortical feedback rhythms that fully traverse the visual hierarchy.
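
    As a rough illustration of the kind of band-limited decoding this abstract describes (not the authors' actual pipeline), the sketch below classifies coherent versus incoherent video trials from alpha-band and gamma-band EEG power; the simulated arrays, band edges, and logistic-regression classifier are assumptions made for demonstration.

```python
# Illustrative sketch only: decode video coherence from band-limited EEG power.
# Simulated data, band edges, and classifier are assumptions, not the study's pipeline.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def band_power(epochs, sfreq, lo, hi):
    """Band-pass filter epochs (trials x channels x time) and return the
    trial-wise mean Hilbert-envelope power per channel."""
    sos = butter(4, [lo, hi], btype="band", fs=sfreq, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))
    return (envelope ** 2).mean(axis=-1)           # trials x channels

sfreq = 500.0                                       # assumed sampling rate (Hz)
epochs = np.random.randn(200, 64, 1000)             # placeholder EEG: trials x channels x samples
labels = np.repeat([0, 1], 100)                     # 0 = incoherent videos, 1 = coherent videos

for band, (lo, hi) in {"alpha": (8, 12), "gamma": (60, 90)}.items():
    X = band_power(epochs, sfreq, lo, hi)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    print(f"{band}-band decoding accuracy: {acc.mean():.2f}")
```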

    High frequency oscillations as a correlate of visual perception

    NOTICE: this is the author's version of a work that was accepted for publication in the International Journal of Psychophysiology. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in the International Journal of Psychophysiology, 79(1), 2011, DOI 10.1016/j.ijpsycho.2010.07.004. Peer reviewed postprint.

    Encoding of continuous perceptual choices in human early visual cortex

    Introduction: Research on the neural mechanisms of perceptual decision-making has typically focused on simple categorical choices, say between two alternative motion directions. Studies on such discrete alternatives have often suggested that choices are encoded either in a motor-based or in an abstract, categorical format in regions beyond sensory cortex. Methods: In this study, we used motion stimuli that could vary anywhere between 0 and 360 degrees to assess how the brain encodes choices for features that span the full sensory continuum. We employed a combination of neuroimaging and encoding models based on Gaussian process regression to assess how either stimuli or choices were encoded in brain responses. Results: We found that single-voxel tuning patterns could be used to reconstruct the trial-by-trial physical direction of motion as well as the participants' continuous choices. Importantly, these continuous choice signals were primarily observed in early visual areas. The tuning properties in this region generalized between choice encoding and stimulus encoding, even for reports that reflected pure guessing. Discussion: We found little information related to the decision outcome in regions beyond visual cortex, such as parietal cortex, possibly because our task did not involve differential motor preparation. This could suggest that decisions about continuous stimuli can take place already in sensory brain regions, potentially using mechanisms similar to the sensory recruitment observed in visual working memory.
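
    The abstract's encoding-model approach can be illustrated, very loosely, with the sketch below: Gaussian process regression maps simulated voxel patterns onto the sine and cosine of a circular motion direction, from which trial-wise directions (or choices) are reconstructed. The simulated tuning data and the sin/cos parameterization are assumptions, not the study's actual model.

```python
# Hedged illustration: reconstruct a continuous (circular) direction from voxel
# patterns with Gaussian process regression. Simulated data; not the paper's model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_trials, n_voxels = 300, 50
theta = rng.uniform(0, 2 * np.pi, n_trials)        # true direction per trial (rad)
pref = rng.uniform(0, 2 * np.pi, n_voxels)         # assumed voxel preferred directions
X = np.cos(theta[:, None] - pref[None, :]) + 0.5 * rng.standard_normal((n_trials, n_voxels))

# Regress sine and cosine of the angle to respect circularity, then recover the angle.
train, test = np.arange(200), np.arange(200, 300)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp.fit(X[train], np.column_stack([np.sin(theta[train]), np.cos(theta[train])]))
pred = gp.predict(X[test])
theta_hat = np.arctan2(pred[:, 0], pred[:, 1]) % (2 * np.pi)

circ_err = np.angle(np.exp(1j * (theta_hat - theta[test])))    # wrapped angular error
print(f"mean absolute angular error: {np.degrees(np.abs(circ_err)).mean():.1f} deg")
```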

    Human self-motion perception

    Studying neural selectivity for motion using high-field fMRI

    Functional magnetic resonance imaging (fMRI) offers a number of opportunities to non-invasively study the properties of the human visual system. Advances in scanner technology, particularly the development of high-field scanners, allow improvements in fMRI such as higher resolution and a higher signal-to-noise ratio (SNR). We aimed to examine what these advances in scanner technology, combined with novel analysis techniques, can tell us about the processing of motion stimuli in the human visual cortex. In Chapter 3 we investigated whether high-resolution fMRI allows us to directly study motion-selective responses in MT+. We used event-related and adaptation methods to examine selectivity for coherent motion and selectivity for direction of motion, and examined the potential limitations of these techniques. One particular analysis technique that has been developed in recent years uses multivariate methods to classify patterns of activity from visual cortex. In Chapter 4 we investigated these methods for classifying direction of motion, in particular whether successful classification is based on fine-scale information, such as the arrangement of direction-selective columns, or on a global signal at a coarser scale. In Chapter 5 we investigated multivariate classification of non-translational motion (e.g. rotation) to see how this compared to the classification of translational motion. The processing of such stimuli has been suggested to be free from the large-scale signals that may be involved with other stimuli, making them a potentially more powerful tool for studying the neural architecture of visual cortex. Chapter 6 investigated the processing of plaid motion stimuli, specifically 'pattern' motion selectivity in MT+ as opposed to 'component' motion selectivity. These experiments highlight the usefulness of multivariate methods even when the scale of the underlying signal is unknown.
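
    To make the multivariate classification idea concrete, here is a small, hypothetical example: a linear classifier decodes one of four motion directions from simulated voxel patterns, cross-validated across scanning runs. The data, classifier, and leave-one-run-out scheme are assumptions, not the analyses reported in the thesis.

```python
# Minimal sketch of multivariate classification of motion direction from fMRI
# voxel patterns. Everything here is simulated for illustration.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, GroupKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_runs, trials_per_run, n_voxels = 8, 16, 200
directions = np.tile(np.repeat([0, 1, 2, 3], 4), n_runs)   # 4 motion directions per run
runs = np.repeat(np.arange(n_runs), trials_per_run)        # run label for each trial

# Simulated voxel patterns carrying a weak direction-dependent signal.
signal = rng.standard_normal((4, n_voxels))
X = signal[directions] * 0.3 + rng.standard_normal((n_runs * trials_per_run, n_voxels))

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
acc = cross_val_score(clf, X, directions, groups=runs, cv=GroupKFold(n_splits=n_runs))
print(f"direction decoding accuracy: {acc.mean():.2f} (chance = 0.25)")
```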

    Neural Coding of Real and Implied Motion in the Human Brain

    Perceiving and processing visual motion is crucial for all animals, including humans. Regions of the human brain that are responsive to real motion have been extensively studied with different neuroimaging methods. However, the neural codes related to real motion have primarily been addressed using highly reductionist, largely artificial motion stimuli, most often so-called random dot kinematograms. Studies using more natural forms of motion, which the brain evolved and developed to deal with, are comparatively rare. Moreover, real, physical motion is not the only type of stimulus that induces motion perception in humans. Implied motion stimuli also induce motion perception, even though they carry no physical motion information. Implied motion stimuli are, for example, still images containing a snapshot of an object in motion. Various contextual cues mediate the percept of motion, including the relation of the object to its background and, in particular, the object's composition and its axial position in the image, which convey both the impression of implied motion and its direction. This means that, at the neural level, object processing must be used to generate the implied motion percept. The work described in this thesis investigated the neural coding of real and implied motion in the human brain. The investigation used functional brain imaging of human adults, and data were collected with a 3-Tesla MRI scanner while the participants viewed a variety of distinct visual stimuli. The visual stimuli contained directional real and implied motion and were created specifically for this study. For real motion stimuli, the aim was to engage a maximal number of directionally selective units, in order to maximize the overlap with the subset of units potentially involved in coding implied motion. Hence, real motion stimuli were created such that the static component frames had natural image statistics (known to activate neurons more effectively), by using Fourier-scrambled natural images, and motion was presented at a wide range of velocities. Similarly, implied motion stimuli were derived from photographs of natural scenes. They were created by placing objects such as airplanes, birds, cars, or snapshots of walking humans on a set of contextual background images such as skylines or streets. For both real motion and implied motion, stimuli were created for four directions: forwards, backwards, leftwards, and rightwards.

    Direct evidence for encoding of motion streaks in human visual cortex

    Temporal integration in the visual system causes fast-moving objects to generate static, oriented traces ('motion streaks'), which could be used to help judge direction of motion. While human psychophysics and single-unit studies in non-human primates are consistent with this hypothesis, direct neural evidence from the human cortex is still lacking. First, we provide psychophysical evidence that faster and slower motions are processed by distinct neural mechanisms: faster motion raised human perceptual thresholds for static orientations parallel to the direction of motion, whereas slower motion raised thresholds for orthogonal orientations. We then used functional magnetic resonance imaging to measure brain activity while human observers viewed either fast ('streaky') or slow random dot stimuli moving in different directions, or corresponding static-oriented stimuli. We found that local spatial patterns of brain activity in early retinotopic visual cortex reliably distinguished between static orientations. Critically, a multivariate pattern classifier trained on brain activity evoked by these static stimuli could then successfully distinguish the direction of fast ('streaky') but not slow motion. Thus, signals encoding static-oriented streak information are present in human early visual cortex when viewing fast motion. These experiments show that motion streaks are present in the human visual system for faster motion. This work was supported by the Wellcome Trust (G.R., D.S.S.), the European Union 'Mindbridge' project (B.B.), the Australian Federation of Graduate Women Tempe Mann Scholarship (D.A.), the University of Sydney Campbell Perry Travel Fellowship (D.A.) and the Brain Research Trust (C.K.).
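
    A minimal sketch of the cross-decoding logic described above (train a classifier on responses to static orientations, then test it on responses to motion) is given below with simulated data; the signal strengths and the linear classifier are assumptions, arranged so that only fast motion carries a streak-like orientation signal in this toy example.

```python
# Toy cross-decoding example: an orientation classifier trained on static
# stimuli is tested on fast and slow motion. All data are simulated.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 150
orientation_axis = rng.standard_normal((2, n_voxels))      # patterns for two orientations

def simulate(labels, signal_strength):
    """Voxel responses = orientation pattern scaled by signal strength + noise."""
    return orientation_axis[labels] * signal_strength + rng.standard_normal((len(labels), n_voxels))

labels = np.tile([0, 1], n_trials // 2)
X_static = simulate(labels, 0.6)        # static oriented stimuli
X_fast   = simulate(labels, 0.5)        # fast motion: streaks carry an orientation-like signal
X_slow   = simulate(labels, 0.0)        # slow motion: no streak signal

clf = LinearSVC(max_iter=5000).fit(X_static, labels)
print("fast-motion direction from orientation classifier:", clf.score(X_fast, labels))
print("slow-motion direction from orientation classifier:", clf.score(X_slow, labels))
```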

    Rats spontaneously perceive global motion direction of drifting plaids

    Computing the global motion direction of extended visual objects is a hallmark of primate high-level vision. Although neurons selective for global motion have also been found in mouse visual cortex, it remains unknown whether rodents can combine multiple motion signals into global, integrated percepts. To address this question, we trained two groups of rats to discriminate either gratings (G group) or plaids (i.e., superpositions of gratings with different orientations; P group) drifting horizontally along opposite directions. After the animals learned the task, we applied a visual priming paradigm, where presentation of the target stimulus was preceded by the brief presentation of either a grating or a plaid. The extent to which rat responses to the targets were biased by such prime stimuli provided a measure of the spontaneous, perceived similarity between primes and targets. We found that gratings and plaids, when used as primes, were equally effective at biasing the perception of plaid direction for the rats of the P group. Conversely, for the G group, only the gratings acted as effective prime stimuli, while the plaids failed to alter the perception of grating direction. To interpret these observations, we simulated a decision neuron reading out the representations of gratings and plaids, as conveyed by populations of either component or pattern cells (i.e., local or global motion detectors). We concluded that the findings for the P group are highly consistent with the existence of a population of pattern cells playing a functional role similar to that demonstrated in primates. We also explored different scenarios that could explain the failure of the plaid stimuli to elicit a sizable priming effect in the G group. These simulations yielded testable predictions about the properties of motion representations in rodent visual cortex at the single-cell and circuit level, thus paving the way for future neurophysiology experiments.
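
    In the spirit of the decision-readout simulation described here (but not reproducing it), the toy model below lets populations of direction-tuned 'component' and 'pattern' units respond to a grating and to a plaid, and reads out where each population response peaks; the tuning function, the 120-degree plaid cross-angle, and the readout rule are all assumptions.

```python
# Toy component-vs-pattern population readout for a grating and a plaid.
# Tuning shape, plaid cross-angle, and readout are illustrative assumptions.
import numpy as np

prefs = np.arange(0, 360, 10)                        # preferred directions of the units (deg)

def tuning(direction_deg, kappa=4.0):
    """Von Mises direction tuning of the whole population to one motion direction."""
    return np.exp(kappa * (np.cos(np.deg2rad(prefs - direction_deg)) - 1))

def population_response(global_dir, components, cell_type):
    if cell_type == "pattern":
        return tuning(global_dir)                    # pattern cells signal the global motion
    return sum(tuning(c) for c in components)        # component cells signal each grating

# A rightward grating (0 deg) vs. a rightward plaid whose gratings drift at +/- 60 deg.
stimuli = {"grating": (0, [0]), "plaid": (0, [-60, 60])}

for cell_type in ("component", "pattern"):
    for name, (global_dir, components) in stimuli.items():
        resp = population_response(global_dir, components, cell_type)
        print(f"{cell_type:9s} | {name:7s} -> population peak at {prefs[np.argmax(resp)]} deg")
```

    In this toy example the component population's response to the plaid peaks at one of the grating directions, while the pattern population's response peaks at the plaid's global direction, mirroring the local versus global distinction drawn in the abstract.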