
    fMRI Analysis-by-Synthesis Reveals a Dorsal Hierarchy That Extracts Surface Slant.

    The brain's skill in estimating the 3-D orientation of viewed surfaces supports a range of behaviors, from placing an object on a nearby table to planning the best route when hill walking. This ability relies on integrating depth signals across extensive regions of space that exceed the receptive fields of early sensory neurons. Although hierarchical selection and pooling are central to our understanding of the ventral visual pathway, the successive operations in the dorsal stream are poorly understood. Here we use computational modeling of human fMRI signals to probe the computations that extract 3-D surface orientation from binocular disparity. To understand how representations evolve across the hierarchy, we developed an inference approach that uses a series of generative models to explain the empirical fMRI data in different cortical areas. Specifically, we simulated the responses of candidate visual processing algorithms and tested how well they explained fMRI responses. We thereby demonstrate a hierarchical refinement of visual representations, moving from the representation of edges and figure-ground segmentation (V1, V2) to spatially extensive disparity gradients in V3A. We show that responses in V3A are little affected by low-level image covariates and show partial tolerance to overall depth position. Finally, we show that responses in V3A parallel perceptual judgments of slant. This reveals a relatively short computational hierarchy that captures key information about the 3-D structure of nearby surfaces and, more generally, demonstrates an analysis approach that may be of merit in a diverse range of brain imaging domains. This project was supported by the Wellcome Trust (095183/Z/10/Z) and the Japan Society for the Promotion of Science (H22.290 and KAKENHI 26870911). This is the final published version. It first appeared at http://www.jneurosci.org/content/35/27/9823
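    The abstract above describes comparing simulated responses of candidate processing models against measured fMRI responses. The sketch below illustrates that general logic with a representational-similarity-style comparison; the model names, synthetic data, and scoring function are placeholders and do not reproduce the authors' actual inference procedure.

```python
# A minimal sketch (not the authors' pipeline) of the model-comparison logic:
# simulate responses of candidate processing models to the stimuli shown in
# the scanner, then ask which model's representational structure best matches
# the fMRI response patterns in a region of interest (ROI).
# All names and data below are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

def dissimilarity(responses):
    """Pairwise (1 - correlation) dissimilarity between stimulus responses."""
    r = np.corrcoef(responses)
    return 1.0 - r[np.triu_indices_from(r, k=1)]

# Simulated responses of three candidate models to 20 stimuli
# (rows = stimuli, columns = model units); stand-ins for real simulations
# of edge, figure-ground, and disparity-gradient models.
models = {
    "edge_model": rng.normal(size=(20, 50)),
    "figure_ground_model": rng.normal(size=(20, 50)),
    "slant_gradient_model": rng.normal(size=(20, 50)),
}

# fMRI multi-voxel response patterns to the same 20 stimuli in one ROI
roi_patterns = rng.normal(size=(20, 200))
roi_rdm = dissimilarity(roi_patterns)

# Score each model by how well its dissimilarity structure predicts the ROI's
for name, resp in models.items():
    model_rdm = dissimilarity(resp)
    score = np.corrcoef(model_rdm, roi_rdm)[0, 1]
    print(f"{name}: r = {score:.2f}")
```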

    Mapping the visual brain areas susceptible to phosphene induction through brain stimulation.

    Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation technique whose effects on neural activity can be uncertain. Within the visual cortex, phosphenes are a useful marker of TMS: they indicate the induction of neural activation that propagates and creates a conscious percept. However, we currently do not know how susceptible different areas of the visual cortex are to TMS-induced phosphenes. In this study, we systematically mapped out locations in the visual cortex where stimulation triggered phosphenes. We related this map to the retinotopic organization and to the locations of object- and motion-selective areas, identified by functional magnetic resonance imaging (fMRI) measurements. Our results show that TMS can reliably induce phosphenes in early (V1, V2d, and V2v) and dorsal (V3d and V3A) visual areas close to the interhemispheric cleft. However, phosphenes are less likely at more lateral locations (hMT+/V5 and LOC). This suggests that early and dorsal visual areas are particularly amenable to TMS and that TMS can be used to probe the functional role of these areas. This study was funded by the European Community’s Seventh Framework Programme (FP7/2007-2013) under agreement PITN-GA-2011-290011 and the Wellcome Trust (095183/Z/10/Z). This is the final version of the article. It first appeared from Springer via https://doi.org/10.1007/s00221-016-4784-
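    As a rough illustration of the mapping analysis described above, the sketch below tallies how often stimulation of sites overlying each fMRI-defined visual area elicited a phosphene report; the trial data and counts are hypothetical placeholders, not the study's results.

```python
# A minimal sketch, with hypothetical data, of summarising phosphene mapping:
# for each stimulation site (labelled by the fMRI-defined visual area it
# overlies), tally how often TMS pulses elicited a reported phosphene.

from collections import defaultdict

# (visual_area, phosphene_reported) per stimulation trial -- placeholder data
trials = [
    ("V1", True), ("V1", True), ("V2d", True), ("V2v", False),
    ("V3d", True), ("V3A", True), ("hMT+/V5", False), ("LOC", False),
]

counts = defaultdict(lambda: [0, 0])   # area -> [phosphene trials, total trials]
for area, phosphene in trials:
    counts[area][0] += int(phosphene)
    counts[area][1] += 1

for area, (hits, total) in counts.items():
    print(f"{area}: {hits}/{total} trials elicited a phosphene "
          f"({100 * hits / total:.0f}%)")
```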

    fMRI Activity in Posterior Parietal Cortex Relates to the Perceptual Use of Binocular Disparity for Both Signal-In-Noise and Feature Difference Tasks.

    Visually guided action and interaction depend on the brain's ability to (a) extract and (b) discriminate meaningful targets from complex retinal inputs. Binocular disparity is known to facilitate this process, and it is an open question how activity in different parts of the visual cortex relates to these fundamental visual abilities. Here we examined fMRI responses related to performance on two different tasks (a signal-in-noise "coarse" task and a feature-difference "fine" task) that have been widely used in previous work and are believed to differentially target the visual processes of signal extraction and feature discrimination. We used multi-voxel pattern analysis to decode depth positions (near vs. far) from the fMRI activity evoked while participants were engaged in these tasks. To look for similarities between perceptual judgments and brain activity, we constructed 'fMR-metric' functions that describe decoding performance as a function of signal magnitude. We then compared fMR-metric and psychometric functions, and report an association between judged depth and fMRI responses in the posterior parietal cortex during performance on both tasks. This highlights common stages of processing during perceptual performance on these tasks. This is the final version of the article. It first appeared from PLOS via http://dx.doi.org/10.1371/journal.pone.014069
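    The sketch below illustrates the decoding logic described above: classify near versus far depth from voxel patterns at several signal levels, then fit a cumulative-Gaussian 'fMR-metric' function to accuracy as a function of signal magnitude. The data are synthetic and the classifier and fitting choices are illustrative assumptions, not the published pipeline.

```python
# A minimal sketch of decoding near vs. far depth from multi-voxel patterns at
# several signal levels, then summarising decoding accuracy as a function of
# signal magnitude (an "fMR-metric" function). All data are synthetic.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
signal_levels = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # e.g. proportion signal dots
n_trials, n_voxels = 80, 100

accuracies = []
for s in signal_levels:
    labels = rng.integers(0, 2, n_trials)               # 0 = near, 1 = far
    # Voxel patterns: noise plus a label-dependent shift that grows with signal
    patterns = rng.normal(size=(n_trials, n_voxels))
    patterns[:, :10] += s * 3.0 * (2 * labels[:, None] - 1)
    acc = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5).mean()
    accuracies.append(acc)

# Fit a cumulative-Gaussian "fMR-metric" function to accuracy vs. signal level
def fmr_metric(x, mu, sigma):
    return 0.5 + 0.5 * norm.cdf(x, mu, sigma)

(mu, sigma), _ = curve_fit(fmr_metric, signal_levels, accuracies,
                           p0=[0.2, 0.2], bounds=([0.0, 1e-3], [1.0, 5.0]))
print(f"decoding threshold (mu) ~ {mu:.2f}, slope parameter (sigma) ~ {sigma:.2f}")
```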

    Perceptual learning of second order cues for layer decomposition.

    Luminance variations are ambiguous: they can signal changes in surface reflectance or changes in illumination. Layer decomposition, the process of distinguishing between reflectance and illumination changes, is supported by a range of secondary cues including colour and texture. For an illuminated, corrugated, textured surface, the shading pattern comprises modulations of luminance (first-order, LM) and of local luminance amplitude (second-order, AM). The phase relationship between these two signals enables layer decomposition, predicts the perception of reflectance and illumination changes, and has been modelled based on early, fast, feed-forward visual processing (Schofield et al., 2010). However, while inexperienced viewers appreciate this scission at long presentation times, they cannot do so at short presentation durations (250 ms). This might suggest the action of slower, higher-level mechanisms. Here we consider how training attenuates this delay, and whether the resultant learning occurs at a perceptual level. We trained observers over a period of 5 days to discriminate the components of plaid stimuli that mixed in-phase and anti-phase LM/AM signals. After training, the strength of the AM signal needed to differentiate the plaid components fell dramatically, indicating learning. We tested for transfer of learning using stimuli with different spatial frequencies, in-plane orientations, and acutely angled plaids. We report that learning transfers only partially when the stimuli are changed, suggesting that the benefits accrue from tuning specific mechanisms rather than from general interpretative processes. We suggest that the mechanisms which support layer decomposition using second-order cues are relatively early, and not inherently slow.
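    A threshold of the kind reported here (the AM signal strength needed to differentiate the plaid components) is often estimated adaptively. The sketch below shows one such procedure, a 2-down-1-up staircase run against a simulated observer; the observer model and all parameter values are hypothetical, not the study's method.

```python
# A minimal sketch (synthetic observer, hypothetical parameters) of estimating
# an AM-signal discrimination threshold with a 2-down-1-up staircase, as one
# might do when tracking performance before and after training.

import numpy as np

rng = np.random.default_rng(2)

def simulated_observer(am_strength, true_threshold=0.1, slope=20.0):
    """Probability correct rises with AM strength (illustrative observer)."""
    p_correct = 1.0 - 0.5 * np.exp(-slope * max(am_strength - true_threshold, 0.0))
    return rng.random() < p_correct

am = 0.5                     # starting AM modulation strength
step = 0.05
correct_streak = 0
direction = 0                # +1 = last change made task easier, -1 = harder
reversal_values = []

while len(reversal_values) < 10:
    if simulated_observer(am):
        correct_streak += 1
        if correct_streak == 2:           # two correct in a row -> decrease AM
            correct_streak = 0
            if direction == +1:
                reversal_values.append(am)
            direction = -1
            am = max(am - step, 0.0)
    else:                                 # one error -> increase AM
        correct_streak = 0
        if direction == -1:
            reversal_values.append(am)
        direction = +1
        am += step

# Average the last reversals as the threshold estimate (tracks ~70.7% correct)
print(f"estimated AM threshold ~ {np.mean(reversal_values[-6:]):.3f}")
```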

    Adaptation to binocular anticorrelation results in increased neural excitability

    Throughout the brain, information from individual sources converges onto higher-order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Some neurons appear tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause, i.e., to establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, some binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioural evidence supporting the existence of these neurons (Cumming & Parker, 1997; Janssen, Vogels, Liu, & Orban, 2003; Katyal, Vergeer, He, He, & Engel, 2018; Kingdom, Jennings, & Georgeson, 2018; Tsao, Conway, & Livingstone, 2003), their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers’ steady-state visually evoked potentials (SSVEPs) in response to changes in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger SSVEPs, while adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting that ‘what not’ neurons play a suppressive role in supporting stereopsis (Goncalves & Welchman, 2017); that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression, resulting in increased neural excitability. This work was supported by the Leverhulme Trust (ECF-2017-573 to R. R.), the Isaac Newton Trust (17.08(o) to R. R.), and the Wellcome Trust (095183/Z/10/Z to A. E. W. and 206495/Z/17/Z to E. M.).
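    The SSVEP amplitude comparison described above can be illustrated with a simple frequency-domain measure: the Fourier amplitude of the EEG at the depth-alternation frequency, compared across adaptation conditions. The sketch below uses synthetic signals and placeholder frequencies, not the study's recordings or analysis code.

```python
# A minimal sketch (synthetic signal, hypothetical parameters) of extracting an
# SSVEP amplitude: take the Fourier amplitude of an EEG epoch at the frequency
# of the depth-change stimulation and compare it across adaptation conditions.

import numpy as np

fs = 500.0                     # sampling rate (Hz), illustrative
stim_freq = 2.0                # depth-alternation frequency (Hz), illustrative
t = np.arange(0, 10, 1 / fs)   # 10-s epoch

def ssvep_amplitude(eeg, fs, freq):
    """Amplitude-spectrum value at the stimulation frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

rng = np.random.default_rng(3)
# Placeholder epochs: larger 2-Hz response after anticorrelated adaptation
eeg_after_ards = 1.5 * np.sin(2 * np.pi * stim_freq * t) + rng.normal(size=t.size)
eeg_after_crds = 1.0 * np.sin(2 * np.pi * stim_freq * t) + rng.normal(size=t.size)

print("after anticorrelated-RDS adaptation:",
      round(ssvep_amplitude(eeg_after_ards, fs, stim_freq), 2))
print("after correlated-RDS adaptation:    ",
      round(ssvep_amplitude(eeg_after_crds, fs, stim_freq), 2))
```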

    Brightness masking is modulated by disparity structure.

    The luminance contrast at the borders of a surface strongly influences the surface's apparent brightness, as demonstrated by a number of classic visual illusions. Such phenomena are compatible with a propagation mechanism believed to spread contrast information from the borders to the interior. This process is disrupted by masking, whereby the perceived brightness of a target is reduced by the brief presentation of a mask (Paradiso & Nakayama, 1991), but the exact visual stage at which this happens remains unclear. In the present study, we examined whether brightness masking occurs at a monocular or a binocular level of the visual hierarchy. We used backward masking, whereby a briefly presented target stimulus is disrupted by a mask presented soon afterwards, to show that brightness masking is affected by binocular stages of visual processing. We manipulated the 3-D configurations (slant direction) of the target and mask and measured the differential disruption that masking causes on brightness estimation. We found that the masking effect was weaker when the stimuli had different slants. We suggest that brightness masking is partly mediated by mid-level neuronal mechanisms, at a stage where binocular disparity edge structure has been extracted. This project was supported by fellowships to H.B. from the Japan Society for the Promotion of Science, JSPS KAKENHI (26870911), and to A.E.W. from the Wellcome Trust (095183/Z/10/Z). This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.visres.2015.02.01
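    One simple way to express the effect reported above is a masking index: the reduction in matched target brightness caused by the mask, computed separately for same-slant and different-slant configurations. The sketch below uses placeholder numbers purely to show the calculation, not the study's data.

```python
# A minimal sketch (placeholder numbers) of quantifying the masking effect:
# compare matched brightness of the target with and without a mask, separately
# for masks sharing the target's slant and masks with a different slant.

import numpy as np

# Matched brightness (arbitrary units) per observer in each condition
baseline        = np.array([50.1, 49.8, 50.3, 50.0])   # target alone
same_slant      = np.array([42.0, 41.5, 43.2, 40.9])   # mask with same 3-D slant
different_slant = np.array([46.8, 47.5, 46.1, 47.0])   # mask with different slant

def masking_effect(masked, unmasked):
    """Reduction in matched brightness caused by the mask (positive = masking)."""
    return unmasked - masked

print("same-slant masking:     ", masking_effect(same_slant, baseline).mean())
print("different-slant masking:", masking_effect(different_slant, baseline).mean())
```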

    Integration of texture and disparity cues to surface slant in dorsal visual cortex.

    Reliable estimation of three-dimensional (3D) surface orientation is critical for recognizing and interacting with complex 3D objects in our environment. Human observers maximize the reliability of their estimates of surface slant by integrating multiple depth cues. Texture and binocular disparity are two such cues, but they are qualitatively very different. Existing evidence suggests that representations of surface tilt from each of these cues coincide at the single-neuron level in higher cortical areas. However, the cortical circuits responsible for (1) integrating such qualitatively distinct cues and (2) encoding the slant component of surface orientation have not been assessed. We tested for cortical responses related to slanted-plane stimuli that were defined independently by texture, by disparity, and by combinations of these two cues. We analyzed the discriminability of functional MRI responses to two slant angles using multivariate pattern classification. Responses in visual area V3B/KO to stimuli containing congruent cues were more discriminable than those elicited by single cues, in line with predictions based on the fusion of slant estimates from the component cues. This improvement was specific to congruent combinations of cues: incongruent cues yielded lower decoding accuracies, which suggests the robust use of individual cues in cases of large cue conflict. These data suggest that area V3B/KO is intricately involved in the integration of qualitatively dissimilar depth cues.
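    The fusion prediction referred to above is often tested by quadratic summation: if slant estimates from texture and disparity are fused, combined-cue sensitivity should approach the quadratic sum of the single-cue sensitivities. The sketch below applies that test to placeholder decoding accuracies; the accuracy-to-d' conversion is a common assumption, not necessarily the authors' exact analysis.

```python
# A minimal sketch of a quadratic-summation fusion test on decoding accuracies.
# Accuracies are placeholders; converting accuracy to d' with the unbiased
# two-class formula d' = 2 * Phi^-1(p) is one common (assumed) choice.

import numpy as np
from scipy.stats import norm

def dprime(accuracy):
    return 2.0 * norm.ppf(accuracy)

acc_texture   = 0.62   # decoding accuracy, texture-defined slant (placeholder)
acc_disparity = 0.66   # decoding accuracy, disparity-defined slant (placeholder)
acc_combined  = 0.74   # decoding accuracy, congruent combined cues (placeholder)

d_tex, d_disp, d_comb = map(dprime, (acc_texture, acc_disparity, acc_combined))
d_pred = np.sqrt(d_tex**2 + d_disp**2)    # quadratic-summation (fusion) prediction

print(f"observed combined d' = {d_comb:.2f}, fusion prediction = {d_pred:.2f}")
print("consistent with fusion" if d_comb >= d_pred else "below the fusion prediction")
```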