Adaptation to implied tilt: extensive spatial extrapolation of orientation gradients
To extract the global structure of an image, the visual system must integrate local orientation estimates across space. Progress is being made toward understanding this integration process, but very little is known about whether the presence of structure exerts a reciprocal influence on local orientation coding. We have previously shown that adaptation to patterns containing circular or radial structure induces tilt-aftereffects (TAEs), even in locations where the adapting pattern was occluded. These spatially "remote" TAEs have novel tuning properties and behave in a manner consistent with adaptation to the local orientation implied by the circular structure (but not physically present) at a given test location. Here, by manipulating the spatial distribution of local elements in noisy circular textures, we demonstrate that remote TAEs are driven by the extrapolation of orientation structure over remarkably large regions of visual space (more than 20°). We further show that these effects are not specific to adapting stimuli with polar orientation structure, but require a gradient of orientation change across space. Our results suggest that mechanisms of visual adaptation exploit orientation gradients to predict the local pattern content of unfilled regions of space
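The "implied" local orientation referred to above follows from simple geometry: for a concentric (circular) adapting pattern, the orientation implied at any visual-field location is the tangent to the circle passing through that point. A minimal sketch of that prediction (not code from the paper; the centre coordinates and units are hypothetical):

```python
import numpy as np

def implied_local_orientation(x, y, centre=(0.0, 0.0)):
    """Orientation (deg, 0-180) implied by concentric circular structure at
    visual-field location (x, y): the tangent to the circle through that point,
    i.e. perpendicular to the radial direction from the pattern centre."""
    radial = np.degrees(np.arctan2(y - centre[1], x - centre[0]))
    return (radial + 90.0) % 180.0

# A test location 20 deg to the right of the pattern centre is predicted to
# adapt to a vertical (90 deg) local orientation, even if that region was occluded.
print(implied_local_orientation(20.0, 0.0))  # -> 90.0
```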
Visual motion integration is mediated by directional ambiguities in local motion signals
The output of primary visual cortex (V1) is a piecemeal representation of the visual scene, and the response of any one cell cannot unambiguously guide sensorimotor behavior. It remains unresolved how subsequent stages of cortical processing combine ("pool") these early visual signals into a coherent representation. We (Webb et al., 2007, 2011) have shown that responses of human observers on a pooling task employing broadband, random-dot motion can be accurately predicted by decoding the maximum likelihood direction from a population of motion-sensitive neurons. In contrast, Amano et al. (2009) found that the vector average velocity of arrays of narrowband, two-dimensional (2-d) plaids predicts perceived global motion. To reconcile these different results, we designed two experiments in which we used 2-d noise textures moving behind spatially distributed apertures and measured the point of subjective equality between pairs of global noise textures. Textures in the standard stimulus moved rigidly in the same direction, whereas their directions in the comparison stimulus were sampled from a set of probability distributions. Human observers judged which noise texture had a more clockwise (CW) global direction. In agreement with Amano and colleagues, observers' perceived global motion coincided with the vector average stimulus direction. To test whether directional ambiguities in local motion signals governed perceived global direction, we manipulated the fidelity of the texture motion within each aperture. A proportion of the apertures contained texture that underwent rigid translation and the remainder contained dynamic (temporally uncorrelated) noise to create locally ambiguous motion. Perceived global motion matched the vector average when the majority of apertures contained rigid motion, but shifted toward the maximum likelihood direction as the level of dynamic noise increased. A class of population decoders utilizing power-law non-linearities can accommodate this flexible pooling
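As a rough illustration of the final sentence, a labelled-line population read-out with a power-law non-linearity can interpolate between a vector-average estimate (exponent near 1) and a winner-take-all, maximum-likelihood-like estimate (large exponent) of global direction. This is a toy sketch under that assumption, not the authors' decoder; the tuning parameters below are invented:

```python
import numpy as np

def powerlaw_decoder(directions_deg, responses, exponent=1.0):
    """Decode a global direction from a labelled population response.
    exponent = 1 gives a vector average; large exponents approach a
    winner-take-all read-out that tracks the strongest (peak) direction."""
    w = np.asarray(responses, dtype=float) ** exponent
    theta = np.radians(directions_deg)
    return np.degrees(np.arctan2(np.sum(w * np.sin(theta)),
                                 np.sum(w * np.cos(theta)))) % 360.0

# Toy population: preferred directions every 10 deg, responses peaked at 40 deg
# with a weaker secondary bump at 100 deg.
prefs = np.arange(0, 360, 10)
resp = (np.exp(-0.5 * ((prefs - 40) / 30) ** 2)
        + 0.5 * np.exp(-0.5 * ((prefs - 100) / 30) ** 2))
print(powerlaw_decoder(prefs, resp, exponent=1))   # close to the vector average
print(powerlaw_decoder(prefs, resp, exponent=10))  # pulled toward the dominant peak
```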
Criterion-free measurement of motion transparency perception at different speeds
Transparency perception often occurs when objects within the visual scene partially occlude each other or move simultaneously, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that gives rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out", three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of varying width removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. The transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception
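A minimal sketch of how the standard and comparison dot speeds described above might be generated: uniform sampling over a fixed range, with the comparison having a notch of a given width resampled out. The speed range, notch position, and dot count are hypothetical, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_speeds(n_dots, lo, hi, notch_centre=None, notch_width=0.0):
    """Sample dot speeds from a uniform distribution on [lo, hi].
    If a notch is specified, resample any speeds falling inside
    [notch_centre - w/2, notch_centre + w/2] so that band is empty."""
    speeds = rng.uniform(lo, hi, n_dots)
    if notch_centre is not None and notch_width > 0:
        half = notch_width / 2.0
        inside = np.abs(speeds - notch_centre) < half
        while inside.any():
            speeds[inside] = rng.uniform(lo, hi, inside.sum())
            inside = np.abs(speeds - notch_centre) < half
    return speeds

standard   = sample_speeds(100, lo=1.0, hi=9.0)                                 # deg/s, full range
comparison = sample_speeds(100, lo=1.0, hi=9.0, notch_centre=5.0, notch_width=2.0)  # notch removed
```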
Perceptual learning reconfigures the effects of visual adaptation
Our sensory experiences over a range of different timescales shape our perception of the environment. Two particularly striking short-term forms of plasticity with manifestly different time courses and perceptual consequences are those caused by visual adaptation and perceptual learning. Although conventionally treated as distinct forms of experience-dependent plasticity, their neural mechanisms and perceptual consequences have become increasingly blurred, raising the possibility that they might interact. To optimize our chances of finding a functionally meaningful interaction between learning and adaptation, we examined in humans the perceptual consequences of learning a fine discrimination task while adapting the neurons that carry most information for performing this task. Learning improved discriminative accuracy to a level that ultimately surpassed that in an unadapted state. This remarkable improvement came at a price: adapting directions that before learning had little effect elevated discrimination thresholds afterward. The improvements in discriminative accuracy grew quickly and surpassed unadapted levels within the first few training sessions, whereas the deterioration in discriminative accuracy had a different time course. This learned reconfiguration of adapted discriminative accuracy occurred without a concomitant change to the characteristic perceptual biases induced by adaptation, suggesting that the system was still in an adapted state. Our results point to a functionally meaningful push-pull interaction between learning and adaptation in which a gain in sensitivity in one adapted state is balanced by a loss of sensitivity in other adapted states
The effect of normal aging and age-related macular degeneration on perceptual learning
We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants performing more poorly. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of the number of trials completed, patients with AMD learned at a rate equivalent to that of age-matched participants with normal vision. Improvements in word identification speed were maintained for at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages, and that these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach
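The staircase procedure is only named above; as a generic illustration (not necessarily the authors' implementation), a 3-down/1-up staircase on presentation duration converges on roughly 79% correct word identification. The simulated observer and all parameter values below are hypothetical:

```python
import random

def simulated_observer(duration_ms, threshold_ms=120.0, slope=0.03):
    """Stand-in for a real trial: the probability of identifying the word
    correctly rises with presentation duration (logistic psychometric function)."""
    p = 1.0 / (1.0 + 2.718281828 ** (-slope * (duration_ms - threshold_ms)))
    return random.random() < p

def three_down_one_up(respond, start_ms=500.0, step_ms=50.0, floor_ms=17.0, n_reversals=8):
    """Minimal 3-down/1-up staircase on word presentation duration.
    `respond(duration_ms)` returns True for a correct identification."""
    duration, run, direction, reversals = start_ms, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(duration):
            run += 1
            if run == 3:                        # three correct in a row -> shorter duration
                run = 0
                if direction == +1:
                    reversals.append(duration)  # turning point
                direction = -1
                duration = max(floor_ms, duration - step_ms)
        else:                                   # any error -> longer duration
            run = 0
            if direction == -1:
                reversals.append(duration)
            direction = +1
            duration += step_ms
    return sum(reversals) / len(reversals)      # duration threshold estimate (ms)

print(three_down_one_up(simulated_observer))
```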
Perceptual learning reduces crowding in amblyopia and in the normal periphery
Amblyopia is a developmental visual disorder of cortical origin, characterized by crowding and poor acuity in central vision of the affected eye. Crowding refers to the adverse effect of surrounding items on object identification; in normal vision it is common in the periphery but not in central vision. We trained a group of adult human amblyopes on a crowded letter identification task to assess whether the crowding problem can be ameliorated. Letter size was fixed well above the acuity limit, and letter spacing was varied to obtain spacing thresholds for central target identification. Normally sighted observers practiced the same task in their lower peripheral visual field. Independent measures of acuity were taken in flanked and unflanked conditions before and after training to measure crowding ratios at three fixed letter separations. Practice improved the letter spacing thresholds of both groups on the training task, and crowding ratios were reduced at posttest. The reductions in crowding in amblyopes were associated with improvements in standard measures of visual acuity. Thus, perceptual learning reduced the deleterious effects of crowding in amblyopia and in the normal periphery. The results support the effectiveness of plasticity-based approaches for improving vision in adult amblyopes and suggest experience-dependent effects on the cortical substrates of crowding
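For readers unfamiliar with the crowding ratio mentioned above, it is commonly defined as the flanked acuity threshold divided by the unflanked threshold, so a value near 1 indicates little interference from flankers. A worked toy example (the threshold values are invented, not the study's data):

```python
def crowding_ratio(flanked_threshold, unflanked_threshold):
    """Flanked acuity threshold divided by unflanked threshold;
    values near 1.0 indicate little interference from flankers."""
    return flanked_threshold / unflanked_threshold

# e.g. flanked letter-size threshold of 10 arcmin vs unflanked 6 arcmin
print(crowding_ratio(10.0, 6.0))  # ~1.67 before training
print(crowding_ratio(7.0, 6.0))   # ~1.17 after training
```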
Linking Multi-Modal MRI to Clinical Measures of Visual Field Loss After Stroke
Loss of vision across large parts of the visual field is a common and devastating complication of cerebral strokes. In the clinic, this loss is quantified by measuring the sensitivity threshold across the field of vision using static perimetry. These methods rely on the ability of the patient to report the presence of lights in particular locations. While perimetry provides important information about the intactness of the visual field, the approach has some shortcomings. For example, it cannot distinguish where in the visual pathway the key processing deficit is located. In contrast, brain imaging can provide important information about anatomy, connectivity, and function of the visual pathway following stroke. In particular, functional magnetic resonance imaging (fMRI) and analysis of population receptive fields (pRF) can reveal mismatches between clinical perimetry and maps of cortical areas that still respond to visual stimuli after stroke (Papanikolaou et al., 2014). Here, we demonstrate how information from different brain imaging modalities (visual field maps derived from fMRI, lesion definitions from anatomical scans, and white matter tracts from diffusion-weighted MRI data) provides a more complete picture of vision loss. For any given location in the visual field, the combination of anatomical and functional information can help identify whether vision loss is due to absence of gray matter tissue or likely due to white matter disconnection from other cortical areas. We present a combined imaging acquisition and visual stimulus protocol, together with a description of the analysis methodology, and apply it to datasets from four stroke survivors with homonymous field loss (two with hemianopia, two with quadrantanopia). For researchers trying to understand recovery of vision after stroke and clinicians seeking to stratify patients into different treatment pathways, this approach combines multiple, convergent sources of data to characterize the extent of the stroke damage. We show that such an approach gives a more comprehensive measure of residual visual capacity in two particular respects: which locations in the visual field should be targeted and what kind of visual attributes are most suited for rehabilitation
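A much-simplified sketch of the pRF forward model underlying the fMRI analysis mentioned above (in the spirit of Dumoulin and Wandell-style mapping): each voxel is modelled as a 2-D Gaussian in visual-field coordinates, and its predicted time course is the overlap of that Gaussian with the stimulus aperture over time. HRF convolution is omitted and the parameter grid is hypothetical:

```python
import numpy as np

def prf_prediction(stim, xs, ys, x0, y0, sigma):
    """Predicted response time course for one voxel.
    stim: (n_time, n_y, n_x) binary aperture masks in visual-field coordinates;
    xs, ys: coordinate grids (deg) matching stim; (x0, y0, sigma): pRF centre and size."""
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    return stim.reshape(stim.shape[0], -1) @ g.ravel()

def fit_prf(stim, xs, ys, voxel_ts, candidates):
    """Grid search: return the (x0, y0, sigma) whose prediction best
    correlates with the measured voxel time course."""
    return max(candidates,
               key=lambda p: np.corrcoef(prf_prediction(stim, xs, ys, *p), voxel_ts)[0, 1])
```

Comparing the fitted pRF coverage against perimetry then highlights visual-field locations where cortex still responds despite a clinical scotoma, which is the mismatch the abstract describes.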
Position matching between the visual fields in strabismus
The misalignment of visual input in strabismus disrupts positional judgments. We measured positional accuracy in the extrafoveal visual field (18–78 eccentricity) of a large group of strabismic subjects and a normal control group to identify positional distortions associated with the direction of strabismus. Subjects performed a free localization task in which targets were matched in opposite hemifields whilst fixating on a central cross. The constant horizontal error of each response was taken as a measure of accuracy, in addition to radial and angular error. In monocular conditions, all stimuli were viewed by one eye; thus, the error reflected spatial bias. In dichoptic conditions, the targets were seen by separate eyes; thus, the error reflected the perceived stimulus shift produced by ocular misalignment in addition to spatial bias. In both viewing conditions, both groups showed reliable over- and underestimations of visual field position, here termed a compression of response coordinates. The normal group showed compression in the left periphery, regardless of eye of stimulation. The strabismic group showed a visual field-specific compression that was clearly associated with the direction of strabismus. The variation in perceived shift of strabismic subjects was largely accounted for by the biases present in monocular viewing, suggesting that binocular correspondence was uniform in the tested region. The asymmetric strabismic compression could not be reproduced in normal subjects through prism viewing, and its presence across viewing conditions suggests a hemifield-specific change in spatial coding induced by long-standing ocular misalignment
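A small sketch of the three error measures named above, computed from matched target and response coordinates relative to central fixation at (0, 0); this is illustrative only, not the authors' analysis code, and the example coordinates are invented:

```python
import numpy as np

def localisation_errors(target_xy, response_xy):
    """Per-trial target/response pairs in deg, with fixation at the origin.
    Returns mean constant horizontal error, radial error, and angular error."""
    t = np.asarray(target_xy, dtype=float)
    r = np.asarray(response_xy, dtype=float)
    horizontal = np.mean(r[:, 0] - t[:, 0])                               # signed x bias
    radial = np.mean(np.hypot(r[:, 0], r[:, 1]) - np.hypot(t[:, 0], t[:, 1]))
    ang = np.degrees(np.arctan2(r[:, 1], r[:, 0]) - np.arctan2(t[:, 1], t[:, 0]))
    angular = np.mean((ang + 180.0) % 360.0 - 180.0)                      # wrapped to +/-180 deg
    return horizontal, radial, angular

# e.g. a single left-hemifield target matched slightly too far to the right
print(localisation_errors([(-5.0, 2.0)], [(-4.5, 2.0)]))
```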
Distinct mechanisms govern recalibration to audio-visual discrepancies in remote and recent history
To maintain perceptual coherence, the brain corrects for discrepancies between the senses. If, for example, lights are consistently offset from sounds, representations of auditory space are remapped to reduce this error (spatial recalibration). While recalibration effects have been observed following both brief and prolonged periods of adaptation, the relative contribution of discrepancies occurring over these timescales is unknown. Here we show that distinct multisensory recalibration mechanisms operate in remote and recent history. To characterise the dynamics of this spatial recalibration, we adapted human participants to audio-visual discrepancies for different durations, from 32 to 256 seconds, and measured the aftereffects on perceived auditory location. Recalibration effects saturated rapidly but decayed slowly, suggesting a combination of transient and sustained adaptation mechanisms. When long-term adaptation to an audio-visual discrepancy was immediately followed by a brief period of de-adaptation to an opposing discrepancy, recalibration was initially cancelled but subsequently reappeared with further testing. These dynamics were best fit by a multiple-exponential model that monitored audio-visual discrepancies over distinct timescales. Recent and remote recalibration mechanisms enable the brain to balance rapid adaptive changes to transient discrepancies that should be quickly forgotten against slower adaptive changes to persistent discrepancies likely to be more permanent
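A sketch of the kind of multiple-exponential fit described above: aftereffect magnitude modelled as the sum of a fast (transient) and a slow (sustained) adaptation component. The durations and aftereffect values below are invented placeholders, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exponential(t, a_fast, tau_fast, a_slow, tau_slow):
    """Aftereffect after adapting for t seconds: a transient component that
    saturates quickly plus a sustained component that builds slowly."""
    return a_fast * (1 - np.exp(-t / tau_fast)) + a_slow * (1 - np.exp(-t / tau_slow))

# placeholder adaptation durations (s) and shifts in perceived auditory location (deg)
durations = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 128.0, 256.0])
shift_deg = np.array([0.6, 0.9, 1.3, 1.7, 2.0, 2.4, 2.9])

params, _ = curve_fit(two_exponential, durations, shift_deg,
                      p0=[1.5, 10.0, 2.0, 300.0], bounds=(0.0, np.inf), maxfev=10000)
print(dict(zip(["a_fast", "tau_fast", "a_slow", "tau_slow"], params)))
```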
Multiple spatial reference frames underpin perceptual recalibration to audio-visual discrepancies
In dynamic multisensory environments, the perceptual system corrects for discrepancies arising between modalities. For instance, in the ventriloquism aftereffect (VAE), spatial disparities introduced between visual and auditory stimuli lead to a perceptual recalibration of auditory space. Previous research has shown that the VAE is underpinned by multiple recalibration mechanisms tuned to different timescales; however, it remains unclear whether these mechanisms use common or distinct spatial reference frames. Here we asked whether the VAE operates in eye- or head-centred reference frames across a range of adaptation timescales, from a few seconds to a few minutes. We developed a novel paradigm for selectively manipulating the contribution of eye- versus head-centred visual signals to the VAE by varying auditory locations relative to either the head orientation or the point of fixation. Consistent with previous research, we found that both eye- and head-centred frames contributed to the VAE across all timescales. However, we found no evidence for an interaction between spatial reference frames and adaptation duration. Our results indicate that the VAE is underpinned by multiple spatial reference frames that are similarly leveraged by the underlying time-sensitive mechanisms
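To make the reference-frame manipulation concrete: with the head fixed, a sound's location can be expressed relative to the head (head-centred) or relative to the current fixation point (eye-centred), and the two dissociate whenever gaze is deviated. A trivial 1-D (azimuth-only) sketch with invented values, not the study's stimulus code:

```python
def to_reference_frames(sound_azimuth_deg, gaze_azimuth_deg):
    """Express a sound's azimuth (deg, relative to the head's straight-ahead)
    in head-centred and eye-centred coordinates. A head-centred aftereffect
    follows the first value across fixations; an eye-centred one follows the second."""
    head_centred = sound_azimuth_deg
    eye_centred = sound_azimuth_deg - gaze_azimuth_deg
    return head_centred, eye_centred

# The same physical sound at +10 deg, with fixation 15 deg to the right,
# is at +10 deg in head-centred but -5 deg in eye-centred coordinates.
print(to_reference_frames(10.0, 15.0))  # -> (10.0, -5.0)
```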