145 research outputs found

    Luminance cues constrain chromatic blur discrimination in natural scene stimuli

    Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, the effective contrast of the luminance cue, or chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries while chromatic information may be used to determine surface properties.

    The consequences of strabismus and the benefits of adult strabismus surgery

    Strabismus has a negative impact on patients' lives regardless of their age. Factors such as self-esteem, relationships with others, education and the ability to find employment may all be negatively affected by strabismus. It is possible to correct strabismus in adulthood successfully; the chances of achieving good ocular alignment are high and the risks of intractable diplopia low. Successful surgery to realign the visual axes can improve visual function, and offer psychosocial benefits that ultimately improve quality of life. The potential benefits of strabismus surgery should be discussed with patients, regardless of their age or the age of onset of strabismus. This article reviews the impact of strabismus, focusing on the psychosocial consequences of the condition, of which many optometrists may be less aware.

    Characterizing the role of disparity information in alleviating visual crowding

    The ability to identify a target is reduced by the presence of nearby objects, a phenomenon known as visual crowding. The extent to which crowding impairs our perception is generally governed by the degree of similarity between a target stimulus and its surrounding flankers. Here we investigated the influence of disparity differences between target and flankers on crowding. Orientation discrimination thresholds for a parafoveal target were first measured when the target and flankers were presented at the same depth to establish a flanker separation that induced a significant elevation in threshold for each individual. Flankers were subsequently fixed at this spatial separation while the disparity of the flankers relative to the target was altered. For all participants, thresholds showed a systematic decrease as flanker-target disparity increased. The resulting tuning function was asymmetric: Crowding was lower when the target was perceived to be in front of the flankers rather than behind. A series of control experiments confirmed that these effects were driven by disparity, as opposed to other factors such as flanker-target separation in three-dimensional (3-D) space or monocular positional offsets used to create disparity. When flankers were distributed over a range of crossed and uncrossed disparities, such that the mean was in the plane of the target, there was an equivalent or greater release of crowding compared to when all flankers were presented at the maximum disparity of that range. Overall, our results suggest that depth cues can reduce the effects of visual crowding, and that this reduction is unlikely to be caused by grouping of flankers or positional shifts in the monocular image.

    Decoding working memory of stimulus contrast in early visual cortex

    Most studies of the early stages of visual analysis (V1-V3) have focused on the properties of neurons that support processing of elemental features of a visual stimulus or scene, such as local contrast, orientation, or direction of motion. Recent evidence from electrophysiology and neuroimaging studies, however, suggests that early visual cortex may also play a role in retaining stimulus representations in memory for short periods. For example, fMRI responses obtained during the delay period between two presentations of an oriented visual stimulus can be used to decode the remembered stimulus orientation with multivariate pattern analysis. Here, we investigated whether orientation is a special case or whether this phenomenon generalizes to working memory traces of other visual features. We found that multivariate classification of fMRI signals from human visual cortex could be used to decode the contrast of a perceived stimulus even when the mean response changes were accounted for, suggesting some consistent spatial signal for contrast in these areas. Strikingly, we found that fMRI responses also supported decoding of contrast when the stimulus had to be remembered. Furthermore, classification generalized from perceived to remembered stimuli and vice versa, implying that the corresponding patterns of responses in early visual cortex were highly consistent. In additional analyses, we show that stimulus decoding here is driven by biases that depend on stimulus eccentricity. This places important constraints on the interpretation of decoding of stimulus properties for which cortical processing is known to vary with eccentricity, such as contrast, color, spatial frequency, and temporal frequency.
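    The logic of multivariate pattern classification used in this abstract can be sketched with a toy simulation. Everything below (voxel count, noise level, the two "contrast" response patterns, the correlation-style classifier) is invented for illustration and is not the authors' actual analysis pipeline; it shows only how a consistent spatial pattern across voxels can support decoding even when patterns are mean-centered so overall response amplitude carries no information.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels = 50

    # Hypothetical voxel "bias" patterns for low- vs high-contrast stimuli
    pattern_low = rng.normal(0.0, 1.0, n_voxels)
    pattern_high = rng.normal(0.0, 1.0, n_voxels)

    def simulate_trials(pattern, n_trials, noise_sd=1.0):
        # Each trial is the underlying pattern plus measurement noise
        return pattern + rng.normal(0.0, noise_sd, (n_trials, len(pattern)))

    train_low = simulate_trials(pattern_low, 20)
    train_high = simulate_trials(pattern_high, 20)
    test_low = simulate_trials(pattern_low, 20)
    test_high = simulate_trials(pattern_high, 20)

    def center(x):
        # Remove the mean across voxels, so amplitude differences alone
        # cannot drive classification
        return x - x.mean(axis=-1, keepdims=True)

    mean_low = center(train_low.mean(axis=0))
    mean_high = center(train_high.mean(axis=0))

    def classify(trials):
        # Label each test pattern by its match to the two training templates
        trials = center(trials)
        return np.where(trials @ mean_high > trials @ mean_low, "high", "low")

    acc = (np.mean(classify(test_low) == "low")
           + np.mean(classify(test_high) == "high")) / 2
    print(f"decoding accuracy: {acc:.2f}")
    ```

    With a stable spatial pattern per condition, accuracy is well above chance; shuffling the condition labels would drive it back toward 0.5.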

    Cue combination of conflicting color and luminance edges

    Abrupt changes in the color or luminance of a visual image potentially indicate object boundaries. Here, we consider how these cues to the visual “edge” location are combined when they conflict. We measured the extent to which localization of a compound edge can be predicted from a simple maximum likelihood estimation model using the reliability of chromatic (L−M) and luminance signals alone. Maximum likelihood estimation accurately predicted the pattern of results across a range of contrasts. Predictions consistently overestimated the relative influence of the luminance cue; although L−M is often considered a poor cue for localization, it was used more than expected. This need not indicate that the visual system is suboptimal but that its priors about which cue is more useful are not flat. This may be because, although strong changes in chromaticity typically represent object boundaries, changes in luminance can be caused by either a boundary or a shadow.
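    The maximum likelihood combination rule referred to here is the standard reliability-weighted average: each cue's estimate is weighted by its inverse variance. A minimal sketch, with illustrative numbers rather than the paper's measured reliabilities:

    ```python
    def mle_combine(x_lum, sigma_lum, x_chrom, sigma_chrom):
        """Reliability-weighted (inverse-variance) average of two
        conflicting edge-location cues, per standard MLE cue combination."""
        w_lum = 1.0 / sigma_lum**2
        w_chrom = 1.0 / sigma_chrom**2
        x_hat = (w_lum * x_lum + w_chrom * x_chrom) / (w_lum + w_chrom)
        sigma_hat = (w_lum + w_chrom) ** -0.5   # combined cue is more reliable
        return x_hat, sigma_hat

    # Example: luminance cue has half the sd (4x the weight) of the chromatic cue,
    # so the combined edge lands much nearer the luminance estimate
    x_hat, sigma_hat = mle_combine(x_lum=0.0, sigma_lum=1.0,
                                   x_chrom=4.0, sigma_chrom=2.0)
    print(x_hat, sigma_hat)  # 0.8, ~0.894
    ```

    The abstract's finding that luminance was weighted even more heavily than this rule predicts corresponds, in this framing, to a non-flat prior scaling the weights rather than to suboptimal combination.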

    Asynchrony adaptation reveals neural population code for audio-visual timing

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible: adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects.
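    The three model ingredients listed above can be sketched in a few lines. The channel count, tuning width, read-out rule (a simple centroid), and the size of the adaptation-induced gain change below are all illustrative assumptions, not the paper's fitted parameters; the sketch shows only the qualitative mechanism, i.e. that reducing the gain of channels tuned near an adapted delay repels subsequent timing estimates away from it.

    ```python
    import numpy as np

    preferred = np.linspace(-0.4, 0.4, 9)   # a small set of delay-tuned channels (s)
    sigma = 0.15                            # tuning width (assumed)

    def responses(delay, gains):
        # Gaussian tuning curves scaled by per-channel response gain
        return gains * np.exp(-0.5 * ((delay - preferred) / sigma) ** 2)

    def decode(delay, gains):
        # Centroid read-out of the population activity: a simple estimator
        # that becomes biased when gains are uneven or the code is sparse
        r = responses(delay, gains)
        return np.sum(r * preferred) / np.sum(r)

    gains = np.ones_like(preferred)
    # Adaptation to a regular +0.2 s delay modeled as a gain reduction in
    # channels tuned near the adapted delay
    adapted = gains * (1 - 0.5 * np.exp(-0.5 * ((preferred - 0.2) / sigma) ** 2))

    print(decode(0.0, gains))    # symmetric activity: estimate sits at 0
    print(decode(0.0, adapted))  # estimate repelled away from the adapted delay
    ```

    A physically simultaneous pair (delay 0) is thus decoded as audition-leading after adapting to a vision-leading delay, the same direction of after-effect as classic gain-change accounts of sensory adaptation.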

    The effect of normal aging and age-related macular degeneration on perceptual learning

    We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants having poorer performance. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of number of trials completed, patients with AMD learned at a rate equivalent to that of age-matched participants with normal vision. Improvements in word identification speed were maintained at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages and these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach.
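    The adaptive staircase procedure mentioned above works by making the task harder after correct responses and easier after errors, so the tracked stimulus level converges on a fixed performance criterion. A minimal 3-down-1-up sketch (which converges near 79% correct) tracking exposure duration, with a wholly invented simulated observer in place of real responses:

    ```python
    import random

    def simulate_observer(duration, threshold=0.1):
        # Hypothetical observer: identifies the word more often at longer durations
        p_correct = min(0.99, duration / (duration + threshold))
        return random.random() < p_correct

    def run_staircase(start=0.5, step=0.05, n_trials=80):
        # 3-down-1-up rule: three correct in a row -> shorter (harder) duration,
        # any error -> longer (easier) duration
        duration, correct_run, track = start, 0, []
        for _ in range(n_trials):
            track.append(duration)
            if simulate_observer(duration):
                correct_run += 1
                if correct_run == 3:
                    duration = max(0.01, duration - step)
                    correct_run = 0
            else:
                duration += step
                correct_run = 0
        return track

    random.seed(1)
    track = run_staircase()
    # Threshold estimate: mean tracked duration over the final trials
    print(f"threshold estimate: {sum(track[-20:]) / 20:.3f} s")
    ```

    In practice, thresholds are usually estimated from the mean of the staircase's reversal points rather than the raw final trials, but the convergence logic is the same.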

    Size-induced distortions in perceptual maps of visual space

    In order to interact with our environment, the human brain constructs maps of visual space. The orderly mapping of external space across the retinal surface, termed retinotopy, is maintained at subsequent levels of visual cortical processing and underpins our capacity to make precise and reliable judgments about the relative location of objects around us. While these maps, at least in the visual system, support high precision judgments about the relative location of objects, they are prone to significant perceptual distortion. Here, we ask observers to estimate the separation of two visual stimuli (a spatial interval discrimination task). We show that large stimuli require much greater separation than small stimuli in order to be perceived as having the same separation. The relationship is linear, task independent, and unrelated to the perceived position of object edges. We also show that this type of spatial distortion is not restricted to the object itself but can also be revealed by changing the spatial scale of the background, while object size remains constant. These results indicate that fundamental spatial properties, such as retinal image size or the scale at which an object is analyzed, exert a marked influence on spatial coding.

    The rapid emergence of stimulus specific perceptual learning

    Is stimulus specific perceptual learning the result of extended practice or does it emerge early in the time course of learning? We examined this issue by manipulating the amount of practice given on a face identification task on Day 1, and altering the familiarity of stimuli on Day 2. We found that a small number of trials was sufficient to produce stimulus specific perceptual learning of faces: on Day 2, response accuracy decreased by the same amount for novel stimuli regardless of whether observers practiced 105 or 840 trials on Day 1. Current models of learning assume early procedural improvements followed by late stimulus specific gains. Our results show that stimulus specific and procedural improvements are distributed throughout the time course of learning.