
    Individual variation in inter-ocular suppression and sensory eye dominance

    The competitive and inhibitory interactions between the two eyes’ images are a pervasive aspect of binocular vision. Over the last decade, our understanding of the neural processes underpinning binocular rivalry (BR) and continuous flash suppression (CFS) has increased substantially, but we still have little understanding of the relationship between these two effects and their variation in the general population. Studies that pool data across individuals and eyes risk masking substantial variations in binocular vision that exist in the general population. To investigate this issue, we compared the depth of inter-ocular suppression evoked by BR with that elicited by CFS in a group (N=25) of visually normal individuals. A noise pattern (either static for BR or dynamic for CFS) was presented to one eye and its suppressive influence on a probe grating presented simultaneously to the other eye was measured. We found substantial individual differences in the magnitude of suppression (a 10-fold variation in probe detection threshold) evoked by each task, but performance on BR was a significant predictor of performance on the CFS task. However, many individuals showed marked asymmetries between the two eyes’ ability to detect a suppressed target that were not necessarily the same for the two tasks. There was a tendency for the magnitude of the asymmetry to increase as the refresh rate of the dynamic noise increased. The results suggest that a common underlying mechanism is likely to be responsible, at least in part, for driving inter-ocular suppression under BR and CFS. The marked asymmetries in inter-ocular suppression at higher noise refresh rates may be indicative of a difference in temporal processing between the eyes.

    The properties of the motion-detecting mechanisms mediating perceived direction in stochastic displays

    Previous studies [e.g. Baker & Hess, 1998. Vision Research, 38, 1211–1222] have shown that perceived direction in displays composed of multiple, limited-lifetime, Gabor micropatterns (G) is influenced by movement both at the fine spatial scale of the internal luminance modulation (first-order motion) and the coarse spatial scale of the Gaussian contrast window (second-order motion). However, it is presently unclear whether this pattern of results is indicative of the processes by which first-order and second-order motion signals interact within the visual system per se, or those by which motion information, irrespective of how it is defined, is utilised across different spatial scales. To address this issue, and more generally the properties of the mechanisms that analyse motion in such displays, we employed stochastic motion sequences composed of either G, G added to a static carrier (G+C) or G multiplied with a carrier (G*C). Crucially, G*C micropatterns, unlike both G and G+C, contain no net first-order motion and second-order motion only at the scale of the internal contrast modulation. For small displacements, perceived direction in all cases showed a dependence on the internal sinusoidal spatial structure of the micropatterns and characteristic oscillations were typically observed, consistent with models in which first-order motion and second-order motion are encoded on the basis of similar low-level mechanisms. Importantly, for larger displacements, and also when the internal spatial structure was randomised on successive exposures (so that motion at this spatial scale was unreliable), performance tended to be veridical for all types of micropattern, even though under these conditions displacements of the G*C micropatterns should have been invisible to current, low-level, motion-detecting schemes. This suggests that both low-level motion sensors and mechanisms utilising a different motion-detecting strategy, such as high-level, attentive, feature-tracking, may mediate perceptual judgements in stochastic displays.
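    The three micropattern types (G, G+C, G*C) can be sketched as 1-D luminance profiles. This is a minimal illustration only; the function name, frequencies and window width are assumptions, not the study's parameters:

    ```python
    import numpy as np

    def micropattern(x, kind="G", sf_fine=4.0, sf_carrier=12.0, sigma=0.25):
        """1-D luminance profile of a micropattern (illustrative parameters).

        "G"   : Gabor, a sinusoid under a Gaussian contrast window
        "G+C" : Gabor added to a static sinusoidal carrier
        "G*C" : Gabor multiplied with the carrier, i.e. a pure contrast
                modulation with no net first-order component at the Gabor scale
        """
        window = np.exp(-x**2 / (2 * sigma**2))
        gabor = window * np.sin(2 * np.pi * sf_fine * x)
        carrier = np.sin(2 * np.pi * sf_carrier * x)
        if kind == "G":
            return gabor
        if kind == "G+C":
            return gabor + carrier
        if kind == "G*C":
            return gabor * carrier
        raise ValueError(kind)
    ```

    Displacing the whole profile shifts both the envelope and the internal structure; for G*C only the contrast envelope carries usable motion at the fine scale.
    
    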

    Spatial frequency selective masking of first-order and second-order motion in the absence of off-frequency 'looking'

    Converging evidence suggests that, at least initially, first-order (luminance defined) and second-order (e.g. contrast defined) motion are processed independently in human vision. However, adaptation studies suggest that second-order motion, like first-order motion, may be encoded by spatial frequency selective mechanisms, each operating over a limited range of scales. Nonetheless, the precise properties of these mechanisms are indeterminate, since the spatial frequency selectivity of adaptation aftereffects may not necessarily represent the frequency tuning of the underlying units [Vision Research 37 (1997) 2685]. To address this issue we used visual masking to investigate the spatial-frequency tuning of the mechanisms that encode motion. A dual-masking paradigm was employed to derive estimates of the spatial tuning of motion sensors in the absence of off-frequency 'looking'. Modulation-depth thresholds for identifying the direction of a sinusoidal test pattern were measured over a 4-octave range (0.125–2 c/deg) in both the absence and presence of two counterphasing masks, simultaneously positioned above and below the test frequency. For second-order motion, the resulting masking functions were spatially bandpass in character and remained relatively invariant with changes in test spatial frequency, masking pattern modulation depth and the temporal properties of the noise carrier. As expected, bandpass spatial frequency tuning was also found for first-order motion. This provides compelling evidence that the mechanisms responsible for encoding each variety of motion exhibit spatial frequency selectivity. Thus, although first-order and second-order motion may be encoded independently, they must utilise similar computational principles.
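    The dual-masking arrangement, a drifting test flanked by two counterphasing masks placed symmetrically in octaves about the test frequency, can be sketched for a first-order (luminance) stimulus. All parameter values here, including the 2 Hz counterphase rate, are illustrative assumptions, not those of the study:

    ```python
    import numpy as np

    def dual_mask_frame(x, t, f_test=0.5, v=4.0, m_test=0.2, m_mask=0.4,
                        offset=1.0, f_temp=2.0):
        """One frame of a drifting sinusoidal test plus two counterphasing
        masks placed `offset` octaves above and below the test frequency.
        x: position (deg), t: time (s); values are illustrative."""
        test = m_test * np.sin(2 * np.pi * f_test * (x - v * t))
        f_lo, f_hi = f_test / 2**offset, f_test * 2**offset
        # counterphasing = stationary gratings whose contrast flips in time
        mask = m_mask * np.cos(2 * np.pi * f_temp * t) * (
            np.sin(2 * np.pi * f_lo * x) + np.sin(2 * np.pi * f_hi * x))
        return test + mask
    ```

    Because the masks counterphase rather than drift, they carry no net directional signal of their own; any threshold elevation they cause reflects overlap with the mechanism detecting the test.
    
    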

    Criterion-free measurement of motion transparency perception at different speeds

    Transparency perception often occurs when objects within the visual scene partially occlude each other or move simultaneously, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an 'odd-one-out', three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of varying width removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception.
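    The comparison distribution described above, a uniform distribution with a central notch removed, can be sketched as follows (a minimal illustration; the function name, ranges and RNG seeding are assumptions, not the study's code):

    ```python
    import numpy as np

    def sample_notched(n, lo, hi, notch=None, rng=None):
        """Draw n dot speeds (or directions) uniformly on [lo, hi].
        If notch=(a, b) is given, values inside (a, b) are resampled,
        yielding the comparison distribution with a gap of chosen width."""
        rng = rng or np.random.default_rng(0)
        s = rng.uniform(lo, hi, n)
        if notch is not None:
            a, b = notch
            inside = (s > a) & (s < b)
            while inside.any():
                s[inside] = rng.uniform(lo, hi, inside.sum())
                inside = (s > a) & (s < b)
        return s
    ```

    Widening the notch makes the comparison increasingly bimodal, so the odd-one-out judgement tracks the notch width at which the two sampled surfaces become perceptually separable.
    
    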

    Visual perception in dyslexia is limited by sub-optimal scale selection

    Readers with dyslexia are purported to have a selective visual impairment, but the underlying nature of the deficit remains elusive. Here, we used a combination of behavioural psychophysics and biologically motivated computational modeling to investigate if this deficit extends to object segmentation, a process implicated in visual word form recognition. Thirty-eight adults with a wide range of reading abilities were shown random-dot displays spatially divided into horizontal segments. Adjacent segments contained either local motion signals in opposing directions or analogous static form cues depicting orthogonal orientations. Participants had to discriminate these segmented patterns from stimuli containing identical motion or form cues that were spatially intermingled. Results showed that participants were unable to perform the motion or form task reliably when segment size was smaller than a spatial resolution (acuity) limit that was independent of reading skill. Coherence thresholds decreased as segment size increased, but for the motion task the rate of improvement was shallower for readers with dyslexia and the segment size where performance became asymptotic was larger. This suggests that segmentation is impaired in readers with dyslexia, but only on tasks containing motion information. We interpret these findings within a novel framework in which the mechanisms underlying scale selection are impaired in developmental dyslexia.

    Encoding of rapid time-varying information is impaired in poor readers

    A characteristic set of eye movements and fixations are made during reading, so the position of words on the retinae is constantly being updated. Effective decoding of print requires this temporal stream of visual information to be segmented or parsed into its constituent units (e.g., letters or words). Poor readers' difficulties with word recognition could arise at the point of segmenting time-varying visual information, but the mechanisms underlying this process are little understood. Here, we used random-dot displays to explore the effects of reading ability on temporal segmentation. Thirty-eight adult readers viewed test stimuli that were temporally segmented by constraining either local motions or analogous form cues to oscillate back and forth at each of a range of rates. Participants had to discriminate these segmented patterns from comparison stimuli containing the same motion and form cues but temporally intermingled. Results showed that the motion and form tasks could not be performed reliably when segment duration was shorter than a temporal resolution (acuity) limit. The acuity limits for both tasks were significantly and negatively correlated with reading scores. Importantly, the minimum segment duration needed to detect the temporally segmented stimuli was longer in relatively poor readers than relatively good readers. This demonstrates that adult poor readers have difficulty segmenting temporally changing visual input, particularly at short segment durations. These results are consistent with evidence suggesting that precise encoding of rapid time-varying information is impaired in developmental dyslexia.
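    The segmented-versus-intermingled manipulation can be sketched as a per-frame cue assignment. This is an illustrative reconstruction of the logic only; names and frame counts are assumptions:

    ```python
    import numpy as np

    def cue_states(n_frames, segment_frames, segmented=True, rng=None):
        """Assign each frame one of two cue states (0 or 1; e.g. leftward
        vs rightward local motion, or one of two orthogonal orientations).
        Segmented: the states alternate in blocks of segment_frames.
        Intermingled: the same per-state frame counts, shuffled in time."""
        rng = rng or np.random.default_rng(0)
        states = (np.arange(n_frames) // segment_frames) % 2
        if not segmented:
            states = rng.permutation(states)
        return states
    ```

    Because shuffling preserves the overall frame counts, the two stimulus classes contain the same cues and differ only in their temporal structure, so discrimination requires resolving the segment duration itself.
    
    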

    Short-term monocular deprivation reduces inter-ocular suppression of the deprived eye

    The adult visual system was traditionally thought to be relatively hard-wired, but recent studies have challenged this view by demonstrating plasticity following brief periods of monocular deprivation (Lunghi, Burr, & Morrone, 2011; Lunghi, Burr, & Morrone, 2013). When one eye was deprived of spatial information for 2-3 hours, sensory dominance was shifted in favour of the previously deprived eye. However, the mechanism underlying this phenomenon is unclear. The present study sought to address this issue and determine the consequences of short-term monocular deprivation on inter-ocular suppression of each eye. Sensory eye dominance was examined before and after depriving an eye of all visual input using a light-tight opaque patch for 2.5 hours, in a group of adult participants with normal binocular vision (N=6). We used a percept tracking task during experience of binocular rivalry (BR) to assess the relative dominance of the two eyes, and an objective probe detection task under continuous flash suppression (CFS) to quantify each eye’s susceptibility to inter-ocular suppression. In addition, the monocular contrast increment threshold of each eye was also measured using the probe detection task to ascertain if the altered eye dominance is accompanied by changes in monocular perception. Our BR results replicated Lunghi and colleagues’ findings of a shift of relative dominance towards the eye that had been deprived of form information with translucent patching. More crucially, using CFS we demonstrated reduced inter-ocular suppression of the deprived eye with no complementary changes in the other eye, and no monocular changes in increment threshold. These findings imply that short-term monocular deprivation alters binocular interactions. The differential effect on inter-ocular suppression between eyes may have important implications for the use of patching as a therapy to recover visual function in amblyopia.

    Assessing the reliability of web-based measurements of visual function

    Many behavioural phenomena have been replicated using web-based experiments, but evaluation of the agreement between objective measures of web- and lab-based performance is required if scientists and clinicians are to reap the benefits of web-based testing. In this study, we investigated the reliability of a task that assesses early visual cortical function by evaluating the well-known ‘oblique effect’ (we are better at seeing horizontal and vertical edges than tilted ones) and the levels of agreement between remote, web-based measures and lab-based measures. Sixty-nine young participants (mean age, 21.8 years) performed temporal and spatial versions of a web-based, two-alternative forced-choice (2AFC) orientation-identification task. In each case, orientation-identification thresholds (the minimum orientation difference at which a standard orientation could be reliably distinguished from a rotated comparison) were measured for cardinal (horizontal and vertical) and oblique orientations. Reliability was assessed in a subsample of 18 participants who performed the same tasks under laboratory conditions. Robust oblique effects were found, such that thresholds were substantially lower for cardinal orientations compared to obliques, for both web- and lab-based measures of the temporal and spatial 2AFC tasks. Crucially, web- and lab-based orientation-identification thresholds showed high levels of agreement, demonstrating the suitability of web-based testing for assessments of early visual cortical function. Future studies should assess the reliability of similar web-based tasks in clinical populations to evaluate their adoption into clinical settings, either to screen for visual anomalies or to assess changes in performance associated with progression of disease severity.
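    A threshold of the kind measured above is typically estimated with an adaptive staircase. The sketch below simulates a standard 1-up/2-down staircase on orientation difference for an idealised 2AFC observer; the observer model, step sizes and convention of averaging the last reversals are common-practice assumptions, not details taken from the study:

    ```python
    import math
    import numpy as np

    def two_afc_staircase(true_thresh, start=8.0, step=0.5, n_trials=400, seed=1):
        """1-up/2-down staircase on orientation difference (deg) for a
        simulated 2AFC observer with a cumulative-Gaussian psychometric
        function (sigma = true_thresh). The staircase converges on the
        ~70.7%-correct point, which lies below sigma itself."""
        rng = np.random.default_rng(seed)
        delta, run, direction, reversals = start, 0, 0, []
        for _ in range(n_trials):
            # probability correct for this orientation difference
            p = 0.5 + 0.5 * math.erf(delta / (true_thresh * math.sqrt(2)))
            if rng.random() < p:            # correct response
                run += 1
                if run == 2:                # two in a row: make it harder
                    run = 0
                    if direction == +1:
                        reversals.append(delta)
                    direction = -1
                    delta = max(delta - step, step)
            else:                           # error: make it easier
                run = 0
                if direction == -1:
                    reversals.append(delta)
                direction = +1
                delta += step
        return float(np.mean(reversals[-8:]))  # mean of final reversals
    ```

    Running this for a sensitive observer (small sigma) and an insensitive one (large sigma) yields correspondingly smaller and larger threshold estimates, mirroring the cardinal-versus-oblique comparison.
    
    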

    Phase-dependent interactions in visual cortex to combinations of first- and second-order stimuli

    A fundamental task of the visual system is to extract figure-ground boundaries between objects, which are often defined not only by differences in luminance but also by "second-order" contrast or texture differences. Responses of cortical neurons to both first- and second-order patterns have previously been studied extensively, but only for either type of stimulus in isolation. Here we examined responses of visual cortex neurons to the spatial relationship between superimposed periodic luminance modulation (LM) and contrast modulation (CM) stimuli, whose contrasts were adjusted to give equated responses when presented alone. Extracellular single unit recordings were made in area 18 of the cat, whose neurons show very similar responses to CM and LM stimuli as those in primate area V2 (Li et al., 2014). Most neurons showed a significant dependence on the relative phase of the combined LM and CM patterns, with a clear overall optimal response when they were approximately phase-aligned. The degree of this phase preference, and the contributions of suppressive and/or facilitatory interactions, varied considerably from one neuron to another. Such phase-dependent and phase-invariant responses were evident in both simple- and complex-type cells. These results place important constraints on any future model of the underlying neural circuitry for second-order responses. The diversity in the degree of phase dependence between LM and CM stimuli that we observe could help disambiguate different kinds of boundaries in natural scenes.
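    A combined LM+CM stimulus of the kind described can be sketched as a carrier whose contrast envelope (CM) is shifted in phase relative to an added luminance sinusoid (LM). Function name and parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def lm_cm_stimulus(x, carrier, f_mod=0.5, c_lm=0.1, c_cm=0.5, phase=0.0):
        """Superimpose a luminance modulation (LM) and a contrast modulation
        (CM) of a common carrier at the same modulation frequency.
        `phase` (radians) sets the CM envelope relative to the LM."""
        lm = c_lm * np.sin(2 * np.pi * f_mod * x)                 # first-order
        envelope = 1.0 + c_cm * np.sin(2 * np.pi * f_mod * x + phase)
        return envelope * carrier + lm                            # second-order + first-order
    ```

    Sweeping `phase` from 0 to 2π while holding c_lm and c_cm fixed produces the family of phase-aligned to phase-opposed composites whose neural responses were compared in the study.
    
    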

