Detecting Semantic Concepts from Video Using Temporal Gradients and Audio Classification
Human Wavelength Discrimination of Monochromatic Light Explained by Optimal Wavelength Decoding of Light of Unknown Intensity
We show that the human ability to discriminate the wavelength of monochromatic
light can be understood as maximum-likelihood decoding of the cone absorptions,
with a signal-processing efficiency that is independent of wavelength. This work
builds on the framework of ideal-observer analysis of visual discrimination used
in many previous works. A distinctive aspect of our work is that we highlight a
perceptual confound: observers may confuse a change in input wavelength with a
change in input intensity. Hence a simple ideal-observer model that assumes the
observer has full knowledge of the input intensity will overestimate human
ability to discriminate the wavelengths of two inputs of unequal intensity. This
confound also makes it difficult to measure human wavelength discrimination
consistently by asking observers to distinguish two input colors while matching
their brightness. We argue that the best experimental method for reliable
measurement of discrimination thresholds is that of Pokorny and Smith, in which
observers only need to distinguish two inputs, regardless of whether they differ
in hue or brightness. We mathematically formulate wavelength discrimination
under this wavelength-intensity confound and show good agreement between our
theoretical prediction and the behavioral data. Our analysis explains why the
discrimination threshold varies with the input wavelength, and shows how
sensitively the threshold depends on the relative densities of the three cone
types in the retina (and, in particular, predicts discrimination performance in
dichromats). Our mathematical formulation and solution can be applied to general
problems of sensory discrimination in which there is a perceptual confound from
other sensory feature dimensions.
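The decoding scheme this abstract describes can be sketched numerically: at each candidate wavelength, the unknown intensity is a nuisance parameter that the ideal observer maximizes out of a Poisson likelihood over cone absorptions. The Gaussian cone curves, peak wavelengths, and demo values below are illustrative placeholders, not the paper's actual cone fundamentals or fitted parameters:

```python
import numpy as np

# Hypothetical Gaussian cone sensitivity curves (illustrative only; real
# cone fundamentals such as Stockman-Sharpe have different shapes).
PEAKS = np.array([565.0, 535.0, 440.0])   # L, M, S peak wavelengths (nm)
WIDTH = 40.0                              # assumed tuning width (nm)

def cone_sensitivity(wavelength):
    """Relative absorption rate of each cone class at `wavelength`."""
    return np.exp(-0.5 * ((wavelength - PEAKS) / WIDTH) ** 2)

def decode_wavelength(counts, wl_grid=np.arange(400.0, 701.0)):
    """Maximum-likelihood wavelength estimate from Poisson cone counts.

    At each candidate wavelength the unknown intensity (the confounding
    nuisance parameter) is maximized out in closed form: for Poisson
    noise the ML intensity is sum(counts) / sum(sensitivities).
    """
    best_wl, best_ll = None, -np.inf
    for wl in wl_grid:
        s = cone_sensitivity(wl)
        i_hat = counts.sum() / s.sum()        # ML intensity at this wavelength
        mu = i_hat * s                        # expected absorptions
        ll = np.sum(counts * np.log(mu) - mu) # Poisson log-likelihood (+const)
        if ll > best_ll:
            best_ll, best_wl = ll, wl
    return best_wl

# Demo: simulate absorptions for a 550 nm light of unknown intensity.
rng = np.random.default_rng(0)
true_wl, true_intensity = 550.0, 1000.0
counts = rng.poisson(true_intensity * cone_sensitivity(true_wl))
wl_hat = decode_wavelength(counts)
```

Because the L/M absorption ratio varies smoothly with wavelength, the estimate recovers the input wavelength to within a few nanometres at this photon count even though the intensity is never given to the decoder.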
Ganglion Cell Adaptability: Does the Coupling of Horizontal Cells Play a Role?
Background: The visual system can adjust itself to different visual environments. One of the best-known examples of this is the shift in spatial tuning that occurs in retinal ganglion cells with the change from night to day vision. This shift is thought to be produced by a change in the ganglion cell receptive field surround, mediated by a decrease in the coupling of horizontal cells.

Methodology/Principal Findings: To test this hypothesis, we used a transgenic mouse line, a connexin57-deficient line, in which horizontal cell coupling was abolished. Measurements, both at the ganglion cell level and at the level of behavioral performance, showed no differences between wild-type retinas and retinas with decoupled horizontal cells from connexin57-deficient mice.

Conclusion/Significance: This analysis showed that the coupling and uncoupling of horizontal cells does not play a dominant role in spatial tuning and its adjustability to night and day light conditions. Instead, our data suggest that another …
The what and why of perceptual asymmetries in the visual domain
Perceptual asymmetry is one of the most important characteristics of our visual
functioning. We carefully reviewed the scientific literature in order to examine
such asymmetries, separating them into two major categories: within-visual field
asymmetries and between-visual field asymmetries. We explain these asymmetries
in terms of perceptual aspects or tasks, the what of the asymmetries; and in
terms of underlying mechanisms, the why of the asymmetries. The within-visual
field asymmetries are fundamental to orientation, motion direction, and spatial
frequency processing. Between-visual field asymmetries have been reported for a
wide range of perceptual phenomena. Foveal dominance over the periphery, in
particular, has been prominent for visual acuity, contrast sensitivity, and
colour discrimination. This also holds true for object or face recognition and
reading performance. Upper-lower visual field asymmetries in favour of the lower
field have been demonstrated for temporal and contrast sensitivities, visual
acuity, spatial resolution, orientation, hue, and motion processing. In
contrast, upper-field advantages have been seen in visual search, apparent size,
and object recognition tasks. Left-right visual field asymmetries include a
left-field dominance in spatial (e.g., orientation) processing and a right-field
dominance in non-spatial (e.g., temporal) processing. The left field is also
better at low-spatial-frequency or global and coordinate spatial processing,
whereas the right field is better at high-spatial-frequency or local and
categorical spatial processing. All these asymmetries have inborn
neural/physiological origins, the primary why, but can also be susceptible to
visual experience, the critical why (which promotes or blocks the asymmetries by
altering neural functions).
“I Look in Your Eyes, Honey”: Internal Face Features Induce Spatial Frequency Preference for Human Face Processing
Numerous psychophysical experiments have found that humans preferentially rely on a narrow
band of spatial frequencies for recognition of face identity. A recently
conducted theoretical study by the author suggests that this frequency
preference reflects an adaptation of the brain's face processing
machinery to this specific stimulus class (i.e., faces). The purpose of the
present study is to examine this property in greater detail and to specifically
elucidate the implication of internal face features (i.e., eyes, mouth, and
nose). To this end, I parameterized Gabor filters to match the spatial receptive
field of contrast sensitive neurons in the primary visual cortex (simple and
complex cells). Filter responses to a large number of face images were computed,
aligned for internal face features, and response-equalized
(“whitened”). The results demonstrate that the frequency
preference is caused by internal face features. Thus, the psychophysically
observed human frequency bias for face processing seems to be specifically
caused by the intrinsic spatial frequency content of internal face features.
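The filter stage described above can be sketched as follows: an even/odd Gabor pair models V1 simple cells, and the sum of their squared responses models complex-cell energy per spatial-frequency band. The bandwidth constant, filter size, and grating demo are illustrative assumptions, not the study's actual parameters or face stimuli:

```python
import numpy as np

def gabor_pair(size, freq, theta=0.0):
    """Quadrature (even/odd) Gabor kernels modeling V1 simple-cell
    receptive fields; `freq` in cycles/pixel. The bandwidth constant
    (~1 octave) is an assumption for illustration."""
    sigma = 0.56 / freq
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def energy(image, freq, theta=0.0):
    """Complex-cell-like energy: squared even- plus squared odd-phase
    simple-cell responses (inner products of kernel and image)."""
    even, odd = gabor_pair(image.shape[0], freq, theta)
    return np.sum(image * even) ** 2 + np.sum(image * odd) ** 2

# Demo: a vertical grating at 0.1 cycles/pixel drives the matched band most.
size = 63
_, xx = np.mgrid[:size, :size]
grating = np.cos(2 * np.pi * 0.1 * xx)
freqs = [0.05, 0.1, 0.2]
responses = [energy(grating, f) for f in freqs]
best = freqs[int(np.argmax(responses))]
```

In the study itself, such band energies would be computed over many feature-aligned face images and whitened; the demo only shows that the energy measure is band-tuned.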
Spatial Stereoresolution for Depth Corrugations May Be Set in Primary Visual Cortex
Stereo “3D” depth perception requires the visual system to extract binocular disparities between the two eyes' images. Several current models of this process, based on the known physiology of primary visual cortex (V1), do this by computing a piecewise-frontoparallel local cross-correlation between the left and right eye's images. The size of the “window” within which detectors examine the local cross-correlation corresponds to the receptive field size of V1 neurons. This basic model has successfully captured many aspects of human depth perception. In particular, it accounts for the low human stereoresolution for sinusoidal depth corrugations, suggesting that the limit on stereoresolution may be set in primary visual cortex. An important feature of the model, reflecting a key property of V1 neurons, is that the initial disparity encoding is performed by detectors tuned to locally uniform patches of disparity. Such detectors respond better to square-wave depth corrugations, since these are locally flat, than to sinusoidal corrugations which are slanted almost everywhere. Consequently, for any given window size, current models predict better performance for square-wave disparity corrugations than for sine-wave corrugations at high amplitudes. We have recently shown that this prediction is not borne out: humans perform no better with square-wave than with sine-wave corrugations, even at high amplitudes. The failure of this prediction raised the question of whether stereoresolution may actually be set at later stages of cortical processing, perhaps involving neurons tuned to disparity slant or curvature. Here we extend the local cross-correlation model to include existing physiological and psychophysical evidence indicating that larger disparities are detected by neurons with larger receptive fields (a size/disparity correlation). 
We show that this simple modification succeeds in reconciling the model with human results, confirming that stereoresolution for disparity gratings may indeed be limited by the size of receptive fields in primary visual cortex.
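The core operation in this class of model, piecewise-frontoparallel local cross-correlation within a window that stands in for a V1 receptive field, can be sketched in one dimension. The window size, disparity range, and random-texture demo are illustrative choices, not the paper's stimuli or fitted parameters:

```python
import numpy as np

def local_xcorr_disparity(left, right, x, window, max_disp):
    """Estimate disparity at position `x` by sliding a window of the left
    image across the right image and picking the shift with the highest
    normalized cross-correlation. `window` plays the role of a V1
    receptive-field size (model assumption)."""
    half = window // 2
    ref = left[x - half : x + half + 1]
    ref = ref - ref.mean()
    best_d, best_c = 0, -np.inf
    for d in range(-max_disp, max_disp + 1):
        probe = right[x - half + d : x + half + 1 + d]
        probe = probe - probe.mean()
        c = np.dot(ref, probe) / (np.linalg.norm(ref) * np.linalg.norm(probe) + 1e-12)
        if c > best_c:
            best_c, best_d = c, d
    return best_d

# Demo: a random 1-D texture shifted by 3 pixels between the two eyes.
rng = np.random.default_rng(1)
left = rng.standard_normal(200)
right = np.roll(left, 3)            # right[i] = left[i - 3]
d_hat = local_xcorr_disparity(left, right, x=100, window=15, max_disp=8)  # → 3
```

Because each window is compared at a single uniform shift, the detector is tuned to locally flat disparity, which is exactly why the unmodified model favors square-wave over sine-wave corrugations; the size/disparity correlation discussed above would make `window` grow with `max_disp`.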