Relational color constancy in achromatic and isoluminant images
Relational color constancy, which refers to the constancy of perceived relations between surface colors under
changes in illuminant, may be based on the computation of spatial ratios of cone excitations. As this activity
need occur only within rather than between cone pathways, relational color constancy might be assumed to be
based on relative luminance processing. This hypothesis was tested in a psychophysical experiment in which
observers viewed simulated images of Mondrian patterns undergoing colorimetric changes that could be attributed
either to an illuminant change or to a nonilluminant change; the images were isoluminant, achromatic,
or unmodified. Observers reliably discriminated the two types of changes in all three conditions, implying
that relational color constancy is not based on luminance cues alone. A computer simulation showed
that in these isoluminant and achromatic images spatial ratios of cone excitations and of combinations of cone
excitations were almost invariant under illuminant changes and that discrimination performance could be predicted
from deviations in these ratios.
Funder: Biotechnology and Biological Sciences Research Council (BBSRC).
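The invariance behind this result can be illustrated with a toy numerical sketch (made-up Gaussian reflectances, illuminants, and cone sensitivities, not the spectra or simulation used in the study): within each cone class, the ratio of excitations produced by two surfaces stays nearly constant when the illuminant changes, even though the individual excitations change substantially.

```python
from math import exp

def gauss(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2))

WAVELENGTHS = range(400, 701, 10)  # nm, coarse sampling

# Toy cone sensitivities: narrow Gaussians near the L-, M-, S-cone peaks
CONE_PEAKS = {"L": 565, "M": 535, "S": 440}

def excitation(reflectance, illuminant, cone_peak):
    """Cone excitation for one surface viewed under one illuminant."""
    return sum(reflectance(w) * illuminant(w) * gauss(w, cone_peak, 25)
               for w in WAVELENGTHS)

# Two smooth, distinct surface reflectances (illustrative only)
def surf_a(w): return 0.2 + 0.6 * gauss(w, 480, 60)
def surf_b(w): return 0.3 + 0.5 * gauss(w, 620, 70)

# Two broadband illuminants, one bluish and one reddish
def illum_1(w): return 0.5 + 0.5 * gauss(w, 400, 200)
def illum_2(w): return 0.5 + 0.5 * gauss(w, 700, 200)

for cone, peak in CONE_PEAKS.items():
    r1 = excitation(surf_a, illum_1, peak) / excitation(surf_b, illum_1, peak)
    r2 = excitation(surf_a, illum_2, peak) / excitation(surf_b, illum_2, peak)
    print(f"{cone}: ratio under illum_1 = {r1:.3f}, under illum_2 = {r2:.3f}")
```

With these smooth spectra the within-cone ratios agree closely across the two illuminants, which is the (approximate) invariance the abstract describes; the deviations that remain are the kind the authors used to predict discrimination performance.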
Low levels of specularity support operational color constancy, particularly when surface and illumination geometry can be inferred
We tested whether surface specularity alone supports operational color constancy, the ability to distinguish illumination changes from reflectance changes. Observers viewed short animations of illuminant or reflectance changes in rendered scenes containing a single spherical surface and were asked to classify the change. Performance improved with increasing specularity, as predicted from regularities in chromatic statistics. Peak performance was impaired by spatial rearrangements of image pixels that disrupted the perception of illuminated surfaces but was maintained with increased surface complexity. The characteristic chromatic transformations that are available with nonzero specularity are useful for operational color constancy, particularly if accompanied by appropriate perceptual organization.
Shading Beats Binocular Disparity in Depth from Luminance Gradients: Evidence against a Maximum Likelihood Principle for Cue Combination
Perceived depth is conveyed by multiple cues, including binocular disparity and luminance shading. Depth perception from luminance shading depends on the perceptual assumption for the incident light, which has been shown to default to a diffuse-illumination assumption. We focus on the case of sinusoidally corrugated surfaces to ask how shading and disparity cues combine, using stimuli defined by the joint luminance gradients and intrinsic disparity modulation that would occur in viewing the physical corrugation of a uniform surface under diffuse illumination. Such surfaces were simulated with a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0' to 20'. The observers' task was to adjust the binocular disparity of a comparison random-dot stereogram surface to match the perceived depth of the joint luminance/disparity-modulated corrugation target. Regardless of target spatial frequency, the perceived target depth increased with the luminance contrast and depended on luminance phase but was largely unaffected by the disparity modulation of the target. These results validate the idea that human observers can use the diffuse-illumination assumption to perceive depth from luminance gradients alone, without assuming a light direction. For depth judgments with combined cues, the observers gave much greater weighting to the luminance shading than to the disparity modulation of the targets. The results were not well fit by a Bayesian cue-combination model in which each cue is weighted in proportion to its reliability (the inverse of the variance of the measurements for that cue in isolation). Instead, they suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
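The reliability-weighted combination rule that the data failed to support can be stated compactly. The sketch below is a generic illustration of that standard maximum-likelihood model, not the authors' fitting code:

```python
def mle_combine(estimates, variances):
    """Combine independent cue estimates, weighting each by its
    reliability (inverse variance), as the standard MLE model predicts."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total  # never larger than the best single cue's
    return combined, combined_variance

# Example: a low-variance shading estimate dominates a noisier disparity one
depth, var = mle_combine([10.0, 20.0], [1.0, 4.0])
print(depth, var)  # → 12.0 0.8
```

Under this rule the combined estimate always sits between the single-cue estimates, pulled toward the more reliable one; the abstract's finding is that observed depth matches did not follow this variance-based weighting.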
Neurolinguistics Research Advancing Development of a Direct-Speech Brain-Computer Interface
A direct-speech brain-computer interface (DS-BCI) acquires neural signals corresponding to imagined speech, then processes and decodes these signals to produce a linguistic output in the form of phonemes, words, or sentences. Recent research has shown the potential of neurolinguistics to enhance decoding approaches to imagined speech with the inclusion of semantics and phonology in experimental procedures. As neurolinguistics research findings are beginning to be incorporated within the scope of DS-BCI research, it is our view that a thorough understanding of imagined speech, and its relationship with overt speech, must be considered an integral feature of research in this field. With a focus on imagined speech, we provide a review of the most important neurolinguistics research informing the field of DS-BCI and suggest how this research may be utilized to improve current experimental protocols and decoding techniques. Our review of the literature supports a cross-disciplinary approach to DS-BCI research, in which neurolinguistics concepts and methods are utilized to aid development of a naturalistic mode of communication. Subject Areas: Cognitive Neuroscience; Computer Science; Hardware Interface.
Sun and sky: Does human vision assume a mixture of point and diffuse illumination when interpreting shape-from-shading?
People readily perceive smooth luminance variations as being due to the shading produced by undulations of a 3-D surface (shape-from-shading). In doing so, the visual system must simultaneously estimate the shape of the surface and the nature of the illumination. Remarkably, shape-from-shading operates even when both these properties are unknown and neither can be estimated directly from the image. In such circumstances humans are thought to adopt a default illumination model. A widely held view is that the default illuminant is a point source located above the observer's head. However, some have argued instead that the default illuminant is a diffuse source. We now present evidence that humans may adopt a flexible illumination model that includes both diffuse and point-source elements. Our model estimates a direction for the point source and then weights the contribution of this source according to a bias function. For most people the preferred illuminant direction is overhead, with a strong diffuse component.
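A mixture model of this kind can be sketched as a weighted sum of a Lambertian point-source term and a uniform diffuse term. The code below is an illustrative rendering equation with hypothetical parameter names (`w_point` playing the role of the bias weight), not the authors' model:

```python
def luminance(normal, light_dir, w_point, albedo=1.0):
    """Shading under a mixed illuminant: w_point weights the point-source
    term, (1 - w_point) weights a uniform diffuse field.
    normal and light_dir are unit 3-vectors; w_point is in [0, 1]."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    point_term = max(0.0, n_dot_l)  # Lambertian shading from the point source
    diffuse_term = 1.0              # uniform diffuse illumination
    return albedo * (w_point * point_term + (1.0 - w_point) * diffuse_term)

# A surface patch facing the light is brighter than one facing away,
# but the diffuse component keeps the shadowed side above zero.
toward = luminance((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), w_point=0.6)
away = luminance((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), w_point=0.6)
print(toward, away)  # → 1.0 0.4
```

Setting `w_point = 0` recovers a purely diffuse default illuminant and `w_point = 1` a pure point source; intermediate values give the flexible mixture the abstract argues for.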
Confusion and dependence in uses of history
Many people argue that history makes a special difference to the subjects of biology and psychology, and that history does not make this special difference to other parts of the world. This paper will show that historical properties make no more or less of a difference to biology or psychology than to chemistry, physics, or other sciences. Although historical properties indeed make a certain kind of difference to biology and psychology, this paper will show that historical properties make the same kind of difference to geology, sociology, astronomy, and other sciences. Similarly, many people argue that nonhistorical properties make a special difference to the nonbiological and the nonpsychological world. This paper will show that nonhistorical properties make the same difference to all things in the world when it comes to their causal behavior and that historical properties make the same difference to all things in the world when it comes to their distributions. Although history is special, it is special in the same way to all parts of the world.