15,117 research outputs found

    A Contrast- and Luminance-Driven Multiscale Network Model of Brightness Perception

    Full text link
    A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, the low- and high-contrast missing fundamental, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce brightness profiles through the interaction of boundary and feature signals. Boundary computations that are sensitive to luminance steps and to continuous luminance gradients are presented. A new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information is provided, directly addressing the issue of brightness "anchoring." Computer simulations illustrate the model's competencies. Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE-A303-21-93); Office of Naval Research (N00014-91-J-4100); German BMFT grant (413-5839-01 1N 101 C/1); CNPq and NUTES/UFRJ, Brazil
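
    The boundary-and-feature interactions are not spelled out in the abstract, but the Mach-band phenomenon it opens with can be illustrated by a minimal contrast-driven signal: a center-surround (difference-of-Gaussians) filter applied to a luminance ramp. Everything below (the ramp profile and the filter scales) is an illustrative assumption, not the authors' model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-D luminance profile: a ramp between two plateaus, the classic
# Mach-band stimulus cited in the abstract.
x = np.linspace(0.0, 1.0, 512)
luminance = np.clip((x - 0.4) / 0.2, 0.0, 1.0)

# Contrast-driven signal: center response minus surround response.
# The two scales (sigma=2, sigma=8) are illustrative choices.
center = gaussian_filter1d(luminance, sigma=2)
surround = gaussian_filter1d(luminance, sigma=8)
contrast_signal = center - surround

# The filter output overshoots at the knee of the ramp and undershoots
# at its foot (bright and dark Mach bands), even though the luminance
# itself is monotonic across the ramp.
print("overshoot:", contrast_signal.max(), "undershoot:", contrast_signal.min())
```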

    Effects of Highlights on Gloss Perception

    Full text link
    The perception of a glossy surface in a static monochromatic image can occur when a bright highlight is embedded in a compatible context of shading and a bounding contour. Some images naturally give rise to the impression that a surface has a uniform reflectance, characteristic of a shiny object, even though the highlight may only cover a small portion of the surface. Nonetheless, an observer may adopt an attitude of scrutiny in viewing a glossy surface, whereby the impression of gloss is partial and nonuniform at image regions outside of a highlight. Using a rating scale and small probe points to indicate image locations, differential perception of gloss within a single object is investigated in the present study. Observers' gloss ratings are not uniform across the surface, but decrease as a function of distance from the highlight. When, by design, the distance from a highlight is uncoupled from the luminance value at corresponding probe points, the decrease in rated gloss correlates more with the distance than with the luminance change. Experiments also indicate that gloss ratings change as a function of estimated surface distance, rather than as a function of image distance. Surface continuity affects gloss ratings, suggesting that apprehension of 3D surface structure is crucial for gloss perception. Air Force Office of Scientific Research (F49620-98-1-0108); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IIS-97-20333); Office of Naval Research (N00014-95-1-0657, N00014-01-1-0624); Whitaker Foundation (RG-99-0186)

    A Neural Network Model of 3-D Lightness Perception

    Full text link
    A neural network model of 3-D lightness perception is presented which builds upon the FACADE Theory Boundary Contour System/Feature Contour System of Grossberg and colleagues. Early ratio encoding by retinal ganglion neurons, as well as psychophysical results on constancy across different backgrounds (background constancy), are used to provide functional constraints on the theory and suggest a contrast negation hypothesis, which states that ratio measures between coplanar regions are given more weight in the determination of the lightness of the respective regions. Simulations of the model address data on lightness perception, including the coplanar ratio hypothesis, the Benary cross, and White's illusion. Air Force Office of Scientific Research (F49620-92-J-0334); Office of Naval Research (N00014-91-J-4100); HNC SC-94-00
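
    The ratio-encoding constraint can be made concrete with Wallach's ratio principle: the lightness signal for a region depends on its luminance ratio to a neighboring (coplanar) region rather than on absolute luminance, which is why it survives changes of illumination. A minimal sketch with made-up luminances, not the model's actual circuitry:

```python
import numpy as np

def log_ratio(target, neighbor):
    """Ratio encoding: the lightness signal is the log of the luminance
    ratio between a target region and a neighboring region."""
    return np.log(target / neighbor)

# A gray patch on a white background under two illumination levels.
# Absolute luminances differ by a factor of 4, but the ratio (and thus
# the encoded lightness) is unchanged: background constancy.
print(log_ratio(20.0, 100.0))      # dim illumination -> -1.609
print(log_ratio(80.0, 400.0))      # 4x brighter      -> -1.609, identical
```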

    Humans perceive flicker artifacts at 500 Hz.

    Get PDF
    Humans perceive a stable average intensity image without flicker artifacts when a television or monitor updates at a sufficiently fast rate. This rate, known as the critical flicker fusion rate, has been studied for both spatially uniform lights and spatio-temporal displays. These studies have included both stabilized and unstabilized retinal images, and report the maximum observable rate as 50-90 Hz. A separate line of research has reported that fast eye movements known as saccades allow simple modulated LEDs to be observed at very high rates. Here we show that humans perceive visual flicker artifacts at rates over 500 Hz when a display includes high-frequency spatial edges. This rate is many times higher than previously reported. As a result, modern display designs which use complex spatio-temporal coding need to update much faster than conventional TVs, which traditionally presented a simple sequence of natural images.
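
    The implied mechanism is that a saccade sweeps the retina across the display during a modulation cycle, converting temporal modulation into a spatial pattern that survives temporal averaging. Back-of-the-envelope arithmetic with illustrative numbers (a typical saccade velocity, not a value measured in the paper):

```python
# Illustrative parameters, not the paper's measurements.
saccade_speed_deg_per_s = 300.0   # typical peak saccadic velocity
update_rate_hz = 500.0            # display modulation rate

# During one display frame the eye sweeps this far, so successive
# frames land on different retinal positions.
shift_per_frame = saccade_speed_deg_per_s / update_rate_hz   # 0.6 deg

# A high-contrast edge flashed on alternate frames therefore paints a
# retinal stripe pattern with roughly twice that period: a spatial
# artifact the visual system can resolve, even though 500 Hz temporal
# flicker alone is far beyond the fusion limit.
stripe_period_deg = 2.0 * shift_per_frame
print(f"retinal stripe period: {stripe_period_deg:.1f} deg")   # 1.2 deg
```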

    Linking Visual Development and Learning to Information Processing: Preattentive and Attentive Brain Dynamics

    Full text link
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-95-1-0657)

    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    Get PDF
    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. Published version
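
    At a computational level, predictive remapping can be thought of as shifting a retinotopic activity map by the impending saccade vector, using a corollary-discharge copy of the eye movement command, so that fused representations stay aligned across the movement. The toy sketch below illustrates only that shift, not 3D ARTSCAN's actual circuitry; the function name and map are hypothetical.

```python
import numpy as np

def predictive_remap(activity, saccade_vector):
    """Shift a 2-D retinotopic activity map by a (row, col) saccade
    vector, anticipating where features will land after the eye moves."""
    return np.roll(activity, shift=saccade_vector, axis=(0, 1))

# A single attended peak (e.g., the attentional shroud's center) on a
# 10x10 retinotopic map.
activity = np.zeros((10, 10))
activity[4, 4] = 1.0

# Before a saccade 3 positions rightward, the map shifts 3 positions
# leftward, so the representation already sits where the feature will
# fall on the retina after the movement.
remapped = predictive_remap(activity, saccade_vector=(0, -3))
print(np.argwhere(remapped == 1.0))   # [[4 1]]
```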