
    Cortical Dynamics of Boundary Segmentation and Reset: Persistence, Afterimages, and Residual Traces

    Using a neural network model of boundary segmentation and reset, Francis, Grossberg, and Mingolla (1994) linked the percept of persistence to the duration of a boundary segmentation after stimulus offset. In particular, the model simulated the decrease of persistence duration with an increase in stimulus duration and luminance. The present article reveals further evidence for the neural mechanisms used by the theory. Simulations show that the model's reset signals generate orientational afterimages, such as the MacKay effect, when the reset signals can be grouped by a subsequent boundary segmentation that generates illusory contours through them. Simulations also show that the same mechanisms explain properties of residual traces, whose duration increases with stimulus duration and luminance. The model hereby discloses previously unsuspected mechanistic links between data about persistence and afterimages, and helps to clarify the sometimes controversial distinctions between persistence, residual traces, and afterimages.
    Air Force Office of Scientific Research (F49620-92-J-0499); Office of Naval Research (N00014-91-J-4100, N00014-92-J-4015)
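    The persistence result above can be pictured with habituative (gated-dipole-style) dynamics: the longer or brighter the stimulus, the more the ON channel's transmitter depletes, so the rebound (reset) at offset is stronger and extinguishes the boundary sooner. The toy simulation below illustrates only that qualitative mechanism; the function names, constants, and the simple decay equation are assumptions, not the published model.

```python
# Minimal sketch (not the published model) of why visible persistence can
# SHORTEN as stimulus duration and luminance grow: a habituative gate
# depletes during the stimulus, so the offset rebound (reset) is stronger.
# All names and constants below are illustrative assumptions.

def persistence_after_offset(duration, luminance, dt=1.0):
    """Gated-dipole-style simulation: return how long a boundary signal
    outlasts stimulus offset before the rebound resets it."""
    a, b = 0.005, 0.02        # transmitter recovery / depletion rates (assumed)
    tonic = 0.2               # tonic arousal feeding both channels
    z_on = 1.0                # ON-channel transmitter, fully recovered at onset
    # Stimulus epoch: the ON transmitter habituates under input + tonic drive.
    for _ in range(int(duration / dt)):
        z_on += dt * (a * (1.0 - z_on) - b * z_on * (luminance + tonic))
    # Offset: the OFF-channel rebound grows with the ON/OFF depletion imbalance.
    rebound = tonic * (1.0 - z_on)
    # The boundary decays passively and is actively inhibited by the rebound.
    boundary, t, decay, thresh = 1.0, 0.0, 0.01, 0.1
    while boundary > thresh:
        boundary += dt * (-(decay + rebound) * boundary)
        t += dt
    return t

# Longer or brighter stimuli habituate the gate more, so persistence falls.
for dur in (20, 100, 300):
    print(f"duration {dur:3d} -> persistence {persistence_after_offset(dur, 1.0):5.1f}")
for lum in (0.5, 1.0, 2.0):
    print(f"luminance {lum:.1f} -> persistence {persistence_after_offset(100, lum):5.1f}")
```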

    Texture Segregation By Visual Cortex: Perceptual Grouping, Attention, and Learning

    A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes like discontinuities in orientation and texture flow curvature, as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filter bank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is used by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification rates vary from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention.
    Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
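    The classifier pipeline described above pairs a multiple-scale oriented filter bank with an ART network whose vigilance test decides whether bottom-up texture features match a learned category. The sketch below uses crude directional-derivative "oriented energy" features and a fuzzy-ART match rule as a stand-in for the paper's dART classifier; all names and parameters are illustrative assumptions.

```python
# Sketch of the two ingredients named above: multiple-scale oriented
# filtering and an ART-style vigilance match. The fuzzy-ART rule below is
# a stand-in for the paper's distributed ART (dART) classifier, and the
# 'oriented energy' features are a crude surrogate for a Gabor filter bank.

import numpy as np

def oriented_energy(img, n_orient=4, scales=(2, 4)):
    """Pool rectified directional derivatives at several scales into a
    normalized texture signature."""
    gy, gx = np.gradient(img.astype(float))
    feats = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        resp = np.abs(np.cos(theta) * gx + np.sin(theta) * gy)
        for s in scales:
            kern = np.ones(s) / s           # box blur as a cheap envelope
            blurred = np.apply_along_axis(np.convolve, 1, resp, kern, mode='same')
            feats.append(blurred.mean())
    v = np.array(feats)
    return v / (v.sum() + 1e-9)

def art_match(x, prototypes, vigilance=0.75):
    """Fuzzy-ART-style categorization: accept the first prototype passing
    the vigilance test |min(x, w)| / |x| >= vigilance, else commit a new one."""
    for j, w in enumerate(prototypes):
        if np.minimum(x, w).sum() / (x.sum() + 1e-9) >= vigilance:
            prototypes[j] = 0.5 * w + 0.5 * np.minimum(x, w)   # slow recoding
            return j
    prototypes.append(x.copy())
    return len(prototypes) - 1

grating = np.tile(np.sin(np.linspace(0, 8 * np.pi, 32)), (32, 1))
rng = np.random.default_rng(0)
protos = []
print(art_match(oriented_energy(grating), protos))    # commits category 0
print(art_match(oriented_energy(grating.T), protos))  # orthogonal texture: new category
print(art_match(oriented_energy(grating + 0.05 * rng.random((32, 32))), protos))  # matches 0
```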

    A Neural Model of Surface Perception: Lightness, Anchoring, and Filling-in

    This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can hereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than the diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions.
    Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624)
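    The BHLAW rule can be read as a two-step computation: blur the luminance image at a scale related to object size, then rescale lightness so the highest blurred luminance maps to white. The sketch below, with an assumed Gaussian kernel and scale, shows why the blur matters: a one-pixel glint cannot capture the anchor the way it would under a raw highest-luminance-as-white rule.

```python
# Sketch of a Blurred-Highest-Luminance-As-White (BHLAW) style anchor:
# blur the luminance image, then rescale so the highest *blurred* value
# maps to white. Kernel shape and scale are illustrative assumptions.

import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with 1-D convolutions."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode='same')

def bhlaw_lightness(luminance, sigma=3.0, white=1.0):
    """Anchor: the highest blurred luminance defines 'white', so a tiny
    bright speck cannot set the anchor the way a raw maximum would."""
    anchor = gaussian_blur(luminance, sigma).max()
    return np.clip(white * luminance / (anchor + 1e-9), 0.0, None)

# A dim scene with one 1-pixel glint: raw-max anchoring would call the
# glint 'white' and everything else dark; the blurred anchor does not.
scene = np.full((64, 64), 0.2)
scene[32, 32] = 5.0
print("raw-max anchor:", scene.max())
print("BHLAW anchor:  ", gaussian_blur(scene, 3.0).max())
```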

    How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3D Vision by Laminar Circuits of Visual Cortex

    A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
    Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)

    A Neural Network Model of 3-D Lightness Perception

    A neural network model of 3-D lightness perception is presented which builds upon the Boundary Contour System/Feature Contour System of FACADE theory developed by Grossberg and colleagues. Early ratio encoding by retinal ganglion neurons, as well as psychophysical results on constancy across different backgrounds (background constancy), are used to provide functional constraints to the theory and suggest a contrast negation hypothesis, which states that ratio measures between coplanar regions are given more weight in the determination of the lightness of the respective regions. Simulations of the model address data on lightness perception, including the coplanar ratio hypothesis, the Benary cross, and White's illusion.
    Air Force Office of Scientific Research (F49620-92-J-0334); Office of Naval Research (N00014-91-J-4100); HNC (SC-94-00)
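    The coplanar weighting idea lends itself to a small worked example: estimate a region's lightness from log-luminance ratios to the other regions, weighting ratios within the same depth plane more heavily. The regions, weights, and the omitted anchoring step below are illustrative assumptions, not the model's actual circuitry.

```python
# Sketch of coplanar-ratio weighting: lightness from log-luminance ratios,
# with ratios between regions in the same depth plane weighted more.
# Regions, weights, and the anchoring step are illustrative assumptions.

import math

# (luminance, depth_plane) for a few abutting regions
regions = {
    "target":  (0.4, 0),
    "near_bg": (0.8, 0),   # coplanar with the target
    "far_bg":  (0.1, 1),   # different depth plane
}

def lightness(name, regions, w_coplanar=1.0, w_other=0.3):
    """Weighted average of log ratios to every other region; coplanar
    neighbors dominate, as the coplanar ratio hypothesis suggests."""
    lum, plane = regions[name]
    num = den = 0.0
    for other, (lum_o, plane_o) in regions.items():
        if other == name:
            continue
        w = w_coplanar if plane_o == plane else w_other
        num += w * (math.log(lum) - math.log(lum_o))
        den += w
    return num / den   # relative lightness; anchoring would map this to white

# Weighting the coplanar ratio more makes the target look darker (it is
# dimmer than its coplanar background) than an unweighted average would.
print(lightness("target", regions))
print(lightness("target", regions, w_coplanar=1.0, w_other=1.0))
```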

    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single-neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
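    Predictive remapping, as invoked above, amounts to translating retinotopic maps by the planned eye-movement vector before the saccade lands, so that the attentional shroud stays locked to its object. The sketch below shows only that bare geometric operation on a toy attention map; the corollary-discharge signal, map size, and vectors are assumptions.

```python
# Sketch of predictive remapping: shift a retinotopic attention map by the
# planned saccade vector *before* the eyes land, so the attentional shroud
# stays locked to the object. Map size and vectors are assumptions.

import numpy as np

def remap(attention_map, saccade_vec):
    """Corollary-discharge remapping: translate the retinotopic map by the
    upcoming eye displacement (opposite sign: the world shifts the other
    way on the retina)."""
    dy, dx = saccade_vec
    return np.roll(attention_map, shift=(-dy, -dx), axis=(0, 1))

# An attentional shroud over an object on the retina.
shroud = np.zeros((32, 32))
shroud[8:13, 10:15] = 1.0

saccade = (5, 3)                      # planned eye movement (down, right)
predicted = remap(shroud, saccade)    # shroud moves before the saccade ends

# After the saccade the object really is at the predicted retinal locus,
# so fusion and attention need not be rebuilt from scratch.
print(np.argwhere(predicted).min(axis=0))   # new top-left corner of shroud
```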

    Contour Integration Across Polarities and Spatial Gaps: From Local Contrast Filtering to Global Grouping

    This article introduces an experimental paradigm to selectively probe the multiple levels of visual processing that influence the formation of object contours, perceptual boundaries, and illusory contours. The experiments test the assumption that, to integrate contour information across space and contrast sign, a spatially short-range filtering process that is sensitive to contrast polarity feeds into a spatially long-range grouping process that pools signals from opposite contrast polarities. The stimuli consisted of thin subthreshold lines, flashed upon gaps between collinear inducers, which potentially enable the formation of illusory contours. The subthreshold lines were composed of one or more segments with opposite contrast polarities. The polarity nearest to the inducers was varied to differentially excite the short-range filtering process. The experimental results are consistent with neurophysiological evidence for cortical mechanisms of contour processing and with the Boundary Contour System model, which identifies the short-range filtering process with cortical simple cells and the long-range grouping process with cortical bipole cells.
    Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657); Centre National de la Recherche Scientifique (France) (URA 1939)
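    The tested assumption is easy to state computationally: a polarity-sensitive short-range filter (simple-cell-like) feeds a polarity-pooling long-range grouping stage (bipole-like) that requires support on both sides of a gap before bridging it. The 1-D sketch below illustrates that division of labor; the kernels, reach, and stimulus are invented for illustration.

```python
# 1-D sketch of the tested assumption: a polarity-sensitive short-range
# filter (simple-cell-like) feeds a polarity-pooling long-range grouping
# stage (bipole-like) that needs support on BOTH sides before bridging a
# gap. Kernels, sizes, and the stimulus are illustrative assumptions.

import numpy as np

def short_range_filter(signal):
    """Signed local contrast, half-wave rectified into two polarity channels."""
    d = np.convolve(signal, [1.0, -1.0], mode='same')
    return np.maximum(d, 0.0), np.maximum(-d, 0.0)

def long_range_grouping(pos, neg, reach=8):
    """Pool both polarities, then AND-combine support from the two sides,
    as a bipole cell's two receptive-field branches would."""
    pooled = pos + neg                  # polarity-blind, like complex cells
    one_side = np.r_[np.ones(reach), np.zeros(reach + 1)]
    side_a = np.convolve(pooled, one_side, mode='same')
    side_b = np.convolve(pooled, one_side[::-1], mode='same')
    return np.minimum(side_a, side_b)   # both branches must be active

# Two collinear inducers of OPPOSITE contrast polarity flanking a gap.
line = np.zeros(40)
line[5:15] = 1.0       # left inducer, light on dark
line[25:35] = -1.0     # right inducer, polarity reversed
pos, neg = short_range_filter(line)
bridge = long_range_grouping(pos, neg)
print("grouping strength inside the gap:", bridge[16:25].round(2))
```

    Because the long-range stage pools the two rectified channels before grouping, the bridge across the gap forms even though the inducers have opposite polarity, which is the property the experiments probe.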

    View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds

    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Spatial Facilitation by Color and Luminance Edges: Boundary, Surface, and Attentional Factors

    The thresholds of human observers detecting line targets improve significantly when the targets are presented in a spatial context of collinear inducing stimuli. This phenomenon is referred to as 'spatial facilitation', and may reflect the output of long-range interactions between cortical feature detectors. Spatial facilitation has thus far been observed with luminance-defined, achromatic stimuli on achromatic backgrounds. This study compares spatial facilitation with line targets and collinear, edge-like inducers defined by luminance contrast to spatial facilitation with targets and inducers defined by color contrast. The results of a first experiment show that achromatic inducers facilitate the detection of achromatic targets on gray and colored backgrounds, but not the detection of chromatic targets. Chromatic inducers facilitate the detection of chromatic targets on gray and colored backgrounds, but not the detection of achromatic targets. Chromatic spatial facilitation appears to be strongest when inducers and background are isoluminant. The results of a second experiment show that spatial facilitation with chromatic targets and inducers requires a longer exposure duration of the inducers than spatial facilitation with achromatic targets and inducers, which is already fully effective at an inducer exposure of only 30 milliseconds. The findings point towards two separate mechanisms for spatial facilitation with collinear form stimuli: one that operates in the domain of luminance, and one that operates in the domain of color contrast. These results are consistent with neural models of boundary and surface formation which suggest that achromatic and chromatic visual cues are represented on different cortical surface representations, each capable of selectively attracting attention. Multiple copies of these achromatic and chromatic surface representations exist, corresponding to different ranges of perceived depth from an observer, and each can attract attention to itself. Color and contrast differences between inducing and test stimuli, and transient responses to inducing stimuli, can cause attention to shift across these surface representations in ways that sometimes enhance and sometimes interfere with target detection.
    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)