5,445 research outputs found

    Feedback and surround modulated boundary detection

    Get PDF
    Other grants: CERCA Programme/Generalitat de Catalunya. Edges are key components of any visual scene, to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The "classical approach" assumes that these cells respond only to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections significantly influence their responses. In this work we propose a biologically-inspired edge detection model in which orientation-selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of receptive field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on three benchmark datasets show a significant improvement over current non-learning and biologically-inspired state-of-the-art algorithms, while remaining competitive with learning-based methods.
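The front end of such a model, orientation-selective units built from the first derivative of a Gaussian, can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' implementation: the kernel size, sigma, and number of orientations are assumptions, and the surround contributions and feedback connections are omitted.

```python
import numpy as np

def gaussian_derivative_kernel(sigma, theta, size):
    """First derivative of a 2D Gaussian, oriented at angle theta (radians)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # axis across the edge
    yr = -x * np.sin(theta) + y * np.cos(theta)  # axis along the edge
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return -xr / sigma ** 2 * g  # derivative taken across orientation theta

def oriented_edge_responses(image, sigma=1.5, n_orientations=4):
    """Rectified responses of oriented units, maximum taken over orientations."""
    size = int(6 * sigma) | 1  # odd width covering about +/- 3 sigma
    r = size // 2
    F = np.fft.fft2(image)
    out = np.zeros(image.shape, dtype=float)
    for k in range(n_orientations):
        kern = gaussian_derivative_kernel(sigma, k * np.pi / n_orientations, size)
        K = np.fft.fft2(kern, s=image.shape)  # circular convolution via FFT
        resp = np.abs(np.real(np.fft.ifft2(F * K)))
        resp = np.roll(resp, (-r, -r), axis=(0, 1))  # undo the kernel offset
        out = np.maximum(out, resp)
    return out
```

A vertical step edge then produces a response peaked at the edge location, with near-zero response in uniform regions.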

    Acetylcholine neuromodulation in normal and abnormal learning and memory: vigilance control in waking, sleep, autism, amnesia, and Alzheimer's disease

    Get PDF
    This article provides a unified mechanistic neural explanation of how learning, recognition, and cognition break down during Alzheimer's disease, medial temporal amnesia, and autism. It also clarifies why there are often sleep disturbances during these disorders. A key mechanism is how acetylcholine modulates vigilance control in cortical layer

    Texture Segregation By Visual Cortex: Perceptual Grouping, Attention, and Learning

    Get PDF
    A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes like discontinuities in orientation and texture flow curvature as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. 
The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification rates vary from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
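The front end of the classification pathway, a multiple-scale oriented filter bank producing one texture-energy value per channel, might look as follows. This is a hedged sketch: the Gabor parameters are assumptions, and the dART classifier, grouping, filling-in, and attentional stages are not modeled here.

```python
import numpy as np

def gabor_kernel(sigma, theta, wavelength, size=15):
    """Even-symmetric Gabor: an oriented cosine grating under a Gaussian envelope."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def channel_energy(patch, kern):
    """Mean squared response of an FFT-based circular convolution."""
    P = np.fft.fft2(patch - patch.mean())
    K = np.fft.fft2(kern, s=patch.shape)
    return float(np.mean(np.real(np.fft.ifft2(P * K)) ** 2))

def texture_features(patch, sigmas=(2.0, 4.0), n_orientations=4):
    """One energy value per (scale, orientation) channel, as a feature vector."""
    return np.array([
        channel_energy(patch, gabor_kernel(s, k * np.pi / n_orientations, 4 * s))
        for s in sigmas
        for k in range(n_orientations)
    ])
```

Patches with differently oriented micro-texture then map to clearly separated feature vectors, which a classifier can learn to distinguish.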

    A Neural Model of Surface Perception: Lightness, Anchoring, and Filling-in

    Full text link
    This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can hereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than the diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. 
The model is also able to process natural images under variable lighting conditions. Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624)
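The BHLAW anchoring rule lends itself to a compact sketch: blur the luminance image, take the highest blurred value as the "white" anchor, and rescale lightness against it. Small bright specks are discounted because blurring dilutes them, while large bright regions survive blurring and set the anchor; values above 1.0 then signal self-luminous regions. The blur width and the FFT-based circular blur are illustrative assumptions, not the paper's retinal or cortical circuitry.

```python
import numpy as np

def blur(img, sigma):
    """Circular Gaussian blur via the FFT (adequate for a sketch)."""
    h, w = img.shape
    dy = np.minimum(np.arange(h), h - np.arange(h))  # circular distances
    dx = np.minimum(np.arange(w), w - np.arange(w))
    g = np.exp(-(dy[:, None] ** 2 + dx[None, :] ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g)))

def bhlaw_lightness(luminance, sigma=3.0):
    """Blurred-Highest-Luminance-As-White: anchor lightness so that the
    highest luminance of the *blurred* image maps to white (1.0).
    Outputs above 1.0 correspond to self-luminous appearance."""
    anchor = blur(luminance, sigma).max()
    return luminance / anchor
```

On an image with one large bright surface and one tiny highlight, the surface anchors near white while the highlight is rated above white, in line with the scale sensitivity the rule is meant to capture.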

    A Neural Model of First-order and Second-order Motion Perception and Magnocellular Dynamics

    Full text link
    A neural model of motion perception simulates psychophysical data concerning first-order and second-order motion stimuli, including the reversal of perceived motion direction with distance from the stimulus (I display), and data about directional judgments as a function of relative spatial phase or spatial and temporal frequency. Many other second-order motion percepts that have been ascribed to a second non-Fourier processing stream can also be explained in the model by interactions between ON and OFF cells within a single, neurobiologically interpreted magnocellular processing stream. Yet other percepts may be traced to interactions between form and motion processing streams, rather than to processing within multiple motion processing streams. The model hereby explains why monkeys with lesions of the parvocellular layers, but not the magnocellular layers, of the lateral geniculate nucleus (LGN) are capable of detecting the correct direction of second-order motion; why most cells in area MT are sensitive to both first-order and second-order motion; and why, after an APB injection selectively blocks retinal ON bipolar cells, cortical cells are sensitive only to the motion of a moving bright bar's trailing edge. Magnocellular LGN cells show relatively transient responses while parvocellular LGN cells show relatively sustained responses. Correspondingly, the model bases its directional estimates on the outputs of model ON and OFF transient cells that are organized in opponent circuits wherein antagonistic rebounds occur in response to stimulus offset. Center-surround interactions convert these ON and OFF outputs into responses of lightening and darkening cells that are sensitive both to direct inputs and to rebound responses in their receptive field centers and surrounds. 
The total pattern of activity increments and decrements is used by subsequent processing stages (spatially short-range filters, competitive interactions, spatially long-range filters, and directional grouping cells) to determine the perceived direction of motion.
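The opponent ON/OFF circuit with antagonistic rebounds can be illustrated with a discrete-time gated-dipole sketch: each channel's habituating transmitter gate depletes during stimulation, so when the phasic input switches off the less-depleted opponent channel transiently wins, producing an OFF rebound. All parameter values here are assumptions chosen only to make the transient and the rebound visible.

```python
import numpy as np

def gated_dipole(J, I=0.5, A=0.1, B=1.0, k=2.0, dt=0.1):
    """Opponent ON/OFF circuit with habituating transmitter gates.
    J: array of phasic input to the ON channel over time.
    I: tonic arousal driving both channels.
    Returns rectified ON and OFF outputs; the OFF channel shows an
    antagonistic rebound after ON-input offset because the ON gate
    has been depleted by use."""
    z_on = z_off = B
    on_out, off_out = [], []
    for j in J:
        s_on, s_off = I + j, I
        # Transmitter recovers toward B at rate A and is inactivated by use.
        z_on += dt * (A * (B - z_on) - k * s_on * z_on)
        z_off += dt * (A * (B - z_off) - k * s_off * z_off)
        x_on, x_off = s_on * z_on, s_off * z_off  # gated signals
        on_out.append(max(x_on - x_off, 0.0))     # rectified opponent outputs
        off_out.append(max(x_off - x_on, 0.0))
    return np.array(on_out), np.array(off_out)
```

Driving the circuit with a step of input yields an ON transient at onset and an OFF rebound at offset that decays as the depleted gate recovers.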

    An Active Pattern Recognition Architecture for Mobile Robots

    Full text link
    An active, attentionally-modulated recognition architecture is proposed for object recognition and scene analysis. The proposed architecture forms part of navigation and trajectory planning modules for mobile robots. Key characteristics of the system include movement planning and execution based on environmental factors and internal goal definitions. Real-time implementation of the system is based on space-variant representation of the visual field, as well as an optimal visual processing scheme utilizing separate and parallel channels for the extraction of boundaries and stimulus qualities. A spatial and temporal grouping module (VWM) allows for scene scanning, multi-object segmentation, and featural/object priming. VWM is used to modulate a trajectory formation module capable of redirecting the focus of spatial attention. Finally, an object recognition module based on adaptive resonance theory is interfaced through VWM to the visual processing module. The system is capable of using information from different modalities to disambiguate sensory input. Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-92-J-1309); Consejo Nacional de Ciencia y Tecnología (63462)

    Temporal Dynamics of Binocular Disparity Processing with Corticogeniculate Interactions

    Full text link
    A neural model of binocular vision is developed to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing. The model shows how feedforward and feedback interactions among LGN ON and OFF cells and cortical simple, complex, and hypercomplex cells can simulate binocular summation, the Pulfrich effect, and the fusion of delayed anticorrelated stereograms. Model retinal ON and OFF cells are linked by an opponent process capable of generating antagonistic rebounds from OFF cells after offset of an ON cell input. Spatially displaced ON and OFF cells excite simple cells. Opposite-polarity simple cells compete before their half-wave rectified outputs excite complex cells. Complex cells binocularly match like-polarity simple cell outputs before pooling half-wave rectified signals from opposite polarities. Competitive feedback among complex cells leads to sharpening of disparity selectivity and normalizes cell activity. Slow inhibitory interneurons help to reset complex cells after input offset. The Pulfrich effect occurs because the delayed input from one eye fuses with the present input from the other eye to create a disparity. Binocular summation occurs for stimuli of brief duration or of low contrast because competitive normalization takes time, and cannot occur for very brief or weak stimuli. At brief SOAs, anticorrelated stereograms can be fused because the rebound mechanism ensures that the present image to one eye can fuse with the afterimage from a previous image to the other eye. Corticogeniculate feedback embodies a matching process that enhances the speed and temporal accuracy of complex cell disparity tuning. Model mechanisms interact to control the stable development of sharp disparity tuning. Air Force Office of Scientific Research (F19620-92-J-0499, F49620-92-J-0334, F49620-92-J-0225); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-1015, N00014-91-J-4100)
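The step in which complex cells binocularly match like-polarity simple-cell outputs before pooling rectified signals can be illustrated with a toy disparity estimator. This is a sketch under simplifying assumptions (1-D signals, a winner-take-all over candidate disparities, no competitive feedback, interneurons, or corticogeniculate matching).

```python
import numpy as np

def half_wave(x):
    """Half-wave rectification, as in the simple-cell outputs."""
    return np.maximum(x, 0.0)

def complex_cell_disparity(left, right, disparities):
    """For each candidate disparity d, shift the right-eye signal by d and
    match like-polarity simple-cell responses (light-ON with light-ON,
    dark-ON with dark-ON), then pool the rectified matches.
    Returns the disparity with the strongest pooled match."""
    scores = []
    for d in disparities:
        r = np.roll(right, -d)  # align right-eye signal at disparity d
        on_match = np.minimum(half_wave(left), half_wave(r))
        off_match = np.minimum(half_wave(-left), half_wave(-r))
        scores.append(float(np.sum(on_match + off_match)))
    return disparities[int(np.argmax(scores))]
```

A right-eye pattern that is a shifted copy of the left-eye pattern is then matched at the shift that aligns the two, which is the disparity a complex cell tuned to that offset would signal.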

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    Full text link
    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)