    Neural dynamics of feedforward and feedback processing in figure-ground segregation

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes are exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. Their activation is enhanced when an interior portion of a figure is in the RF, via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.
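    As a rough illustration of the kind of dynamic balancing described above, the sketch below integrates a single shunting rate unit that combines a feedforward drive (standing in for curved contour cell input) with a feedback drive (standing in for closure-detecting, teardrop-like feedback). The equation, names, and parameter values are illustrative assumptions, not the paper's actual model.

        def convex_cell_activity(ff, fb, steps=500, dt=0.01, decay=1.0, ceiling=1.0):
            """Euler-integrate a toy shunting rate unit that balances a
            feedforward drive `ff` (curvature-tuned input) with a feedback
            drive `fb` (closure signal); all values are illustrative."""
            x = 0.0
            for _ in range(steps):
                # excitation saturates at `ceiling`; decay pulls activity back to rest
                dx = -decay * x + (ceiling - x) * (ff + fb)
                x += dt * dx
            return x

        # interior of a figure: curvature cue plus closure feedback -> enhanced activity
        print(convex_cell_activity(ff=0.4, fb=0.6))
        # exterior: the same curvature cue without closure feedback -> weaker activity
        print(convex_cell_activity(ff=0.4, fb=0.0))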

    Psychophysical evidence for competition between real and illusory contour processing

    Luminance-defined and illusory contours provide vital information about object borders. However, real and illusory contour cues tend to be used under different contexts and can interfere with one another. Although some cells in visual cortex process both real and illusory contours equivalently, recent studies (Ramsden et al., 2001) suggest competitive interactions between real (feedforward) and illusory (feedback) contour processing in primate V1 and V2. To test this hypothesis psychophysically, we designed stimuli in which illusory contours are presented with and without the presence of real line components. If real and illusory contour cues are processed by the same mechanism, then the presence of both cues should enhance the percept. If the illusory percept is degraded by the presence of real lines, then independent real and illusory mechanisms are suggested. The perception of a Kanizsa triangle, presented for 250 msec, was measured under three conditions: 1) virtual contour alone, 2) with a short parallel real line superimposed on the virtual contour, or 3) with a short orthogonal real line abutting the virtual contour. The real lines were varied from sub- to supra-threshold contrasts. In a 2AFC paradigm, three subjects fixated on a spot in the triangle center and indicated whether the side of the triangle was bent outwards or inwards. We found that real lines degraded the percept of the illusory contour (i.e. increased angular thresholds). Such interference occurred even at subthreshold real line contrasts and, in some subjects, was greater for the parallel than the orthogonal real line. Our results support the presence of separate mechanisms for the processing of real and illusory contours and suggest that, under some circumstances, real cues can interfere with the processing of illusory cues. We suggest that such interference occurs through feedforward influences of the lines, which interfere with the feedback influences prominent during illusory contour processing.

    Functional Organization of Visual Cortex in the Owl Monkey

    In this study, we compared the organization of orientation preference in visual areas V1, V2, and V3. Within these visual areas, we also quantified the relationship between orientation preference and cytochrome oxidase (CO) staining patterns. V1 maps of orientation preference contained both pinwheels and linear zones. The location of CO blobs did not relate in a systematic way to maps of orientation, although, as in other primates, there were approximately twice as many pinwheels as CO blobs. V2 contained bands of high and low orientation selectivity. The bands of high orientation selectivity were organized into pinwheels and linear zones, but iso-orientation domains were twice as large as those in V1. Quantitative comparisons between bands containing high or low orientation selectivity and CO dark and light bands suggested that at least four functional compartments exist in V2: CO dense bands with either high or low orientation selectivity, and CO light bands with either high or low selectivity. We also demonstrated that two functional compartments exist in V3, with zones of high orientation selectivity corresponding to CO dense areas and zones of low orientation selectivity corresponding to CO pale areas. Together with previous findings, these results suggest that the modular organization of V1 is similar across primates and indeed across most mammals. V2 organization in owl monkeys also appears similar to that of other simians but different from that of prosimians and other mammals. Finally, V3 of owl monkeys shows a compartmental organization for orientation selectivity that remains to be demonstrated in other primates.

    Cortical Synchronization and Perceptual Framing

    How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts which belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations. Office of Naval Research (N00014-92-J-1309, N00014-95-I-0409, N00014-95-I-0657, N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0334, F49620-92-J-0225)
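    The resynchronization idea itself can be pictured with a much simpler construction than the paper's shunting-equation network: two units whose phase lag is pulled toward zero by mutual coupling while noise perturbs it. The generic phase-coupling (Kuramoto-style) sketch below is only meant to illustrate rapid resynchronization of a lag; it does not reproduce the model's stochastic-resonance or competition properties, and all parameters are arbitrary.

        import numpy as np

        def resync_time(coupling=2.0, noise=0.0, initial_lag=1.0,
                        dt=0.001, tol=0.05, seed=0):
            """Time for the phase lag between two coupled units to fall
            below `tol`.  Generic phase-coupling toy, not the paper's
            shunting-equation network; parameters are arbitrary."""
            rng = np.random.default_rng(seed)
            lag = initial_lag
            for step in range(200000):
                # coupling pulls the lag toward zero; noise jitters it
                lag += dt * (-coupling * np.sin(lag)) \
                       + np.sqrt(dt) * noise * rng.standard_normal()
                if abs(lag) < tol:
                    return step * dt
            return float("inf")

        print(resync_time(noise=0.0))   # deterministic resynchronization
        print(resync_time(noise=0.2))   # still resynchronizes with modest noise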

    A Neural Model of Surface Perception: Lightness, Anchoring, and Filling-in

    This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors, and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can hereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions. Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624)
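    The sketch below illustrates the gist of a Blurred-Highest-Luminance-As-White style anchor as the abstract describes it: blur the luminance image, take the highest blurred value as the white anchor, and rescale. The blur width and the final clipping are assumed details for the example; the paper's actual formulation may differ.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bhlaw_anchor(luminance, sigma=5.0):
            """Anchor lightness by treating the highest value of the
            blurred luminance image as white.  `sigma` (blur width) and
            the clipping to [0, 1] are illustrative assumptions."""
            blurred = gaussian_filter(np.asarray(luminance, dtype=float), sigma)
            anchor = blurred.max()              # blurred highest luminance = "white"
            return np.clip(luminance / anchor, 0.0, 1.0)

        # a dim scene is rescaled so its (blurred) brightest region reads as white
        scene = np.random.default_rng(0).uniform(0.0, 0.2, size=(64, 64))
        print(bhlaw_anchor(scene).max())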

    Director Field Model of the Primary Visual Cortex for Contour Detection

    We aim to build the simplest possible model capable of detecting long, noisy contours in a cluttered visual scene. For this, we model the neural dynamics in the primate primary visual cortex in terms of a continuous director field that describes the average rate and the average orientational preference of active neurons at a particular point in the cortex. We then use a linear-nonlinear dynamical model with long-range connectivity patterns to enforce the long-range statistical context present in the analyzed images. The resulting model has substantially fewer degrees of freedom than traditional models, and yet it can distinguish large contiguous objects from the background clutter by suppressing the clutter and by filling in occluded elements of object contours. This results in high-precision, high-recall detection of large objects in cluttered scenes. Parenthetically, our model has a direct correspondence with the Landau-de Gennes theory of nematic liquid crystals in two dimensions. Comment: 9 pages, 7 figures
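    A common way to make a director field concrete, sketched below under assumed conventions (not necessarily the paper's), is to encode each cortical location as a complex number whose magnitude is the mean firing rate and whose angle is twice the preferred orientation, the doubling reflecting the fact that orientation is only defined modulo 180 degrees.

        import numpy as np

        def director(rate, theta):
            """Encode mean rate and preferred orientation (radians) as one
            complex number; the angle is doubled because orientation is
            defined modulo pi.  This encoding is an assumed convention."""
            return rate * np.exp(2j * theta)

        def alignment(d1, d2):
            """Co-linearity of two directors in [0, 1]; 1 = parallel, 0 = orthogonal."""
            if abs(d1) == 0.0 or abs(d2) == 0.0:
                return 0.0
            return 0.5 * (1.0 + np.cos(np.angle(d1) - np.angle(d2)))

        print(alignment(director(1.0, 0.0), director(1.0, np.pi / 2)))  # orthogonal -> 0.0
        print(alignment(director(1.0, 0.3), director(0.5, 0.3)))        # parallel  -> 1.0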

    Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making

    How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
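    To see why integration across apertures is needed at all, the sketch below uses the classic intersection-of-constraints idea: each aperture measures only the speed of a contour along its normal, and a global 2-D velocity consistent with several such 1-D measurements can be recovered by solving the stacked constraints. This is a standard textbook construction used purely for illustration; it is not the neural mechanism proposed in the paper.

        import numpy as np

        def intersection_of_constraints(normals, speeds):
            """Least-squares solve n_i . v = s_i for the global velocity v,
            given each aperture's contour normal n_i and the locally
            measured normal speed s_i."""
            N = np.asarray(normals, dtype=float)
            s = np.asarray(speeds, dtype=float)
            v, *_ = np.linalg.lstsq(N, s, rcond=None)
            return v

        # two apertures seeing differently oriented edges of one rigidly moving object
        normals = [[1.0, 0.0], [0.0, 1.0]]   # unit normals of the local edges
        speeds = [2.0, 1.0]                  # speed measured along each normal
        print(intersection_of_constraints(normals, speeds))  # true velocity [2. 1.]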

    Computing optical flow in the primate visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how gradient models, a well-known class of motion algorithms, can be implemented within the magnocellular pathway of the primate's visual system. Our cooperative algorithm computes optical flow in two stages. In the first stage, assumed to be located in primary visual cortex, local motion is measured, while spatial integration occurs in the second stage, assumed to be located in the middle temporal area (MT). The final optical flow is extracted in this second stage using population coding, such that the velocity is represented by the vector sum of neurons coding for motion in different directions. Our theory, relating the single-cell to the perceptual level, accounts for a number of psychophysical and electrophysiological observations and illusions.
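    The population-coding readout mentioned above can be illustrated with a minimal vector-sum decoder, sketched below. The number of direction-tuned cells and the cosine-like tuning curve are assumptions made only for the example.

        import numpy as np

        def population_vector(rates, preferred_dirs):
            """Decode a 2-D velocity as the vector sum of responses of
            direction-tuned cells with the given preferred directions."""
            rates = np.asarray(rates, dtype=float)
            dirs = np.asarray(preferred_dirs, dtype=float)
            return np.array([np.sum(rates * np.cos(dirs)),
                             np.sum(rates * np.sin(dirs))])

        # eight MT-like cells with evenly spaced preferred directions
        dirs = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
        true_dir = np.pi / 4
        rates = np.exp(np.cos(dirs - true_dir))      # cosine-like tuning around 45 degrees
        vx, vy = population_vector(rates, dirs)
        print(np.degrees(np.arctan2(vy, vx)))        # decoded direction, roughly 45 degrees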