    Preserved local but disrupted contextual figure-ground influences in an individual with abnormal function of intermediate visual areas

    Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG.

    The Perceptual Genesis of Near Versus Far in Pictorial Stimuli

    The experiments reported herein probe the visual cortical mechanisms that control near-far percepts in response to two-dimensional stimuli. Figural contrast is found to be a principal factor for the emergence of percepts of near versus far in pictorial stimuli, especially when stimulus duration is brief. Pictorial factors such as interposition (Experiment 1) and partial occlusion (Experiments 2 and 3) may cooperate or compete with contrast factors, in the manner predicted by the FACADE model. In particular, if the geometrical configuration of an image favors activation of cortical bipole grouping cells, as at the top of a T-junction, then this advantage can cooperate with the contrast of the configuration to facilitate a near-far percept at a lower contrast than at an X-junction. The more balanced bipole competition in the X-junction case takes longer to resolve than in the T-junction case (Experiment 3). Human Frontier Science Program Organization (SF9/98); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-92-J-1309, N00014-95-1-0494, N00014-95-1-0657)
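
    The abstract's central prediction can be paraphrased in a few lines of code. The sketch below is only an illustration of that qualitative claim, not the FACADE model itself: the collinear top of a T-junction is assumed to give full bipole grouping support while the crossed contours of an X-junction give weaker, more balanced support, so the 'near' percept reaches threshold at a lower contrast for the T-junction. All weights, thresholds, and the linear combination are invented for the example.

```python
# Toy comparison of T- vs X-junctions, inspired by (but not implementing) FACADE.
# All numeric values are illustrative assumptions.

def bipole_support(junction_type: str) -> float:
    """Idealized collinear grouping support at the junction.

    The top bar of a T-junction is a single collinear contour, so a bipole
    grouping cell gets support from both flanks (1.0).  At an X-junction the
    two crossing contours compete, leaving weaker net support (0.5).
    """
    return {"T": 1.0, "X": 0.5}[junction_type]

def near_signal(contrast: float, junction_type: str,
                w_contrast: float = 1.0, w_bipole: float = 0.6) -> float:
    """Combine figural contrast with grouping support into a 'nearness' signal."""
    return w_contrast * contrast + w_bipole * bipole_support(junction_type)

def minimum_contrast_for_near(junction_type: str, threshold: float = 1.0) -> float:
    """Lowest contrast (in steps of 0.01) at which the 'near' percept wins."""
    contrast = 0.0
    while near_signal(contrast, junction_type) < threshold - 1e-9:
        contrast += 0.01
    return round(contrast, 2)

if __name__ == "__main__":
    # The T-junction crosses threshold at a lower contrast than the X-junction.
    print("T-junction:", minimum_contrast_for_near("T"))   # 0.4
    print("X-junction:", minimum_contrast_for_near("X"))   # 0.7
```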

    Three dimensional transparent structure segmentation and multiple 3D motion estimation from monocular perspective image sequences

    A three dimensional scene can be segmented using different cues, such as boundaries, texture, motion, discontinuities of the optical flow, stereo, models for structure, etc. We investigate segmentation based upon one of these cues, namely three dimensional motion. If the scene contains transparent objects, the two dimensional (local) cues are inconsistent, since neighboring points with similar optical flow can correspond to different objects. We present a method for performing three dimensional motion-based segmentation of (possibly) transparent scenes together with recursive estimation of the motion of each independent rigid object from monocular perspective images. Our algorithm is based on a recently proposed method for rigid motion reconstruction and a validation test which allows us to initialize the scheme and detect outliers during the motion estimation procedure. The scheme is tested on challenging real and synthetic image sequences. Segmentation is performed for Ullman's experiment of two transparent cylinders rotating about the same axis in opposite directions.
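
    As a concrete illustration of why local cues fail here and why fitting motion models globally can still succeed, the sketch below separates two spatially interleaved point populations moving with opposite rotations by repeatedly fitting a 2D affine motion model with RANSAC and peeling off its inliers. This is a generic stand-in, not the authors' recursive rigid-motion estimator for perspective images; the affine model, tolerances, and the synthetic data are assumptions.

```python
# Generic motion-based segmentation of "transparent" (spatially interleaved) point
# sets: greedily fit one affine motion model at a time with RANSAC.  Illustrative
# only; the paper uses recursive rigid-motion estimation from perspective images.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map A (2x3) such that dst ~ A @ [x, y, 1]^T."""
    X = np.hstack([src, np.ones((len(src), 1))])          # N x 3
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)           # 3 x 2
    return A.T                                            # 2 x 3

def ransac_motion(src, dst, iters=300, tol=0.02, rng=None):
    """Return the inlier mask of the best single affine motion found by RANSAC."""
    rng = np.random.default_rng(rng)
    X = np.hstack([src, np.ones((len(src), 1))])
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        inliers = np.linalg.norm(X @ A.T - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

def segment_transparent_motions(src, dst, max_objects=2):
    """Greedily peel off one motion at a time; label -1 means unassigned."""
    labels = -np.ones(len(src), dtype=int)
    remaining = np.arange(len(src))
    for k in range(max_objects):
        if len(remaining) < 3:
            break
        inliers = ransac_motion(src[remaining], dst[remaining], rng=k)
        labels[remaining[inliers]] = k
        remaining = remaining[~inliers]
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(200, 2))
    rot = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    # Interleaved "transparent" populations rotating in opposite directions, as in
    # the counter-rotating cylinder demonstration (here flattened to 2D points).
    group = rng.random(200) < 0.5
    dst = np.where(group[:, None], pts @ rot(0.3).T, pts @ rot(-0.3).T)
    labels = segment_transparent_motions(pts, dst)
    print("points per recovered motion:", np.bincount(labels[labels >= 0]))
```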

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)
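
    One ingredient of the model, the capture of ambiguous aperture signals by sparse but amplified feature-tracking signals, can be caricatured in a few lines. The sketch below is not the 3D FORMOTION model; the weighted average stands in for the MT-MST grouping stage, and all numbers are assumptions.

```python
# Toy illustration of feature-tracking capture of ambiguous aperture signals.
# Not the 3D FORMOTION model; weights and the averaging scheme are assumptions.
import numpy as np

true_velocity = np.array([1.0, 0.0])            # a tilted bar translating rightward

# 20 ambiguous interior measurements: through an aperture only the component of
# motion normal to the 45-degree contour is available.
normal = np.array([1.0, 1.0]) / np.sqrt(2)
ambiguous = np.tile((true_velocity @ normal) * normal, (20, 1))

# 2 unambiguous feature-tracking measurements at the bar's endpoints.
tracked = np.tile(true_velocity, (2, 1))

def grouped_estimate(w_tracked, w_ambiguous=1.0):
    """Weighted average of all local signals, standing in for MT-MST grouping."""
    total = w_tracked * tracked.sum(axis=0) + w_ambiguous * ambiguous.sum(axis=0)
    return total / (w_tracked * len(tracked) + w_ambiguous * len(ambiguous))

print("equal weights:      ", grouped_estimate(w_tracked=1.0))   # pulled toward the normal
print("amplified tracking: ", grouped_estimate(w_tracked=50.0))  # close to the true (1, 0)
```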

    Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making

    How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer in its trajectory. Form and motion processes are needed to accomplish this using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
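
    For the probabilistic decision-making part, a standard drift-diffusion caricature captures the behavioral signatures the abstract refers to: faster and more accurate choices at higher motion coherence. The sketch below is a generic drift-diffusion model, not the paper's laminar cortical model of parietal cortex; the drift scaling, noise, and bound are assumed values.

```python
# Generic drift-diffusion sketch of decisions about random-dot motion direction.
# Not the paper's neural model; all parameter values are assumptions.
import random

def ddm_trial(coherence, drift_per_percent=0.01, noise=1.0, bound=30.0):
    """Accumulate noisy evidence to +/- bound; return (choice, decision time in steps)."""
    evidence, steps = 0.0, 0
    drift = drift_per_percent * coherence
    while abs(evidence) < bound:
        evidence += drift + random.gauss(0.0, noise)
        steps += 1
    return (1 if evidence > 0 else -1), steps

def simulate(coherence, trials=500):
    outcomes = [ddm_trial(coherence) for _ in range(trials)]
    accuracy = sum(choice == 1 for choice, _ in outcomes) / trials
    mean_rt = sum(steps for _, steps in outcomes) / trials
    return accuracy, mean_rt

if __name__ == "__main__":
    for coherence in (0.0, 3.2, 12.8, 51.2):     # % coherence levels typical of such displays
        acc, rt = simulate(coherence)
        print(f"coherence {coherence:5.1f}%  accuracy {acc:.2f}  mean decision time {rt:7.1f} steps")
```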

    Object-guided Spatial Attention in Touch: Holding the Same Object with Both Hands Delays Attentional Selection

    Previous research has shown that attention to a specific location on a uniform visual object spreads throughout the entire object. Here we demonstrate that, similar to the visual system, spatial attention in touch can be object guided. We measured event-related brain potentials to tactile stimuli arising from objects held by observers' hands, when the hands were placed either near each other or far apart, holding two separate objects, or when they were far apart but holding a common object. Observers covertly oriented their attention to the left, to the right, or to both hands, following bilaterally presented tactile cues indicating likely tactile target location(s). Attentional modulations for tactile stimuli at attended compared to unattended locations were present in the time range of early somatosensory components only when the hands were far apart, but not when they were near. This was found to reflect enhanced somatosensory processing at attended locations rather than suppressed processing at unattended locations. Crucially, holding a common object with both hands delayed attentional selection, similar to when the hands were near. This shows that the proprioceptive distance effect on tactile attentional selection arises when distant event locations can be treated as separate and unconnected sources of tactile stimulation, but not when they form part of the same object. These findings suggest that, similar to visual attention, both space- and object-based attentional mechanisms can operate when we select between tactile events on our body surface.
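
    To make the measurement logic concrete, the sketch below computes an attended-minus-unattended amplitude difference in an early post-stimulus window for each posture/object condition, using simulated epochs whose effect sizes merely mimic the qualitative pattern reported above. The 80-125 ms window, the sampling rate, and the simulated data are assumptions; this is not the authors' analysis pipeline.

```python
# Sketch of an early-window ERP attention effect per condition (illustrative only).
import numpy as np

SFREQ = 500.0        # Hz, assumed sampling rate
EPOCH_START = -0.1   # s, epoch onset relative to tactile stimulus onset

def mean_amplitude(epochs, t_min=0.080, t_max=0.125):
    """Mean amplitude of (trials x samples) epochs in an assumed early window."""
    i0 = int(round((t_min - EPOCH_START) * SFREQ))
    i1 = int(round((t_max - EPOCH_START) * SFREQ))
    return epochs[:, i0:i1].mean()

def attention_effect(attended, unattended):
    """Attended-minus-unattended mean amplitude (arbitrary microvolt-like units)."""
    return mean_amplitude(attended) - mean_amplitude(unattended)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_trials, n_samples = 60, int(0.5 * SFREQ)          # 60 trials, 500 ms epochs
    # Simulated effect sizes chosen only to mimic the qualitative pattern reported
    # above: an early effect when the hands are far apart holding separate objects,
    # none when the hands are near or hold a single common object.
    simulated_effect = {"far, two objects": 1.5, "near, two objects": 0.0,
                        "far, one common object": 0.0}
    for condition, effect in simulated_effect.items():
        attended = rng.normal(effect, 2.0, (n_trials, n_samples))
        unattended = rng.normal(0.0, 2.0, (n_trials, n_samples))
        print(f"{condition:24s}  early attention effect = "
              f"{attention_effect(attended, unattended):+.2f}")
```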

    A demonstration of 'broken' visual space

    It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
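
    The core logical point, that a set of pairwise distance judgements is explainable by a single 3D layout only if it admits a consistent ordering, can be checked mechanically. The sketch below (my own illustration, not the authors' analysis code) treats each 'farther than' judgement as a directed edge and tests whether the graph is acyclic; the example pairs come from the pattern quoted in the abstract.

```python
# Consistency check for pairwise "farther than" judgements via cycle detection.
def has_consistent_ordering(farther_than):
    """farther_than: iterable of (a, b) pairs meaning 'a was judged farther than b'.
    Returns True if some single depth assignment satisfies every judgement."""
    graph = {}
    for a, b in farther_than:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())

    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def cyclic(node):
        colour[node] = GREY
        for nxt in graph[node]:
            if colour[nxt] == GREY or (colour[nxt] == WHITE and cyclic(nxt)):
                return True
        colour[node] = BLACK
        return False

    return not any(colour[n] == WHITE and cyclic(n) for n in graph)

if __name__ == "__main__":
    # The pattern from the abstract: A > B > D yet also A < C < D.
    judgements = [("A", "B"), ("B", "D"),        # A farther than B, B farther than D
                  ("C", "A"), ("D", "C")]        # C farther than A, D farther than C
    print(has_consistent_ordering(judgements))                   # False: no coherent layout
    print(has_consistent_ordering([("A", "B"), ("B", "C")]))     # True
```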

    SOVEREIGN: An Autonomous Neural System for Incrementally Learning Planned Action Sequences to Navigate Towards a Rewarded Goal

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds. Riverside Research Institute; Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225); National Science Foundation (IRI 90-24877, SBE-0345378); Office of Naval Research (N00014-92-J-1309, N00014-91-J-4100, N00014-01-1-0624, N00014-01-1-0624); Pacific Sierra Research (PSR 91-6075-2)
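
    One theme of the abstract, the reward-driven shift from variable reactive exploration to a planned movement sequence released by a volitional gate, can be caricatured in a short script. The sketch below is a drastically simplified illustration, not the SOVEREIGN architecture: the grid world, the single stored-chunk value, the learning rate, and the gating threshold are all invented for the example.

```python
# Caricature of a reactive-to-planned transition via reinforcement of a plan chunk.
# Not the SOVEREIGN model; the environment and all parameters are assumptions.
import random

ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
GOAL = (2, 1)                 # rewarded location in a toy grid world
GATE_THRESHOLD = 0.8          # volitional gate releases a plan once its value is high
LEARNING_RATE = 0.3

def run_sequence(seq, start=(0, 0)):
    """Execute a movement sequence from start; reward if it ends on the goal."""
    x, y = start
    for action in seq:
        dx, dy = ACTIONS[action]
        x, y = x + dx, y + dy
    return 1.0 if (x, y) == GOAL else 0.0

def explore_reactively(max_len=8):
    """Variable reactive exploration: a random movement sequence."""
    return tuple(random.choice("NSEW") for _ in range(random.randint(1, max_len)))

plan_chunks = {}              # stored sequence chunk -> learned value
planned_trials = 0
for trial in range(300):
    best = max(plan_chunks, key=plan_chunks.get, default=None)
    if best is not None and plan_chunks[best] >= GATE_THRESHOLD:
        seq = best                                   # gate open: planned behaviour
        planned_trials += 1
    elif best is not None and random.random() < plan_chunks[best]:
        seq = best                                   # partially learned plan sometimes wins
    else:
        seq = explore_reactively()                   # reactive exploration
    reward = run_sequence(seq)
    old = plan_chunks.get(seq, 0.0)
    plan_chunks[seq] = old + LEARNING_RATE * (reward - old)

best = max(plan_chunks, key=plan_chunks.get, default=None)
if best is not None and plan_chunks[best] > 0:
    print("learned plan:", "".join(best), "value:", round(plan_chunks[best], 2),
          "planned trials:", planned_trials)
else:
    print("no rewarded plan found during exploration")
```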