Neural dynamics of invariant object recognition: relative disparity, binocular fusion, and predictive eye movements
How does the visual cortex learn invariant object categories as an observer scans
a depthful scene? Two neural processes that contribute to this ability are modeled in this
thesis.
The first model clarifies how an object is represented in depth. Cortical area V1
computes absolute disparity, which is the horizontal difference in retinal location of an
image in the left and right foveas. Many cells in cortical area V2 compute relative
disparity, which is the difference in absolute disparity of two visible features. Relative,
but not absolute, disparity is unaffected by the distance of visual stimuli from an
observer, and by vergence eye movements. A laminar cortical model of V2 that includes
shunting lateral inhibition of disparity-sensitive layer 4 cells causes a peak shift in cell
responses that transforms absolute disparity from V1 into relative disparity in V2.
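The arithmetic behind this invariance is simple to illustrate. The following sketch is not the laminar V2 model itself; it only demonstrates, with made-up retinal coordinates and a made-up vergence shift, why relative disparity is unaffected by vergence eye movements while absolute disparity is not:

```python
# Illustrative sketch (not the thesis model): relative disparity as the
# difference of two absolute disparities. Feature positions and the
# vergence shift below are invented for demonstration.

def absolute_disparity(left_x, right_x):
    """Horizontal difference in retinal position between the two eyes."""
    return left_x - right_x

def relative_disparity(abs_d1, abs_d2):
    """Difference between the absolute disparities of two features."""
    return abs_d1 - abs_d2

# Two visible features at different depths (retinal x, arbitrary units).
f1_left, f1_right = 2.0, 1.0   # absolute disparity = 1.0
f2_left, f2_right = 3.5, 0.5   # absolute disparity = 3.0

d1 = absolute_disparity(f1_left, f1_right)
d2 = absolute_disparity(f2_left, f2_right)
rel = relative_disparity(d1, d2)

# A vergence eye movement shifts the two retinal images of every
# feature in opposite directions, changing every absolute disparity...
v = 0.7
d1_v = absolute_disparity(f1_left + v, f1_right - v)
d2_v = absolute_disparity(f2_left + v, f2_right - v)

# ...but the shift is common to both features, so it cancels in the
# difference: relative disparity is unchanged.
assert abs(relative_disparity(d1_v, d2_v) - rel) < 1e-9
```

Because the common vergence term cancels in the subtraction, a cell coding relative disparity carries depth information that is stable across fixation distance, which is why the V1-to-V2 peak-shift transformation matters.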
The second model simulates how the brain maintains stable percepts of a 3D
scene during binocular movements. The visual cortex initiates the formation of a 3D boundary and surface representation by binocularly fusing corresponding features from
the left and right retinotopic images. However, after each saccadic eye movement, every
scenic feature projects to a different combination of retinal positions than before the
saccade. Yet the 3D representation, resulting from the prior fusion, is stable through the
post-saccadic re-fusion. One key to stability is predictive remapping: the system
anticipates the new retinal positions of features entailed by eye movements by using gain
fields that are updated by eye movement commands. The 3D ARTSCAN model
developed here simulates how perceptual, attentional, and cognitive processes across
different brain regions within the What and Where visual processing streams interact to
coordinate predictive remapping, stable 3D boundary and surface perception, spatial
attention, and the learning of object categories that are invariant to changes in an object's
retinal projections. Such invariant learning helps the system to avoid treating each new
view of the same object as a distinct object to be learned. The thesis hereby shows how a
process that enables invariant object category learning can be extended to also enable
stable 3D scene perception.
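The core of predictive remapping can be sketched in a few lines. This is a hedged simplification, not the 3D ARTSCAN gain-field equations: here a gain-field update is reduced to a single vector shift driven by an efference copy of the saccade command, with all coordinates invented for illustration:

```python
# Hedged sketch of predictive remapping (not the 3D ARTSCAN circuit):
# an efference copy of the saccade command shifts each stored
# retinotopic position to its anticipated post-saccadic location
# before the eyes land, so the fused 3D representation can be matched
# against the post-saccadic image. Coordinates are illustrative.

def predict_remap(features, saccade):
    """Shift stored retinotopic positions by the negated saccade vector."""
    dx, dy = saccade
    return [(x - dx, y - dy) for (x, y) in features]

fused = [(3.0, 1.0), (-2.0, 4.0)]   # pre-saccadic retinotopic positions
saccade = (1.0, -1.0)               # commanded eye movement

predicted = predict_remap(fused, saccade)

# When the eyes actually move by `saccade`, every world-fixed feature's
# retinal position shifts by the opposite vector, so the prediction
# lines up with the post-saccadic input and fusion need not restart.
post_saccadic = [(x - saccade[0], y - saccade[1]) for (x, y) in fused]
assert predicted == post_saccadic
```

The point of the sketch is the timing: because the shift is driven by the motor command rather than by the new retinal image, the anticipated positions are available before the post-saccadic input arrives.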
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
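The attractor/repeller idea can be made concrete with a toy dynamical system. This is a sketch in the spirit of behavioral steering-dynamics models, not the paper's cortical circuit, and every gain, decay constant, and angle below is invented: the goal linearly attracts the heading angle, while each obstacle repels it with a force that decays with angular distance.

```python
# Toy attractor/repeller steering dynamics (illustrative, not the
# paper's MT/MST model). All parameters are invented for this sketch.
import math

def heading_rate(heading, goal, obstacles, k_goal=2.0, k_obs=1.5, decay=2.0):
    """Rate of change of heading angle (radians)."""
    rate = -k_goal * (heading - goal)           # goal attracts heading
    for obs in obstacles:
        diff = heading - obs
        # Obstacles repel: push heading away, weaker at larger angles.
        rate += k_obs * diff * math.exp(-decay * abs(diff))
    return rate

# Euler-integrate: the heading settles near the goal direction but is
# deflected away from an obstacle lying close to the initial heading.
heading, goal, obstacles, dt = 0.0, 0.5, [0.1], 0.01
for _ in range(2000):
    heading += dt * heading_rate(heading, goal, obstacles)
```

Run to equilibrium, the heading overshoots the goal direction slightly on the side away from the obstacle, which is the qualitative signature of route selection in attractor/repeller accounts: routes emerge from the interplay of forces rather than from explicit path planning.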
STORE Working Memory Networks for Storage and Recall of Arbitrary Temporal Sequences
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of primacy, recency, or bowed temporal order gradients that will be stored.
Air Force Office of Scientific Research (90-0128, F49620-92-J-0225); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309); British Petroleum (89A-1204); Advanced Research Projects Agency (90-0083, N00014-92-J-4015); National Science Foundation (IRI-90-00539)
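The invariance property at the heart of the abstract can be demonstrated with a minimal toy store. This is not the published STORE equations; it is a sketch, with an invented scaling parameter `w`, of the key idea that each new item rescales all earlier activities by a common factor, so the ratios among previously stored items never change:

```python
# Minimal sketch of STORE-style order invariance (not the published
# model). The common rescaling factor w is an invented parameter:
# w < 1 produces a recency gradient (later items more active).

def present(memory, item, w=0.8):
    """Store `item` at unit activity; rescale earlier activities by w."""
    return [(name, a * w) for name, a in memory] + [(item, 1.0)]

memory = []
for item in ["A", "B", "C", "D"]:
    memory = present(memory, item)

acts = dict(memory)

# Invariance: the relative activation of A vs. B is w, no matter how
# many later items perturbed the pattern -- temporal order is preserved.
assert abs(acts["A"] / acts["B"] - 0.8) < 1e-12

# Recency gradient: more recent items carry higher activity.
assert acts["A"] < acts["B"] < acts["C"] < acts["D"]
```

Because the ratios encode order, a downstream categorizer can learn any subsequence from this activity pattern without the code being disrupted when further items arrive, which is the stability the abstract emphasizes.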