
A Self-Organizing Neural Network Architecture for Navigation Using Optic Flow

Abstract

This paper describes a self-organizing neural network architecture that transforms optic flow information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due only to observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. All learning processes take place concurrently and require no external "teachers." Simulations of the network verify its performance using both noise-free and noisy optic flow information.

Office of Naval Research (N00014-92-J-4015, N00014-92-J-4100, N00014-92-J-1309, N00014-92-1-0657, N00014-95-1-0409); Air Force Office of Scientific Research (F49620-92-J-0499); Alfred P. Sloan Foundation
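
The abstract outlines two learning stages that lend themselves to a compact illustration: subtracting self-generated rotational flow predicted from the eye-movement outflow command, and a self-organizing feature map (SOFM) that categorizes the normalized residual flow into heading-coding cells. The following Python sketch illustrates those two stages under stated assumptions; it is not the paper's actual equations, and the array sizes, learning rates, linear predictor, and synthetic flow generator are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_flow = 64      # dimensionality of the flattened optic flow field (assumed)
n_cmd = 2        # eye-movement outflow command, e.g. pan/tilt rates (assumed)
n_heading = 16   # number of heading-coding cells in the feature map (assumed)

W_rot = np.zeros((n_flow, n_cmd))                        # command -> induced flow
W_som = rng.normal(scale=0.1, size=(n_heading, n_flow))  # SOFM weights

def train_rotation_predictor(flow, cmd, lr=0.05):
    # Delta rule: learn to predict the flow that a given eye-movement
    # command induces. During pure eye rotation the observed flow is
    # entirely self-generated, so no external teacher is needed.
    global W_rot
    err = flow - W_rot @ cmd
    W_rot += lr * np.outer(err, cmd)

def translational_flow(flow, cmd):
    # Subtract the predicted rotational component; the residual is due only
    # to observer translation (and any independently moving objects).
    residual = flow - W_rot @ cmd
    norm = np.linalg.norm(residual)
    return residual / norm if norm > 0 else residual     # normalized pattern

def som_step(pattern, lr=0.1, sigma=2.0):
    # One SOFM update: the best-matching cell and its neighbors move toward
    # the normalized translational flow pattern, so nearby cells come to
    # code nearby heading directions.
    global W_som
    winner = int(np.argmax(W_som @ pattern))
    dist = np.arange(n_heading) - winner
    h = np.exp(-dist**2 / (2.0 * sigma**2))              # neighborhood kernel
    W_som += lr * h[:, None] * (pattern - W_som)
    return winner                                        # index codes heading

# Synthetic action-perception cycle (a stand-in for the simulated retina):
# a fixed random matrix A plays the role of the true command-to-flow map,
# and a few fixed templates play the role of translational flow patterns.
A = rng.normal(size=(n_flow, n_cmd))
templates = rng.normal(size=(8, n_flow))
for _ in range(2000):
    cmd = rng.normal(size=n_cmd)
    train_rotation_predictor(A @ cmd, cmd)   # pure-rotation episode
for _ in range(2000):
    cmd = rng.normal(size=n_cmd)
    flow = A @ cmd + templates[rng.integers(8)]
    som_step(translational_flow(flow, cmd))

In this toy version both stages are tuned from self-generated movements alone, mirroring the abstract's action-perception cycle; the paper's further recombination of heading with translational flow to form depth and moving-object maps is omitted.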
