5 research outputs found

    Sensory-motor control scheme based on Kohonen Maps and AVITE model

    Get PDF
    In this paper, a control scheme is proposed for anticipatory, discrete-event, sensory-motor-driven control. The control system elaborates a sensory-motor program based on its internal data and a high-level objective. The global system implements several biological models in each of its subsystems. One novelty of this work is the structure of the sensory-motor algorithms, which are based on cell distributions of the sensorial spaces: the correspondence between cells and the sensorial spaces is established by Kohonen maps. This control scheme has been tested on a robotic platform consisting of an industrial robot and a stereo head. The most relevant results are presented and analyzed in this paper.
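    To make the cell-distribution idea concrete, the following is a minimal sketch of how a Kohonen map (self-organizing map) partitions a sensorial space into cells; the grid size, learning-rate schedule, and neighborhood width are illustrative assumptions, not values from the paper.

    # Minimal Kohonen-map sketch: a grid of prototype "cells" migrates toward
    # the distribution of a 2-D sensorial space. All parameters are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    grid = rng.random((10, 10, 2))          # 10x10 map of 2-D prototype cells

    def train_step(x, t, t_max, lr0=0.5, sigma0=3.0):
        """One Kohonen update: move the winning cell and its neighbors toward x."""
        lr = lr0 * (1.0 - t / t_max)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / t_max) + 0.5     # shrinking neighborhood
        dists = np.linalg.norm(grid - x, axis=2)
        wi, wj = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching cell
        ii, jj = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
        h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * sigma ** 2))
        grid[...] += lr * h[..., None] * (x - grid)  # neighborhood-weighted update

    T = 2000
    for t in range(T):
        train_step(rng.random(2), t, T)     # samples drawn from the sensorial space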

    Vector Associative Maps: Unsupervised Real-time Error-based Learning and Control of Movement Trajectories

    Full text link
    This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: as an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
    National Science Foundation (IRI-87-16960, IRI-87-6960); Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083)
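    The VITE dynamics summarized in the abstract reduce to a compact update rule; the sketch below integrates the PPC toward the TPC under a gated Difference Vector. The GO-signal profile and step size are assumptions chosen for illustration, not the paper's values.

    # Sketch of the VITE trajectory dynamics: DV = TPC - PPC is gated by a GO
    # signal, and the PPC integrates the (DV)*(GO) product until DV reaches 0.
    import numpy as np

    def vite_reach(tpc, ppc, steps=200, dt=0.05):
        """Integrate the PPC toward the TPC under a ramping GO signal."""
        trajectory = [ppc.copy()]
        for t in range(steps):
            go = min(1.0, t / 50.0)          # slowly ramping speed signal (assumed shape)
            dv = tpc - ppc                   # Difference Vector population
            ppc = ppc + dt * go * dv         # PPC integrates the gated DV
            trajectory.append(ppc.copy())
        return np.array(trajectory)

    traj = vite_reach(tpc=np.array([0.4, 0.7, 0.2]), ppc=np.zeros(3))
    print(traj[-1])   # ends near the target; the DV has been driven to ~0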

    A new neural architecture based on ART and AVITE models for anticipatory sensory-motor coordination in robotics

    Get PDF
    In this paper, a novel sensory-motor neural controller applied to robotic systems for reaching and tracking targets is proposed. It is based on how the human system projects the sensorial stimulus onto the motor joints, sending motor commands to each articulation and avoiding, in most phases of the movement, feedback of the visual information. In this way, the proposed neural architecture autonomously generates a structure of learning cells based on the adaptive resonance theory, together with a neural mapping of the sensory-motor coordinate systems in each cell of the arm workspace. This permits fast open-loop control of a robot based on proprioceptive information, and a precise grasping position in each cell by mapping 3D spatial positions over redundant joints. The proposed architecture has been trained, implemented and tested on a visuo-motor robotic platform. Robustness, precision and velocity characteristics have been validated.
    This work was supported in part by the Séneca Foundation (Spain) under PCMC75/00078/FS/02, and by the Spanish Science & Technology Ministry (MCYT) under the TIC 2003-08164-C03-03 research project.
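    As a rough illustration of the learning-cell structure described above, the sketch below uses an ART-style vigilance test to either assign a 3D target position to an existing workspace cell or commit a new one, with each cell storing a joint configuration for open-loop command. The class name, vigilance threshold, and matching rule are hypothetical simplifications of the paper's architecture.

    # ART-like cell commitment over the arm workspace; all names are assumed.
    import numpy as np

    class WorkspaceCells:
        def __init__(self, vigilance=0.15):
            self.vigilance = vigilance       # max distance for a resonant match
            self.centers = []                # cell prototypes in 3-D task space
            self.joints = []                 # joint configuration stored per cell

        def commit(self, position, joint_config):
            """Match position against existing cells or create a new category."""
            position = np.asarray(position, float)
            for k, c in enumerate(self.centers):
                if np.linalg.norm(position - c) < self.vigilance:
                    self.joints[k] = joint_config    # update the matched cell's command
                    return k
            self.centers.append(position)            # no resonance: commit a new cell
            self.joints.append(joint_config)
            return len(self.centers) - 1

        def command(self, position):
            """Open-loop lookup: nearest cell's stored joint configuration."""
            k = int(np.argmin([np.linalg.norm(position - c) for c in self.centers]))
            return self.joints[k]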

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    Full text link
    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end-effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
    National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499)
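    The spatial-to-motor direction transform at the core of the model can be illustrated on a redundant planar arm. In DIRECT this mapping is learned during motor babbling; in the sketch below a Jacobian pseudoinverse stands in for the learned map, purely to show how one spatial direction admits many joint-rotation combinations. The link lengths and the pseudoinverse itself are illustrative assumptions, not the paper's method.

    # Redundant planar arm: 3 joints moving an end effector in a 2-D plane.
    import numpy as np

    L = np.array([0.3, 0.25, 0.15])           # assumed link lengths

    def jacobian(theta):
        """2x3 Jacobian of the planar 3-joint end-effector position."""
        s = np.cumsum(theta)                  # absolute link angles
        J = np.zeros((2, 3))
        for j in range(3):
            J[0, j] = -np.sum(L[j:] * np.sin(s[j:]))
            J[1, j] = np.sum(L[j:] * np.cos(s[j:]))
        return J

    def motor_direction(theta, spatial_dir):
        """Map a desired spatial direction to a joint-rotation direction."""
        return np.linalg.pinv(jacobian(theta)) @ spatial_dir

    dtheta = motor_direction(np.array([0.3, 0.4, 0.2]), np.array([0.0, 1.0]))
    print(dtheta)   # one of many joint combinations realizing the same motion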

    Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1

    Get PDF
    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston-Clear Lake. The workshop was held June 1-3, 1992, at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days, approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control, and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.