
    A Network for Learning Kinematics with Application to Human Reaching Models

    A model for self-organization of the coordinate transformations required for spatial reaching is presented. During a motor babbling phase, a mapping from spatial coordinate directions to joint motion directions is learned. After learning, the model is able to produce straight-line spatial trajectories with characteristic bell-shaped velocity profiles, as observed in human reaches. Simulation results are presented for transverse-plane reaching using a two degree-of-freedom arm. Office of Naval Research (N00014-92-J-1309)
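    The direction-to-direction mapping described above can be illustrated with a short sketch. The code below is a toy stand-in, not the paper's model: it babbles small random joint motions of a planar two degree-of-freedom arm, fits a local least-squares map from spatial displacement directions to joint displacement directions, and uses that map to drive a straight-line reach. The link lengths, sample counts, and the imposed bell-shaped speed scaling are illustrative assumptions (in the model the bell-shaped profile emerges rather than being imposed).

    import numpy as np

    L1, L2 = 0.30, 0.25   # link lengths in metres (arbitrary values for the sketch)

    def fkin(q):
        """Planar 2-DOF forward kinematics: joint angles -> hand position."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def babbled_direction_map(q, n=200, dq_mag=1e-3):
        """Motor babbling around posture q: least-squares map from spatial
        displacements to joint displacements."""
        DQ = np.random.uniform(-dq_mag, dq_mag, size=(n, 2))
        DX = np.array([fkin(q + dq) - fkin(q) for dq in DQ])
        M, *_ = np.linalg.lstsq(DX, DQ, rcond=None)   # solves DX @ M ~= DQ
        return M

    def reach(q, target, steps=100):
        """Drive the hand along a straight spatial line using the babbled map.
        The bell-shaped speed scaling is imposed here purely for illustration."""
        for k in range(steps):
            direction = target - fkin(q)
            speed = np.sin(np.pi * (k + 0.5) / steps)
            q = q + (0.05 * speed * direction) @ babbled_direction_map(q)
        return q

    q_final = reach(np.array([0.4, 1.2]), target=np.array([0.35, 0.25]))
    print("final hand position:", fkin(q_final).round(3))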

    Neurally Plausible Model of Robot Reaching Inspired by Infant Motor Babbling

    In this dissertation, we present an abstract model of infant reaching that is neurally plausible. This model is grounded in embodied artificial intelligence, which emphasizes the importance of the sensorimotor interaction of an agent and the world. It includes both learning sensorimotor correlations through motor babbling and arm motion planning using spreading activation. We introduce a mechanism called bundle formation as a way to generalize motions during the motor babbling stage. We then offer a neural model for the abstract model, which is composed of three layers of neural maps with parallel structures representing the same sensorimotor space. The motor babbling period shapes the structure of the three neural maps as well as the connections within and between them; these connections encode trajectory bundles in the neural maps. We then investigate an implementation of the neural model using a reaching task on a humanoid robot. Through a set of experiments, we found the best way to implement different components of this model, such as motor babbling, neural representation of the sensorimotor space, dimension reduction, path planning, and path execution. After the proper implementation had been found, we conducted another set of experiments to analyze the model and evaluate the planned motions. We evaluated unseen reaching motions using jerk, end-effector error, and overshooting. In these experiments, we studied the effect of different dimensionalities of the reduced sensorimotor space, different bundle widths, and different bundle structures on the quality of arm motions. We hypothesized that a larger bundle width would allow the model to generalize better. The results confirmed that larger bundles lead to a smaller end-effector position error for testing targets. An experiment with the resolution of neural maps showed that a neural map with a coarse resolution produces less smooth motions than a neural map with a fine resolution. We also compared the unseen reaching motions under different dimensionalities of the reduced sensorimotor space. The results showed that a smaller dimension leads to less smooth and less accurate movements.
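    As one concrete illustration of the planning component mentioned above, the sketch below implements spreading activation on a small discretized map: activation diffuses outward from the goal node, and a path is read out by greedily ascending the activation gradient from the start node. It is a minimal stand-in, not the dissertation's neural maps; the grid size, decay factor, and four-neighbour connectivity are assumptions made for the example.

    import numpy as np

    def spreading_activation_plan(grid_shape, start, goal, decay=0.9, iters=200):
        """Spreading-activation planning on a 2-D grid of map nodes."""
        act = np.zeros(grid_shape)
        act[goal] = 1.0
        for _ in range(iters):
            padded = np.pad(act, 1, mode="edge")
            neighbours = np.maximum.reduce([
                padded[:-2, 1:-1], padded[2:, 1:-1],
                padded[1:-1, :-2], padded[1:-1, 2:]])
            act = np.maximum(act, decay * neighbours)   # activation spreads outward
            act[goal] = 1.0                             # goal stays fully active
        path, node = [start], start
        while node != goal:
            r, c = node
            candidates = [(r + dr, c + dc)
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= r + dr < grid_shape[0] and 0 <= c + dc < grid_shape[1]]
            node = max(candidates, key=lambda n: act[n])  # climb the gradient
            path.append(node)
        return path

    print(spreading_activation_plan((10, 10), start=(0, 0), goal=(7, 5)))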

    From motor babbling to hierarchical learning by imitation: a robot developmental pathway

    How does an individual use the knowledge acquired through self-exploration as a manipulable model through which to understand others and benefit from their knowledge? How can developmental and social learning be combined for their mutual benefit? In this paper we review a hierarchical architecture (HAMMER) which provides a principled way of combining knowledge gained through self-exploration with knowledge gained from others, through the creation and use of multiple inverse and forward models. We describe how Bayesian Belief Networks can be used to learn the association between a robot’s motor commands and sensory consequences (forward models), and how the inverse association can be used for imitation. Inverse models created through self-exploration, as well as those acquired by observing others, can coexist and compete in a principled, unified framework that utilises the simulation theory of mind approach to mentally rehearse and understand the actions of others.
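    The competing inverse and forward model pairs can be sketched in a few lines. The toy below is an assumed setup, not the HAMMER implementation or its Bayesian Belief Networks: each pair scores itself by how well its forward prediction matches the observed consequence, and an observed action is recognised as the best-scoring hypothesis.

    import numpy as np

    class InverseForwardPair:
        """One hypothesis: an inverse model proposes a command for a goal,
        a forward model predicts the command's sensory consequence."""
        def __init__(self, name, inverse, forward):
            self.name, self.inverse, self.forward = name, inverse, forward

        def confidence(self, state, goal, observed_next):
            """Smaller prediction error -> higher confidence."""
            predicted = self.forward(state, self.inverse(state, goal))
            return -np.linalg.norm(predicted - observed_next)

    def recognise_action(pairs, state, goal, observed_next):
        """Mental rehearsal: every pair simulates; the best match wins."""
        return max(pairs, key=lambda p: p.confidence(state, goal, observed_next))

    # Toy models standing in for learned forward/inverse associations.
    reach_pair = InverseForwardPair(
        "reach", inverse=lambda s, g: g - s, forward=lambda s, u: s + u)
    withdraw_pair = InverseForwardPair(
        "withdraw", inverse=lambda s, g: -(g - s), forward=lambda s, u: s + u)

    state, goal = np.zeros(2), np.array([1.0, 0.5])
    observed = np.array([0.9, 0.45])   # an observed movement toward the goal
    print(recognise_action([reach_pair, withdraw_pair], state, goal, observed).name)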

    Exploiting Vestibular Output during Learning Results in Naturally Curved Reaching Trajectories

    Teaching a humanoid robot to reach for a visual target is a complex problem in part because of the high dimensionality of the control space. In this paper, we demonstrate a biologically plausible simplification of the reaching process that replaces the degrees of freedom in the neck of the robot with sensory readings from a vestibular system. We show that this simplification introduces errors that are easily overcome by a standard learning algorithm. Furthermore, the errors that are necessarily introduced by this simplification result in reaching trajectories that are curved in the same way as human reaching trajectories.
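    A rough sense of why a standard learner can absorb such a substitution error is given by the toy below, which is not the paper's robot or algorithm: a vestibular-style reading (a noisy version of the head angle) replaces the neck encoder in the inputs of a least-squares reach map, and the learned map still fits the ground-truth commands closely. The plant, feature set, and noise level are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    def true_command(target, head_angle):
        """Ground-truth hand command for a toy plant: rotate the head-frame
        target into the body frame."""
        c, s = np.cos(head_angle), np.sin(head_angle)
        return np.array([c * target[0] - s * target[1],
                         s * target[0] + c * target[1]])

    X, Y = [], []
    for _ in range(1000):
        head = rng.uniform(-0.5, 0.5)                 # actual neck/head angle
        vestibular = head + rng.normal(0.0, 0.05)     # sensed instead of encoders
        target = rng.uniform(-0.4, 0.4, size=2)
        X.append([target[0], target[1],
                  vestibular * target[0], vestibular * target[1], 1.0])
        Y.append(true_command(target, head))

    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    residual = np.array(Y) - np.array(X) @ W
    print("RMS command error after learning:", np.sqrt((residual ** 2).mean()).round(4))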

    Vector Associative Maps: Unsupervised Real-time Error-based Learning and Control of Movement Trajectories

    This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system. National Science Foundation (IRI-87-16960, IRI-87-6960); Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083)
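    The core VITE update described above reduces to two lines of arithmetic, sketched below. This is a minimal discretized version, assuming a simple exponentially rising GO signal; the AVITE learning machinery (the ERG, the gated dipole, and the VAM weight updates) is not modelled. The outflow speed GO·|DV| first rises and then falls, giving the bell-shaped profile.

    import numpy as np

    def vite_trajectory(tpc, ppc0, go_peak=6.0, dt=0.01, steps=400):
        """Discretized VITE core: DV = TPC - PPC; PPC integrates GO * DV
        until the difference vector reaches zero."""
        ppc = np.array(ppc0, dtype=float)
        positions, speeds = [ppc.copy()], []
        for k in range(steps):
            go = go_peak * (1.0 - np.exp(-k * dt / 0.3))   # slowly rising GO signal
            dv = np.asarray(tpc) - ppc                     # difference vector
            ppc = ppc + dt * go * dv                       # outflow position command
            positions.append(ppc.copy())
            speeds.append(np.linalg.norm(go * dv))         # outflow speed
        return np.array(positions), np.array(speeds)

    pos, spd = vite_trajectory(tpc=[0.3, 0.2], ppc0=[0.0, 0.0])
    print("peak speed at step", int(spd.argmax()), "of", len(spd),
          "| final PPC:", pos[-1].round(3))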

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control. National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499)
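    The spatial-to-motor direction transform at the heart of the model can be mimicked, for illustration only, by a Jacobian pseudoinverse on a redundant planar arm; the sketch below uses that stand-in (the DIRECT model learns the transform in self-organizing maps rather than computing it analytically). It shows the motor-equivalence behaviour the abstract describes: the same visually guided loop reaches the target with a lengthened end link (a tool) or with one joint clamped. Link lengths, gains, and step counts are assumed values.

    import numpy as np

    LINKS = np.array([0.30, 0.25, 0.15])   # a redundant three-joint planar arm

    def fkin(q, tool=0.0):
        """End-effector position; `tool` lengthens the last link (tool use)."""
        angles = np.cumsum(q)
        lengths = LINKS + np.array([0.0, 0.0, tool])
        return np.array([np.sum(lengths * np.cos(angles)),
                         np.sum(lengths * np.sin(angles))])

    def jacobian(q, tool=0.0, eps=1e-6):
        """Finite-difference Jacobian of hand position w.r.t. joint angles."""
        return np.column_stack(
            [(fkin(q + eps * np.eye(3)[i], tool) - fkin(q, tool)) / eps
             for i in range(3)])

    def direct_style_reach(q, target, tool=0.0, clamp=None, gain=0.2, steps=300):
        """Visually guided loop: spatial direction -> joint rotations each step."""
        free = [i for i in range(3) if i != clamp]
        for _ in range(steps):
            spatial_dir = target - fkin(q, tool)            # from visual feedback
            motor_dir = np.zeros(3)
            motor_dir[free] = np.linalg.pinv(jacobian(q, tool)[:, free]) @ spatial_dir
            q = q + gain * motor_dir                        # clamped joint stays fixed
        return fkin(q, tool)

    q0, target = np.array([0.3, 0.6, 0.4]), np.array([0.35, 0.30])
    for case in ({}, {"tool": 0.10}, {"clamp": 1}):
        print(case, "-> hand at", direct_style_reach(q0.copy(), target, **case).round(3))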

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions. National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
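    The cyclopean geometry referred to above can be written down directly for the horizontal plane. The sketch below is a closed-form toy, not the VAM learning scheme, and it omits elevation: it converts the two eyes' foveating azimuths into a version/vergence code and inverts that code back to the target position by triangulation. The interocular distance is an assumed value.

    import numpy as np

    IOD = 0.065   # interocular distance in metres (an assumed value)

    def eye_azimuths(target):
        """Horizontal gaze angles of each eye when foveating a target.
        Eyes sit at x = -IOD/2 and x = +IOD/2; the target is at (x, y), y > 0."""
        x, y = target
        return np.arctan2(x + IOD / 2, y), np.arctan2(x - IOD / 2, y)

    def head_centered_code(left, right):
        """Opponent combination of the two eye positions into a cyclopean code:
        version (azimuth of gaze) and vergence (convergence angle)."""
        return (left + right) / 2.0, left - right

    def target_from_code(version, vergence):
        """Recover the target position from the head-centered code."""
        left = version + vergence / 2.0
        right = version - vergence / 2.0
        y = IOD / (np.tan(left) - np.tan(right))       # depth from triangulation
        x = y * (np.tan(left) + np.tan(right)) / 2.0
        return np.array([x, y])

    target = np.array([0.10, 0.40])
    version, vergence = head_centered_code(*eye_azimuths(target))
    print("recovered target:", target_from_code(version, vergence).round(4))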