    Vector Associative Maps: Unsupervised Real-time Error-based Learning and Control of Movement Trajectories

    This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
    VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
    National Science Foundation (IRI-87-16960, IRI-87-6960); Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083)
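The core VITE loop described above is concrete enough to sketch. A minimal Python sketch, assuming a simple rising GO profile and illustrative constants (the two-joint example, time step, and GO shape are assumptions, and the VAM learning stage is omitted): the DV continuously computes TPC − PPC, and the PPC integrates the (DV)·(GO) product until the DV reaches zero.

```python
import numpy as np

def vite_trajectory(tpc, ppc0, go_peak=1.0, dt=0.001, t_max=1.5):
    """Sketch of the VITE trajectory generator: the Present Position
    Command (PPC) integrates the gated difference vector until it
    matches the Target Position Command (TPC)."""
    ppc = np.array(ppc0, dtype=float)
    traj = [ppc.copy()]
    for step in range(int(t_max / dt)):
        t = step * dt
        go = go_peak * t / (t + 0.2)   # slowly rising GO signal (assumed form)
        dv = tpc - ppc                 # difference vector: DV = TPC - PPC
        ppc += dt * go * dv            # PPC integrates the (DV)*(GO) product
        traj.append(ppc.copy())
    return np.array(traj)

# Example: outflow command moving from rest toward a two-joint target posture
path = vite_trajectory(tpc=np.array([0.8, 0.3]), ppc0=[0.0, 0.0])
print(path[-1])  # approaches the TPC as the DV is driven to zero
```

Because integration is gated by GO, rescaling go_peak rescales movement speed without changing the path in command space, which is how the model separates trajectory formation from speed control.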

    Integrating reinforcement learning, equilibrium points and minimum variance to understand the development of reaching: a computational model

    Despite the huge literature on reaching behaviour, we still lack a clear idea about the motor control processes underlying its development in infants. This article contributes to overcoming this gap by proposing a computational model based on three key hypotheses: (a) trial-and-error learning processes drive the progressive development of reaching; (b) the control of the movements based on equilibrium points allows the model to quickly find the initial approximate solution to the problem of gaining contact with the target objects; (c) the requirement of end-movement precision in the presence of muscular noise drives the progressive refinement of the reaching behaviour. The tests of the model, based on a two-degrees-of-freedom simulated dynamical arm, show that it is capable of reproducing a large number of empirical findings, most deriving from longitudinal studies with children: the developmental trajectory of several dynamical and kinematic variables of reaching movements, the time evolution of submovements composing reaching, the progressive development of a bell-shaped speed profile, and the evolution of the management of redundant degrees of freedom. The model also produces testable predictions on several of these phenomena. Most of these empirical data have never been investigated by previous computational models and, more importantly, have never been accounted for by a single model. In this respect, the analysis of the model's functioning reveals that all these results are ultimately explained, sometimes in unexpected ways, by the same developmental trajectory emerging from the interplay of the three mentioned hypotheses: the model first quickly learns to perform coarse movements that assure contact of the hand with the target (an achievement with great adaptive value), and then slowly refines the detailed control of the dynamical aspects of movement to increase accuracy.
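A toy illustration of how the three hypotheses might interact, assuming a hypothetical one-dimensional arm (the paper's model uses a two-degrees-of-freedom dynamical arm and a proper reinforcement-learning scheme, so everything here is an illustrative stand-in): the limb settles near a commanded equilibrium point under motor noise, and trial-and-error search refines that command until contact is reliable.

```python
import numpy as np

rng = np.random.default_rng(0)

def reach_error(eq_point, target, noise_sd=0.05):
    """One simulated reach: the limb settles near the commanded
    equilibrium point, perturbed by motor noise (hypotheses b and c)."""
    endpoint = eq_point + rng.normal(0.0, noise_sd)
    return abs(endpoint - target)

# Trial-and-error refinement of the equilibrium-point command (hypothesis a):
target, eq = 1.0, 0.0
best_err = reach_error(eq, target)
for trial in range(200):
    candidate = eq + rng.normal(0.0, 0.1)  # perturb the motor command
    err = reach_error(candidate, target)
    if err < best_err:                     # keep changes that reduce error
        eq, best_err = candidate, err
print(round(eq, 3), round(best_err, 3))
```

Even in this caricature the developmental signature appears: large early gains (coarse contact is found quickly) followed by slow refinement limited by the motor noise floor.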

    Reinforcement learning control of a biomechanical model of the upper extremity

    Among the infinite number of possible movements that can be produced, humans are commonly assumed to choose those that optimize criteria such as minimizing movement time, subject to certain movement constraints like signal-dependent and constant motor noise. While so far these assumptions have only been evaluated for simplified point-mass or planar models, we address the question of whether they can predict reaching movements in a full skeletal model of the human upper extremity. We learn a control policy using a motor-babbling approach as implemented in reinforcement learning, using aimed movements of the tip of the right index finger towards randomly placed 3D targets of varying size. We use a state-of-the-art biomechanical model, which includes seven actuated degrees of freedom. To deal with the curse of dimensionality, we use a simplified second-order muscle model acting at each degree of freedom instead of individual muscles. The results confirm that the assumptions of signal-dependent and constant motor noise, together with the objective of movement time minimization, are sufficient for a state-of-the-art skeletal model of the human upper extremity to reproduce complex phenomena of human movement, in particular Fitts' Law and the 2/3 Power Law. This result supports the notion that control of the complex human biomechanical system can plausibly be determined by a set of simple assumptions and can easily be learned.
    Comment: 19 pages, 7 figures
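The speed-accuracy logic in this abstract can be made concrete with a few lines. A minimal sketch, assuming a toy noise model in which endpoint scatter grows with command magnitude (proxied by distance over time); the constants, the 95% hit criterion, and the functional form are assumptions for illustration, not the paper's implementation, which needs the full biomechanical model to recover Fitts' Law quantitatively.

```python
import numpy as np

def endpoint_sd(distance, mt, k=0.01):
    """Signal-dependent noise: endpoint scatter scales with the mean
    command magnitude, here proxied by distance / movement time."""
    return k * distance / mt

def min_movement_time(distance, width, z=1.96):
    """Smallest movement time whose scatter still lands ~95% of
    endpoints inside the target: movement-time minimization under
    a hit-probability constraint."""
    for mt in np.arange(0.05, 2.0, 0.005):
        if z * endpoint_sd(distance, mt) <= width / 2:
            return mt
    return 2.0

# Harder targets (larger index of difficulty log2(2D/W)) take longer:
for d, w in [(0.1, 0.02), (0.2, 0.02), (0.4, 0.02), (0.4, 0.01)]:
    print(d, w, round(min_movement_time(d, w), 3), round(np.log2(2 * d / w), 2))
```

Movement time grows monotonically with the index of difficulty in this toy version; the paper's result is that the same two assumptions suffice for the full seven-degree-of-freedom skeletal model.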

    A Theory of Impedance Control based on Internal Model Uncertainty

    Efficient human motor control is characterised by an extensive use of joint impedance modulation, which to a large extent is achieved by co-contracting antagonistic muscle pairs in a way that is beneficial to the specific task. Studies of single- and multi-joint limb reaching movements revealed that joint impedance is increased with faster movements [1] as well as with higher positional accuracy demands [2]. A large body of experimental work has investigated the motor learning processes in tasks with changing dynamics (e.g., [3]), and it has been shown that subjects make extensive use of impedance control to counteract destabilising external force fields (FF). In the early stage of dynamics learning, humans tend to increase co-contraction. As learning progresses over consecutive reaching trials, a reduction in co-contraction, with a parallel reduction of the reaching errors made, can be observed. While there is much experimental evidence for the use of impedance control in the CNS, no generally valid computational model of impedance control derived from first principles has been proposed so far. Many of the proposed computational models have either focused on the biomechanical aspects of impedance control [4] or have proposed simple low-level mechanisms to try to account for observed human co-activation patterns [3]. However, these models are of a rather descriptive nature and do not provide us with a general and principled theory of impedance control in the nervous system.
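The co-contraction idea above can be sketched with a toy antagonist pair, assuming linear muscle models (all forms and constants below are illustrative, not taken from the cited studies): net joint torque depends on the difference of the two activations, while joint stiffness depends on their sum, so co-contraction buys perturbation rejection without changing the net torque.

```python
def joint_response(u_flex, u_ext, perturb_torque, k0=1.0, k_per_act=5.0):
    """Toy antagonist muscle pair: activation difference sets net torque,
    activation sum sets joint stiffness (impedance)."""
    net_torque = u_flex - u_ext
    stiffness = k0 + k_per_act * (u_flex + u_ext)
    deflection = perturb_torque / stiffness  # static deflection under a perturbation
    return net_torque, stiffness, deflection

# Same net torque, different co-contraction levels:
print(joint_response(0.6, 0.1, perturb_torque=1.0))  # low co-contraction: compliant
print(joint_response(0.9, 0.4, perturb_torque=1.0))  # high co-contraction: stiff
```

This captures why early force-field learning favours high co-contraction (robustness to unpredicted forces) and why co-contraction can be relaxed as an accurate internal model reduces the expected perturbation.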

    Characterizing the Sensorimotor Properties of a Rapid Visuomotor Reach Movement on Human Upper Limb Muscles

    Humans and other primates rely heavily on vision as a primary sensory input to drive upcoming volitional motor actions. Our motor system makes so many of these visual-to-motor transformations that they become ubiquitous in our daily lives. However, a central question in systems neuroscience is how the brain performs these transformations. Reaching movements have been an ideal model for studying volitional motor control in primates. Broadly, these visually guided reach movements contain three inherent sensorimotor components: an action selection component, a motor execution component, and a motor learning component. A core assumption is that as reach movements become more complex, our motor system requires more cortical processing, which prolongs the time between stimulus onset and reach initiation. Typically, visually guided reach movements occur within 200-300 ms after the onset of a visual stimulus. Previous human behavioural studies have shown that prior to these volitional reach movements, a directionally tuned neuromuscular response can also be detected on human upper limb muscles within 100 ms after the onset of a novel visual stimulus. In this thesis, I characterized the sensorimotor properties of this visual stimulus-locked response (SLR) under the same framework that has been used to describe volitional motor control. In Chapter 2, I showed that the SLR is an ‘automatic’ motor command generated towards the visual stimulus location regardless of the current task demands. In Chapter 3, by changing the initial starting hand position and the pre-planned reach trajectory, I showed that, like volitional control, the pathway mediating the SLR can rapidly transform eye-centric visual stimuli into a proper hand-centric motor command. In Chapter 4, I showed that the directional tuning of the SLR is influenced by motor learning. However, unlike volitional control, the SLR is only influenced by the implicit, but not the explicit, component of motor learning. Thus, the results from this thesis suggest that despite the reflexive nature of the SLR, it shares some sensorimotor properties that have been classically reserved for volitional motor control.
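As a rough illustration only (not the thesis's actual analysis pipeline), a stimulus-locked response of this kind is typically sought by averaging rectified EMG across trials aligned to stimulus onset and testing the ~100 ms window against the pre-stimulus baseline; the window, threshold, and sampling rate below are assumptions.

```python
import numpy as np

def detect_slr(emg_trials, stim_idx, fs=1000, win_ms=(80, 120)):
    """Test for a stimulus-locked response (SLR) on one muscle.
    emg_trials: (n_trials, n_samples) array aligned to a common clock;
    stim_idx: sample index of visual stimulus onset."""
    mean_emg = np.abs(emg_trials).mean(axis=0)    # trial-averaged rectified EMG
    baseline = mean_emg[stim_idx - 100:stim_idx]  # 100 ms pre-stimulus (at fs=1000)
    lo = stim_idx + int(win_ms[0] * fs / 1000)
    hi = stim_idx + int(win_ms[1] * fs / 1000)
    threshold = baseline.mean() + 3.0 * baseline.std()
    return mean_emg[lo:hi].mean() > threshold     # SLR present in the ~100 ms window?
```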

    Unsupervised learning of vocal tract sensory-motor synergies

    The degrees-of-freedom problem is ubiquitous within motor control, arising out of the redundancy inherent in motor systems, and raises the question of how control actions are determined when there exist infinitely many ways to perform a task. Speech production is a complex motor control task and suffers from this problem, but it has not drawn the research attention that reaching movements or walking gaits have. Motivated by the use of dimensionality reduction algorithms in learning muscle synergies and perceptual primitives that reflect the structure in biological systems, an approach to learning sensory-motor synergies via dynamic factor analysis for control of a simulated vocal tract is presented here. This framework is shown to mirror the articulatory phonology model of speech production, and evidence is provided that articulatory gestures arise from learning an optimal encoding of vocal tract dynamics. Broad phonetic categories are discovered within the low-dimensional factor space, indicating that sensory-motor synergies will enable application of reinforcement learning to the problem of speech acquisition.
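As a simplified stand-in for the approach described above (the paper uses dynamic factor analysis on a simulated vocal tract; the synthetic data and the static scikit-learn estimator here are illustrative assumptions), ordinary factor analysis already shows how a low-dimensional synergy space can be recovered from many correlated articulatory degrees of freedom:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Stand-in for recorded articulator trajectories: many correlated
# degrees of freedom driven by a few latent "gestures".
n_samples, n_latent, n_articulators = 2000, 3, 12
latent = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_articulators))
observed = latent @ mixing + 0.1 * rng.normal(size=(n_samples, n_articulators))

# Recover a low-dimensional factor space to serve as the control space.
fa = FactorAnalysis(n_components=n_latent)
synergies = fa.fit_transform(observed)
print(synergies.shape, fa.components_.shape)  # control now lives in 3 dimensions
```

Reinforcement learning over the three-dimensional factor space is then far more tractable than over the twelve raw articulators, which is the motivation stated in the abstract.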

    Non Parametric Learning of Sensori-Motor Maps. Application to the Control of Multi Joint Systems

    Abstract: In light of control and learning theories, this paper addresses the question of controlling a multi-joint system using sensory feedback. A generic Sensory-Motor Control Model (SMCM) is first presented that solves the inverse kinematics difficulty at a theoretical level. Computational implementation of the SMCM requires knowledge of sensory-motor transforms that depend directly on the multi-joint structure to be controlled. To avoid the dependency of the SMCM on analytical knowledge of these transforms, a non-parametric learning approach is developed to identify the nonlinear mappings between sensory signals and motor commands involved in the SMCM. The resulting adaptive SMCM (ASMCM) is extensively tested within the scope of hand-arm reaching movements. The ASMCM proves to be very effective and robust, at least for this task. Its generic properties and effectiveness suggest a wide area of application.
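The abstract does not name the specific non-parametric estimator, so the sketch below uses k-nearest-neighbour regression as a stand-in, on a hypothetical two-joint planar arm: babble motor commands, record the resulting hand positions, and fit the inverse sensory-to-motor map directly from the samples. Restricting the elbow to one bending direction keeps the inverse map single-valued.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
L1, L2 = 0.3, 0.25  # link lengths of the hypothetical arm (metres)

def hand_position(q):
    """Forward kinematics of a planar two-joint arm (the sensory side)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

# Motor babbling: sample joint angles, observe where the hand ends up.
Q = rng.uniform([0.0, 0.0], [np.pi, np.pi], size=(5000, 2))
X = np.array([hand_position(q) for q in Q])

# Non-parametric inverse map (sensory -> motor), no analytical model needed.
inverse_map = KNeighborsRegressor(n_neighbors=5).fit(X, Q)

target = np.array([[0.35, 0.25]])
q_cmd = inverse_map.predict(target)[0]
print(q_cmd, hand_position(q_cmd))  # commanded posture lands near the target
```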

    Reduction in Learning Rates Associated with Anterograde Interference Results from Interactions between Different Timescales in Motor Adaptation

    Prior experiences can influence future actions. These experiences can not only drive adaptive changes in motor output, but they can also modulate the rate at which these adaptive changes occur. Here we studied anterograde interference in motor adaptation – the ability of a previously learned motor task (Task A) to reduce the rate of subsequently learning a different (and usually opposite) motor task (Task B). We examined the formation of the motor system's capacity for anterograde interference in the adaptive control of human arm reaching movements by determining the amount of interference after varying durations of exposure to Task A (13, 41, 112, 230, and 369 trials). We found that the amount of anterograde interference observed in the learning of Task B increased with the duration of Task A. However, this increase did not continue indefinitely; instead, the interference reached asymptote after 15–40 trials of Task A. Interestingly, we found that a recently proposed multi-rate model of motor adaptation, composed of two distinct but interacting adaptive processes, predicts several key features of the interference patterns we observed. Specifically, this computational model (without any free parameters) predicts the initial growth and leveling off of anterograde interference that we describe, as well as the asymptotic amount of interference that we observe experimentally (R² = 0.91). Understanding the mechanisms underlying anterograde interference in motor adaptation may enable the development of improved training and rehabilitation paradigms that mitigate unwanted interference.
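The multi-rate model referenced here has a standard two-state form (Smith et al., 2006): a fast process with a high learning rate and rapid forgetting interacting with a slow process with a low learning rate and strong retention. A minimal sketch, with retention and learning-rate parameters approximately those reported by Smith et al. (the trial counts mirror the abstract's design):

```python
import numpy as np

def two_state_adaptation(perturbations, a_f=0.59, b_f=0.21, a_s=0.992, b_s=0.02):
    """Two-state (multi-rate) motor adaptation: net output is the sum of a
    fast and a slow adaptive process, each driven by the same trial error."""
    x_f = x_s = 0.0
    outputs = []
    for p in perturbations:
        e = p - (x_f + x_s)        # error experienced on this trial
        x_f = a_f * x_f + b_f * e  # fast state: learns and forgets quickly
        x_s = a_s * x_s + b_s * e  # slow state: learns slowly, retains strongly
        outputs.append(x_f + x_s)
    return np.array(outputs)

# Anterograde interference: longer exposure to Task A (+1) loads the slow
# state, which then opposes early learning of the opposite Task B (-1).
for n_a in (13, 41, 112, 369):
    out = two_state_adaptation([+1.0] * n_a + [-1.0] * 50)
    print(n_a, out[n_a:n_a + 5].round(2))  # early Task B progress slows as n_a grows
```

Because the slow state saturates after a few tens of trials, the interference it produces saturates as well, which matches the leveling-off the abstract reports.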