
    Generalization in Adaptation to Stable and Unstable Dynamics

    Humans skillfully manipulate objects and tools despite the inherent instability of these tasks. To succeed, the sensorimotor control system must build an internal representation of both force and mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model of this learning and generalization, which specifies how to learn feedforward muscle activity as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we derive an algorithm from a gradient-descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives, such as force fields, muscle synergies, physical models, or artificial neural networks. This model of human learning and generalization adapts to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some previous interpretations of experiments on generalization. © 2012 Kadiallah et al.
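    The abstract above describes learning feedforward activity by gradient descent on motion error and effort, with co-activation driven by error. A minimal sketch of that idea, assuming a toy scalar error model and illustrative gains (a, b, gamma, f_req are hypothetical, not taken from the paper): each trial's error raises both antagonist activations (increasing co-activation, and hence stiffness) asymmetrically, so the net force corrects the signed error, while a decay term descends the effort gradient.

    ```python
    def relu(x):
        return max(x, 0.0)

    def adapt(f_req=5.0, trials=500, a=0.4, b=0.1, gamma=0.005):
        """Trial-by-trial V-shaped update for an antagonist muscle pair.

        All gains and the error model are illustrative assumptions.
        u1, u2: feedforward activations of agonist and antagonist.
        Net feedforward force = u1 - u2; co-activation (a stiffness
        proxy) = u1 + u2.
        """
        u1 = u2 = 0.0
        e = 0.0
        for _ in range(trials):
            net = u1 - u2
            stiff = u1 + u2
            # Toy error model: residual deviation shrinks as the net
            # feedforward force approaches f_req and as stiffness rises.
            e = (f_req - net) / (1.0 + stiff)
            # V-shaped rule: any error raises both activations
            # (co-activation), asymmetrically (a > b) so the net force
            # corrects the signed error; the gamma term is gradient
            # descent on effort.
            u1 += a * relu(e) + b * relu(-e) - gamma * u1
            u2 += a * relu(-e) + b * relu(e) - gamma * u2
            u1, u2 = max(u1, 0.0), max(u2, 0.0)
        return u1, u2, e
    ```

    Running this, the activations settle where the error-driven increase balances the effort-driven decay: the residual error becomes small, the agonist dominates (supplying the required net force), and a nonzero antagonist activation remains as stabilizing co-contraction.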

    Impedance control is tuned to multiple directions of movement

    Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make and vary the direction of movements in unstable environments. It has been shown that when a single reaching movement is repeated in unstable dynamics, the central nervous system (CNS) learns an impedance internal model to compensate for the environmental instability. However, there is still no explanation of how humans learn to move in various directions in such environments. In this study, we investigated whether and how humans compensate for instability while learning two different reaching movements simultaneously. Results show that when performing movements in two different directions, separated by a 35° angle, the CNS was able to compensate for the unstable dynamics. After adaptation, the force was similar to that in the free-movement condition, but stiffness increased in the direction of instability, specifically for each direction of movement. Our findings suggest that the CNS either learned an internal model generalizing over different movements, or alternatively that it was able to switch between specific models acquired simultaneously. © 2008 IEEE

    Impedance control is selectively tuned to multiple directions of movement

    Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make movements in varied directions in unstable environments. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles, tuning the mechanical stiffness of the limb end point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability, without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy. © 2011 the American Physiological Society
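    The two abstracts above report that stiffness increases selectively along the direction of instability, with no superfluous stiffness elsewhere. A minimal geometric sketch of that idea, assuming hypothetical numbers throughout (the baseline stiffness matrix, divergence gain, and margin are illustrative, not the papers' data): the cheapest way to stabilize a divergent field along a unit direction n is a rank-one stiffness increase c·nnᵀ, which raises stiffness along n just enough and leaves orthogonal directions untouched.

    ```python
    import numpy as np

    def selective_stiffening(K0, n, g, margin):
        """Add the minimal rank-one stiffness along unit direction n so
        that stiffness along n exceeds the divergence gain g plus a
        stability margin. K0 is a symmetric 2x2 end-point stiffness
        matrix (N/m); all values here are illustrative assumptions.
        """
        n = n / np.linalg.norm(n)
        current = n @ K0 @ n                      # stiffness already along n
        c = max(0.0, g + margin - current)        # shortfall to make up
        return K0 + c * np.outer(n, n)            # rank-one, direction-specific

    # Hypothetical example: baseline arm stiffness, instability at 35 deg.
    K0 = np.diag([200.0, 150.0])                  # N/m, illustrative
    theta = np.deg2rad(35.0)
    n = np.array([np.cos(theta), np.sin(theta)])  # direction of instability
    K = selective_stiffening(K0, n, g=300.0, margin=50.0)
    ```

    Along n, the tuned stiffness meets the required level exactly; along the orthogonal direction m, n·m = 0 makes the added term vanish, so stiffness is unchanged — the "no superfluous stiffness" property the study reports.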