Reinforcement learning control of a biomechanical model of the upper extremity
Among the infinite number of possible movements that can be produced, humans
are commonly assumed to choose those that optimize criteria such as minimizing
movement time, subject to certain movement constraints like signal-dependent
and constant motor noise. While so far these assumptions have only been
evaluated for simplified point-mass or planar models, we address the question
of whether they can predict reaching movements in a full skeletal model of the
human upper extremity. We learn a control policy using a motor babbling
approach as implemented in reinforcement learning, using aimed movements of the
tip of the right index finger towards randomly placed 3D targets of varying
size. We use a state-of-the-art biomechanical model, which includes seven
actuated degrees of freedom. To deal with the curse of dimensionality, we use a
simplified second-order muscle model, acting at each degree of freedom instead
of individual muscles. The results confirm that the assumptions of
signal-dependent and constant motor noise, together with the objective of
movement time minimization, are sufficient for a state-of-the-art skeletal
model of the human upper extremity to reproduce complex phenomena of human
movement, in particular Fitts' Law and the 2/3 Power Law. This result supports
the notion that control of the complex human biomechanical system can plausibly
be determined by a set of simple assumptions and can easily be learned.
Comment: 19 pages, 7 figures
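The abstract above appeals to two movement regularities, Fitts' Law and the 2/3 Power Law, and to signal-dependent plus constant motor noise. As a reminder of what those relationships state, here is a minimal sketch; the constants (a, b, k, the noise coefficients) are illustrative placeholders, not values fitted in the paper:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts' Law: movement time = a + b * log2(2D/W),
    where log2(2D/W) is the index of difficulty."""
    return a + b * math.log2(2 * distance / width)

def two_thirds_power_law(k, curvature):
    """2/3 Power Law: tangential speed scales as curvature**(-1/3)
    (equivalently, angular speed scales as curvature**(2/3))."""
    return k * curvature ** (-1.0 / 3.0)

def motor_noise_std(u, c_signal=0.1, c_const=0.01):
    """Signal-dependent plus constant motor noise: the standard
    deviation of the motor command grows with its magnitude |u|."""
    return c_signal * abs(u) + c_const
```

For example, doubling the target distance at fixed width raises the index of difficulty by one bit, so predicted movement time increases by the slope b.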
Breathing Life Into Biomechanical User Models
Forward biomechanical simulation in HCI holds great promise as a tool for evaluation, design, and engineering of user interfaces. Although reinforcement learning (RL) has been used to simulate biomechanics in interaction, prior work has relied on unrealistic assumptions about the control problem involved, which limits the plausibility of the emerging policies. These assumptions include direct torque actuation as opposed to muscle-based control; direct, privileged access to the external environment instead of imperfect sensory observations; and a lack of interaction with physical input devices. In this paper, we present a new approach for learning muscle-actuated control policies based on perceptual feedback in interaction tasks with physical input devices. This allows modelling of more realistic interaction tasks with cognitively plausible visuomotor control. We show that our simulated user model successfully learns a variety of tasks representing different interaction methods, and that the model exhibits characteristic movement regularities observed in studies of pointing. We provide an open-source implementation which can be extended with further biomechanical models, perception models, and interactive environments.
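The control problem this abstract describes combines muscle-like actuation, signal-dependent motor noise, and imperfect sensory observations rather than privileged state access. A toy one-dimensional pointing environment can illustrate that loop; everything below (class name, dynamics, constants) is a hypothetical sketch for intuition, not the paper's open-source implementation:

```python
import random

class PointingEnv:
    """Toy 1-D stand-in for a muscle-actuated pointing task: the agent
    issues a signed control in [-1, 1] (an agonist/antagonist pair),
    the effector moves under damped second-order dynamics, and the agent
    only ever sees noisy observations of position and target."""

    def __init__(self, target=0.8, width=0.05, obs_noise=0.01):
        self.target, self.width, self.obs_noise = target, width, obs_noise
        self.pos, self.vel = 0.0, 0.0

    def observe(self):
        # Imperfect perception: noisy estimates, never the true state.
        return (self.pos + random.gauss(0.0, self.obs_noise),
                self.target + random.gauss(0.0, self.obs_noise))

    def step(self, activation, dt=0.01):
        # Signal-dependent motor noise: std scales with |activation|.
        u = activation + random.gauss(0.0, 0.1 * abs(activation))
        force = u - 0.5 * self.vel        # simple damped actuation
        self.vel += force * dt
        self.pos += self.vel * dt
        return abs(self.pos - self.target) < self.width / 2  # target hit?
```

A naive policy that closes the loop on the noisy observations (e.g. a clipped proportional controller on the perceived error) already drives the effector toward the target; the RL policies discussed above replace such a hand-written controller with one learned end-to-end.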