Teaching a humanoid robot to reach for a
visual target is a complex problem in part because
of the high dimensionality of the control
space. In this paper, we demonstrate a biologically
plausible simplification of the reaching
process that replaces the degrees of freedom
in the neck of the robot with sensory readings
from a vestibular system. We show that
this simplification introduces errors that are
easily overcome by a standard learning algorithm.
Furthermore, the errors necessarily introduced
by this simplification produce
reaching trajectories that are curved in the
same way as human reaching trajectories.
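As a minimal sketch of the idea (not the paper's implementation): below, the torso-frame location of a visual target is computed from a simulated vestibular head orientation instead of neck joint angles, and a simple least-squares map stands in for the learning algorithm that corrects the resulting error. The fixed torso tilt, the single pitch joint, and the linear learner are all illustrative assumptions.

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

# Assumption: the torso is pitched by a fixed, unmodeled angle. A vestibular
# (inertial) sensor reports head orientation w.r.t. gravity, so using it in
# place of neck encoders folds the torso tilt into the head rotation.
torso_tilt = 0.1  # radians, hypothetical
rng = np.random.default_rng(0)

X, Y_true = [], []
for _ in range(200):
    neck_pitch = rng.uniform(-0.5, 0.5)
    target_head = rng.uniform(-1.0, 1.0, size=3)   # target in head/camera frame
    R_encoders = rot_x(neck_pitch)                 # torso <- head via neck joints
    R_vestib = rot_x(neck_pitch + torso_tilt)      # head orientation w.r.t. gravity
    X.append(R_vestib @ target_head)               # simplified (vestibular) estimate
    Y_true.append(R_encoders @ target_head)        # ground-truth torso-frame target
X, Y_true = np.array(X), np.array(Y_true)

# A linear map learned by least squares, standing in for the learning
# algorithm that absorbs the systematic error of the simplification.
W, *_ = np.linalg.lstsq(X, Y_true, rcond=None)

err_raw = np.mean(np.linalg.norm(X - Y_true, axis=1))
err_corrected = np.mean(np.linalg.norm(X @ W - Y_true, axis=1))
```

Because the error here is a constant rotation, the learned correction removes it almost entirely; in the paper's setting the learner plays the analogous role of compensating for the vestibular simplification.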