Knowing one's limb location is crucial for producing efficient movements. When a movement error is experienced, people assess whether the source of the error is external or internal. When the error is clearly not caused by oneself, it is intuitive to correct for it without updating internal models of movement or estimates of the position of the effector. That is, there should be reduced or no reliance on implicit learning. However, merely inducing explicit adaptation does not affect measures of implicit learning. Here, we use different visual manipulations that make the external nature of the error clear, and test how these manipulations affect both motor behaviour and hand location estimates.
We manipulate the extent of external error attribution in four ways while participants learn to perform a 30-degree visuomotor rotation task: a Non-instructed control group that receives neither instructions nor altered visual stimuli, an Instructed group that receives a strategy to counter the rotation, a Cursor Jump group that sees the cursor misalignment occur mid-reach on every training trial, and a Hand View group that sees both the misaligned cursor and their actual hand on every trial. Although the Instructed group shows an initial advantage in learning, performance does not differ across groups by the end of training, suggesting that any effects observed in motor behaviour and hand localization are due to the manipulations.
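To make the perturbation concrete, here is a minimal sketch (not code from the study) of how cursor feedback is typically generated in a visuomotor rotation task: the cursor is the hand position rotated about the movement origin. The 30-degree angle comes from the text; the coordinate frame, rotation sign, and function names below are assumptions.

```python
import numpy as np

def rotated_cursor(hand_xy, start_xy, angle_deg=30.0):
    """Return the cursor position: the hand position rotated by
    angle_deg around the movement start point (sign convention assumed)."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(start_xy) + rot @ (np.asarray(hand_xy) - np.asarray(start_xy))

# A straight reach along the x-axis produces cursor feedback deviated by
# 30 degrees, so full compensation requires aiming 30 degrees the other way.
print(rotated_cursor([10.0, 0.0], [0.0, 0.0]))  # -> approx. [8.66, 5.0]
```

Under this transform, reaching straight at the target drives the cursor 30 degrees off-target, which is the error participants must learn to counter.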
After training, participants perform reaches without visual feedback of the cursor, while either including or excluding any strategy they may have developed during adaptation training to counter the visuomotor rotation. All groups except the Non-instructed group show awareness of the nature of the perturbation. Implicit changes in motor behaviour, measured with reach aftereffects, persist for all groups but are greatly reduced for the Hand View group. For hand localization, participants either generate their own movement (allowing hand localization to draw on both afferent-based proprioceptive information and efferent-based predictions of sensory consequences) or a robot moves their hand (allowing only proprioceptive information). We find that afferent-based changes in hand localization persist across all groups, but efferent-based changes are reduced only for the Hand View group. These results show that the brain incorporates source attribution when estimating the position of the effector during motor learning, and that proprioceptive recalibration during hand localization is an implicit process impervious to external error attribution.
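For concreteness, the sketch below shows one way these measures could be quantified, assuming that reach aftereffects are the change in angular reach error from baseline to post-training no-cursor reaches, and that afferent and efferent contributions to localization combine additively so the efferent component is the active-localization shift minus the passive shift. These are standard assumptions, not the paper's analysis code, and all names are hypothetical.

```python
import numpy as np

def angular_error(endpoint_xy, target_xy, start_xy):
    """Signed angle (deg) between the reach endpoint and the target,
    both measured from the movement start point."""
    v_reach = np.asarray(endpoint_xy) - np.asarray(start_xy)
    v_target = np.asarray(target_xy) - np.asarray(start_xy)
    ang = np.degrees(np.arctan2(v_reach[1], v_reach[0]) -
                     np.arctan2(v_target[1], v_target[0]))
    return (ang + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

def reach_aftereffect(baseline_errors, post_errors):
    """Aftereffect: mean post-training no-cursor error minus mean baseline error."""
    return np.mean(post_errors) - np.mean(baseline_errors)

def efferent_contribution(active_shift, passive_shift):
    """Efferent (predicted-consequences) component of the localization shift,
    assuming afferent and efferent contributions add: active minus passive."""
    return active_shift - passive_shift
```

In this scheme, the passive (robot-moved) localization shift indexes proprioceptive recalibration on its own, which is the component the results show to be unaffected by external error attribution.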