15 research outputs found

    The human motor system alters its reaching movement plan for task-irrelevant, positional forces.

    The minimum intervention principle and the uncontrolled manifold hypothesis state that our nervous system only responds to force perturbations and sensorimotor noise if they affect task success. This idea has been tested in muscle and joint coordinate frames and, more recently, using workspace redundancy (e.g., reaching to large targets). However, reaching studies typically involve spatial and/or temporal constraints. Constrained reaches represent a small proportion of the movements we perform daily and may limit the emergence of natural behavior. Using more relaxed constraints, we conducted two reaching experiments to test the hypothesis that humans respond to task-relevant forces and ignore task-irrelevant forces. We found that participants responded to both task-relevant and task-irrelevant forces. Interestingly, participants experiencing a task-irrelevant force, which simply pushed them into a different area of a large target and had no bearing on task success, changed their movement trajectory prior to being perturbed. These movement trajectory changes did not counteract the task-irrelevant perturbations, as shown in previous research, but rather were made into new areas of the workspace. A possible explanation for this behavior change is that participants were engaging in active exploration. Our data have implications for current models and theories on the control of biological motion.

    Functional Plasticity in Somatosensory Cortex Supports Motor Learning by Observing.

    An influential idea in neuroscience is that the sensory-motor system is activated when observing the actions of others [1, 2]. This idea has recently been extended to motor learning, in which observation results in sensory-motor plasticity and behavioral changes in both motor and somatosensory domains [3-9]. However, it is unclear how the brain maps visual information onto motor circuits for learning. Here we test the idea that the somatosensory system, and specifically primary somatosensory cortex (S1), plays a role in motor learning by observing. In experiment 1, we applied stimulation to the median nerve to occupy the somatosensory system with unrelated inputs while participants observed a tutor learning to reach in a force field. Stimulation disrupted motor learning by observing in a limb-specific manner. Stimulation delivered to the right arm (the same arm used by the tutor) disrupted learning, whereas left arm stimulation did not. This is consistent with the idea that a somatosensory representation of the observed effector must be available during observation for learning to occur. In experiment 2, we assessed S1 cortical processing before and after observation by measuring somatosensory evoked potentials (SEPs) associated with median nerve stimulation. SEP amplitudes increased only for participants who observed learning. Moreover, SEPs increased more for participants who exhibited greater motor learning following observation. Taken together, these findings support the idea that motor learning by observing relies on functional plasticity in S1. We propose that visual signals about the movements of others are mapped onto motor circuits for learning via the somatosensory system.

    Neural signatures of reward and sensory error feedback processing in motor learning

    © 2019 the American Physiological Society. At least two distinct processes have been identified by which motor commands are adapted according to movement-related feedback: reward-based learning and sensory error-based learning. In sensory error-based learning, mappings between sensory targets and motor commands are recalibrated according to sensory error feedback. In reward-based learning, motor commands are associated with subjective value, such that successful actions are reinforced. We designed two tasks to isolate reward- and sensory error-based motor adaptation, and we used electroencephalography in humans to identify and dissociate the neural correlates of reward and sensory error feedback processing. We designed a visuomotor rotation task to isolate sensory error-based learning that was induced by altered visual feedback of hand position. In a reward learning task, we isolated reward-based learning induced by binary reward feedback that was decoupled from the visual target. A fronto-central event-related potential called the feedback-related negativity (FRN) was elicited specifically by binary reward feedback but not sensory error feedback. A more posterior component called the P300 was evoked by feedback in both tasks. In the visuomotor rotation task, P300 amplitude was increased by sensory error induced by perturbed visual feedback and was correlated with learning rate. In the reward learning task, P300 amplitude was increased by reward relative to nonreward and by surprise regardless of feedback valence. We propose that during motor adaptation the FRN specifically reflects a reward-based learning signal whereas the P300 reflects feedback processing that is related to adaptation more generally. NEW & NOTEWORTHY We studied the event-related potentials evoked by feedback stimuli during motor adaptation tasks that isolate reward- and sensory error-based learning mechanisms. We found that the feedback-related negativity was specifically elicited by binary reward feedback, whereas the P300 was observed in both tasks. These results reveal neural processes associated with different learning mechanisms and elucidate which classes of errors, from a computational standpoint, elicit the feedback-related negativity and P300.

    Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and in combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
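    The dissociation between the two loss functions can be made concrete with a minimal numerical sketch (not the authors' analysis code): under a skewed lateral-shift distribution, an error-based (squared-error) loss is minimized by aiming at the mean of the shifts, whereas a reinforcement-based loss (probability of landing inside the target) is maximized by aiming near the mode. The shift distribution, target half-width and motor variability below are illustrative assumptions, not the study's parameters.

```python
# Illustrative sketch: optimal aim under an error-based vs. a reinforcement-based
# loss when cursor feedback is laterally shifted by a skewed distribution.
# All parameter values are assumptions chosen for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
shifts = rng.gamma(shape=2.0, scale=7.5, size=20_000)  # skewed-right shifts (mm): mean 15, mode 7.5

w = 5.0          # assumed target half-width (mm)
motor_sd = 3.0   # assumed motor (aim-point) variability (mm)
aims = np.linspace(0.0, 40.0, 401)  # candidate compensations (mm)

sq_err, p_hit = [], []
for a in aims:
    miss = shifts - a  # cursor error relative to the target centre, before motor noise
    # Error-based loss: expected squared cursor error (motor noise adds a constant variance).
    sq_err.append(np.mean(miss ** 2) + motor_sd ** 2)
    # Reinforcement-based loss: probability that the cursor lands within the target,
    # averaging over the shift distribution and Gaussian motor noise.
    p_hit.append(np.mean(norm.cdf((w - miss) / motor_sd) - norm.cdf((-w - miss) / motor_sd)))

print(f"aim minimizing squared error  : {aims[np.argmin(sq_err)]:.1f} mm (mean shift = {shifts.mean():.1f} mm)")
print(f"aim maximizing hit probability: {aims[np.argmax(p_hit)]:.1f} mm (mode of shifts ~ 7.5 mm)")
```

    In this toy setting the squared-error aim sits at the mean of the shift distribution, whereas the hit-maximizing aim sits well below it, close to the mode, mirroring the qualitative difference reported between the error-feedback and reinforcement-feedback groups.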

    Average pattern of compensation of each group to the skewed lateral shift probability distribution for A) separate bins of trials and B) averaged across the last 400 trials.

    In A), all experimental trials are represented in the line graph (bins 1–50), where each point represents the average of 10 trials. The circles at bin 0 represent the average of trials 1–3 and are displayed to show the similarity between groups immediately after perturbation onset. In both A) and B), the upper dashed line represents the optimal compensation (location to aim the hand) that maximizes the probability of hitting the target, based on the movement variability of the Reinforcement group. The lower dashed line represents the optimal compensation (location to aim the hand) that minimizes squared error. These findings suggest that the sensorimotor system heavily weights error feedback over reinforcement feedback when both forms of feedback are available. Error bars represent ±1 standard error of the mean. * p < 0.05.

    Experiments 1 and 2: Participants held the handle of a robotic arm with their right hand.

    A semi-silvered mirror reflected the image (visual targets, visual feedback) from an LCD screen (not shown) onto a horizontal plane aligned with the shoulder. Participants made forward reaches from a home position, attempted to move through a visual target, and stopped once their hand passed over a horizontal line that disappeared when crossed. Error (visual) feedback and reinforcement feedback (target expansion, a pleasant sound and a monetary reward) were laterally shifted relative to true hand position. The magnitude of any particular lateral shift was drawn from a skewed probability distribution. Participants had to compensate for the lateral shifts to hit the target. Compensation represents how laterally displaced their hand was relative to the displayed target. Experiment 1: Laterally shifted error feedback was flashed halfway through each reach as a single dot (ς_0mm), a medium cloud of dots (ς_15mm), a large cloud of dots (ς_30mm), or withheld (ς_∞). The cursor and hand (not visible) paths shown above illustrate compensation that depended on the amount of visual uncertainty (ς_0mm and ς_15mm conditions shown). In the single-dot (ς_0mm) condition, participants received additional feedback (error or error + reinforcement) at the target. Experiment 2: Participants were provided with error feedback and/or reinforcement feedback only at the target.

    Dissociating error-based and reinforcement-based loss functions during sensorimotor learning - Fig 3

    Average pattern of compensation (filled circles) in Experiment 1 to different magnitudes of lateral shift (x-axes) and visual uncertainty (separate lines) of participants receiving A) error feedback laterally shifted by a skewed-right probability distribution, B) error feedback laterally shifted by a skewed-left probability distribution (note: these data are ‘flipped’ to visually align with the other groups), and C) both reinforcement and error feedback laterally shifted by a skewed-right probability distribution. A darker shade of blue signifies greater visual uncertainty. D) The best-fit power loss-function exponent (α^opt) of a Bayesian model that, in addition to characterizing how error was minimized, was also a sensitive metric of whether participants were influenced by reinforcement feedback (see Methods, http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005623#sec009). An exponent of 2.0 corresponds to minimizing squared error (upper dashed line), while an exponent of 1.0 corresponds to minimizing absolute error (lower dashed line). We found no significant differences in either the compensation (p = 0.956) or α^opt (p = 0.187) between groups. These findings suggest that all groups minimized approximately squared error, and that the sensorimotor system heavily weights error feedback over reinforcement feedback when both forms of feedback are available. Error bars represent ±1 standard error of the mean.
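    To see how such an exponent behaves, here is a hedged sketch (not the authors' Bayesian model, and ignoring its visual-uncertainty component) of fitting a power loss-function exponent: for a candidate exponent alpha, the predicted aim point minimizes the expected loss E[|shift - aim|^alpha] under the skewed shift distribution, and the best-fit alpha is the one whose predicted aim is closest to a participant's observed compensation. The shift distribution and search grids below are illustrative assumptions.

```python
# Hedged sketch of a power loss-function exponent fit (illustrative only; the
# paper's actual model is Bayesian and also incorporates visual uncertainty).
import numpy as np

rng = np.random.default_rng(1)
shifts = rng.gamma(shape=2.0, scale=7.5, size=20_000)  # illustrative skewed-right shifts (mm)

def optimal_aim(alpha, aims=np.linspace(0.0, 40.0, 201)):
    """Aim point minimizing the expected power loss E[|shift - aim| ** alpha]."""
    losses = [np.mean(np.abs(shifts - a) ** alpha) for a in aims]
    return aims[int(np.argmin(losses))]

def fit_alpha(observed_compensation, alphas=np.linspace(0.5, 3.0, 26)):
    """Exponent whose predicted aim best matches an observed compensation."""
    predicted = np.array([optimal_aim(al) for al in alphas])
    return alphas[int(np.argmin(np.abs(predicted - observed_compensation)))]

print("alpha = 2.0 predicts an aim at", optimal_aim(2.0), "mm (the mean of the shifts)")
print("alpha = 1.0 predicts an aim at", optimal_aim(1.0), "mm (the median of the shifts)")
print("best-fit alpha for an observed 15 mm compensation:", fit_alpha(15.0))
```

    An exponent near 2 reproduces mean-seeking (squared-error) compensation, an exponent near 1 reproduces median-seeking (absolute-error) compensation, and exponents below 1 push the predicted aim toward the mode, which is why the fitted exponent can serve as an index of sensitivity to reinforcement feedback.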

    Group comparisons, p-values and effect sizes were robust to whether the last 100, 200, 300 and 400 trials were averaged together.


    Pattern of compensation of a typical participant in the: A) reinforcement group, B) reinforcement + error group, and C) error group.

    All experimental trials are represented in the line graph (bins 1–50), where each point represents the average of 10 trials. The circles at bin 0 represent the average of the first three trials and show each participant’s behaviour immediately after perturbation onset, which was similar across groups (see Fig 6, http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005623#pcbi.1005623.g006). For each group, the upper dashed line represents the optimal compensation (location to aim the hand) that maximizes the probability of hitting the target. The lower dashed line represents the optimal compensation (location to aim the hand) that minimizes squared error. It can be seen that the Reinforcement participant had a pattern of compensation that on average maximized target hits. Conversely, both the Error participant and the Reinforcement + Error participant learned a compensation that on average minimized approximately squared error. This behaviour was consistent across participants. Error bars represent ±1 standard deviation.