12 research outputs found

    Distinct Haptic Cues Do Not Reduce Interference when Learning to Reach in Multiple Force Fields

    Background: Previous studies of learning to adapt reaching movements in the presence of novel forces show that learning multiple force fields is prone to interference. Recently it has been suggested that force field learning may reflect learning to manipulate a novel object. Within this theoretical framework, interference in force field learning may be the result of static tactile or haptic cues associated with grasp, which fail to indicate changing dynamic conditions. The idea that different haptic cues (e.g. those associated with different grasped objects) signal motor requirements and promote the learning and retention of multiple motor skills has not previously been explored in the context of force field learning. Methodology/Principal Findings: The present study tested the possibility that interference can be reduced when two different force fields are associated with differently shaped objects grasped in the hand. Human subjects were instructed to guide a cursor to targets while grasping a robotic manipulandum, which applied two opposing velocity-dependent curl fields to the hand. For one group of subjects the manipulandum was fitted with two different handles, one for each force field. No attenuation in interference was observed in these subjects relative to controls who used the same handle for both force fields. Conclusions/Significance: These results suggest that, in the context of the present learning paradigm, haptic cues on their own are not sufficient to reduce interference and promote learning of multiple force fields.
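    For readers unfamiliar with the paradigm, the sketch below shows the standard form of a velocity-dependent curl field, in which the robot pushes the hand perpendicular to its instantaneous velocity. The gain value, sign convention, and function name are illustrative assumptions, not parameters reported in the study.

```python
import numpy as np

def curl_field_force(velocity_xy, gain=15.0, clockwise=True):
    """Force (N) applied to the hand for a given hand velocity (m/s)."""
    sign = 1.0 if clockwise else -1.0          # CWFF vs. CCWFF (opposing fields)
    rotation = np.array([[0.0, 1.0],
                         [-1.0, 0.0]])         # rotates velocity by 90 degrees
    return sign * gain * rotation @ np.asarray(velocity_xy, dtype=float)

# Example: a 0.3 m/s forward movement is pushed sideways by the CW field.
print(curl_field_force([0.0, 0.3]))            # -> [4.5 0.]
```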

    Movement perpendicular distance for subjects who grasped the same handle in three consecutive blocks of the CWFF.

    Each data point represents the mean perpendicular distance over 6 movements, averaged over subjects.

    Movement perpendicular distance is shown over the course of movements in the CWFF, CCWFF and CWFF.

    Data plotted in dark grey represent subjects who grasped the same handle in all three sessions. Data plotted in light grey represent subjects who grasped a given handle shape for the CWFF and a different handle shape for the CCWFF. Each data point represents the mean perpendicular distance over 6 movements, averaged over subjects.
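    As a reference for how such curves are typically derived, the sketch below computes one common definition of movement perpendicular distance: the maximum deviation of the hand path from the straight line joining start and target. The function and example values are illustrative; the study's own analysis code is not published here.

```python
import numpy as np

def max_perpendicular_distance(path_xy, start_xy, target_xy):
    """Maximum perpendicular deviation of a 2-D hand path from the start-to-target line."""
    path = np.asarray(path_xy, dtype=float)
    start = np.asarray(start_xy, dtype=float)
    target = np.asarray(target_xy, dtype=float)
    line_unit = (target - start) / np.linalg.norm(target - start)
    offsets = path - start
    # Signed perpendicular component via the 2-D cross product.
    perp = offsets[:, 0] * line_unit[1] - offsets[:, 1] * line_unit[0]
    return np.max(np.abs(perp))

# Example: a path bulging 2 cm sideways during a 10 cm forward reach.
path = [[0, 0], [0.02, 0.05], [0, 0.10]]
print(max_perpendicular_distance(path, [0, 0], [0, 0.10]))   # -> 0.02
```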

    Experimental setup.

    Subjects grasped the handle of the InMotion robotic device (Interactive Motion Technologies, Cambridge, MA) with their right arm abducted at the shoulder and supported by a custom-made air sled. Subjects produced horizontal-plane arm movements involving shoulder and elbow rotation to guide the motion of a cursor to a series of visual targets, projected by a computer-controlled LCD projector onto a screen suspended 20 cm above the hand and reflected into view by a semi-silvered mirror positioned 10 cm below the screen. This created the illusion that the targets were positioned in the plane of the subject's arm movements.

    Role of Cocontraction in Arm Movement Accuracy
