7 research outputs found

    Time-Integrated Position Error Accounts for Sensorimotor Behavior in Time-Constrained Tasks

    Get PDF
    Several studies have shown that human motor behavior can be successfully described using optimal control theory, which describes behavior by optimizing the trade-off between the subject's effort and performance. This approach predicts that subjects reach the goal exactly at the final time. However, another strategy might be that subjects try to reach the target position well before the final time to avoid the risk of missing the target. To test this, we investigated whether minimizing the control effort and maximizing the performance is sufficient to describe human motor behavior in time-constrained motor tasks. In addition to the standard model, we postulate a new model that includes an additional cost criterion penalizing deviations between the position of the effector and the target throughout the trial, forcing arrival on target before the final time. To investigate which model gives the best fit to the data and to see whether that model is generic, we tested both models in two different tasks in which subjects used a joystick to steer a ball on a screen to hit a target (first task) or one of two targets (second task) before a final time. Noise of different amplitudes was superimposed on the ball position to investigate the ability of the models to predict motor behavior for different levels of uncertainty. The results show that a cost function representing only a trade-off between effort and accuracy at the end time is insufficient to describe the observed behavior. The new model correctly predicts that subjects steer the ball to the target position well before the final time is reached, in agreement with the observed behavior. This result is consistent for all noise amplitudes and for both tasks.
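    The two cost structures contrasted above can be sketched as follows; this is an illustrative quadratic form under assumed weights (the labels w_u, w_T, w_p are ours, not the authors' exact formulation):

    \[
    J_{\text{standard}} = w_u \int_0^T u(t)^2 \, dt \;+\; w_T \bigl(x(T) - x_{\text{target}}\bigr)^2
    \]
    \[
    J_{\text{extended}} = J_{\text{standard}} \;+\; w_p \int_0^T \bigl(x(t) - x_{\text{target}}\bigr)^2 \, dt
    \]

    Here u(t) is the control signal, x(t) the effector (ball) position, and T the final time; the added integral in the extended cost penalizes distance from the target throughout the trial, which pushes the predicted arrival on target well before T.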

    Visuomotor Coordination Is Different for Different Directions in Three-Dimensional Space

    No full text
    In most visuomotor tasks in which subjects have to reach to visual targets or move the hand along a particular trajectory, eye movements have been shown to lead hand movements. Because the dynamics of vergence eye movements differ from those of smooth pursuit and saccades, we have investigated the lead time of gaze relative to the hand for the depth component (vergence) and in the frontal plane (smooth pursuit and saccades) in a tracking task and in a tracing task in which human subjects were instructed to move the finger along a 3D path. For tracking, gaze leads finger position on average by 28 ± 6 ms (mean ± SE) for the components in the frontal plane but lags finger position by 95 ± 39 ms for the depth dimension. For tracing, gaze leads finger position by 151 ± 36 ms for the depth dimension. For the frontal plane, the mean lead time of gaze relative to the hand is 287 ± 13 ms. However, we found that the lead time in the frontal plane was inversely related to the tangential velocity of the finger. This inverse relation for movements in the frontal plane could be explained by assuming that gaze leads the finger by a constant distance of approximately 2.6 cm (range of 1.5–3.6 cm across subjects).
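    The constant-lead-distance account implies a simple reciprocal relation between the temporal lead and finger speed; as a sketch (the symbols Δt, d, and v are ours, not the paper's notation):

    \[
    \Delta t(t) \;\approx\; \frac{d}{v(t)}, \qquad d \approx 2.6\ \text{cm}
    \]

    where v(t) is the tangential finger velocity in the frontal plane, so faster finger movements correspond to shorter gaze lead times.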

    Behavior and model predictions for the two-target task.

    No full text
    <p>See <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0033724#pone-0033724-g002" target="_blank">figure 2</a> for details.</p

    Model performance for the one-target task.

    No full text
    <p>Test error of standard model minus test error of extended model for all subjects and noise amplitudes. Values are given as the median over 100 cross-validation runs. The lower and upper error bars represent the 25 and 75 percentile, respectively. A positive value means that the extended model gave a better fit than the standard model. A value of zero means that there was no difference between the models. Conditions for which the extended model gave a significantly better prediction than the standard model are indicated by or . Subject S5 was discarded (see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0033724#s2" target="_blank"><i>Methods</i></a>).</p

    Schematic of the tasks.

    No full text
    <p>(a) In the one-target task (left panel), subjects had to control a joystick to move a ball on a screen to hit a target (rectangle) at time s. The ball (circle) started at a random vertical position between and 0.5 at the left of the screen and moved at a constant horizontal velocity to the right. Subjects could move the ball up- or downwards. Gaussian white noise was superimposed on the vertical ball position to introduce uncertainty about future ball positions. The dashed line illustrates the trajectory of the ball. (b) In the two-target task (right panel), two targets were present at vertical positions and 0.5. The ball started at vertical position at the left of the screen. Subjects were asked to steer to one of the targets and they were free to choose which one. All other experimental conditions were exactly the same as for the one-target task. (c) Ball position time traces (100 trials) of subject S6 performing the one-target task with noise amplitude . (d) Same for the two-target task. (e) Control signal time traces corresponding to the ball position time traces in panel c (100 trials) of subject S6 performing the one-target task with noise amplitude . (f) Same for the two-target task.</p

    Behavior and model predictions for the one-target task.

    No full text
    <p>Top panels: average ball position displayed as mean (gray solid line) and standard deviation (gray shaded area) for all noise amplitudes (rows) and subjects (columns). The black dashed and solid line represent the average fit of the standard model and the extended model, respectively. Bottom panels: same for the control signal. Subject S5 was discarded (see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0033724#s2" target="_blank">Methods</a>).</p

    Predictive mechanisms in the control of contour following

    No full text
    In haptic exploration, when running a fingertip along a surface, the control system may attempt to anticipate upcoming changes in curvature in order to maintain a consistent level of contact force. Such predictive mechanisms are well known in the visual system but have yet to be studied in the somatosensory system. Thus, the present experiment was designed to reveal human capabilities for different types of haptic prediction. A robot arm with a large 3D workspace was attached to the index fingertip and was programmed to produce virtual surfaces with curvatures that varied within and across trials. With eyes closed, subjects moved the fingertip around elliptical hoops with flattened regions or Limaçon shapes, in which the curvature varied continuously. Subjects anticipated the corner of the flattened region rather poorly, but for the Limaçon shapes they varied finger speed with upcoming curvature according to the two-thirds power law. Furthermore, although the Limaçon shapes were presented in random 3D orientations, modulation of contact force also indicated good anticipation of upcoming changes in curvature. The results demonstrate that it is difficult to haptically anticipate the spatial location of an abrupt change in curvature, but following smooth changes in curvature may be facilitated by anticipatory predictions.
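    For reference, the two-thirds power law mentioned above relates movement speed to path curvature; in its usual form (the gain factor K is a free parameter, not a value reported in this study):

    \[
    A(t) = K\, C(t)^{2/3} \quad\Longleftrightarrow\quad v(t) = K\, C(t)^{-1/3}
    \]

    where A is angular velocity, v is tangential velocity, and C is the path curvature. Modulating finger speed with upcoming curvature in this way implies anticipation of the curvature profile rather than purely reactive control.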