41 research outputs found

    Online reach adjustments induced by real-time movement sonification

    Movement sonification can improve motor control in both healthy subjects (e.g., learning or refining a sport skill) and those with sensorimotor deficits (e.g., stroke patients and deafferented individuals). It is not known whether improved motor control and learning from movement sonification are driven by feedback-based real-time (“online”) trajectory adjustments, adjustments to internal models over multiple trials, or both. We searched for evidence of online trajectory adjustments (muscle twitches) in response to movement sonification feedback by comparing the kinematics and error of reaches made with online (i.e., real-time) and terminal sonification feedback. We found that reaches made with online feedback were significantly jerkier than reaches made with terminal feedback, indicating increased muscle twitching (i.e., online trajectory adjustment). Using a between-subjects design, we found that online feedback was associated with better motor learning of a reach path and target than terminal feedback; however, using a within-subjects design, we found that switching participants who had learned with online sonification feedback to terminal feedback was associated with a decrease in error. Thus, our results suggest that, with our task and sonification, movement sonification leads to online trajectory adjustments that improve internal models over multiple trials but are not themselves helpful online corrections.
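    The jerkiness comparison hinges on jerk, the third time derivative of position, which is a standard smoothness measure for reach trajectories. The sketch below shows one common, dimensionless form of that metric; it is a minimal illustration assuming a trajectory sampled at a fixed rate, not the paper's own analysis code.

        import numpy as np

        def normalized_jerk(position, dt):
            """Dimensionless jerk metric for a reach trajectory.

            position : (n_samples, n_dims) array of hand positions
            dt       : sampling interval in seconds

            Higher values indicate a jerkier (less smooth) reach, as expected
            when online corrective adjustments are made mid-movement.
            """
            velocity = np.gradient(position, dt, axis=0)
            acceleration = np.gradient(velocity, dt, axis=0)
            jerk = np.gradient(acceleration, dt, axis=0)

            duration = dt * (len(position) - 1)
            path_length = np.sum(np.linalg.norm(np.diff(position, axis=0), axis=1))

            # Integrate squared jerk magnitude over the movement, then scale by
            # duration^5 / path_length^2 to make the measure dimensionless.
            squared_jerk_integral = np.sum(np.sum(jerk**2, axis=1)) * dt
            return duration**5 / path_length**2 * squared_jerk_integral

    Comparing this metric between reaches made with online and terminal feedback (for example, with a t-test across trials or participants) mirrors the jerkiness comparison described above.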

    The impact of visually simulated self-motion on predicting object motion: A registered report protocol.

    To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation, and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. In the first task, participants are shown a ball moving laterally that disappears after a certain time; they then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or the opposite direction as the ball. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually in either the same or the opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect performance in both tasks to be correlated, in both accuracy and precision.
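    As a rough illustration of how the speed estimation task could be analysed, the sketch below fits a cumulative Gaussian to two-interval forced-choice responses to recover a point of subjective equality (PSE) and a just-noticeable difference (JND). It is a generic psychometric-function fit on simulated responses with assumed variable names, not the registered analysis.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def fit_psychometric(comparison_speeds, chose_comparison):
            """Fit a cumulative Gaussian to 2IFC speed judgements.

            comparison_speeds : speeds of the ball cloud in the comparison interval
            chose_comparison  : boolean array, True if the cloud was judged faster

            Returns (pse, jnd): the PSE is the fitted mean (the speed at which the
            comparison is judged faster half the time); the JND is taken here as
            the fitted standard deviation.
            """
            def psychometric(x, mu, sigma):
                return norm.cdf(x, loc=mu, scale=sigma)

            p0 = [np.mean(comparison_speeds), np.std(comparison_speeds)]
            (mu, sigma), _ = curve_fit(psychometric, comparison_speeds,
                                       chose_comparison.astype(float), p0=p0)
            return mu, sigma

        # Simulated observer with a true PSE of 6 deg/s and a JND of 1 deg/s.
        rng = np.random.default_rng(0)
        speeds = rng.uniform(3, 9, size=200)
        responses = rng.random(200) < norm.cdf(speeds, loc=6.0, scale=1.0)
        pse, jnd = fit_psychometric(speeds, responses)

    In this framework, overestimating ball speed during opposite-direction self-motion would show up as a PSE shift relative to the static condition, and noisier velocity estimates as a larger JND.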

    Fig 5 -

    A. Relationship between the difference in PSEs between the “Opposite Directions” and “Observer Static” motion profiles in the speed estimation task (x axis) and the difference in predicted durations between these motion profiles (y axis). One data point corresponds to one participant. B. As A., but for the relation between the JND differences in the speed estimation task between the “Opposite Directions” and “Observer Static” motion profiles and the differences in standard deviations between these motion profiles.
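    The quantity plotted in panel A can be computed as two per-participant difference scores that are then correlated across participants; panel B applies the same logic to JNDs and response standard deviations. The sketch below uses placeholder data in place of the fitted estimates.

        import numpy as np
        from scipy.stats import pearsonr

        # Placeholder per-participant difference scores; in the real analysis these
        # would come from the psychometric fits and the prediction-task responses.
        rng = np.random.default_rng(1)
        n_participants = 40
        pse_shift = rng.normal(0.5, 0.3, n_participants)   # PSE(Opposite) - PSE(Static)
        duration_shift = -0.4 * pse_shift + rng.normal(0, 0.2, n_participants)

        # Across-participant correlation between the two difference scores (panel A).
        r, p = pearsonr(pse_shift, duration_shift)
        print(f"r = {r:.2f}, p = {p:.3f}")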

    Fig 6 -

    Simulated power for the prediction task (A), the speed estimation task (B), and the correlation between performance in speed estimation and speed prediction (C), separately for the statistical tests referring to biases (accuracy) and variability (precision). The number of participants for which we simulated power is on the x axis, while the number of trials for each task is coded with different shades of green and line types. The horizontal lines indicate power levels of 0.8, 0.9, and 0.95, respectively.
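    Curves like these come from a simulation-based power analysis: data are simulated for a given number of participants and trials under the hypothesised effect, the planned test is run, and the proportion of significant outcomes is recorded. The sketch below shows that general recipe with placeholder effect and noise parameters rather than the values used in the registered report.

        import numpy as np
        from scipy.stats import ttest_1samp

        def simulated_power(n_participants, n_trials, effect=0.1, trial_sd=1.0,
                            between_sd=0.2, n_simulations=2000, alpha=0.05, seed=0):
            """Estimate power for a one-sample t-test on per-participant effects.

            Each participant contributes the mean of n_trials noisy trial-level
            effects; power is the fraction of simulated experiments in which the
            group-level test against zero is significant.
            """
            rng = np.random.default_rng(seed)
            n_significant = 0
            for _ in range(n_simulations):
                participant_effects = effect + rng.normal(0, between_sd, n_participants)
                observed = participant_effects + rng.normal(
                    0, trial_sd / np.sqrt(n_trials), n_participants)
                if ttest_1samp(observed, 0.0).pvalue < alpha:
                    n_significant += 1
            return n_significant / n_simulations

        # Power as a function of sample size, as on the figure's x axis.
        for n in (20, 30, 40, 50):
            print(n, simulated_power(n, n_trials=40))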

    Results of the statistical tests (in terms of means and p values) performed over the fitted parameters capturing the effect of self-motion on different dependent variables.

    We shaded the results in red when they are significantly different from zero and negative, and in blue when they are significantly different from zero and positive. For precision, values above zero signify an increase in variability, i.e., a decrease in precision, and values below zero signify a decrease in variability, i.e., an increase in precision.

    Fig 4 -

    A. Predicted PSEs (y axis) for each ball speed (x axis) and motion profile (color-coded; left-most: “Observer Static”; middle: “Opposite Directions”; right-most: “Same Directions”). B. As A., but for the predicted JNDs.

    S1 Appendix -

    (DOCX)

    Overview of the results relating to all hypotheses.

    Overview of the results relating to all hypotheses.