13 research outputs found

    Haptic guidance improves the visuo-manual tracking of trajectories

    BACKGROUND: Learning to perform new movements is usually achieved by following visual demonstrations. Haptic guidance by a force-feedback device is a recent and original technology that provides additional proprioceptive cues during visuo-motor learning tasks. The effects of two types of haptic guidance, control in position (HGP) or control in force (HGF), on the visuo-manual tracking ("following") of trajectories are still under debate. METHODOLOGY/PRINCIPAL FINDINGS: Three training techniques of haptic guidance (HGP, HGF, or a control condition, NHG, without haptic guidance) were evaluated in two experiments. Movements produced by adults were assessed in terms of shape (dynamic time warping) and kinematic criteria (number of velocity peaks and mean velocity) before and after the training sessions. CONCLUSION/SIGNIFICANCE: These results show that the addition of haptic information, probably encoded in force coordinates, plays a crucial role in the visuo-manual tracking of new trajectories.
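    The shape and kinematic criteria named in this abstract can be sketched in a few lines. A minimal illustration, assuming 2-D trajectories sampled at a fixed rate; the exact analysis parameters used in the study are not given in the abstract:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping: cumulative Euclidean cost of the optimal
        monotonic alignment between two trajectories (shape dissimilarity)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def tangential_velocity(xy, dt):
        """Speed profile from sampled positions, by finite differences."""
        return np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt

    def count_velocity_peaks(v):
        """Local maxima of the speed profile; fewer peaks = smoother movement."""
        return int(np.sum((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])))
    ```

    Identical trajectories give a DTW distance of zero, and `v.mean()` supplies the mean-velocity criterion.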

    Influence of visual constraints in the trajectory formation of grasping movements

    The main objective of the present study is to show that the visual context can influence the trajectory formation of grasping movements. We asked participants to reach and grasp a cylinder placed at three different positions: −20°, 0° and 20° of eccentricity with respect to the midsagittal axis. Grasping movements were performed in a direct and in an indirect visual feedback condition (i.e., controlled through a vertical video display). Results revealed that for grasping movements directed toward objects located at −20° and 0°, the path curvatures of the wrist, the thumb and the index finger were significantly straighter in the indirect visual feedback condition. However, no significant difference in hand-path curvature was observed when the movement was directed toward the object located at 20°. This suggests that grasping movements controlled through remote visual feedback tend to be planned in extrinsic space and that the effect of the visual context on movement planning is not isotropic over the workspace.
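    The hand-path curvature compared across feedback conditions can be illustrated with a simple straightness index. A sketch under the assumption that curvature is quantified as the maximum perpendicular deviation from the start-to-end chord, normalised by chord length; the abstract does not specify the exact index the authors used:

    ```python
    import numpy as np

    def curvature_index(xy):
        """Maximum perpendicular deviation of a 2-D path from its
        start-to-end chord, normalised by chord length.
        Returns 0 for a perfectly straight path."""
        start, end = xy[0], xy[-1]
        chord = end - start
        length = np.linalg.norm(chord)
        # Perpendicular distance of each sample to the chord
        # (z-component of the 2-D cross product, divided by chord length).
        rel = xy - start
        dev = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / length
        return dev.max() / length
    ```

    Straighter paths in the indirect-feedback condition would show up as smaller values of this index.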

    Visual and motor constraints on trajectory planning in pointing movements

    The aim of the present study was to show that planning and controlling the trajectory of a pointing movement is influenced not solely by physical constraints but also by visual constraints. Subjects were required to point towards different targets located at 20°, 40°, 60° and 80° of eccentricity. Movements were either constrained (i.e. two-dimensional movements) or unconstrained (i.e. three-dimensional movements). Furthermore, movements were carried out under either direct or remote visual control (use of a video system). Results revealed that the trajectories of constrained movements were nearly straight whatever the eccentricity of the target and the type of visual control. A different pattern was revealed for unconstrained movements. Indeed, under direct vision the trajectory curvature increased as eccentricity increased, whereas under indirect vision, trajectories remained nearly straight whatever the eccentricity of the target. Thus, movements controlled through remote visual feedback, like constrained movements, appear to be planned in extrinsic space.

    Dissociation between "where" and "how" judgements of one's own motor performance in a video-controlled reaching task

    The aim of the present study is to show that the sensorimotor system makes differential use of visual and internal (proprioception and efference copy) signals when evaluating either the spatial or the dynamical components of our own motor response carried out under remote visual feedback. Subjects were required to monitor target-directed pointings from the images furnished by a video camera overhanging the workspace. By rotating the camera, the orientation of the movement perceived on the screen was either changed by 45° (visual bias) or kept in conformity with the actual trajectory (0°). In either condition, after completing twenty pointings, participants had to evaluate their visuomotor performance in two non-visual tests: they were asked both to reach the target in a single movement (evaluation of "how to reach the target") and to evaluate the mapping of the spatial layout in which they acted (evaluation of "where the starting position was and what the movement direction was"). Results revealed that although motor performance in the 45° condition was adapted to the visuomotor conflict, participants' evaluation of the spatial aspect of the performance was affected by the biased visual information. A different pattern was revealed for the evaluation of "how" the target was reached, which was not affected by the visual bias. Thus, it is suggested that segregated processing of visual and kinesthetic information occurs depending upon the dimension of the performance that is judged. Visual information prevails when identifying the spatial context of a motor act, whereas proprioception and/or efference-copy signals are privileged when evaluating the dynamical component of the response.
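    The 45° visual bias amounts to showing participants their own trajectory rotated about the camera axis. A minimal sketch of that transformation, a plain 2-D rotation; the function name and setup are illustrative, not taken from the study:

    ```python
    import numpy as np

    def rotate_view(xy, angle_deg):
        """Rotate 2-D trajectory samples about the origin, mimicking the
        rotated-camera feedback in the 45-degree visual-bias condition."""
        t = np.radians(angle_deg)
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        return xy @ R.T
    ```

    With an angle of 0° the on-screen trajectory matches the actual one, corresponding to the unbiased control condition.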

    Visual field plasticity in hearing users of sign language.

    Studies have observed that deaf signers have a larger Visual Field (VF) than hearing non-signers, with a particularly large extension in the lower part of the VF. This increment could stem from early deafness or from the extensive use of sign language, since the lower VF is critical to perceiving and understanding linguistic gestures in sign language communication. The aim of the present study was to explore the potential impact of sign language experience without deafness on VF sensitivity within its lower part. Using a standard Humphrey Visual Field Analyzer, we compared luminance sensitivity in the fovea and between 3 and 27 degrees of visual eccentricity for the upper and lower VF between hearing users of French Sign Language and age-matched hearing non-signers. Sensitivity in the fovea and in the upper VF was similar in both groups. Hearing signers had, however, higher luminance sensitivity than non-signers in the lower VF, but only between 3 and 15°, the visual location for sign language perception. Sign language experience, not associated with deafness, may therefore be a modulating factor of VF sensitivity, restricted to the very specific location where signs are perceived.