15 research outputs found

    Integration process of visual and gravitoinertial cues for spatial orientation and sensorimotor control

    This dissertation investigates the process by which visual and gravitoinertial cues are integrated to produce perceptual-motor behavior. To that aim, we manipulated the sagittal orientation of a visual scene, of the body, and of the gravitoinertial vector by means of scene rotation, body rotation, and centrifugation, and we measured the consequences of these manipulations on the ability to localize the body or a target through manual pointing. Across 3 experiments, we modulated several factors associated with i) the presentation context of the visual and gravitoinertial stimulations (e.g., rotation dynamics: fast vs. slow), ii) the combination of these stimulations (i.e., spatial congruence vs. non-congruence), iii) the spatial response mode (i.e., self-tilt detection, discrete or continuous arm pointing), and iv) individual characteristics (i.e., perceptual style). Overall, these studies show that sensory weighting rules depend on the interaction between these factors. Two global effects on sensory weighting emerged: i) spatial non-congruence between the stimulations induces a relative dominance of gravitoinertial information, whatever the task or the properties of the visual scene; ii) by contrast, when the stimulations are congruent, the sensory weighting rules depend on the task (i.e., perceptual vs. sensorimotor).
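
    As context for these weighting results: sensory weighting in this literature is often formalized as reliability-weighted (maximum-likelihood) cue combination. The abstract does not commit to a specific model, so the following is only an illustrative sketch in our own notation, with visual (v) and gravitoinertial (g) estimates of an orientation angle \theta:

    \hat{\theta} = w_v \hat{\theta}_v + w_g \hat{\theta}_g, \qquad w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_g^2}, \qquad w_g = 1 - w_v

    Under this scheme the cue with the lower variance (higher reliability) dominates, and any task-dependent change in \sigma_v or \sigma_g shifts the weights, which is one way to read the task-dependent weighting reported above.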

    Sensorimotor control and linear visuohaptic gain

    Our direct interactions with the environment are performed through the sense of haptics: touch and kinesthesia are used to extract objects' properties as well as to control our motion relative to them. The effectiveness of this sensorimotor control is a key question in the field of Human-Computer Interaction, where the goal is to enhance user performance. This is notably the case when we use a touchpad to control a visual cursor on a separate screen (e.g., on a laptop). The control of these graphical interfaces is configured through a mapping between the motor space (e.g., the touchpad) and the visual space (e.g., the screen), called a Transfer Function (TF). When we use the touchpad to control the cursor on the screen, the motion is composed of a preprogrammed phase performed at high speed, followed by a homing phase performed at low speed and based on visuohaptic feedback (Elliott et al. 2010). Some operating system TFs (e.g., Windows, OS X) exploit this principle, applying a high visuomotor gain during the preprogrammed phase and a low gain during the homing phase in order to reduce Movement Time (MT). Such increasing-gain TFs have been shown to enhance performance (Casiez et al. 2008; Casiez and Roussel 2011). However, the reasons for this improvement are not fully elucidated, notably because the prescribed gains are non-linear. Here we analyzed the kinematics of a pointing task with linear velocity-based TFs to assess how we plan and control our movement based on vision and haptics (i.e., touch and kinesthesia involved in motion perception). We compared two TFs whose gain increases or decreases linearly with input velocity (hence non-linear pointing mappings) with constant-gain TFs.
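
    To make the distinction concrete, here is a minimal sketch of a transfer function whose gain is linear in input speed, next to a constant-gain TF. This is our own illustration, not the authors' implementation or any operating system's actual TF; the function names and all parameter values are arbitrary assumptions.

    def constant_gain_tf(dx_mm, dt_s, gain=2.0):
        """Cursor displacement (mm) under a constant-gain TF."""
        return gain * dx_mm

    def linear_velocity_gain_tf(dx_mm, dt_s, base_gain=1.0, slope=0.02):
        """Cursor displacement (mm) under a TF whose gain is a linear
        function of input speed; a negative slope gives a decreasing-gain TF."""
        speed_mm_s = abs(dx_mm) / dt_s         # input speed on the touchpad
        gain = base_gain + slope * speed_mm_s  # gain is linear in velocity...
        return gain * dx_mm                    # ...so output is non-linear in dx

    # A 5 mm swipe sampled over 10 ms (i.e., 500 mm/s):
    print(constant_gain_tf(5.0, 0.010))         # 10.0 mm of cursor travel
    print(linear_velocity_gain_tf(5.0, 0.010))  # (1.0 + 0.02 * 500) * 5 = 55.0 mm

    Because the gain itself scales with speed, the output displacement grows quadratically with input speed: a gain that is linear in velocity still yields a non-linear pointing mapping, which is the distinction the abstract draws.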

    Kinematic parameters observed in conditions BfwdS and BfwdSfwd (solid lines) and the data predicted by the gravity-centered model (black dotted lines).

    Observed and predicted data for PA (a, b), rTPA (c, d), RT (e, f) and MD (g, h) were provided for both conditions (left panel: BfwdSfwd; right panel: BfwdS). Vertical bars denote positive standard errors. The lines between conditions depict differences at a given angle. *: p<.05.

    Movement pattern relative to orientation (0 deg vs. tilted).

    a) Typical normalized acceleration profile relative to MD as a function of orientation (mean of all conditions). Differences in planned comparisons between tilted vs. 0 deg orientations were provided in the right panels for rTPA (b), RT (c), and MD (d). Vertical bars denote positive standard errors. *: p<.05; ‡: p<.01; †: p<.001.

    Experimental conditions and procedure.

    Body and/or visual scene tilts are depicted for the angles at which pointing movements were requested (i.e., 6, 12 and 18 deg) for each experimental condition (Sfwd, Sbwd, BfwdS, BfwdSbwd, BfwdSfwd). Pink lines correspond to the visual scene orientations and dotted lines to the longitudinal body orientations. The angle of the visual scene relative to the longitudinal body orientation (i.e., in a body-centered reference frame) is denoted 'S/b', and its angle relative to vertical (i.e., in a gravity-centered reference frame) 'S/v'. The associated single and combined conditions relative to the body-centered (i.e., body) and gravity-centered (i.e., g) reference frames are provided under each experimental condition. The lower panel of the figure illustrates the sequence of events, including the different pointing blocks required during a trial (i.e., from 0 to 18 deg of body and/or visual scene tilt relative to the observer).

    Theoretical gravitational torque at the centre of mass of the arm for each body tilt (0 to 18 deg) as a function of arm angular position relative to the shoulder horizon.

    Torque is shown from the arm starting position (mean arm position relative to the shoulder = -42 deg) to the final required arm position at eye level (mean arm position relative to the shoulder = 14 deg). Values correspond to an average subject of 70 kg with a 0.35 m upper arm, a 0.30 m forearm, a 0.20 m hand, and an eye-shoulder distance of 0.21 m.
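
    The torque profile summarized in this caption can be reproduced in outline from the listed anthropometric values. The sketch below is an assumption-laden illustration: the caption gives only the total mass and segment lengths, so the segment mass fractions and centre-of-mass positions are rough textbook values, and the sign convention for body tilt is our own choice.

    import math

    BODY_MASS = 70.0   # kg (from the caption)
    G = 9.81           # m/s^2
    SEGMENTS = [       # (assumed mass fraction, length m, COM as fraction of length)
        (0.028, 0.35, 0.44),   # upper arm
        (0.016, 0.30, 0.43),   # forearm
        (0.006, 0.20, 0.50),   # hand
    ]

    def shoulder_gravity_torque(arm_angle_deg, body_tilt_deg=0.0):
        """Gravitational torque (N.m) about the shoulder for a straight arm.
        arm_angle_deg is elevation relative to the shoulder horizon; a sagittal
        body tilt shifts the arm's angle relative to gravity (sign convention
        assumed here)."""
        angle = math.radians(arm_angle_deg + body_tilt_deg)
        torque, offset = 0.0, 0.0
        for mass_frac, length, com_frac in SEGMENTS:
            r = offset + com_frac * length   # segment COM distance from the shoulder
            torque += mass_frac * BODY_MASS * G * r * math.cos(angle)
            offset += length                 # the next segment starts at this joint
        return torque

    # Torque at the reported start (-42 deg) and end (+14 deg) arm positions:
    print(shoulder_gravity_torque(-42.0), shoulder_gravity_torque(14.0))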

    Final pointing position observed in the conditions BfwdS and BfwdSfwd (solid lines) and the data predicted by the gravity-centered model (black dotted lines).

    a) Combined condition BfwdSfwd and the data predicted by this model. b) Combined condition BfwdS and the data predicted by this model. Vertical bars denote positive standard errors.

    Final pointing position observed in the combined conditions (solid lines) and the data predicted by the body-centered model (black dotted lines).

    a) Combined condition BfwdSfwd and the data predicted by the unweighted sum. b) Combined condition BfwdSbwd and the data predicted by this unweighted sum. Vertical bars denote positive standard errors. *: p<.05; ‡: p<.01; †: p<.001.
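
    For clarity, the 'unweighted sum' behind this body-centered prediction is most naturally read as adding the pointing deviations measured in the single body-tilt and scene-tilt conditions. In our own notation (an interpretation, not the authors' stated formula):

    \hat{P}_{BfwdSfwd}(\alpha) = P_0 + \Delta P_{Bfwd}(\alpha) + \Delta P_{Sfwd}(\alpha)

    where \alpha is the tilt angle (6, 12 or 18 deg), P_0 is the untilted baseline pointing position, and each \Delta P is the deviation from baseline measured in the corresponding single condition.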