27 research outputs found

    Supplemental Material - Enabling four-arm laparoscopic surgery by controlling two robotic assistants via haptic foot interfaces

    No full text
    Supplemental Material for Enabling four-arm laparoscopic surgery by controlling two robotic assistants via haptic foot interfaces, by Jacob Hernandez Sanchez, Walid Amanhoud, Aude Billard, and Mohamed Bouri, in The International Journal of Robotics Research.

    Lateral exploration as a function of developmental age.

    No full text
    For each child, the results of the 4 protocol items are displayed separately.

    Analysis of gaze directed toward faces.

    No full text
    in FoV: percentage of time a face was in the broad field of view. in CV: percentage of time a face was in central vision.
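
    As a concrete (hypothetical) illustration of these two metrics, the following Python sketch computes both percentages from per-frame annotations. The boolean face-in-view flag, the angular-distance array, and the synthetic data are illustrative assumptions, not the study's; the 10-degree central-vision radius follows the eye-tracking figure further below.

        import numpy as np

        # Hypothetical per-frame annotations (synthetic stand-ins, not study data):
        # face_in_fov[i] - True if a face was anywhere in the broad field of view
        # face_angle[i]  - angular distance (degrees) between face and gaze center
        rng = np.random.default_rng(0)
        face_in_fov = rng.random(1000) < 0.6
        face_angle = rng.uniform(0, 40, 1000)

        CENTRAL_VISION_RADIUS_DEG = 10.0  # central-vision radius, per the eye-tracking figure

        # "in FoV": share of frames with a face anywhere in the broad field of view.
        pct_in_fov = 100.0 * face_in_fov.mean()

        # "in CV": share of frames with a face inside central vision
        # (a face must be in the field of view to count as in central vision).
        in_cv = face_in_fov & (face_angle <= CENTRAL_VISION_RADIUS_DEG)
        pct_in_cv = 100.0 * in_cv.mean()

        print(f"in FoV: {pct_in_fov:.1f}%  in CV: {pct_in_cv:.1f}%")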

    Patterns of movements.

    No full text
    Modes of oscillation comprise random motions of the avatar's hand: three small oscillations (one each to the left of, centered on, and to the right of the torso, with amplitude 0.3) and one large oscillation (amplitude 0.7). The number of oscillations in each mode and the transition to the next mode are random. The symmetric reachable range of the hand is scaled to [-1, +1] and mapped into the avatar's coordinates.
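
    A minimal Python sketch of how such a mode sequence could be generated. The oscillation frequency, mode centers, and repetition bounds beyond what the caption states are assumptions, not the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(42)

        # Oscillation modes per the caption: three small (left/center/right of the
        # torso, amplitude 0.3) and one large (amplitude 0.7). Centers are assumed.
        MODES = [
            {"center": -0.5, "amplitude": 0.3},  # small, left of torso
            {"center":  0.0, "amplitude": 0.3},  # small, centered
            {"center":  0.5, "amplitude": 0.3},  # small, right of torso
            {"center":  0.0, "amplitude": 0.7},  # large
        ]

        def generate_trajectory(n_modes=10, dt=0.01, freq_hz=1.0):
            """Concatenate randomly chosen modes; the number of oscillations in
            each mode and the transition to the next mode are random."""
            segments = []
            for _ in range(n_modes):
                mode = MODES[rng.integers(len(MODES))]  # random next mode
                n_osc = rng.integers(1, 4)              # random repetition count
                t = np.arange(0, n_osc / freq_hz, dt)
                segments.append(mode["center"]
                                + mode["amplitude"] * np.sin(2 * np.pi * freq_hz * t))
            x = np.concatenate(segments)
            # Symmetric reachable range scaled to [-1, +1]; mapping into the
            # avatar's coordinate frame is omitted here.
            return np.clip(x, -1.0, 1.0)

        trajectory = generate_trajectory()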

    KUKA based 3D Reaching

    No full text
    Software package that enables the control of a seven-degree-of-freedom robotic arm (Intelligent Industrial Work Assistant, IIWA - KUKA, Augsburg, Germany) to position objects in a large three-dimensional workspace and safely interact with primates. The software package is configured as a finite state machine. Specifically, the robot moves the end effector to a set of pre-determined positions in space using a standard joint impedance control strategy. When the position is reached, the finite state machine switches to a mass-spring-damper behaviour. Stiffness and damping parameters are entirely definable by the user and can be easily modified. The three-dimensional pulling force is continuously monitored and recorded. When the end effector is pulled across a predetermined virtual border, the robot returns to the starting position, where it waits for the next target position.
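
    The control flow described above maps naturally onto a small finite state machine. The following Python sketch illustrates that state logic only; the state names, thresholds, and the simplified Cartesian force computation are hypothetical stand-ins, not the released package's API:

        import numpy as np
        from enum import Enum, auto

        class State(Enum):
            MOVE_TO_TARGET = auto()      # drive end effector to a preset position
            HOLD_SPRING_DAMPER = auto()  # mass-spring-damper behaviour at target
            RETURN_HOME = auto()         # triggered by crossing the virtual border

        # Hypothetical values; in the package, stiffness and damping are
        # entirely user-definable.
        STIFFNESS = 200.0      # N/m
        DAMPING = 20.0         # N s/m
        BORDER_RADIUS = 0.05   # m, virtual border around the target

        def step(state, ee_pos, ee_vel, target, home):
            """One tick of the state machine; returns (new_state, force_cmd)."""
            if state is State.MOVE_TO_TARGET:
                if np.linalg.norm(ee_pos - target) < 1e-3:  # target reached
                    return State.HOLD_SPRING_DAMPER, np.zeros(3)
                # simplified Cartesian stand-in for the joint impedance controller
                return state, STIFFNESS * (target - ee_pos)
            if state is State.HOLD_SPRING_DAMPER:
                displacement = ee_pos - target
                if np.linalg.norm(displacement) > BORDER_RADIUS:  # pulled past border
                    return State.RETURN_HOME, np.zeros(3)
                # mass-spring-damper response to the (monitored) pulling force
                return state, -STIFFNESS * displacement - DAMPING * ee_vel
            # RETURN_HOME: go back to the start and wait for the next target
            if np.linalg.norm(ee_pos - home) < 1e-3:
                return State.MOVE_TO_TARGET, np.zeros(3)
            return State.RETURN_HOME, STIFFNESS * (home - ee_pos)

        state, force = step(State.MOVE_TO_TARGET, np.zeros(3), np.zeros(3),
                            np.array([0.3, 0.0, 0.5]), np.zeros(3))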

    Eye-Tracking process.

    No full text
    1st column: the location of the eyes in the image is extracted automatically during post-hoc calibration. 2nd column: the direction of gaze is computed automatically from the eye images through support vector regression. 3rd column: to highlight the direction of central vision (indicated by a crosshair), the image is blurred except for an area of 10 degrees radius around the center of gaze. 4th & 5th columns: gaze-tracking example while looking downwards; the system uses the whole eye region (shading of the eyelids, shape of the eyelashes, etc.) to compute the gaze direction.
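
    To make the regression step concrete, here is a Python sketch of gaze estimation via support vector regression on eye-region features, using scikit-learn on synthetic stand-in data. The feature dimensionality, kernel, and hyperparameters are illustrative assumptions; the original system's feature extraction is not reproduced here.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.multioutput import MultiOutputRegressor

        rng = np.random.default_rng(0)

        # Stand-in for calibration data: each row is a flattened eye-region image
        # (the whole region matters: eyelid shading, eyelash shape, ...); the
        # targets are gaze yaw/pitch in degrees, obtained during post-hoc
        # calibration. Random data here, purely for illustration.
        X_train = rng.random((200, 64))           # 200 samples, 64 pixel features
        y_train = rng.uniform(-30, 30, (200, 2))  # gaze direction (yaw, pitch)

        # SVR is single-output, so fit one regressor per gaze angle.
        gaze_model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.5))
        gaze_model.fit(X_train, y_train)

        eye_image = rng.random((1, 64))           # new eye-region image
        yaw, pitch = gaze_model.predict(eye_image)[0]
        print(f"estimated gaze: yaw={yaw:.1f} deg, pitch={pitch:.1f} deg")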

    The protocol used for the experiment.

    No full text
    Subjects were divided into two groups and participated in the experiment with a different ordering of conditions, followed by a short questionnaire.

    The simulated iCub robot.

    No full text
    The robot acts as the leader in the mirror game, generating random sinusoidal trajectories. (Left) the gaze is fixated on the hand. (Right) the gaze precedes the hand. The blue arrows show the next hand movement and the green arrows show the current gaze fixation point.
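
    A brief Python sketch of the two gaze policies, assuming the leader's hand follows a known sinusoidal trajectory; the trajectory parameters and the lead time are illustrative assumptions, not values from the study:

        import numpy as np

        def hand_position(t, freq_hz=0.5, amplitude=0.7):
            """Leader hand trajectory: a sinusoid in the normalized [-1, +1] range."""
            return amplitude * np.sin(2 * np.pi * freq_hz * t)

        def gaze_target(t, policy="fixate", lead_s=0.3):
            """Gaze either fixates on the current hand position (left panel) or
            precedes the hand by looking where it will be lead_s seconds ahead
            (right panel). lead_s is a hypothetical value."""
            if policy == "fixate":
                return hand_position(t)
            return hand_position(t + lead_s)

        t = np.linspace(0, 4, 400)
        gap = np.max(np.abs(gaze_target(t, "lead") - gaze_target(t, "fixate")))
        print(f"max gaze-hand offset under the leading policy: {gap:.2f}")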

    Cross-wavelet analysis.

    No full text
    Right: cross-wavelet coherence between the leader and the follower in one of the trials. The power of frequency components at each time is color-coded, i.e., blue/yellow for weak/strong components, respectively. Moreover, the arrows indicate the leader-follower phase relation for each frequency over time. Left: average phase lag for each frequency, extracted from the main plot.
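
    As a sketch of the underlying computation: the cross-wavelet spectrum of two signals is the product of one continuous wavelet transform with the complex conjugate of the other, and its phase gives the leader-follower relation shown by the arrows. The following self-contained NumPy illustration uses synthetic signals and a hand-rolled Morlet CWT; it is not the authors' analysis code, and it shows raw cross-wavelet power and phase rather than the smoothed coherence in the figure.

        import numpy as np

        def morlet_cwt(x, dt, freqs, w0=6.0):
            """Continuous wavelet transform with a complex Morlet mother wavelet."""
            n = len(x)
            out = np.empty((len(freqs), n), dtype=complex)
            t = (np.arange(n) - n // 2) * dt
            for i, f in enumerate(freqs):
                scale = w0 / (2 * np.pi * f)  # scale matching Fourier frequency f
                wavelet = np.exp(1j * w0 * t / scale) * np.exp(-0.5 * (t / scale) ** 2)
                wavelet /= np.sqrt(scale)
                # correlation with the wavelet, expressed as a convolution
                out[i] = np.convolve(x, np.conj(wavelet[::-1]), mode="same")
            return out

        dt = 0.01
        t = np.arange(0, 20, dt)
        freqs = np.linspace(0.2, 2.0, 30)

        # Synthetic leader/follower pair: the follower lags the leader by 0.2 s.
        leader = np.sin(2 * np.pi * 0.8 * t)
        follower = (np.sin(2 * np.pi * 0.8 * (t - 0.2))
                    + 0.1 * np.random.default_rng(1).standard_normal(len(t)))

        Wl = morlet_cwt(leader, dt, freqs)
        Wf = morlet_cwt(follower, dt, freqs)
        Wxy = Wl * np.conj(Wf)                   # cross-wavelet spectrum
        power = np.abs(Wxy)                      # strong components -> large power
        phase = np.angle(Wxy)                    # per-time phase relation (arrows)
        mean_phase = np.angle(Wxy.mean(axis=1))  # average phase lag per frequency

        # Expected near +2*pi*0.8*0.2 ~ 1.0 rad at 0.8 Hz (leader leads by 0.2 s).
        print(mean_phase[np.argmin(np.abs(freqs - 0.8))])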