
    How do humans mediate with the external physical world? From perception to control of articulated objects

    Many actions in our daily life involve operating articulated tools. Despite the ubiquity of articulated objects in daily life, the human ability to perceive the properties of, and to control, articulated objects has been scarcely studied. Articulated objects are composed of links and revolute or prismatic joints, so moving one part of the linkage results in the movement of the others. Reaching a position with the tip of a tool requires adapting the motor commands to the changed position of the end-effector, unlike reaching the same position with the hand. The dynamic properties of articulated bodies are complex and vary during movement. For instance, the apparent mass, a quantity that characterizes the dynamic interaction with the articulated object, varies as a function of changes in configuration. An actuated articulated system with constant torques about its joints can generate a static, but position-dependent, force field. There is evidence that internal models are involved in the perception and control of tools. In the present work, we investigate several aspects of the perception and control of articulated objects and address two questions. First, how do people perceive the kinematic and dynamic properties of articulated objects during haptic interaction? Second, what effect does seeing the tool have on the planning and execution of reaching movements with a complex tool? Does a visual representation of the mechanism's structure help the reaching movement, and how? To address these questions, 3D-printed physical articulated objects and robotic systems were designed and developed for the psychophysical studies. The present work comprises three studies on different aspects of the perception and control of articulated objects. We first conducted haptic size discrimination tasks using three types of objects (wooden boxes, an actuated apparatus with two movable flat surfaces, and large pliers) in unimanual, bimanual grounded, and bimanual free conditions. We found that bimanual integration occurred in particular during free manipulation of the objects. The second study examined visuo-motor reaching with complex tools. We found that seeing the mechanism of the tool, even briefly at the beginning of the trial, improved reaching performance. The last study addressed force perception: the evidence showed that people could use the force field at the end-effector to infer the torques about the joints generated by the articulated system.
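
    As a minimal sketch of the claim above (an illustration under assumed values, not the apparatus used in the study): for a planar two-link arm, constant joint torques map through the configuration-dependent Jacobian transpose to an endpoint force field that is static yet position-dependent.

    import numpy as np

    # Sketch: endpoint force field of a planar two-link arm under constant
    # joint torques. Link lengths and torques are hypothetical values.
    L1, L2 = 0.3, 0.25          # link lengths [m]
    tau = np.array([1.0, 0.5])  # constant joint torques [N*m]

    def endpoint_force(q1, q2):
        # Jacobian of the arm's endpoint at configuration (q1, q2)
        J = np.array([
            [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
            [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
        ])
        # Static equilibrium tau = J^T f  =>  f = J^{-T} tau
        return np.linalg.solve(J.T, tau)

    # The same torques produce different endpoint forces in different poses.
    print(endpoint_force(0.4, 1.0))
    print(endpoint_force(0.9, 0.6))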

    No need to touch this: Bimanual haptic slant adaptation does not require touch

    In our daily life, we often interact with objects using both hands, raising the question of to what extent information is shared between the hands. It has, for instance, been shown that curvature adaptation aftereffects can transfer from the adapted hand to the non-adapted hand. However, this transfer only occurred for dynamic exploration, e.g. by moving a single finger over a surface, but not for static exploration, i.e. keeping static contact with the surface and combining the information from different parts of the hand. This raises the question of to what extent adaptation to object shape is shared between the hands when both hands are used statically and simultaneously and the object shape estimate requires information from both hands. Here we addressed this question in three experiments using a slant adaptation paradigm. In Experiment 1 we investigated whether an aftereffect of static bimanual adaptation occurs at all and whether it transfers to conditions in which one hand was moving. In Experiment 2 participants adapted either to a felt slanted surface or simply by holding their hands in mid-air at similar positions, to investigate to what extent the effects of static bimanual adaptation are posture-based rather than object-based. Experiment 3 further explored the idea that bimanual adaptation is largely posture-based. We found that bimanual adaptation using static touch did lead to aftereffects when the same static exploration mode was used for testing. However, the aftereffect did not transfer to any exploration mode that included a dynamic component. Moreover, we found similar aftereffects both with and without a haptic surface. Thus, we conclude that static bimanual adaptation is proprioceptive in nature and does not occur at the level at which the object is represented.

    Factors of Micromanipulation Accuracy and Learning

    Micromanipulation refers to manipulation under a microscope in order to perform delicate procedures. Tremor and imperfect perception make it difficult for humans to manipulate objects accurately under a microscope, limiting performance. This project seeks to understand the factors affecting accuracy in micromanipulation and to propose learning strategies for improving accuracy. Psychomotor experiments were conducted using computer-controlled setups to determine how various feedback modalities and learning methods can influence micromanipulation performance. A first experiment compared the static and motion accuracy of surgeons, medical students, and non-medical students under different magnification levels and grip force settings. A second experiment investigated whether the non-dominant hand placed close to the target can contribute to accurate pointing of the dominant hand. A third experiment tested a training strategy for micromanipulation that uses unstable dynamics to magnify motion error, a strategy shown to decrease deviation in large arm movements. Two virtual reality (VR) modules were then developed to train needle grasping and needle insertion, two primitive tasks in a microsurgical suturing procedure. The modules provided the trainee with a stereoscopic visual display and information on their grip, tool position, and tool angles. Using the VR module, a study examining the effects of visual cues on training tool orientation was conducted. Results from these studies suggest that it is possible to learn and improve accuracy in micromanipulation using appropriate sensorimotor feedback and training.
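
    The error-magnifying training strategy mentioned above can be sketched as an unstable force field that pushes the hand away from a straight reference path; the gain and geometry below are hypothetical, not the study's implementation.

    import numpy as np

    # Sketch: divergent force field that amplifies lateral deviation from a
    # straight start-to-goal path. GAIN is a hypothetical value.
    GAIN = 40.0  # [N/m]

    def error_magnifying_force(pos, start, goal):
        d = (goal - start) / np.linalg.norm(goal - start)
        lateral = (pos - start) - np.dot(pos - start, d) * d
        return GAIN * lateral  # pushes the hand further off the path

    print(error_magnifying_force(np.array([0.05, 0.01]),
                                 np.array([0.0, 0.0]),
                                 np.array([0.1, 0.0])))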

    Enhanced Robotic Surgical Training Using Augmented Visual Feedback

    The goal of this study was to enhance robotic surgical training via real-time augmented visual feedback. Thirty novices (medical students) were divided into 5 feedback groups (speed, relative phase, grip force, video, and control) and trained during 1 session in 3 inanimate surgical tasks with the da Vinci Surgical System. Task completion time, distance traveled, speed, curvature, relative phase, and grip force were measured immediately before and after training and during a retention test 2 weeks after training. All performance measures except relative phase improved after training and were retained after 2 weeks. Feedback-specific effects showed that the speed group was faster than the other groups after training and that the grip force group applied less grip force. This study showed that real-time augmented feedback during training can enhance surgical performance and can potentially be beneficial for both training and surgery.

    Factors influencing haptic shape perception

    The purpose was to determine the contribution of several factors (design of the task, angle orientation, head position and gaze) to the ability of subjects to perceive differences in two-dimensional (2-D) shape using haptic touch. Two series of experiments (n=12 each) were carried out. In all cases the angles were explored with the index finger of the outstretched arm. The first experiment showed that the mean threshold for 2-D angle discrimination was significantly higher, 7.4°, than for 2-D angle categorization, 3.9°. This result extended previous work by showing that the difference is present in the same subjects tested under identical conditions (knowledge of results, visual test conditions, angle orientation). The results also showed that angle categorization did not vary as a function of the orientation of the angles in space (oblique, upright). 
Given that the angles presented were all distributed around 90°, and that this may be a special case as in vision, this finding needs to be extended to different ranges of angles. The higher threshold for angle discrimination likely reflects the increased cognitive demands of this task, which required subjects to temporarily store a mental representation of the first angle scanned and to compare it with the second scanned angle. The second experiment followed up on observations that categorization thresholds are modified by gaze direction but not by head position when the unseen angles are explored in an eccentric position, 60° to the right of midline. This experiment tested the hypothesis that the increased threshold when gaze was directed to the far right might reflect an action of spatial attention. Subjects explored angles located to the right of midline, systematically varying the direction of gaze (away from or toward the angles) along with angle location (30° and 60° to the right). Categorization thresholds showed no change across the conditions tested, although bias (the point of subjective equality) changed (a shift to lower angle values). Since our testing with far-right gaze (away) had no effect on threshold, we suggest that the key factor contributing to the increased threshold seen previously (head forward/gaze right) must have been that particular combination of head, gaze, and angle positions, and not spatial attention.
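
    How thresholds and bias are extracted in such categorization tasks can be sketched as fitting a cumulative Gaussian psychometric function; the data points and the 75%-correct convention below are hypothetical, not taken from these experiments.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Sketch: fit a cumulative Gaussian to the proportion of "larger than
    # 90 degrees" responses; PSE is the bias, sigma sets the threshold.
    angles = np.array([84.0, 86.0, 88.0, 90.0, 92.0, 94.0, 96.0])
    p_larger = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97])

    def psychometric(x, pse, sigma):
        return norm.cdf(x, loc=pse, scale=sigma)

    (pse, sigma), _ = curve_fit(psychometric, angles, p_larger, p0=[90.0, 2.0])
    threshold = sigma * norm.ppf(0.75)  # one common threshold convention
    print(f"PSE = {pse:.1f} deg, threshold = {threshold:.1f} deg")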

    Master of Science

    Haptic interactions with smartphones are generally restricted to vibrotactile feedback, which offers limited distinction between delivered tactile cues. The lateral movement of a small, high-friction contactor at the fingerpad can be used to induce skin stretch tangent to the skin's surface. This method has been demonstrated to reliably communicate four cardinal directions with 1 mm translations of the device's contactor when finger motion is properly restrained. While earlier research used a thimble to restrain the finger, this interface has been made portable by incorporating a simple conical hole as a finger restraint. An initial portable device design used RC hobby servos and the conical-hole finger restraint, but the shape and size of this device were not compatible with smartphone form factors. This design also had significant compliance and backlash that had to be compensated for with additional control schemes. In contrast, this thesis presents the design, fabrication, and testing of a low-profile skin-stretch display (LPSSD) with a novel actuation design for delivering complex tactile cues with minimal backlash or hysteresis of the skin contactor, or "tactor." This flatter mechanism features embedded sensors for fingertip cursor control and selection. The device's nonlinear tactor motions are compensated for using table look-up and high-frequency open-loop control to create direction cues with 1.8 mm radial tactor displacements in 16 directions (distributed evenly every 22.5°) before returning to center. Two LPSSDs were incorporated into a smartphone peripheral and used in single-handed and bimanual tests to identify the 16 directions. Users also participated in "relative" identification tests in which they were first given a reference direction cue in the forward/north direction, followed by the cue direction they were to identify. Tests were performed with the users' thumbs oriented in the forward direction and with thumbs angled slightly inward, similar to the angled-thumb orientation of console game controllers. Users were found to perform better with the angled-thumb orientation. They performed similarly when stimuli were delivered to the right or left thumb, and significantly better when judging direction cues with both thumbs simultaneously. Participants also performed slightly better in identifying the relative direction cues than the absolute ones.
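
    The 16 direction cues described above reduce to simple arithmetic; a minimal sketch of such a look-up table (an illustration, not the thesis firmware) follows.

    import numpy as np

    # Sketch: 16 direction cues as 1.8 mm radial tactor displacements,
    # spaced evenly every 22.5 degrees, stored as an (x, y) look-up table.
    RADIUS_MM = 1.8
    angles = np.deg2rad(np.arange(16) * 22.5)  # 0, 22.5, ..., 337.5 deg
    lookup = {i: (RADIUS_MM * np.cos(a), RADIUS_MM * np.sin(a))
              for i, a in enumerate(angles)}

    # A cue drives the tactor to lookup[i], then returns it to center.
    print(lookup[0])  # (1.8, 0.0)
    print(lookup[4])  # approximately (0.0, 1.8)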

    Bimanual Motor Strategies and Handedness Role During Human-Exoskeleton Haptic Interaction

    Bimanual object manipulation involves multiple visuo-haptic sensory feedback signals arising from interaction with the environment, which are managed by the central nervous system and translated into motor commands. The kinematic strategies that occur during bimanually coupled tasks are still a matter of scientific debate despite modern advances in haptics and robotics. Current technologies can provide realistic scenarios involving the entire upper limbs during multi-joint movements but are not yet exploited to their full potential. The present study explores how the hands dynamically interact when manipulating a shared object, using two impedance-controlled exoskeletons programmed to simulate bimanually coupled manipulation of virtual objects. We enrolled twenty-six participants (two groups: right-handed and left-handed) who were asked to use both hands to grasp simulated objects across the robot workspace and place them in specific locations. The virtual objects were rendered with different dynamic properties and textures, influencing the manipulation strategies needed to complete the tasks. Results revealed that the roles of the hands are related to the movement direction, the haptic features, and handedness. The outcomes suggest that haptic feedback affects bimanual strategies depending on the movement direction. However, left-handers showed better control of the force applied between the two hands, probably due to environmental pressure toward right-handed manipulation.
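
    A minimal sketch of the kind of impedance law such exoskeletons use to render a virtual object at each end-effector (hypothetical gains, not the study's controller):

    import numpy as np

    # Sketch: spring-damper (impedance) coupling between a hand's measured
    # state and the virtual object surface. K and B are hypothetical gains.
    K = 800.0  # virtual stiffness [N/m]
    B = 30.0   # virtual damping [N*s/m]

    def rendered_force(x_hand, v_hand, x_surface, v_surface):
        return K * (x_surface - x_hand) + B * (v_surface - v_hand)

    # A hand pressing 1 cm into a static surface feels an 8 N restoring force.
    print(rendered_force(np.array([0.01]), np.array([0.0]),
                         np.array([0.0]), np.array([0.0])))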

    Integration of length and curvature in haptic perception

    We investigated whether and how length and curvature information are integrated when an object is explored with one hand. Subjects were asked to explore four types of objects between thumb and index finger. The objects differed in length only, in curvature only, or in both length and curvature, either correlated as in a circle or anti-correlated. We found that when both length and curvature cues are present, performance is significantly better than when only one of the two cues is available. Therefore, we conclude that length and curvature are integrated. Moreover, when the two cues are correlated as in a circular cross-section rather than anti-correlated, performance is better than predicted by a combination of two independent cues. We conclude that integration of curvature and length is highly efficient when the cues are combined as in a circle, which is the most common combination of curvature and length in daily life.
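
    The independent-cue benchmark mentioned above is the standard prediction for optimally combining two independent estimates; a worked sketch with hypothetical single-cue thresholds:

    import numpy as np

    # Sketch: predicted threshold when two independent cues are optimally
    # combined: 1/T_comb^2 = 1/T_len^2 + 1/T_curv^2. Values hypothetical.
    T_len, T_curv = 4.0, 3.0
    T_comb = 1.0 / np.sqrt(1.0 / T_len**2 + 1.0 / T_curv**2)
    print(T_comb)  # 2.4: better than either cue alone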

    The control of the reach-to-grasp movement
