
    Dexterous Grasping Tasks Generated With an Add-on End Effector of a Haptic Feedback System

    The simulation of grasping operations in virtual reality (VR) is required for many applications, especially in the domain of industrial product design, but it is very difficult to achieve without any haptic feedback. Force feedback on the fingers can be provided by a hand exoskeleton, but such a device is very complex, invasive, and costly. In this paper, we present a new device, called HaptiHand, which provides position and force input as well as haptic output for four fingers in a non-invasive way, and is mounted on a standard force-feedback arm. The device incorporates four independent modules, one for each finger, inside an ergonomic shape, allowing the user to generate a wide range of virtual hand configurations to grasp an object naturally. It is also possible to reconfigure the virtual finger positions while holding an object. The paper explains how the device is used to control a virtual hand in order to perform dexterous grasping operations. The structure of the HaptiHand is described through the major technical solutions required, and tests of key functions serve as a validation process for the key requirements. Finally, an effective grasping task illustrates some capabilities of the HaptiHand.

    An add-on device to perform dexterous grasping tasks with a haptic feedback system

    Achieving grasping tasks in real time with haptic feedback may require the control of a large number of degrees of freedom (DOFs) to model hand and finger movements. This is mandatory to grasp objects with dexterity. Here, a new device called HaptiHand is proposed that can be added to a haptic feedback arm and provides the user with enough DOFs to intuitively and dexterously grasp an object, and to modify the virtual hand configuration and the number of fingers used on the object while manipulating it. Furthermore, this device is non-invasive and enables the user to apply forces on the fingers of the virtual hand. The HaptiHand lies inside the user's hand so that the user can apply and release pressure on it in a natural manner, which is transferred to the virtual hand using metaphors. The focus is placed on the description of the technology and structure of the HaptiHand to justify the design choices and explain the behavior of the HaptiHand during object grasping and releasing tasks. This is combined with a short description of the models used.
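    As a rough illustration of the pressure-transfer idea mentioned above, the sketch below maps normalized per-finger pressure readings to closure angles of a virtual hand. The function name, thresholds, and linear mapping are assumptions made for illustration only; they are not the HaptiHand's actual control scheme.

    # Illustrative sketch only: map normalized per-finger pressure readings from a
    # hand-held device to closure angles of a virtual hand. The linear mapping,
    # thresholds, and limits are assumed values, not the HaptiHand design.
    def finger_closure_angles(pressures, rest_deg=5.0, closed_deg=85.0,
                              contact_threshold=0.05):
        """Convert pressures in [0, 1] (one value per finger) to closure angles."""
        angles = []
        for p in pressures:
            p = min(max(p, 0.0), 1.0)          # clamp noisy readings to [0, 1]
            if p < contact_threshold:
                angles.append(rest_deg)        # no grasp intent yet: rest pose
            else:
                # Rescale the remaining pressure range onto [rest, closed].
                t = (p - contact_threshold) / (1.0 - contact_threshold)
                angles.append(rest_deg + t * (closed_deg - rest_deg))
        return angles

    # Example: index and middle fingers pressing, ring and little fingers relaxed.
    print(finger_closure_angles([0.8, 0.6, 0.02, 0.0]))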

    A virtual work space for both hands manipulation with coherency between kinesthetic and visual sensation

    This paper describes the construction of a virtual work space for tasks performed by two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually act in a real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects by hand and to feel some physical properties such as contact and weight. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a face-and-fit task. We then selected the conditions under which the subjects felt most comfortable in performing this task and set up our virtual work space. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing simple virtual models and then letting the subjects create new models by assembling these components. The subjects were able to perform assembly operations naturally and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for performing tasks that require two-handed manipulation or cooperation between both hands in a natural manner.

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback, including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

    Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays

    Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles, and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics.

    Mechanical design optimization for multi-finger haptic devices applied to virtual grasping manipulation

    This paper describes the design of a modular multi-finger haptic device for virtual object manipulation. The mechanical structure is based on one module per finger and can be scaled up to three fingers. The mechanical configurations for two and three fingers are based on the use of one and two redundant axes, respectively. As demonstrated, redundant axes significantly increase the workspace and prevent link collisions, which is their main asset with respect to other multi-finger haptic devices. The locations of the redundant axes and the link dimensions have been optimized in order to guarantee a proper workspace, manipulability, force capability, and inertia for the device. The mechanical design of the haptic device and a thimble adaptable to different finger sizes have also been developed for virtual object manipulation.

    Tactile and kinesthetic feedback improve distance perception in virtual reality

    Research spanning psychology, neuroscience, and HCI has found that depth perception distortion is a common problem in virtual reality. This distortion results in depth compression, where users perceive objects as closer than their intended distance. Studies have suggested that cues such as audio and haptics help to mitigate this issue. We focus on haptic feedback and investigate how force feedback compares to tactile feedback within peripersonal space in reducing depth perception distortion. Our study (N=12) compares the use of haptic force feedback, vibration haptic feedback, a combination of both, or no feedback. Our results show that both vibration and force feedback reduce depth perception distortion compared to no feedback: distance estimation was 8.3 times better with the combined feedback than with no haptic feedback, versus 1.4 to 1.5 times better with either vibration or force feedback on its own. Participants also subjectively preferred using force feedback, or a combination of force and vibration feedback, over no feedback.

    Novel haptic interface for viewing 3D images

    In recent years there has been an explosion of devices and systems capable of displaying stereoscopic 3D images. While these systems provide an improved experience over traditional two-dimensional displays, they often fall short on user immersion, usually improving depth perception only by relying on the stereopsis phenomenon. We propose a system that improves the user experience and immersion through a position-dependent rendering of the scene and the ability to touch the scene. This system uses depth maps to represent the geometry of the scene. Depth maps can easily be obtained during the rendering process or derived from binocular stereo images by calculating their horizontal disparity. This geometry is then used as the input to render the scene on a 3D display, perform the haptic rendering calculations, and produce a position-dependent rendering of the scene. We present two main contributions. First, since haptic devices have a finite workspace and limited resolution, we use what we call detail mapping algorithms. These algorithms compress the geometry information contained in a depth map, by reducing the contrast among pixels, in such a way that it can be rendered on a display medium of limited resolution without losing any detail. Second, we present the unique combination of a depth camera as a motion capturing system, a 3D display, and a haptic device to enhance the user experience. While developing this system we paid special attention to the cost and availability of the hardware. We decided to use only off-the-shelf, mass-consumer-oriented hardware so our experiments can be easily implemented and replicated. As an additional benefit, the total cost of the hardware did not exceed the one-thousand-dollar mark, making it affordable for many individuals and institutions.
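    To make the detail mapping idea above concrete, here is a minimal sketch of one way to compress a depth map's dynamic range into a limited haptic workspace while keeping local relief, in the spirit of tone-mapping operators: split the depth map into a smooth base layer and a detail layer, compress the base, and add the detail back. The function name, filter size, and weights are assumptions for illustration, not the algorithm used in the paper.

    # Illustrative sketch only: compress a depth map so it fits a haptic device's
    # limited workspace while preserving local relief. Function and parameter
    # names and constants are assumed, not taken from the paper.
    import numpy as np

    def compress_depth_map(depth, workspace_range_mm=80.0, detail_weight=0.6, k=7):
        """Remap a 2D depth map into the device's usable depth range.

        depth              : 2D array of scene depths (arbitrary units).
        workspace_range_mm : usable travel of the haptic device along depth.
        detail_weight      : how much local relief to keep after compression.
        k                  : box-filter radius used to extract the base layer.
        """
        depth = np.asarray(depth, dtype=float)
        h, w = depth.shape
        # Smooth "base" layer via a simple box filter (edge-padded).
        padded = np.pad(depth, k, mode="edge")
        base = np.zeros_like(depth)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                base += padded[k + dy:k + dy + h, k + dx:k + dx + w]
        base /= (2 * k + 1) ** 2
        detail = depth - base                      # high-frequency relief
        # Strongly compress the base layer's range, then add back scaled detail.
        base_norm = (base - base.min()) / max(float(np.ptp(base)), 1e-9)
        return base_norm * workspace_range_mm + detail_weight * detail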

    Expanding the usable workspace of a haptic device by placing it on a moving base

    The goal of this research is to expand the reachable workspace of a haptic device when used in a projection-screen virtual environment. The proposed method supplements the haptic device with a redundant degree of freedom that provides motion of the base. The key research challenge is to develop controls for the mobile base that keep the haptic end-effector within the usable haptic workspace at all times. An experimental setup consisting of an Omni haptic device and an XY motorized table was used in the development of the control algorithms. Tests were conducted that demonstrate that the force felt by the user when touching a virtual wall remains constant even while the mobile base is moving to re-center the haptic device in the usable haptic workspace.
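    As a rough sketch of the re-centering idea described above, the snippet below computes a velocity command for the mobile base that pulls the haptic device back toward the centre of its usable workspace whenever the end-effector drifts away from it. The gains, deadband, and speed limit are illustrative assumptions, not the controller reported in the work. In such a scheme, the base displacement would also have to be fed back into the virtual-environment coordinates so that the rendered scene, and therefore the contact force on a virtual wall, stays fixed while the base moves.

    # Illustrative sketch only: proportional re-centering of a haptic device
    # mounted on an XY table. Gains and limits are assumed values.
    import numpy as np

    def base_velocity_command(effector_pos_xy, workspace_center_xy,
                              deadband=0.02, gain=2.0, max_speed=0.15):
        """Return an XY velocity command (m/s) for the mobile base.

        effector_pos_xy     : end-effector position in the device frame (m).
        workspace_center_xy : centre of the usable haptic workspace (m).
        deadband            : radius around the centre where the base stays still.
        gain                : proportional gain mapping offset to base velocity.
        max_speed           : speed limit of the motorized table (m/s).
        """
        offset = (np.asarray(effector_pos_xy, dtype=float)
                  - np.asarray(workspace_center_xy, dtype=float))
        if np.linalg.norm(offset) < deadband:
            return np.zeros(2)                 # close enough to centre: hold still
        v = gain * offset                      # move the base toward the effector
        speed = np.linalg.norm(v)
        if speed > max_speed:
            v *= max_speed / speed             # clamp to the table's speed limit
        return v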