75 research outputs found

    Haptic feedback in the training of veterinary students

    This paper reports on an initial study into the use of haptic (touch) technology in the training of veterinary students. One major problem in veterinary education is that animals can be harmed by inexperienced students who are trying to learn the skills they need. The aim of the work described here is to provide haptic models that simulate internal examinations of horses, so that students can learn the basic skills on a computer and then transfer to real animals with much less risk of injuring them.

    Prop-Based Haptic Interaction with Co-location and Immersion: an Automotive Application

    Most research on 3D user interfaces provides only a single sensory modality. One challenge is to integrate several sensory modalities into a seamless system while preserving each modality's immersion and performance factors. This paper concerns manipulation tasks and proposes a visuo-haptic system that integrates immersive visualization with co-located force and tactile feedback. An industrial application is presented.

    Teegi: Tangible EEG Interface

    We introduce Teegi, a Tangible ElectroEncephaloGraphy (EEG) Interface that enables novice users to learn about something as complex as brain signals in an easy, engaging and informative way. To this end, we have designed a new system based on a unique combination of spatial augmented reality, tangible interaction and real-time neurotechnologies. With Teegi, a user can visualize and analyze his or her own brain activity in real time, on a tangible character that can be easily manipulated and interacted with. An exploration study has shown that interacting with Teegi seems to be easy, motivating, reliable and informative. Overall, this suggests that Teegi is a promising and relevant training and mediation tool for the general public. (To appear in UIST, the ACM User Interface Software and Technology Symposium, Oct 2014, Honolulu, United States.)

    A virtual work space for both hands manipulation with coherency between kinesthetic and visual sensation

    This paper describes the construction of a virtual work space for tasks performed by two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually would in a real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects by hand and to feel physical properties such as contact and weight. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a face-and-fit task. We then selected the conditions under which the subjects felt most comfortable performing this task and set up our virtual work space accordingly. Finally, we verified the possibility of performing more complex tasks in this work space by providing simple virtual models and letting the subjects create new models by assembling these components. The subjects could naturally perform the assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for tasks that require two-handed manipulation or cooperation between both hands in a natural manner.

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system in which multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and it lets users feel by employing passive haptics: when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are shown only a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR. (10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii)
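    The real-to-virtual correspondence that makes passive haptics work has to come from some registration between tracker coordinates and the 3D scan. As a minimal sketch of one standard way to obtain such a registration (a least-squares rigid fit over matched landmarks; the abstract does not say how MS2 is actually calibrated, and all names below are illustrative):

        import numpy as np

        def rigid_align(tracker_pts, scan_pts):
            # tracker_pts, scan_pts: (N, 3) arrays of matched landmark
            # positions (e.g. the same table corners touched with a tracked
            # stylus and clicked in the scan viewer).
            # Returns a 4x4 matrix mapping tracker coordinates into scan
            # (virtual) coordinates: the Kabsch least-squares rigid fit.
            a = tracker_pts - tracker_pts.mean(axis=0)
            b = scan_pts - scan_pts.mean(axis=0)
            u, _, vt = np.linalg.svd(a.T @ b)
            d = np.sign(np.linalg.det(u @ vt))   # guard against a reflection
            r = (u @ np.diag([1.0, 1.0, d]) @ vt).T
            t = scan_pts.mean(axis=0) - r @ tracker_pts.mean(axis=0)
            m = np.eye(4)
            m[:3, :3] = r
            m[:3, 3] = t
            return m

    With such a transform in hand, every tracked skeleton joint and object can be mapped into the scanned virtual scene with a single matrix multiply, which is what keeps a touched real object and its virtual counterpart in the same place.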

    Integrating images from a moveable tracked display of three-dimensional data

    This paper describes a novel method for displaying data obtained by three-dimensional medical imaging, in which the position and orientation of a freely movable screen are optically tracked and used in real time to select the current slice from the data set for presentation. With this method, which we call a “freely moving in-situ medical image”, the screen and imaged data are registered to a common coordinate system in space external to the user, at adjustable scale, and are available for free exploration. The three-dimensional image data occupy empty space, as if an invisible patient were being sliced by the moving screen. A behavioral study using real computed tomography lung-vessel data established the superiority of the in-situ display over a control condition with the same free exploration but with data shown on a fixed screen (ex situ), with respect to accuracy in tracing along a vessel and reporting spatial relations between vessel structures. By these measures, a freely moving in-situ medical image display appears to promote spatial navigation and understanding of medical data. The electronic version of this article is the complete one and can be found online at: http://cognitiveresearchjournal.springeropen.com/articles/10.1186/s41235-017-0069-
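    The core mechanism here, selecting the slice that the tracked screen currently occupies, amounts to resampling the volume over the screen's plane on every tracker update. A minimal sketch of that step, assuming a 4x4 pose matrix for the screen from the tracker and a registration matrix into voxel indices (our names and conventions throughout; the paper does not publish code):

        import numpy as np

        def slice_from_tracked_screen(volume, world_to_voxel, screen_pose,
                                      width_mm, height_mm, px_w, px_h):
            # volume         : (Z, Y, X) intensity array from CT/MRI
            # world_to_voxel : 4x4 matrix taking homogeneous tracker-space
            #                  coordinates to (x, y, z) voxel indices
            # screen_pose    : 4x4 tracker pose; columns are the screen's
            #                  right axis, up axis, normal, and origin
            # Pixel-centre coordinates on the screen plane, in millimetres,
            # centred on the tracked origin of the display.
            u = ((np.arange(px_w) + 0.5) / px_w - 0.5) * width_mm
            v = ((np.arange(px_h) + 0.5) / px_h - 0.5) * height_mm
            uu, vv = np.meshgrid(u, v)
            right = screen_pose[:3, 0]
            up = screen_pose[:3, 1]
            origin = screen_pose[:3, 3]
            # World-space position of every screen pixel: (H, W, 3).
            pts = origin + uu[..., None] * right + vv[..., None] * up
            # Map into voxel indices; nearest-neighbour sampling for brevity
            # (trilinear interpolation would look smoother).
            homo = np.concatenate([pts, np.ones(pts.shape[:2] + (1,))], axis=-1)
            ijk = np.rint(homo @ world_to_voxel.T)[..., :3].astype(int)
            x, y, z = ijk[..., 0], ijk[..., 1], ijk[..., 2]
            inside = ((0 <= x) & (x < volume.shape[2]) &
                      (0 <= y) & (y < volume.shape[1]) &
                      (0 <= z) & (z < volume.shape[0]))
            out = np.zeros((px_h, px_w), volume.dtype)
            out[inside] = volume[z[inside], y[inside], x[inside]]
            return out

    Rerun per tracker update, the returned image is what the moving screen would show; the authors' actual registration and rendering pipeline may of course differ.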

    The activation of modality in virtual objects assembly

    Manual assembly of virtual 3D objects is required in several application fields. We focus on tangible user interfaces, which give the user the opportunity to perform virtual assemblies efficiently and easily. In each hand, the user manipulates a tracked prop, and its translations and rotations are directly mapped to the corresponding virtual object. With such interfaces, however, both hands are occupied, and the user cannot put the props down without changing the action or the expected result. We list and discuss four possible modalities for activating and deactivating the assembly mode: voice, gesture, buttons, and foot pedals. We conclude that with foot pedals, the user's gesture is closest to real-world behaviour.
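    A pedal-driven activation of this kind is essentially a clutch between the tracked prop and the virtual object. A minimal sketch of that mapping, assuming 4x4 pose matrices from the tracker and a boolean pedal state (illustrative names, not the authors' code):

        import numpy as np

        class ClutchedProp:
            # The tracked prop drives the virtual object only while the
            # pedal is down, so the user can release and reposition the
            # prop without disturbing the assembly.
            def __init__(self, object_pose):
                self.object_pose = object_pose   # 4x4 world transform
                self._grab_offset = None         # set while clutch engaged

            def update(self, prop_pose, pedal_down):
                if pedal_down:
                    if self._grab_offset is None:
                        # Engage: remember the prop-to-object offset so the
                        # object does not jump when the mapping (re)starts.
                        self._grab_offset = (np.linalg.inv(prop_pose)
                                             @ self.object_pose)
                    # 1:1 mapping of the prop's translations and rotations.
                    self.object_pose = prop_pose @ self._grab_offset
                else:
                    self._grab_offset = None     # Disengage: object stays put.
                return self.object_pose

    Capturing the offset at engagement is what lets the user drop the prop, pick it up in a more comfortable orientation, and re-engage without the virtual piece moving.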

    Substitutional reality: using the physical environment to design virtual reality experiences

    Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging because of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, with a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study we investigated factors that affect participants' suspension of disbelief and ease of use; we systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.

    Analysis of the influence of immersive visualization systems on the virtual assembly of fragments in archaeology

    In this paper, we present an analysis of the influence of immersive visualization systems, combined with a multimodal interaction interface, on the resolution of a complex task. The user is asked to solve two-piece 3D puzzles using an interface with two 6-degree-of-freedom controls. We tested user performance under three display modes: a monoscopic display, a stereoscopic vision system, and a stereoscopic display combined with head-position tracking that adapts the viewpoint to the user's physical posture. Our comparison is based on the time users need to complete the assembly task. Despite the small number of participants, this study shows that learning is greater with the stereoscopic display mode than with the other two. We also observe that, once this learning phase is complete, users working with a stereoscopic display are overall more efficient than those in the other groups.