
    Haptic Interface for Center of Workspace Interaction

    We build upon a new interaction style for 3D interfaces, called the center of workspace interaction. This style of interaction is defined with respect to a central fixed point in 3D space, conceptually within arm's length of the user. For demonstration, we show a haptically enabled fish tank VR that utilizes a set of interaction widgets to support rapid navigation within a large virtual space. Fish tank VR refers to the creation of a small but high-quality virtual reality that combines a number of technologies, such as head-tracking and stereo glasses, to their mutual advantage.
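    As a rough illustration of the idea, the sketch below treats the offset of a stylus from a fixed workspace center as a 3D rate-control joystick for navigation. The center position, dead zone, and gain are hypothetical values chosen for the example, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of center-of-workspace navigation: the stylus
# offset from a fixed point within arm's length acts as a 3D joystick.

WORKSPACE_CENTER = np.array([0.0, 0.0, 0.4])  # meters; assumed center point
DEAD_ZONE = 0.02   # m: ignore small jitter near the center
GAIN = 5.0         # maps hand offset to viewpoint velocity (1/s)

def navigation_velocity(stylus_pos: np.ndarray) -> np.ndarray:
    """The further the stylus is from the fixed center, the faster
    the viewpoint moves in that direction."""
    offset = stylus_pos - WORKSPACE_CENTER
    dist = np.linalg.norm(offset)
    if dist < DEAD_ZONE:
        return np.zeros(3)
    # Velocity grows with displacement beyond the dead zone.
    return GAIN * (dist - DEAD_ZONE) * (offset / dist)
```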

    Quantifying perception of nonlinear elastic tissue models using multidimensional scaling

    Simplified soft tissue models used in surgical simulations cannot perfectly reproduce all material behaviors. In particular, many tissues exhibit the Poynting effect, which results in normal forces during shearing of tissue and is only observed in nonlinear elastic material models. In order to investigate and quantify the role of the Poynting effect in material discrimination, we performed a multidimensional scaling (MDS) study. Participants were presented with several pairs of shear and normal forces generated by a haptic device during interaction with virtual soft objects, and were asked to rate the similarity between the forces felt. The selection of the material parameters – and thus the magnitude of the shear and normal forces – was based on a pre-study conducted prior to the MDS experiment. For nonlinear elastic tissue models exhibiting the Poynting effect, the MDS analysis indicated that both shear and normal forces affect user perception.
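    The analysis step named above (MDS over pairwise similarity ratings) can be sketched as follows. The rating scale, number of materials, and use of scikit-learn's MDS with a precomputed dissimilarity matrix are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative MDS pipeline (not the authors' code): pairwise similarity
# ratings between rendered materials are converted to dissimilarities and
# embedded in a low-dimensional perceptual space.

rng = np.random.default_rng(0)
n_materials = 6

# ratings[i, j]: hypothetical mean similarity of materials i and j (0-10).
ratings = rng.uniform(0.0, 10.0, (n_materials, n_materials))
ratings = (ratings + ratings.T) / 2.0   # symmetrize across pair order
np.fill_diagonal(ratings, 10.0)         # a material is identical to itself

dissimilarity = ratings.max() - ratings  # high similarity -> small distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissimilarity)

# Each row of `embedding` is a material's position in the recovered
# perceptual space; its axes can then be compared against the shear and
# normal force magnitudes of the corresponding stimuli.
print(embedding)
```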

    Perceiving Mass in Mixed Reality through Pseudo-Haptic Rendering of Newton's Third Law

    In mixed reality, real objects can be used to interact with virtual objects. However, unlike in the real world, real objects do not encounter any opposing reaction force when pushing against virtual objects. The lack of reaction force during manipulation prevents users from perceiving the mass of virtual objects. Although this could be addressed by equipping real objects with force-feedback devices, such a solution remains complex and impractical. In this work, we present a technique that produces an illusion of mass without any active force-feedback mechanism. This is achieved by simulating the effects of the reaction force in a purely visual way. A first study demonstrates that our technique indeed allows users to differentiate light virtual objects from heavy ones, and that the illusion is immediately effective, with no prior training. In a second study, we measure the smallest mass difference that can be perceived with this technique (the just-noticeable difference, JND). The effectiveness and ease of implementation of our solution provide an opportunity to enhance mixed reality interaction at no additional cost.
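    A minimal 1D sketch of one plausible realization of this idea follows: the hand is tracked but receives no force, and the pushed virtual object lags visually in proportion to its simulated mass, so the reaction force is shown rather than felt. The contact-spring model, constants, and names are assumptions, not the paper's implementation.

```python
# Hypothetical 1D sketch of a pseudo-haptic mass illusion: the virtual
# object pushed by the (tracked, force-free) real hand lags behind in
# proportion to its simulated mass, visually conveying Newton's third law.

STIFFNESS = 200.0  # N/m: virtual contact spring between hand and object
DAMPING = 5.0      # N*s/m: keeps the simulated object from oscillating

def step(obj_pos, obj_vel, hand_pos, mass, dt):
    """Advance the pushed object by one rendered frame.

    Penetration of the hand into the object acts as a contact spring;
    a heavier object accelerates less for the same push, so its visual
    motion lags more and it is perceived as more massive.
    """
    penetration = hand_pos - obj_pos
    force = STIFFNESS * penetration if penetration > 0.0 else 0.0
    accel = (force - DAMPING * obj_vel) / mass
    obj_vel += accel * dt
    obj_pos += obj_vel * dt
    return obj_pos, obj_vel
```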

    Haptic guidance improves the visuo-manual tracking of trajectories

    BACKGROUND: Learning to perform new movements is usually achieved by following visual demonstrations. Haptic guidance by a force-feedback device is a recent and original technology that provides additional proprioceptive cues during visuo-motor learning tasks. The effects of two types of haptic guidance, position control (HGP) and force control (HGF), on the visuo-manual tracking ("following") of trajectories are still under debate. METHODOLOGY/PRINCIPAL FINDINGS: Three training conditions (HGP, HGF, and a control condition, NHG, without haptic guidance) were evaluated in two experiments. Movements produced by adults were assessed in terms of shape (dynamic time warping) and kinematic criteria (number of velocity peaks and mean velocity) before and after the training sessions. CONCLUSION/SIGNIFICANCE: These results show that the addition of haptic information, probably encoded in force coordinates, plays a crucial role in the visuo-manual tracking of new trajectories.
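    Dynamic time warping, the shape criterion mentioned above, can be written in a few lines from the standard recurrence; this is a textbook version, not the paper's analysis code. Comparing a produced trajectory against the target before and after training yields the shape score used alongside the kinematic criteria.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping cost between two trajectories of shape
    (n, 2) and (m, 2): the minimal summed pointwise distance over all
    monotone alignments of the two paths."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a point in a
                                 cost[i, j - 1],      # skip a point in b
                                 cost[i - 1, j - 1])  # match points
    return float(cost[n, m])
```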

    Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments

    Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels. These are defined as the levels at which users can perceive each other (Level 1), individually change the scene (Level 2), or simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of cooperation, multi-user object manipulation has rarely been studied. This paper describes a behavioral experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object were integrated either symmetrically or asymmetrically. The former allowed only the component common to both participants' actions to take place, whereas the latter used the mean of their actions. Symmetric action integration was superior for sections of the task in which both participants had to perform similar actions, but if participants had to move in different ways (e.g., one maneuvering through a narrow opening while the other traveled down a wide corridor) then asymmetric integration was superior. With both forms of integration, the extent to which participants coordinated their actions was poor, and this led to a substantial cooperation overhead (the reduction in performance caused by having to cooperate with another person).
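    The two integration rules can be stated compactly as below; the componentwise reading of "common component" is an assumption about the implementation, and the function names are illustrative.

```python
import numpy as np

def integrate_symmetric(v1: np.ndarray, v2: np.ndarray) -> np.ndarray:
    """Keep only the motion both users agree on: where the two inputs
    share a sign, move by the smaller magnitude; where they conflict,
    the object does not move along that axis."""
    agree = np.sign(v1) == np.sign(v2)
    common = np.sign(v1) * np.minimum(np.abs(v1), np.abs(v2))
    return np.where(agree, common, 0.0)

def integrate_asymmetric(v1: np.ndarray, v2: np.ndarray) -> np.ndarray:
    """Use the mean of the two inputs, so the users' actions need not
    match and one can compensate for the other."""
    return (v1 + v2) / 2.0
```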