15 research outputs found

    Embodied interaction using non-planar projections in immersive virtual reality

    Virtual zero gravity impact on internal gravity model

    This project investigates the impact of a virtual zero gravity experience on the human gravity model. In the planned experiment, subjects are immersed with an HMD and full body motion capture in a virtual world exhibiting either normal gravity or the apparent absence of gravity (i.e. body and objects floating in space). The study evaluates changes in the subjects' gravity model by observing changes in the motor planning of actions that depend on gravity. Our goal is to demonstrate that virtual reality exposure can induce modifications to the human internal gravity model, analogous to those resulting from real exposure (e.g. parabolic flights), even though users remain under normal gravity conditions in reality.
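
    A minimal sketch of how the two gravity conditions could be reproduced in a physics simulation is given below. It is an illustration only, assuming the open-source pybullet engine rather than the authors' actual setup; setup_world and its parameters are hypothetical names.

        import pybullet as p

        def setup_world(zero_gravity: bool) -> None:
            # Hypothetical helper: configure one of the two experimental conditions.
            p.connect(p.DIRECT)                  # headless physics server
            if zero_gravity:
                p.setGravity(0, 0, 0)            # body and objects float freely
            else:
                p.setGravity(0, 0, -9.81)        # Earth-normal gravity
            # ... load the avatar and surrounding objects here ...
            for _ in range(240):                 # advance the simulation for ~1 s
                p.stepSimulation()

        setup_world(zero_gravity=True)           # the virtual zero gravity condition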

    Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality

    Empirical research on the bodily self has shown that body representation is malleable and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR), we assessed the effects of manipulating multisensory feedback (full body control and visuo-tactile congruence) and visual perspective (first and third person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when given the possibility of alternating between first and third person perspective at will. Our results show that illusory ownership of a virtual body can be achieved in both first and third person perspectives under congruent visuo-motor-tactile conditions. However, subjective body ownership and reaction to threat were generally stronger for the first person and alternating conditions than for the third person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences.
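
    The sketch below illustrates how the perspective manipulation could be realized. It is an assumption for illustration, not the authors' implementation; camera_position and the 1.5 m / 0.3 m offsets are hypothetical values.

        import numpy as np

        def camera_position(head_pos, view_dir, first_person: bool):
            # First person (1PP): the camera coincides with the avatar's eyes.
            if first_person:
                return np.asarray(head_pos, dtype=float)
            # Third person (3PP): pull the camera back along the view axis and raise it.
            view_dir = np.asarray(view_dir, dtype=float)
            view_dir = view_dir / np.linalg.norm(view_dir)
            return np.asarray(head_pos, dtype=float) - 1.5 * view_dir + np.array([0.0, 0.3, 0.0])

        # Alternating condition: the user may toggle the flag at will, e.g. via a button.
        first_person = True
        def on_perspective_toggle():
            global first_person
            first_person = not first_person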

    Pocket6: A 6DoF Controller Based On A Simple Smartphone Application

    We propose, implement and evaluate the use of a smartphone application for real-time six-degrees-of-freedom user input. We show that our app-based approach achieves high accuracy and goes head-to-head with expensive externally tracked controllers. The strength of our application is that it is simple to implement and highly accessible, requiring only an off-the-shelf smartphone without any external trackers, markers, or wearables. Due to its inside-out tracking and its automatic remapping algorithm, users can comfortably perform subtle 3D inputs everywhere (world-scale), without any spatial or postural limitations. For example, they can interact while standing, sitting, or with their hands down by their sides. Finally, we also show its use in a wide range of applications for 2D and 3D object manipulation, thereby demonstrating its suitability for diverse real-world scenarios.
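
    A hedged sketch of the kind of relative remapping described above follows. The paper's exact algorithm is not reproduced here; the RelativeMapper class, its gain, and the re-centering (clutching) scheme are assumptions for illustration.

        import numpy as np

        GAIN = 4.0                      # assumed gain from hand space to world space

        class RelativeMapper:
            """Map small phone displacements to world-scale cursor motion."""
            def __init__(self):
                self.origin = None      # phone position when the user last engaged
                self.anchor = np.zeros(3)
                self.cursor = np.zeros(3)

            def engage(self, phone_pos):
                # Re-center on the current pose, so input works in any posture.
                self.origin = np.asarray(phone_pos, dtype=float)
                self.anchor = self.cursor.copy()

            def update(self, phone_pos):
                # Scale the displacement since engage() into the virtual workspace.
                if self.origin is None:
                    return self.cursor
                delta = np.asarray(phone_pos, dtype=float) - self.origin
                self.cursor = self.anchor + GAIN * delta
                return self.cursor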

    Spatially aware mobile interface for 3D visualization and interactive surgery planning

    Design and Assessment of a Collaborative 3D Interaction Technique for Handheld Augmented Reality

    Collaborative 3D Manipulation using Mobile Phones

    We present a 3D user interface for collaborative manipulation of three-dimensional objects in virtual environments. It maps the inertial sensors, touch screen and physical buttons of a mobile phone onto well-known gestures that alter the position, rotation and scale of virtual objects. As these transformations require the control of multiple degrees of freedom (DOFs), collaboration is proposed as a solution for coordinating the modification of each and all of the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed on a single shared screen, which makes it convenient to gather multiple users in the same physical space.
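
    The following sketch illustrates the role-based split of degrees of freedom. The roles, the delta format and the SharedObject class are assumptions for illustration, not the paper's protocol.

        import numpy as np

        class SharedObject:
            """Shared transform that several collaborators edit at the same time."""
            def __init__(self):
                self.position = np.zeros(3)
                self.rotation = np.zeros(3)     # Euler angles in radians
                self.scale = 1.0

            def apply(self, role, delta):
                # Only the DOFs matching the sender's chosen role are applied.
                if role == "translate":
                    self.position += np.asarray(delta, dtype=float)
                elif role == "rotate":
                    self.rotation += np.asarray(delta, dtype=float)
                elif role == "scale":
                    self.scale *= float(delta)

        obj = SharedObject()
        obj.apply("translate", [0.1, 0.0, 0.0])   # one user moves the object
        obj.apply("rotate", [0.0, 0.2, 0.0])      # another rotates it concurrently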

    Towards a Disambiguation Canvas

    We present the disambiguation canvas, a technique for selection by progressive refinement that uses a mobile device and consists of two steps. During the first, the user defines a subset of objects through the orientation sensors of the device and a volume-casting pointing technique. The second step consists of disambiguating the desired target among the previously defined subset of objects, and is accomplished using the mobile device touchscreen. By relying on the touchscreen for the last step, the user can disambiguate among hundreds of objects at once; previous progressive refinement techniques do not scale as well. The disambiguation canvas was mainly developed for easy, accurate and fast selection of small objects, or objects inside cluttered virtual environments. User tests show that our technique performs faster than ray-casting for targets of approximately 0.53 degrees of angular size, and is also much more accurate for all the tested target sizes.
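
    A sketch of the two-step selection described above is shown below. The cone geometry, the grid layout and the function names are assumptions for illustration rather than the paper's implementation.

        import numpy as np

        def cone_cast(origin, direction, objects, half_angle_deg=5.0):
            # Step 1: collect every object whose center lies inside the pointing cone.
            origin = np.asarray(origin, dtype=float)
            direction = np.asarray(direction, dtype=float)
            direction = direction / np.linalg.norm(direction)
            cos_limit = np.cos(np.radians(half_angle_deg))
            candidates = []
            for obj_id, center in objects.items():
                to_obj = np.asarray(center, dtype=float) - origin
                dist = np.linalg.norm(to_obj)
                if dist > 0 and np.dot(to_obj / dist, direction) >= cos_limit:
                    candidates.append(obj_id)
            return candidates

        def layout_on_canvas(candidates, screen_w=1080, screen_h=1920):
            # Step 2: give each candidate its own large cell on the touchscreen,
            # so a single tap disambiguates even among hundreds of objects.
            if not candidates:
                return {}
            cols = int(np.ceil(np.sqrt(len(candidates))))
            rows = int(np.ceil(len(candidates) / cols))
            cell_w, cell_h = screen_w / cols, screen_h / rows
            return {obj_id: (int((i % cols) * cell_w), int((i // cols) * cell_h))
                    for i, obj_id in enumerate(candidates)}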

    Strategies for effective extension services to guide the advancement of animal agriculture in developing countries
