
    Quality of experience study for multiple sensorial media delivery

    Traditional video sequences make use of both visual images and audio tracks, which are perceived by human eyes and ears, respectively. To present a better ultra-real virtual experience, the full range of human sensations (e.g. olfaction, haptics, gustation) needs to be exploited. In this paper, a multiple sensorial media (mulsemedia) delivery system is introduced to deliver multimedia sequences integrated with media components that engage three or more human senses, such as sight, hearing, olfaction, haptics, and gustation. Three sensorial effects (haptic, olfactory, and air-flow) are selected for the purpose of demonstration. A subjective test is conducted to analyze the user-perceived quality of experience of the mulsemedia service. It is concluded that the mulsemedia sequences can partly mask decreased movie quality. Additionally, the most preferred sensorial effect is haptic, followed by air-flow and olfaction. This work was supported in part by the Enterprise Ireland Innovation Partnership programme.

    A Haptic Modeling System

    Haptics has been studied as a means of providing users with natural and immersive haptic sensations in various real, augmented, and virtual environments, but it is still relatively unfamiliar to the general public. One reason is the lack of abundant haptic content in areas familiar to the general public. Even though some modeling tools exist for creating haptic content, the addition of haptic data to graphic models remains relatively primitive, time-consuming, and unintuitive. In order to establish a comprehensive and efficient haptic modeling system, this chapter first defines the haptic modeling processes and their scope. It then proposes a haptic modeling system that, based on depth images and an image data structure, can create and edit haptic content easily and intuitively for virtual objects. This system can also efficiently handle non-uniform haptic properties per pixel, and can effectively represent diverse haptic properties (stiffness, friction, etc.).

    Toward "Pseudo-Haptic Avatars": Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting

    In this paper we study how the visual animation of a self-avatar can be artificially modified in real time in order to generate different haptic perceptions. In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode while performing a weight-lifting task. Users could map their gestures onto the self-animated avatar in real time using a Kinect. We introduce three kinds of modification of the visual animation of the self-avatar according to the effort delivered by the virtual avatar: 1) changes in the spatial mapping between the user's gestures and the avatar, 2) different motion profiles of the animation, and 3) changes in the posture of the avatar (upper-body inclination). The experimental task consisted of ordering four virtual dumbbells according to their virtual weight. The user had to lift each virtual dumbbell by means of a tangible stick, and the animation of the avatar was modulated according to the virtual weight of the dumbbell. The results showed that altering the spatial mapping delivered the best performance. Nevertheless, participants globally appreciated all the visual effects. Our results pave the way to the exploitation of such novel techniques in various VR applications, such as sport training, exercise games, or industrial training scenarios in single-user or collaborative mode.

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Human-centric quality management of immersive multimedia applications

    Augmented Reality (AR) and Virtual Reality (VR) multimodal systems are the latest trend within the field of multimedia. As they emulate the senses by means of omni-directional visuals, 360-degree sound, motion tracking, and touch simulation, they are able to create a strong feeling of presence of and interaction with the virtual environment. These experiences can be applied to virtual training (Industry 4.0), tele-surgery (healthcare), or remote learning (education). However, given the strong time and task sensitivity of these applications, it is of great importance to sustain the end-user quality, i.e. the Quality-of-Experience (QoE), at all times. Lack of synchronization and quality degradation need to be reduced to a minimum to avoid feelings of cybersickness or loss of immersiveness and concentration. This means that there is a need to shift quality management from system-centered performance metrics towards a more human, QoE-centered approach. However, this requires novel techniques in the three areas of the QoE-management loop (monitoring, modelling, and control). This position paper identifies open areas of research needed to fully enable human-centric management of immersive multimedia. To this end, four main dimensions are put forward: (1) task- and well-being-driven subjective assessment; (2) real-time QoE modelling; (3) accurate viewport prediction; (4) Machine Learning (ML)-based quality optimization and content recreation. The paper discusses the state of the art and provides possible solutions to tackle the open challenges.

    Too Hot to Handle: An Evaluation of the Effect of Thermal Visual Representation on User Grasping Interaction in Virtual Reality

    The influence of interaction fidelity and rendering quality on perceived user experience has been largely explored in Virtual Reality (VR). However, differences in interaction choices triggered by these rendering cues have not yet been explored. We present a study analysing the effect of thermal visual cues and contextual information on 50 participants' approach to grasping and moving a virtual mug. The study comprises 3 different temperature cues (baseline empty, hot, and cold) and 4 contextual representations, all embedded in a VR scenario. We evaluate 2 different hand representations (abstract and human) to assess grasp metrics. Results show that temperature cues influenced grasp location: the mug handle was predominantly grasped, with a smaller grasp aperture, in the hot condition, while the body and top were preferred in the baseline and cold conditions.

    Drones, Augmented Reality and Virtual Reality Journalism: Mapping Their Role in Immersive News Content

    Drones are shaping journalism in a variety of ways, including the production of immersive news content. This article identifies, describes, and analyzes, or maps out, four areas in which drones are impacting immersive news content: 1) enabling aerial perspective for first-person, flight-based immersive journalism experiences; 2) providing geo-tagged audio and video for flight-based immersive news content; 3) providing the capacity for both volumetric and 360-degree video capture; and 4) generating novel content types based on data acquired from a broad range of sensors beyond the standard visible light captured via video cameras; these may become a central generator of unique experiential media content beyond visual flight-based news content.

    New approaches for mixed reality in urban environments: the CINeSPACE project

    The CINeSPACE (www.cinespace.eu) project allows tourists to access the rich cultural heritage of urban environments by literally morphing the user into the past through the use of multimedia archives. Tourists use a device which includes both a PDA-type unit, with a GIS interface displayed on a touch screen to help the user navigate and select multimedia content, and video binoculars to create the augmented-reality effects. In addition to this mode of interaction, a survey of Mixed Reality user interaction paradigms will be presented. A key feature of Mixed Reality user interfaces is the object identification and annotation methods available to the user, of which a survey, including a review of the GeoConcepts ontology annotation methodology used in the CINeSPACE device, will be presented.
