    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users that will feel much closer to a face-to-face meeting than the experience offered by conventional teleconferencing systems.

    Self-modifiable color petri nets for modeling user manipulation and network event handling

    A Self-Modifiable Color Petri Net (SMCPN), which has multimedia synchronization capability and can model the handling of user manipulation and network events (e.g. network congestion), is proposed in this paper. In SMCPN there are two types of tokens: resource tokens, representing resources to be presented, and color tokens, which come in two sub-types: one associated with commands that modify the net mechanism during operation, the other associated with a number that determines the iteration count. Also introduced is a new type of resource token, the reverse token, which moves in the opposite direction of the arcs. When a user manipulation or network event occurs, color tokens associated with the corresponding interrupt-handling commands are injected into places that contain resource tokens. These commands are then executed to handle the user manipulation or network event. SMCPN has the desired general programmability in the following sense: 1) it allows handling of user manipulations or pre-specified events at any time while keeping the Petri net design simple and easy; 2) it allows the user to customize event handling beforehand, meaning the modeled system can handle not only commonly seen user interrupts (e.g. skip, reverse, freeze) but also new operations defined by the user, including network event handling; 3) it has the power to simulate self-modifying protocols. A simulator has been built to demonstrate the feasibility of SMCPN.
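    A minimal, hypothetical sketch of the SMCPN idea described in the abstract: places hold resource tokens, and a user event injects a color token whose command mutates the net's own behaviour (here, reversing token flow). Class and method names are illustrative only and do not come from the paper.

```python
class Place:
    def __init__(self, name):
        self.name = name
        self.resource_tokens = []   # media resources waiting to be presented
        self.color_tokens = []      # command-carrying tokens injected on events

class SMCPN:
    def __init__(self, places, arcs):
        self.places = {p.name: p for p in places}
        self.arcs = arcs            # list of (src, dst) place-name pairs
        self.reversed = False       # set when a "reverse" color token fires

    def inject_event(self, place_name, command):
        """Model a user manipulation / network event by injecting a color token."""
        self.places[place_name].color_tokens.append(command)

    def step(self):
        # Execute pending commands first: they may modify the net itself.
        for place in self.places.values():
            while place.color_tokens:
                cmd = place.color_tokens.pop()
                if cmd == "reverse":
                    self.reversed = True   # later tokens move against the arcs
                elif cmd == "freeze":
                    return                 # skip this step entirely
        # Then move resource tokens along the (possibly reversed) arcs.
        arcs = [(d, s) for (s, d) in self.arcs] if self.reversed else self.arcs
        for src, dst in arcs:
            if self.places[src].resource_tokens:
                token = self.places[src].resource_tokens.pop(0)
                self.places[dst].resource_tokens.append(token)

# Example: two places in sequence; a "reverse" event sends playback backwards.
p1, p2 = Place("p1"), Place("p2")
p1.resource_tokens.append("video-segment-1")
net = SMCPN([p1, p2], arcs=[("p1", "p2")])
net.step()                        # token moves p1 -> p2
net.inject_event("p2", "reverse")
net.step()                        # command executes, token moves back p2 -> p1
```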

    Beyond multimedia adaptation: Quality of experience-aware multi-sensorial media delivery

    Multiple sensorial media (mulsemedia) combines multiple media elements that engage three or more human senses and, like most other media content, requires support for delivery over existing networks. This paper proposes an adaptive mulsemedia framework (ADAMS) for delivering scalable video and sensorial data to users. Unlike existing two-dimensional joint source-channel adaptation solutions for video streaming, the ADAMS framework includes three joint adaptation dimensions: video source, sensorial source, and network optimization. Using an MPEG-7 description scheme, ADAMS recommends the integration of multiple sensorial effects (e.g., haptic, olfaction, air motion) as metadata into multimedia streams. The ADAMS design includes both coarse- and fine-grained adaptation modules on the server side: mulsemedia flow adaptation and packet priority scheduling. Feedback from subjective quality evaluation and network conditions is used to drive the two modules. The subjective evaluation investigated users' enjoyment levels when exposed to mulsemedia and multimedia sequences, respectively, and studied users' preference for individual sensorial effects in the context of mulsemedia sequences with video components at different quality levels. Results of the subjective study inform guidelines for an adaptive strategy that selects the optimal combination of video segments and sensorial data for a given bandwidth constraint and user requirement. User perceptual tests show that ADAMS outperforms existing multimedia delivery solutions in terms of both user-perceived quality and user enjoyment during adaptive streaming of various mulsemedia content. In doing so, it highlights the case for tailored, adaptive mulsemedia delivery over traditional multimedia adaptive transport mechanisms.
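    A hypothetical sketch of the kind of coarse-grained adaptation decision the abstract attributes to ADAMS: choosing a video quality layer and a set of sensorial effects for the next segment without exceeding the available bandwidth. The bitrates, quality scores and enjoyment bonuses below are illustrative placeholders, not the paper's measured values.

```python
from itertools import combinations

# candidate video layers: (name, bitrate kbps, perceived-quality score)
VIDEO_LAYERS = [("base", 1000, 3.0), ("medium", 2500, 4.0), ("high", 5000, 4.6)]

# optional sensorial effects: (name, bitrate kbps, enjoyment bonus)
EFFECTS = [("haptic", 80, 0.5), ("olfaction", 20, 0.3), ("air-motion", 40, 0.4)]

def select_segment(bandwidth_kbps):
    """Return the (video layer, effects) combination with the best total score
    that fits within the bandwidth estimate reported for the client."""
    best, best_score = None, -1.0
    for vname, vrate, vscore in VIDEO_LAYERS:
        for r in range(len(EFFECTS) + 1):
            for combo in combinations(EFFECTS, r):
                rate = vrate + sum(e[1] for e in combo)
                if rate > bandwidth_kbps:
                    continue
                score = vscore + sum(e[2] for e in combo)
                if score > best_score:
                    best, best_score = (vname, [e[0] for e in combo]), score
    return best

print(select_segment(2600))   # e.g. ('medium', ['haptic', 'olfaction'])
```

    In a deployed system this exhaustive search would be replaced by the framework's own adaptation logic driven by the subjective-study guidelines; the sketch only shows the shape of the trade-off between video quality and sensorial effects under a bandwidth constraint.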

    Ultra high definition video decoding with motion JPEG XR using the GPU

    Many applications require real-time decoding of high-resolution video pictures, for example quick editing of video sequences in video editing applications. To increase decoding speed, parallelism can be exploited; yet block-based image and video coding standards are difficult to decode in parallel because of the high number of dependencies between blocks. This paper investigates the parallel decoding capabilities of the new JPEG XR image coding standard for use on the massively parallel architecture of the GPU. The potential for parallelism of the hierarchical frequency coding scheme used in the standard is addressed, and a parallel decoding scheme suitable for real-time decoding of Ultra High Definition (4320p) Motion JPEG XR video sequences is described. Our results show a decoding speed of up to 46 frames per second for Ultra High Definition (4320p) sequences with high-dynamic-range (32-bit, 4:2:0) luma and chroma components.
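    A loose sketch of the parallelisation idea: JPEG XR partitions an image into independently entropy-coded tiles (and, within them, macroblocks of a hierarchical frequency transform), so each tile can be decoded by a separate worker. Real GPU decoding maps macroblocks to thread blocks; the thread pool below merely illustrates the data-parallel structure, and the function names are placeholders rather than an actual JPEG XR API.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_tile(tile_bitstream):
    """Placeholder for entropy decoding plus the inverse hierarchical
    transform (DC -> lowpass -> highpass) of one tile."""
    return f"pixels({len(tile_bitstream)} bytes)"

def decode_frame(tile_bitstreams, workers=8):
    # Tiles carry no inter-tile dependencies, so they can be decoded in any
    # order and in parallel; results are stitched back in tile order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_tile, tile_bitstreams))

# Example: a frame split into 16 tiles of raw bitstream data.
frame = [bytes(1024) for _ in range(16)]
print(len(decode_frame(frame)), "tiles decoded")
```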