    Integration of multi-sensorial effects in synchronised immersive hybrid TV scenarios

    Traditionally, TV media content has exclusively involved 2D or 3D audiovisual streams consumed on a simple TV device. However, in order to generate more immersive media consumption experiences, other new types of content (e.g., omnidirectional video), consumption devices (e.g., Head Mounted Displays, or HMDs) and solutions that stimulate senses beyond the traditional ones of sight and hearing can be used. Multi-sensorial media content (a.k.a. mulsemedia) adds sensory effects that stimulate other senses during media consumption, with the aim of providing consumers with a more immersive and realistic experience. Such effects not only give users a greater degree of realism and immersion, but can also foster social integration (e.g., for people with audiovisual impairments or attention-span problems) and even contribute to better educational programs (e.g., learning through the senses in educational content or science outreach). Examples of sensory effects are olfactory effects (scents), tactile effects (e.g., vibration, wind or pressure), and ambient effects (e.g., temperature or lighting). In this paper, a solution for providing multi-sensorial and immersive hybrid (broadcast/broadband) TV content consumption experiences, including omnidirectional video and sensory effects, is presented. It has been designed, implemented, and subjectively evaluated (by 32 participants) in an end-to-end platform for hybrid content generation, delivery and synchronised consumption. The satisfactory results obtained regarding the perception of fine synchronisation between sensory effects and multimedia content, and regarding the users' perceived Quality of Experience (QoE), are summarised and discussed. This work was supported in part by the Vicerrectorado de Investigacion de la Universitat Politecnica de Valencia under Project PAID-11-21 and Project PAID-12-21. Marfil, D.; Boronat, F.; González-Salinas, J.; Sapena Piera, A. (2022). Integration of multi-sensorial effects in synchronised immersive hybrid TV scenarios. IEEE Access, 10, 79071-79089. https://doi.org/10.1109/ACCESS.2022.3194170
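
    As an illustration of the kind of effect-to-media synchronisation discussed in this abstract (not the authors' actual implementation), the following minimal Python sketch fires hypothetical sensory-effect cues against a local playback clock; the cue list, effect names and trigger callback are assumptions made for the example.

```python
import time

# Hypothetical cue list: (media timestamp in seconds, effect name, parameters).
# In a real hybrid TV platform these cues would come from effect metadata
# delivered alongside the broadcast/broadband streams.
CUES = [
    (2.0, "wind", {"intensity": 0.6}),
    (5.5, "light", {"colour": "warm", "level": 0.8}),
    (9.0, "scent", {"type": "pine", "duration": 3.0}),
]

def trigger(effect, params):
    """Placeholder for the command actually sent to an effect-rendering device."""
    print(f"{time.strftime('%H:%M:%S')} fire {effect} with {params}")

def play_with_effects(cues, clock=time.monotonic):
    """Fire each cue when the local playback clock reaches its timestamp."""
    start = clock()  # assume media playback starts now
    for ts, effect, params in sorted(cues, key=lambda c: c[0]):
        delay = ts - (clock() - start)
        if delay > 0:
            time.sleep(delay)  # a real player would slave this to its media clock
        trigger(effect, params)

if __name__ == "__main__":
    play_with_effects(CUES)
```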

    A Haptic Modeling System

    Haptics has been studied as a means of providing users with natural and immersive haptic sensations in various real, augmented, and virtual environments, but it is still relatively unfamiliar to the general public. One reason is the lack of abundant haptic content in areas familiar to the general public. Even though some modeling tools do exist for creating haptic content, the addition of haptic data to graphic models is still relatively primitive, time-consuming, and unintuitive. In order to establish a comprehensive and efficient haptic modeling system, this chapter first defines the haptic modeling process and its scope. It then proposes a haptic modeling system that can, based on depth images and an image data structure, create and edit haptic content easily and intuitively for virtual objects. This system can also efficiently handle non-uniform haptic properties per pixel, and can effectively represent diverse haptic properties (stiffness, friction, etc.).
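
    The chapter's idea of per-pixel haptic properties attached to a depth image can be pictured with the short Python sketch below; the property names, value ranges and array layout are illustrative assumptions rather than the system's actual data structure.

```python
import numpy as np

# Hypothetical per-pixel haptic map aligned with a depth image: each pixel
# stores a depth value plus haptic properties such as stiffness and friction
# (names and 0..1 ranges chosen for illustration only).
HEIGHT, WIDTH = 480, 640

depth = np.zeros((HEIGHT, WIDTH), dtype=np.float32)          # depth image
stiffness = np.full((HEIGHT, WIDTH), 0.5, dtype=np.float32)  # medium by default
friction = np.full((HEIGHT, WIDTH), 0.3, dtype=np.float32)   # low by default

def paint_region(prop_map, y0, y1, x0, x1, value):
    """Edit one haptic property over a rectangle, as a modeling tool might."""
    prop_map[y0:y1, x0:x1] = value

def haptic_at(u, v):
    """Look up the (possibly non-uniform) haptic properties at pixel (u, v)."""
    return {"depth": float(depth[v, u]),
            "stiffness": float(stiffness[v, u]),
            "friction": float(friction[v, u])}

# Example edit: make one patch of the object feel stiffer and more slippery.
paint_region(stiffness, 100, 200, 100, 300, 0.9)
paint_region(friction, 100, 200, 100, 300, 0.1)
print(haptic_at(150, 150))
```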

    Is Multimedia Multisensorial? - A Review of Mulsemedia Systems

    © 2018 Copyright held by the owner/author(s). Mulsemedia (multiple sensorial media) makes possible the inclusion of layered sensory stimulation and interaction through multiple sensory channels. The recent upsurge in technology and wearables provides mulsemedia researchers with a vehicle for potentially boundless choice. However, in order to build systems that integrate various senses, there are still some issues that need to be addressed. This review deals with mulsemedia topics that remained insufficiently explored by previous work, with a focus on the multi-multi (multiple media, multiple senses) perspective, where multiple types of media engage multiple senses. Moreover, it addresses the evolution of previously identified challenges in this area and formulates new exploration directions. This article was funded by the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement no. 688503.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Automatic Multimedia Creation Enriched with Dynamic Conceptual Data

    There is a growing gap between multimedia production and context-centric multimedia services. The main problem is the under-exploitation of the content creation design. The idea is to support dynamic content generation adapted to the user or display profile. Our work is an implementation of a web platform for the automatic generation of multimedia presentations based on the SMIL (Synchronized Multimedia Integration Language) standard. The system is able to produce rich media with dynamic multimedia content retrieved automatically from different content databases matching the semantic context. For this purpose, we extend the standard interpretation of SMIL tags in order to accomplish a semantic translation of multimedia objects into database queries. This allows services to take advantage of the production process to create customized content enhanced with real-time information fed from databases. The described system has been successfully deployed to create advanced context-centric weather forecasts.
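
    To make the described semantic translation more concrete, here is a purely hypothetical Python sketch of how an extended SMIL reference could be resolved into a database query when the presentation is assembled; the element names, the non-standard query attribute and the tiny in-memory database are assumptions for illustration, not the platform's actual interface.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical extended SMIL fragment: the "query" attribute is not standard
# SMIL; it stands in for the semantic annotation that would be translated into
# a content-database lookup when the presentation is generated.
SMIL_FRAGMENT = """
<par>
  <img  region="map"  query="table=weather_maps; where=city='Valencia'"/>
  <text region="info" query="table=forecasts; where=city='Valencia'"/>
</par>
"""

def to_sql(query_attr):
    """Translate the illustrative query attribute into a SQL statement."""
    parts = dict(p.strip().split("=", 1) for p in query_attr.split(";"))
    return f"SELECT uri FROM {parts['table']} WHERE {parts['where']} LIMIT 1"

def resolve(fragment, connection):
    """Replace each semantic reference with a concrete media URI from the DB."""
    root = ET.fromstring(fragment)
    for element in root:
        sql = to_sql(element.attrib.pop("query"))
        row = connection.execute(sql).fetchone()
        element.set("src", row[0] if row else "missing.png")
    return ET.tostring(root, encoding="unicode")

# Tiny in-memory demo database so the sketch runs end to end.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weather_maps (city TEXT, uri TEXT)")
con.execute("CREATE TABLE forecasts (city TEXT, uri TEXT)")
con.execute("INSERT INTO weather_maps VALUES ('Valencia', 'maps/valencia.png')")
con.execute("INSERT INTO forecasts VALUES ('Valencia', 'text/valencia_today.xml')")
print(resolve(SMIL_FRAGMENT, con))
```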

    MulseOnto: a Reference Ontology to Support the Design of Mulsemedia Systems

    Designing a mulsemedia (multiple sensorial media) system entails first and foremost comprehending what it is, beyond the ordinary understanding that it engages users in digital multisensory experiences that stimulate senses in addition to sight and hearing, such as smell, touch, and taste. A myriad of programs that comprise a software system, several output devices to deliver sensory effects, computer media, among others, dwell deep in the realm of mulsemedia systems, making it a complex task for newcomers to get acquainted with their concepts and terms. Although there have been many technological advances in this field, especially for multisensory devices, there is a shortage of work that tries to establish common ground in terms of a formal and explicit representation of what mulsemedia systems encompass. Such a representation might be useful to avoid the design of feeble mulsemedia systems that can barely be reused owing to misconceptions. In this paper, we extend our previous work by proposing to establish a common conceptualization of mulsemedia systems through a domain reference ontology, named MulseOnto, to aid their design. We applied ontology verification and validation techniques to evaluate it, including assessment by humans and a data-driven approach; the outcome is three successful instantiations of MulseOnto for distinct cases, making evident its ability to accommodate heterogeneous mulsemedia scenarios.
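
    As a rough illustration of what instantiating such a reference ontology could look like, the Python/rdflib sketch below encodes a mulsemedia system, one sensory effect and one output device as RDF triples; the namespace, class and property names are invented for the example and are not taken from MulseOnto itself.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace and vocabulary; MulseOnto's real terms may differ.
MLS = Namespace("http://example.org/mulseonto#")

g = Graph()
g.bind("mls", MLS)

# Illustrative classes: a mulsemedia system, a sensory effect, an output device.
for cls in (MLS.MulsemediaSystem, MLS.SensoryEffect, MLS.OutputDevice):
    g.add((cls, RDF.type, RDFS.Class))

# One instantiation: a system that delivers a wind effect through a desktop fan.
g.add((MLS.hybridTvDemo, RDF.type, MLS.MulsemediaSystem))
g.add((MLS.windEffect, RDF.type, MLS.SensoryEffect))
g.add((MLS.desktopFan, RDF.type, MLS.OutputDevice))
g.add((MLS.hybridTvDemo, MLS.producesEffect, MLS.windEffect))
g.add((MLS.windEffect, MLS.renderedBy, MLS.desktopFan))
g.add((MLS.windEffect, RDFS.label, Literal("wind effect")))

print(g.serialize(format="turtle"))
```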

    A framework for the assembly and delivery of multimodal graphics in E-learning environments

    In recent years, educators and educational institutions have embraced E-Learning environments as a method of delivering content to, and communicating with, their learners. Particular attention needs to be paid to the accessibility of the content that each educator provides. In relation to graphics, content providers are instructed to provide textual alternatives for each graphic using either the "alt" attribute or the "longdesc" attribute of the HTML IMG tag. This is not always suitable for graphical concepts inherent in technical topics, due to the spatial nature of the information. As there is currently no suggested alternative to the use of textual descriptions in E-Learning environments, blind learners are at a significant disadvantage when attempting to learn Science, Technology, Engineering, or Mathematics (STEM) subjects online. A new approach is required that will provide blind learners with the same learning capabilities enjoyed by their sighted peers in relation to graphics. Multimodal graphics combine the modalities of sound and touch in order to deliver graphical concepts to blind learners. Although they have proven successful, they can be time-consuming to create and often require expertise in accessible graphic design. This thesis proposes an approach based on mainstream E-Learning techniques that can support non-experts in the assembly of multimodal graphics. The approach is known as the Multimodal Graphic Assembly and Delivery Framework (MGADF). It exploits a component-based Service-Oriented Architecture (SOA) to provide non-experts with the ability to assemble multimodal graphics and integrate them into mainstream E-Learning environments. This thesis details the design of the system architecture, information architecture and methodologies of the MGADF. Proof-of-concept interfaces were implemented, based on the design, that clearly demonstrate the feasibility of the approach. The interfaces were used in an end-user evaluation that assessed the benefits of a component-based approach for non-expert multimodal graphic producers.
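
    The component-based assembly idea can be sketched, purely hypothetically, as combining independent modality components into one multimodal graphic description that an E-Learning platform could embed; the class names and fields below are illustrative and do not reflect the MGADF's actual interfaces.

```python
from dataclasses import dataclass, field

# Hypothetical modality components a non-expert might combine for one graphic.
@dataclass
class AudioComponent:
    description: str                              # spoken summary of the graphic
    earcons: list = field(default_factory=list)   # non-speech audio cues

@dataclass
class HapticComponent:
    outline_path: str         # shape rendered on a touch/force-feedback device
    texture: str = "smooth"

@dataclass
class MultimodalGraphic:
    title: str
    audio: AudioComponent
    haptic: HapticComponent

    def package(self):
        """Bundle the graphic so an E-Learning environment could embed it."""
        return {"title": self.title,
                "audio": vars(self.audio),
                "haptic": vars(self.haptic)}

# Example assembly of a simple bar-chart graphic by a non-expert author.
graphic = MultimodalGraphic(
    title="Rainfall by month",
    audio=AudioComponent("Bar chart of monthly rainfall", earcons=["rising-tone"]),
    haptic=HapticComponent(outline_path="charts/rainfall_outline.svg"),
)
print(graphic.package())
```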

    QoE of cross-modally mapped Mulsemedia: an assessment using eye gaze and heart rate

    A great deal of research effort has been put into exploring crossmodal correspondences in the field of cognitive science, which refer to the systematic associations frequently made between different sensory modalities (e.g., high pitch is matched with angular shapes). However, the possibilities cross-modality opens up in the digital world have been relatively unexplored. Therefore, we consider that studying the plasticity and the effects of crossmodal correspondences in a mulsemedia setup can bring novel insights into improving the human-computer dialogue and experience. Mulsemedia refers to the combination of three or more senses to create immersive experiences. In our experiments, users were shown six video clips associated with certain visual features based on color, brightness, and shape. We examined whether pairing them with a crossmodally matching sound, a corresponding auto-generated haptic effect, and smell would lead to an enhanced user QoE. For this, we used an eye-tracking device as well as a heart-rate monitor wristband to capture users' eye gaze and heart rate whilst they were experiencing mulsemedia. After each video clip, we asked the users to complete an on-screen questionnaire with a set of questions related to smell, sound, and haptic effects, targeting their enjoyment and perception of the experiment. The eye gaze and heart rate results showed a significant influence of the cross-modally mapped multisensorial effects on the users' QoE. Our results highlight that when the olfactory content is crossmodally congruent with the visual content, the visual attention of the users seems to shift towards the corresponding visual feature. Crossmodally matched media is also shown to result in an enhanced QoE compared to a video-only condition.