
    Audio masking effect on inter-component skews in olfaction-enhanced multimedia presentations

    Media-rich content plays a vital role in consumer applications today, as these applications try to find new and interesting ways to engage their users. Video, audio, and the more traditional forms of media content continue to dominate efforts to enhance the user experience. Tactile interactivity has also become widely popular in modern computing applications, while our olfactory and gustatory senses continue to play a limited role. However, there have recently been significant advances in the use of olfactory media content (i.e., smell), and a variety of devices are now available to enable its computer-controlled emission. This paper explores the impact of the audio stream on user perception of olfaction-enhanced video content in the presence of skews between the olfactory and video media. It draws on the results of two experimental studies of user-perceived quality of olfaction-enhanced multimedia, in which audio was present and absent, respectively. Specifically, the paper shows that user Quality of Experience (QoE) is generally higher in the absence of audio for nearly perfectly synchronized olfaction-enhanced multimedia presentations (i.e., an olfactory media skew within [−10 s, +10 s]); however, for greater olfactory media skews (within [−30 s, −10 s] and [+10 s, +30 s]), user QoE is higher when the audio stream is present. It can be concluded that the presence of audio can mask larger synchronization skews between the other media components in olfaction-enhanced multimedia presentations.
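    As an illustrative aid (not part of the paper itself), the reported skew bands and the audio condition that yielded higher QoE in each band can be expressed as a simple classifier; the band edges and labels below are a simplification of the reported results:

    ```python
    def qoe_preferred_condition(skew_s: float) -> str:
        """Illustrative classifier of the skew bands reported in the study.

        skew_s: olfactory media skew in seconds relative to the video
                (negative = scent released early, positive = late).
        Returns which audio condition was associated with higher user QoE
        in that band (a simplification for illustration only).
        """
        if -10.0 <= skew_s <= 10.0:
            return "audio absent"       # near-synchronous: QoE higher without audio
        if -30.0 <= skew_s < -10.0 or 10.0 < skew_s <= 30.0:
            return "audio present"      # larger skews: audio masks the desynchronization
        return "outside studied range"  # |skew| > 30 s was not covered by the experiments
    ```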

    Multisensory games-based learning - lessons learnt from olfactory enhancement of a digital board game

    Serious games are becoming an alternative educational method in a variety of fields because of their potential to improve the quality of learning experiences and to facilitate knowledge acquisition and content understanding. Moreover, entertainment-driven learners are more easily motivated to benefit from the learning process through meaningful activities defined in a game context. Interfacing educational computer games with multisensorial interfaces allows for a seamless integration between virtual and physical environments. Multisensorial cues can improve memory and attention and increase cognitive and sensory-motor performance. Despite increasing knowledge of sensory processes, multisensory experiences and interactions in computer-based instruction remain insufficiently explored and understood. In this paper, we present a multisensory educational game - Fragrance Channel - and investigate how enabling olfaction can contribute to users' learning performance, engagement, and quality of experience. We compare results obtained after experiencing Fragrance Channel in the presence and absence of olfactory feedback on both a mobile device and a PC. A knowledge test administered before and immediately after playing showed that our proposed educational game led to an improvement in performance in all the explored conditions. Subjective measurements carried out after the olfactory experience showed that students enjoyed the scenario and appreciated it as being relevant. This work was funded by the European Union's Horizon 2020 Research and Innovation programme.

    Using eye tracking and heart-rate activity to examine crossmodal correspondences QoE in Mulsemedia

    Different senses provide us with information of various levels of precision and enable us to construct a more precise representation of the world. Rich multisensory simulations are thus beneficial for comprehension, memory reinforcement, or retention of information. Crossmodal mappings refer to the systematic associations often made between different sensory modalities (e.g., high pitch is matched with angular shapes) and govern multisensory processing. A great deal of research effort has been put into exploring cross-modal correspondences in the field of cognitive science. However, the possibilities they open in the digital world have been relatively unexplored. Multiple sensorial media (mulsemedia) provides a highly immersive experience to the users and enhances their Quality of Experience (QoE) in the digital world. Thus, we consider that studying the plasticity and the effects of cross-modal correspondences in a mulsemedia setup can bring interesting insights about improving the human-computer dialogue and experience. In our experiments, we exposed users to videos with certain visual dimensions (brightness, color, and shape), and we investigated whether the pairing with a cross-modal matching sound (high and low pitch) and the corresponding auto-generated vibrotactile effects (produced by a haptic vest) lead to an enhanced QoE. For this, we captured the eye gaze and the heart rate of users while experiencing mulsemedia, and we asked them to fill in a set of questions targeting their enjoyment and perception at the end of the experiment. Results showed differences in eye-gaze patterns and heart rate between the experimental and the control group, indicating changes in participants’ engagement when videos were accompanied by matching cross-modal sounds (this effect was the strongest for the video displaying angular shapes and high-pitch audio) and transitively generated cross-modal vibrotactile effects.

    QoE of cross-modally mapped Mulsemedia: an assessment using eye gaze and heart rate

    A great deal of research effort has been put into exploring crossmodal correspondences, which refer to the systematic associations frequently made between different sensory modalities (e.g., high pitch is matched with angular shapes), in the field of cognitive science. However, the possibilities cross-modality opens in the digital world have been relatively unexplored. Therefore, we consider that studying the plasticity and the effects of crossmodal correspondences in a mulsemedia setup can bring novel insights about improving the human-computer dialogue and experience. Mulsemedia refers to the combination of three or more senses to create immersive experiences. In our experiments, users were shown six video clips associated with certain visual features based on color, brightness, and shape. We examined whether the pairing with a crossmodal matching sound, the corresponding auto-generated haptic effect, and smell would lead to an enhanced user QoE. For this, we used an eye-tracking device as well as a heart rate monitor wristband to capture users’ eye gaze and heart rate whilst they were experiencing mulsemedia. After each video clip, we asked the users to complete an on-screen questionnaire with a set of questions related to smell, sound, and haptic effects, targeting their enjoyment and perception of the experiment. Accordingly, the eye gaze and heart rate results showed a significant influence of the cross-modally mapped multisensorial effects on the users’ QoE. Our results highlight that when the olfactory content is crossmodally congruent with the visual content, the visual attention of the users seems shifted towards the correspondent visual feature. Cross-modally matched media is also shown to result in an enhanced QoE compared to a video-only condition.

    A tutorial for olfaction-based multisensorial media application design and evaluation

    © ACM, 2017. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL50, ISS5, September 2017} https://doi.org/10.1145/310824

    Is Multimedia Multisensorial? - A Review of Mulsemedia Systems

    © 2018 Copyright held by the owner/author(s). Mulsemedia - multiple sensorial media - makes possible the inclusion of layered sensory stimulation and interaction through multiple sensory channels. The recent upsurge in technology and wearables provides mulsemedia researchers a vehicle for potentially boundless choice. However, in order to build systems that integrate various senses, there are still some issues that need to be addressed. This review deals with mulsemedia topics that remained insufficiently explored by previous work, with a focus on the multi-multi (multiple media - multiple senses) perspective, where multiple types of media engage multiple senses. Moreover, it addresses the evolution of previously identified challenges in this area and formulates new exploration directions. This article was funded by the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement no. 688503.

    Virtual Synaesthesia: Crossmodal Correspondences and Synesthetic Experiences

    As technology develops to allow for the integration of additional senses into interactive experiences, there is a need to bridge the divide between the real and the virtual in a manner that stimulates the five senses consistently and in harmony with the sensory expectations of the user. Applying the philosophy of synaesthesia, a neurological condition defined as the coupling of the senses, together with crossmodal correspondences, can provide numerous cognitive benefits and offers insight into which senses are most likely to be ‘bound’ together. This thesis presents a design paradigm called ‘virtual synaesthesia’; the goal of the paradigm is to make multisensory experiences more human-orientated by considering how the brain combines senses both in the general population (crossmodal correspondences) and within a select few individuals (natural synaesthesia). Towards this aim, a literature review is conducted covering the related areas of research encompassed by the concept of ‘virtual synaesthesia’: natural synaesthesia, crossmodal correspondences, multisensory experiences, and sensory substitution/augmentation. This thesis examines augmenting interactive and multisensory experiences with strong (natural synaesthesia) and weak (crossmodal correspondences) synaesthesia. It answers the following research questions: Is it possible to replicate the underlying cognitive benefits of odour-vision synaesthesia? Do people have consistent correspondences between olfaction and an aggregate of different sensory modalities? What is the nature and origin of these correspondences? And is it possible to predict the crossmodal correspondences attributed to odours? The benefits of augmenting a human-machine interface using an artificial form of odour-vision synaesthesia are explored to answer these questions.
This concept is exemplified by transducing odours using a custom-made electronic nose and transforming each odour’s ‘chemical footprint’ into a 2D abstract shape representing the current odour. Electronic noses transduce odours in the vapour phase, generating a series of electrical signals that represent the current odour source. Weak synaesthesia (crossmodal correspondences) is then investigated to determine whether people have consistent correspondences between odours and the angularity of shapes, the smoothness of texture, perceived pleasantness, pitch, and musical and emotional dimensions. Following on from this research, the nature and origin of these correspondences were explored using the underlying hedonic (values relating to pleasantness), semantic (knowledge of the identity of the odour), and physicochemical (the physical and chemical characteristics of the odour) dependencies. The final research chapter investigates the possibility of removing the bottleneck of conducting extensive human trials when determining the crossmodal correspondences towards specific odours, by developing machine learning models that predict the crossmodal perception of odours from their underlying physicochemical features. The work presented in this thesis provides insight into, and evidence of, the benefits of incorporating the concept of ‘virtual synaesthesia’ into human-machine interfaces, and contributes to research into the methodology embodied by ‘virtual synaesthesia’, namely crossmodal correspondences. Overall, the work shows potential for augmenting multisensory experiences with more refined capabilities, leading to more enriched experiences, better designs, and a more intuitive way to convey information crossmodally.
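    The odour-to-shape idea can be sketched as follows. This is a hypothetical mapping for illustration, not the thesis’s actual method: it assumes the electronic nose yields a vector of sensor responses, and lets each response perturb the radius of one polygon vertex so that different chemical footprints produce differently ‘spiky’ 2D shapes:

    ```python
    import math

    def odour_to_shape(sensor_readings, n_vertices=8):
        """Hypothetical sketch of an odour-to-shape mapping.

        sensor_readings: vector of electronic-nose sensor responses
                         (the odour's 'chemical footprint').
        Returns a list of (x, y) vertices of a 2D polygon whose vertex
        radii vary with the normalized sensor responses, so each odour
        yields a distinct abstract shape. The mapping is an assumption
        for illustration only.
        """
        n = len(sensor_readings)
        lo, hi = min(sensor_readings), max(sensor_readings)
        span = (hi - lo) or 1.0
        points = []
        for i in range(n_vertices):
            reading = sensor_readings[i % n]          # cycle through the sensors
            radius = 1.0 + (reading - lo) / span      # normalized radius in [1, 2]
            angle = 2 * math.pi * i / n_vertices      # evenly spaced vertex angles
            points.append((radius * math.cos(angle), radius * math.sin(angle)))
        return points
    ```

    Under this sketch, an odour whose sensor responses vary widely would produce an angular, spiky outline, while a flat response vector would produce a near-regular polygon, loosely echoing the angularity correspondences investigated in the thesis.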