
    A tutorial for olfaction-based multisensorial media application design and evaluation

    © ACM, 2017. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, Vol. 50, Iss. 5, September 2017, https://doi.org/10.1145/310824

    Using eye tracking and heart-rate activity to examine crossmodal correspondences QoE in Mulsemedia

    Different senses provide us with information at various levels of precision and enable us to construct a more precise representation of the world. Rich multisensory stimulation is thus beneficial for comprehension, memory reinforcement, and retention of information. Cross-modal correspondences refer to the systematic associations often made between different sensory modalities (e.g., high pitch is matched with angular shapes) and govern multisensory processing. A great deal of research effort has been put into exploring cross-modal correspondences in the field of cognitive science. However, the possibilities they open in the digital world remain relatively unexplored. Multiple sensorial media (mulsemedia) provides a highly immersive experience to users and enhances their Quality of Experience (QoE) in the digital world. We therefore consider that studying the plasticity and the effects of cross-modal correspondences in a mulsemedia setup can bring interesting insights about improving the human-computer dialogue and experience. In our experiments, we exposed users to videos with certain visual dimensions (brightness, color, and shape), and we investigated whether pairing them with a cross-modally matching sound (high or low pitch) and the corresponding auto-generated vibrotactile effects (produced by a haptic vest) leads to an enhanced QoE. For this, we captured the eye gaze and the heart rate of users while they experienced mulsemedia, and we asked them to fill in a set of questions targeting their enjoyment and perception at the end of the experiment. Results showed differences in eye-gaze patterns and heart rate between the experimental and the control group, indicating changes in participants' engagement when videos were accompanied by matching cross-modal sounds (this effect was strongest for the video displaying angular shapes and high-pitch audio) and transitively generated cross-modal vibrotactile effects.
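The abstract's "transitively generated" vibrotactile effects (audio pitch driving a haptic vest, mirroring the high-pitch/angular-shape correspondence) can be illustrated with a minimal sketch. The function name, parameter ranges, and mapping curve below are assumptions for illustration, not the authors' actual pipeline:

```python
# Hypothetical sketch: a cross-modal mapping from audio pitch to
# haptic-vest drive parameters, such that higher pitch yields a
# stronger, faster vibration. All constants are illustrative.

def pitch_to_vibration(pitch_hz, low=100.0, high=2000.0):
    """Map an audio pitch (Hz) to a normalized vibrotactile
    intensity and an actuator drive frequency (Hz)."""
    # Clamp the pitch into [low, high] and normalize to [0, 1].
    t = (min(max(pitch_hz, low), high) - low) / (high - low)
    intensity = 0.2 + 0.8 * t   # never fully off; scales with pitch
    drive_hz = 50 + 200 * t     # assumed actuator band: 50-250 Hz
    return intensity, drive_hz

# A low-pitch tone drives a gentle, slow vibration;
# a high-pitch tone drives a strong, fast one.
low_i, low_f = pitch_to_vibration(100.0)
high_i, high_f = pitch_to_vibration(2000.0)
```

In practice such a mapping would be computed per audio frame (e.g., from a pitch tracker) and streamed to the vest's actuators; the linear ramp here is only the simplest choice.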

    Immersive Reading: Impact of Sound and Haptic Stimuli on the Reader's Emotional Response

    When we read, our concentration deepens and can transport us into the universe of the book, giving us a feeling of presence in it. Augmented reality can enhance the reading experience, intensifying what the reader feels and enabling a greater sense of presence, which can bring real benefits to readers. This dissertation develops a multimodal augmented reality reading application intended to improve the reading experience. In addition to the proposed application, three stories will be developed and used in a study with users to investigate the impact of different multisensory configurations on readers' quality of experience and affective states. Alongside the multisensory configurations, reader gender will be a second independent variable.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.

    Mobile five senses augmented reality system: technology acceptance study

    The application of the most recent technologies is fundamental to adding value to tourism experiences, as well as to other economic sectors. The Mobile Five Senses Augmented Reality (M5SAR) system is a mobile guide for cultural, historical, and museum events. To deliver the proclaimed five senses, the system has two main modules: (i) a mobile application which deals mainly with the senses of sight and hearing, using the mobile device's camera to recognize and track (museum) objects on the fly and provide related information about them; and (ii) a portable device, attached to the user's smartphone or tablet, capable of extending the augmented reality (AR) experience to the full five senses through the stimulation of touch, taste, and smell. This paper briefly presents the system's architecture, but its main focus is the analysis of users' acceptance of this technology, namely the AR (software) application and its integration with the (hardware) device to achieve five-senses AR. Results show that social influence, effort expectancy, and facilitating conditions are the key constructs that drive users to accept M5SAR's technology. Funding: Portuguese Foundation for Science and Technology (FCT), projects LARSyS UID/EEA/50009/2019, CIAC UID/Multi/04019/2019, CinTurs UID/SOC/04020/2019, and CEFAGE UID/ECO/04007/2019; Project M5SAR I&DT 3322, CRESC ALGARVE2020, PORTUGAL2020, European Union (EU).

    Social touch in human–computer interaction

    Touch is our primary non-verbal communication channel for conveying intimate emotions and, as such, is essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT-mediated or -generated touch as an intuitive way of social communication is further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience, with a focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT-mediated or -generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to convey affective information more effectively. However, this research field at the crossroads of ICT and psychology is still embryonic, and we identify several topics that can help to mature it: establishing an overarching theoretical framework, employing better research methodologies, developing basic social touch building blocks, and solving specific ICT challenges.

    Digitizing the chemical senses: possibilities & pitfalls

    Many people are understandably excited by the suggestion that the chemical senses can be digitized; be it to deliver ambient fragrances (e.g., in virtual reality or health-related applications), or else to transmit flavour experiences via the internet. However, to date, progress in this area has been surprisingly slow. Furthermore, the majority of attempts at commercialization have failed, often in the face of consumer ambivalence over the perceived benefits/utility. In this review, with the focus squarely on the domain of Human-Computer Interaction (HCI), we summarize the state of the art in the area. We highlight the key possibilities and pitfalls as far as stimulating the so-called 'lower' senses of taste, smell, and the trigeminal system are concerned. Ultimately, we suggest that mixed reality solutions are currently the most plausible as far as delivering (or rather modulating) flavour experiences digitally is concerned. The key problems with digital fragrance delivery relate to attention and attribution. People often fail to detect fragrances when they are concentrating on something else; and even when they detect that their chemical senses have been stimulated, there is always a danger that they attribute their experience (e.g., pleasure) to one of the other senses; this is what we call 'the fundamental attribution error'. We conclude with an outlook on digitizing the chemical senses and summarize a set of open-ended questions that the HCI community must address in future explorations of smell and taste as interaction modalities.