
    Mulsemedia: State of the art, perspectives, and challenges

    Mulsemedia (multiple sensorial media) captures a wide variety of research efforts and applications. This article presents a historical perspective on mulsemedia work and reviews current developments in the area. These take place across the traditional multimedia spectrum, from virtual reality applications to computer games, as well as in efforts in the arts, gastronomy, and therapy, to mention a few. We also describe standardization efforts via the MPEG-V standard, and identify future developments and exciting challenges the community needs to overcome.
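    To make the idea of sensory-effect annotation more concrete, the following minimal Python sketch models a toy timeline of sensory effects keyed to media timestamps, loosely in the spirit of what MPEG-V sensory effect metadata describes; the class and field names are illustrative assumptions, not the standard's schema.

        # Toy timeline of sensory effects attached to media timestamps.
        # Names and values are illustrative only; this is not the MPEG-V schema.
        from dataclasses import dataclass

        @dataclass
        class SensoryEffect:
            start_ms: int      # media time at which the effect begins
            duration_ms: int   # how long the effect is rendered
            kind: str          # e.g. "light", "wind", "vibration", "scent"
            intensity: float   # normalized intensity in [0, 1]

        timeline = [
            SensoryEffect(0,    2000, "light",     0.8),  # bright opening scene
            SensoryEffect(1500, 1000, "wind",      0.5),  # gust alongside on-screen motion
            SensoryEffect(4000,  500, "vibration", 1.0),  # impact moment
        ]

        def effects_at(t_ms: int):
            """Return the effects that should be active at media time t_ms."""
            return [e for e in timeline
                    if e.start_ms <= t_ms < e.start_ms + e.duration_ms]

        print(effects_at(1800))  # light and wind are both active at 1.8 s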

    Mulsemedia in Special Education: A Novel Teaching Approach for the Next Generation

    Technology-enhanced learning settings are changing quickly and in complex ways in the contemporary digital era, making it possible for students with disabilities to learn more effectively than before. The term combines "multisensory" and "media", indicating that this approach incorporates several sensory modalities in educational media to improve learning experiences for children with disabilities. It can entail integrating visual, aural, tactile, and kinesthetic elements to meet various learning requirements and styles. This article examines how mulsemedia, one of these cutting-edge technologies, enhances learning methodologies, improving the teaching and learning of unique pedagogies and emphasizing teaching- and learning-related modules for students with special needs. The researchers used a qualitative research design to review ten papers based on secondary data connected to mulsemedia in special education as a novel teaching approach for the next generation. The article also describes the extremely encouraging outcomes of case studies conducted with engineering students with disabilities in several schools in India. This critical review finds that mulsemedia-enhanced instruction significantly improves the learning experiences of students with special needs and their ability to learn new information for future development.

    Using eye tracking and heart-rate activity to examine crossmodal correspondences QoE in Mulsemedia

    Different senses provide us with information of various levels of precision and enable us to construct a more precise representation of the world. Rich multisensory simulations are thus beneficial for comprehension, memory reinforcement, or retention of information. Crossmodal mappings refer to the systematic associations often made between different sensory modalities (e.g., high pitch is matched with angular shapes) and govern multisensory processing. A great deal of research effort has been put into exploring cross-modal correspondences in the field of cognitive science. However, the possibilities they open in the digital world have been relatively unexplored. Multiple sensorial media (mulsemedia) provides a highly immersive experience to the users and enhances their Quality of Experience (QoE) in the digital world. Thus, we consider that studying the plasticity and the effects of cross-modal correspondences in a mulsemedia setup can bring interesting insights about improving the human-computer dialogue and experience. In our experiments, we exposed users to videos with certain visual dimensions (brightness, color, and shape), and we investigated whether the pairing with a cross-modal matching sound (high and low pitch) and the corresponding auto-generated vibrotactile effects (produced by a haptic vest) leads to an enhanced QoE. For this, we captured the eye gaze and the heart rate of users while experiencing mulsemedia, and we asked them to fill in a set of questions targeting their enjoyment and perception at the end of the experiment. Results showed differences in eye-gaze patterns and heart rate between the experimental and the control group, indicating changes in participants’ engagement when videos were accompanied by matching cross-modal sounds (this effect was the strongest for the video displaying angular shapes and high-pitch audio) and transitively generated cross-modal vibrotactile effects.
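    As a rough illustration of the kind of group comparison the abstract reports (not the authors' actual analysis pipeline), a Python sketch comparing mean heart rate between an experimental and a control group could look like this; the numbers are invented purely for illustration.

        # Hypothetical per-participant mean heart rates (bpm); synthetic data only.
        import numpy as np
        from scipy import stats

        experimental = np.array([72.4, 75.1, 78.3, 74.0, 76.8, 73.5])
        control      = np.array([70.2, 71.0, 69.8, 72.5, 70.9, 71.4])

        # Welch's t-test: does mean heart rate differ between the two groups?
        t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
        print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")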

    QoE of cross-modally mapped Mulsemedia: an assessment using eye gaze and heart rate

    Crossmodal correspondences, the systematic associations frequently made between different sensory modalities (e.g., high pitch is matched with angular shapes), have attracted a great deal of research effort in the field of cognitive science. However, the possibilities cross-modality opens in the digital world have been relatively unexplored. Therefore, we consider that studying the plasticity and the effects of crossmodal correspondences in a mulsemedia setup can bring novel insights about improving the human-computer dialogue and experience. Mulsemedia refers to the combination of three or more senses to create immersive experiences. In our experiments, users were shown six video clips associated with certain visual features based on color, brightness, and shape. We examined whether the pairing with a crossmodal matching sound, the corresponding auto-generated haptic effect, and smell would lead to an enhanced user QoE. For this, we used an eye-tracking device as well as a heart-rate monitor wristband to capture users’ eye gaze and heart rate whilst they were experiencing mulsemedia. After each video clip, we asked the users to complete an on-screen questionnaire with a set of questions related to smell, sound, and haptic effects targeting their enjoyment and perception of the experiment. Accordingly, the eye gaze and heart rate results showed a significant influence of the cross-modally mapped multisensorial effects on the users’ QoE. Our results highlight that when the olfactory content is crossmodally congruent with the visual content, the visual attention of the users seems shifted towards the corresponding visual feature. Crossmodally matched media is also shown to result in an enhanced QoE compared to a video-only condition.
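    One way to picture the "auto-generated haptic effect" mentioned above is a simple crossmodal mapping from audio pitch to vibration intensity (higher pitch, stronger vibration). The sketch below is a hypothetical mapping for illustration, not the generator used in the study.

        def pitch_to_vibration(pitch_hz: float,
                               low_hz: float = 100.0,
                               high_hz: float = 2000.0) -> float:
            """Map a pitch in Hz to a normalized vibration intensity in [0, 1]."""
            clamped = min(max(pitch_hz, low_hz), high_hz)
            return (clamped - low_hz) / (high_hz - low_hz)

        print(pitch_to_vibration(220.0))   # low pitch  -> weak vibration (~0.06)
        print(pitch_to_vibration(1760.0))  # high pitch -> strong vibration (~0.87)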

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area and to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.
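    The core mediasync problem the handbook addresses can be sketched in a few lines: measure the skew between two streams' presentation timestamps and delay whichever stream is running ahead until both fall within a synchronization threshold. The threshold and timestamps below are assumed values for illustration, not figures from the book.

        SYNC_THRESHOLD_MS = 40.0  # assumed tolerance; real systems tune this per modality

        def playout_delay(video_pts_ms: float, haptic_pts_ms: float) -> tuple[str, float]:
            """Return which stream to delay, and by how much, to realign presentation."""
            skew = video_pts_ms - haptic_pts_ms
            if abs(skew) <= SYNC_THRESHOLD_MS:
                return ("none", 0.0)
            return ("video", skew) if skew > 0 else ("haptic", -skew)

        print(playout_delay(1200.0, 1100.0))  # video runs 100 ms ahead -> delay video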

    Using Natural Language Processing and Artificial Intelligence to Explore the Nutrition and Sustainability of Recipes and Food

    Copyright © 2021 van Erp, Reynolds, Maynard, Starke, Ibáñez Martín, Andres, Leite, Alvarez de Toledo, Schmidt Rivera, Trattner, Brewer, Adriano Martins, Kluczkovski, Frankowska, Bridle, Levy, Rauber, Tereza da Silva and Bosma. In this paper, we discuss the use of natural language processing and artificial intelligence to analyze nutritional and sustainability aspects of recipes and food. We present the state of the art and some use cases, followed by a discussion of challenges. Our perspective on addressing these is that while they typically have a technical nature, they nevertheless require an interdisciplinary approach combining natural language processing and artificial intelligence with expert domain knowledge to create practical tools and comprehensive analysis for the food domain.
    Funding: Research Councils UK, the University of Manchester, the University of Sheffield, the STFC Food Network+ and the HEFCE Catalyst-funded N8 AgriFood Resilience Programme with matched funding from the N8 group of Universities; the AHRC-funded US-UK Food Digital Scholarship Network (Grant Reference: AH/S012591/1); the STFC GCRF-funded project "Trends in greenhouse gas emissions from Brazilian foods using GGDOT" (ST/S003320/1); the STFC-funded project "Piloting Zooniverse for food, health and sustainability citizen science" (ST/T001410/1); the STFC Food Network+ Awarded Scoping Project "Piloting Zooniverse to help us understand citizen food perceptions"; ESRC via the University of Sheffield Social Sciences Partnerships, Impact and Knowledge Exchange fund for "Recipe environmental impact calculator"; Research England via the University of Sheffield QR Strategic Priorities Fund projects "Cooking as part of a Sustainable Food System – creating a wider evidence base for policy makers" and "Food based citizen science in the UK as a policy tool"; the N8 AgriFood-funded project "Greenhouse Gas and Dietary choices Open-source Toolkit (GGDOT) hacknights"; the Brunel University internal Research England GCRF QR Fund; The University of Manchester GCRF QR Visiting Researcher Fellowship; and the National Institute of Informatics, Japan.
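    To give a flavour of the recipe-analysis use case discussed in the paper, the sketch below parses free-text ingredient lines and sums calories from a tiny lookup table; the regular expression, table values, and line format are simplified assumptions rather than the paper's method.

        import re

        # Tiny illustrative table: kcal per 100 g (approximate, made-up values).
        KCAL_PER_100G = {"flour": 364, "sugar": 387, "butter": 717}

        LINE_RE = re.compile(r"(?P<qty>\d+)\s*g\s+(?P<name>[a-z ]+)", re.IGNORECASE)

        def recipe_kcal(ingredient_lines):
            """Sum calories for lines like '200 g flour' using the lookup table."""
            total = 0.0
            for line in ingredient_lines:
                m = LINE_RE.search(line)
                if not m:
                    continue  # real systems need far richer ingredient parsing
                name = m.group("name").strip().lower()
                qty_g = float(m.group("qty"))
                total += KCAL_PER_100G.get(name, 0) * qty_g / 100.0
            return total

        print(recipe_kcal(["200 g flour", "100 g sugar", "50 g butter"]))  # ~1473.5 kcal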