
    Using eye tracking and heart-rate activity to examine crossmodal correspondences QoE in Mulsemedia

    Different senses provide us with information at various levels of precision and enable us to construct a more accurate representation of the world. Rich multisensory stimulation is thus beneficial for comprehension, memory reinforcement, and retention of information. Crossmodal mappings refer to the systematic associations often made between different sensory modalities (e.g., high pitch is matched with angular shapes) and govern multisensory processing. A great deal of research effort has been put into exploring crossmodal correspondences in the field of cognitive science. However, the possibilities they open in the digital world have remained relatively unexplored. Multiple sensorial media (mulsemedia) provides a highly immersive experience to users and enhances their Quality of Experience (QoE) in the digital world. Thus, we consider that studying the plasticity and effects of crossmodal correspondences in a mulsemedia setup can bring interesting insights into improving human-computer dialogue and experience. In our experiments, we exposed users to videos with certain visual dimensions (brightness, color, and shape), and we investigated whether the pairing with a crossmodal matching sound (high or low pitch) and the corresponding auto-generated vibrotactile effects (produced by a haptic vest) leads to an enhanced QoE. For this, we captured the eye gaze and heart rate of users while they experienced mulsemedia, and we asked them to fill in a set of questions targeting their enjoyment and perception at the end of the experiment. Results showed differences in eye-gaze patterns and heart rate between the experimental and control groups, indicating changes in participants’ engagement when videos were accompanied by matching crossmodal sounds (this effect was strongest for the video displaying angular shapes and high-pitch audio) and transitively generated crossmodal vibrotactile effects.

    QoE of cross-modally mapped Mulsemedia: an assessment using eye gaze and heart rate

    A great deal of research effort has been put into exploring crossmodal correspondences in the field of cognitive science; these refer to the systematic associations frequently made between different sensory modalities (e.g., high pitch is matched with angular shapes). However, the possibilities crossmodality opens in the digital world have remained relatively unexplored. Therefore, we consider that studying the plasticity and effects of crossmodal correspondences in a mulsemedia setup can bring novel insights into improving human-computer dialogue and experience. Mulsemedia refers to the combination of three or more senses to create immersive experiences. In our experiments, users were shown six video clips associated with certain visual features based on color, brightness, and shape. We examined whether pairing these with a crossmodal matching sound, the corresponding auto-generated haptic effect, and smell would lead to an enhanced user QoE. For this, we used an eye-tracking device as well as a heart-rate monitor wristband to capture users’ eye gaze and heart rate whilst they were experiencing mulsemedia. After each video clip, we asked the users to complete an on-screen questionnaire with a set of questions related to smell, sound, and haptic effects, targeting their enjoyment and perception of the experiment. The eye-gaze and heart-rate results showed a significant influence of the crossmodally mapped multisensorial effects on the users’ QoE. Our results highlight that when the olfactory content is crossmodally congruent with the visual content, the visual attention of the users shifts towards the corresponding visual feature. Crossmodally matched media is also shown to result in an enhanced QoE compared to a video-only condition.

    USING VIRTUAL REALITY TO INVESTIGATE ‘PROTEAN’ ANTI-PREDATOR BEHAVIOUR

    Prey animals have evolved a wide variety of behaviours to combat the threat of predation, many of which have received considerable empirical and theoretical attention and are generally well understood in terms of their function and mechanistic underpinning. However, one of the most commonly observed and taxonomically widespread anti-predator behaviours of all has, remarkably, received almost no experimental investigation: so-called ‘protean’ behaviour. This is defined as ‘behaviour that is sufficiently unpredictable to prevent a predator anticipating in detail the future position or actions of its prey’. In this thesis, I have elucidated the mechanisms that allow protean behaviour to be an effective anti-predator response. This was explored with two approaches: firstly, through the novel and extremely timely use of virtual reality to allow human ‘predators’ to attack and chase virtual prey in three dimensions from a first-person perspective, thereby bringing the realism that has been missing from previous studies of predator-prey dynamics; and secondly, through the three-dimensional tracking of protean behaviour in a highly tractable model species, the painted lady butterfly (Vanessa cardui). I explored this phenomenon in multiple contexts. First, I simulated individual protean prey and explored the effects of unpredictability in their movement rules on the targeting accuracy of human ‘predators’ in virtual reality. Next, I examined the concept of ‘protean insurance’ via digitised movements of the painted lady butterfly, exploring the qualities of this animal’s movement paths in relation to human targeting ability. I then explored how the dynamics of animal groupings affected protean movement. Specifically, I investigated how increasing movement-path complexity interacted with the well-documented ‘confusion effect’. I explored this question using both an experimental study and a VR citizen-science game disseminated to the general public via the video-game digital distribution service ‘Steam’. Subsequently, I explored another phenomenon associated with groupings of prey items: the ‘oddity effect’, which describes the preferential targeting of phenotypically odd individuals by predators. Typically, this phenomenon is associated with oddity of colouration or size. In this case, I investigated whether oddity of protean movement patterns relative to other group members could induce a ‘behavioural oddity effect’. Finally, I used a specialised genetic algorithm (GA) driven by human performance at targeting prey items, and investigated the emergent protean movement paths that resulted from sustained predation pressure from humans. Specifically, I examined the qualities of the fittest movement paths with respect to control evolutions that were not under the selection pressure of human performance (randomised evolution). In the course of this thesis, I have gained a deeper understanding of a near-ubiquitous component of predator-prey interactions that has until recently been the subject of little empirical study. These findings provide important insights into the understudied phenomenon of protean movement, which are directly applicable to predator-prey dynamics within a broad range of taxa.
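    The human-driven genetic algorithm described in this abstract cannot be reproduced without human participants, but its general structure can be sketched. The following is a minimal, hypothetical illustration in Python, not the thesis’s actual method: escape paths are encoded as sequences of turn angles, and human targeting error is stood in for by a simple linear-extrapolation “predator”, so the fitness function, parameters, and truncation-selection scheme are all assumptions made for the sake of the example.

    ```python
    import random
    import math

    random.seed(42)   # deterministic run for illustration

    PATH_LEN = 20     # turn decisions per escape path
    POP_SIZE = 30
    GENERATIONS = 40
    MUT_RATE = 0.2

    def random_genome():
        # A genome is a sequence of turn angles (radians), one per step.
        return [random.uniform(-math.pi, math.pi) for _ in range(PATH_LEN)]

    def path_points(genome, step=1.0):
        # Unroll the turn angles into 2D coordinates.
        x = y = heading = 0.0
        pts = [(x, y)]
        for turn in genome:
            heading += turn
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            pts.append((x, y))
        return pts

    def fitness(genome):
        # Stand-in for human targeting error: a predator that predicts the
        # next position by linearly extrapolating the last two points.
        # Fitness is the mean prediction error (higher = less predictable).
        pts = path_points(genome)
        err = 0.0
        for i in range(2, len(pts)):
            px = 2 * pts[i - 1][0] - pts[i - 2][0]
            py = 2 * pts[i - 1][1] - pts[i - 2][1]
            err += math.hypot(pts[i][0] - px, pts[i][1] - py)
        return err / (len(pts) - 2)

    def crossover(a, b):
        # Single-point crossover between two parent genomes.
        cut = random.randrange(1, PATH_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome):
        # Resample each gene with probability MUT_RATE.
        return [random.uniform(-math.pi, math.pi) if random.random() < MUT_RATE else g
                for g in genome]

    def evolve():
        pop = [random_genome() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:POP_SIZE // 2]        # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    ```

    In the thesis, `fitness` would instead be derived from recorded human targeting performance in VR, and the randomised control evolutions would replace selection with random survival while keeping the same crossover and mutation operators.
    
    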