289 research outputs found

    Multimodal assessment of emotional responses by physiological monitoring: novel auditory and visual elicitation strategies in traditional and virtual reality environments

    This doctoral thesis explores novel strategies for quantifying emotions and listening effort by monitoring physiological signals. Emotions are a complex aspect of the human experience, playing a crucial role in our survival and adaptation to the environment. The study of emotions enables important applications, such as Human-Computer and Human-Robot Interaction, as well as the clinical assessment and treatment of mental health conditions such as depression, anxiety, stress, chronic anger, and mood disorders. Listening effort is also an important area of study, as it provides insight into listeners' challenges that traditional audiometric measures usually fail to identify. The research is divided into three lines of work, each with a distinct emphasis on the methods of emotion elicitation and on the stimuli most effective in producing emotional responses, with a specific focus on auditory stimuli. The research led to the creation of three experimental protocols, as well as the use of an available online protocol, for studying emotional responses while monitoring both peripheral and central physiological signals, such as skin conductance, respiration, pupil dilation, electrocardiogram, blood volume pulse, and electroencephalography. A dedicated protocol was created for the study of listening effort, using a speech-in-noise test designed to be short and to avoid inducing fatigue. The results revealed that listening effort is a complex phenomenon that cannot be studied with a univariate approach, necessitating multiple physiological markers to capture its different physiological dimensions. Specifically, the findings demonstrate a strong association between the level of auditory exertion and the amount of attention and involvement directed towards the stimuli, with readily comprehensible stimuli differing markedly from those demanding greater exertion.
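The multivariate point above can be illustrated with a minimal sketch: instead of testing a single signal, several physiological markers are standardised and fused into one effort indicator. The marker names, the toy values, and the equal weighting are illustrative assumptions, not the thesis's actual analysis.

```python
# Hypothetical sketch: fusing several physiological markers into one
# multivariate "listening effort" index rather than relying on a
# single (univariate) signal. Marker names and weights are illustrative.
from statistics import mean, stdev

def zscore(values):
    """Standardise a list of per-trial measurements to zero mean, unit std."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def effort_index(markers):
    """Average the z-scored markers into a single per-trial index.

    markers: dict mapping marker name (e.g. 'skin_conductance',
    'pupil_dilation', 'heart_rate') to a list of per-trial values.
    """
    standardised = [zscore(vals) for vals in markers.values()]
    n_trials = len(next(iter(standardised)))
    return [mean(z[i] for z in standardised) for i in range(n_trials)]

# Toy data: three markers recorded over four trials of rising difficulty.
markers = {
    "skin_conductance": [0.1, 0.2, 0.4, 0.8],
    "pupil_dilation":   [3.0, 3.1, 3.5, 4.0],
    "heart_rate":       [62, 64, 70, 78],
}
index = effort_index(markers)
print(index)  # rises across trials, tracking all three markers at once
```

Because each marker is standardised before averaging, no single signal dominates the index, which is the practical advantage of a multivariate view over a univariate one.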
Continuing in the auditory domain, peripheral physiological signals were studied in order to discriminate four emotions elicited in a subject who listened to music for 21 days, using a previously designed and publicly available protocol. Surprisingly, the processed physiological signals clearly separated the four emotions at the physiological level, demonstrating that music, which is not studied extensively in the literature, can be an effective stimulus for eliciting emotions. Following these results, a flat-screen protocol was created to compare physiological responses to purely visual, purely auditory, and combined audiovisual emotional stimuli. The results show that auditory stimuli are more effective at separating emotions at the physiological level, and the subjects were found to be much more attentive during the audio-only phase. To overcome the limitations of emotional protocols carried out in a laboratory environment, which may elicit weaker emotions because it is an unnatural setting for the subjects under study, a final emotional elicitation protocol was created using virtual reality. Scenes resembling reality were created to elicit four distinct emotions. At the physiological level, this environment proved more effective in eliciting emotions. To our knowledge, this is the first protocol specifically designed for virtual reality that elicits diverse emotions. Furthermore, in terms of classification as well, virtual reality has been shown to be superior to traditional flat-screen protocols, opening the door to virtual reality for the study of conditions related to emotional control.
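The abstract does not specify which classifiers were used to separate the four emotions, so the following is only a generic sketch of the idea: per-emotion feature vectors derived from peripheral signals, classified with a simple nearest-centroid rule. The feature choices and toy values are assumptions for illustration.

```python
# Minimal sketch of discriminating four elicited emotions from
# peripheral-signal feature vectors with a nearest-centroid rule.
# Features and values are illustrative, not the thesis's method.
from statistics import mean

def centroids(training):
    """training: dict emotion -> list of feature vectors (equal length)."""
    return {
        emo: [mean(v[i] for v in vecs) for i in range(len(vecs[0]))]
        for emo, vecs in training.items()
    }

def classify(cents, x):
    """Return the emotion whose centroid is closest to vector x."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(cents, key=lambda emo: dist(cents[emo]))

# Toy feature vectors: [skin_conductance_mean, heart_rate_mean]
training = {
    "happy":   [[0.6, 75], [0.7, 78]],
    "sad":     [[0.2, 60], [0.3, 62]],
    "fear":    [[0.9, 90], [1.0, 95]],
    "neutral": [[0.4, 68], [0.5, 70]],
}
cents = centroids(training)
print(classify(cents, [0.95, 92]))  # → fear
```

In practice the features would be scaled first (as in the effort-index sketch), since heart rate here numerically dwarfs skin conductance; the toy data is chosen so the rule still works unscaled.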

    Melody Informatics: Computational Approaches to Understanding the Relationships Between Human Affective Reasoning and Music

    Music is a powerful and complex medium that allows people to express their emotions while enhancing focus and creativity. It is a universal medium that can elicit strong emotions in people, regardless of their gender, age, or cultural background. Music is all around us, whether in the sound of raindrops, birds chirping, or a popular song played as we walk along an aisle in a supermarket. Music can also significantly help us regain focus while performing a number of different tasks. The relationship between music stimuli and humans has been of particular interest due to music's multifaceted effects on the human brain and body. While music can have an anticonvulsant effect on people's bodily signals and act as a therapeutic stimulus, it can also have proconvulsant effects, such as triggering epileptic seizures. It is also unclear what types of music can help improve focus during other activities. Although studies have recognised the effects of music on human physiology, research has yet to systematically investigate the effects of different genres of music on human emotion, and how these effects correlate with subjective and physiological responses. The research set out in this thesis takes a human-centric computational approach to understanding how human affective (emotional) reasoning is influenced by sensory input, particularly music. Several user studies were designed to collect physiological data from participants while they interacted with different stimuli. The physiological signals considered are: electrodermal activity (EDA), blood volume pulse (BVP), skin temperature (ST), pupil dilation (PD), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS). Several computational approaches are proposed, including traditional machine learning methods combined with feature selection, which can effectively identify patterns in small- to medium-scale physiological feature sets.
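The "traditional machine learning with feature selection" pipeline mentioned above can be sketched in miniature: extract summary statistics from short signal windows, then rank the features with a filter-style selector. The feature names, the correlation criterion, and the toy labels are assumptions for illustration, not the thesis's actual method.

```python
# Illustrative sketch: simple statistical features from physiological
# windows, ranked by a basic correlation-based (filter) selector.
from statistics import mean, stdev

def extract_features(window):
    """Summary statistics for one signal window (e.g. an EDA segment)."""
    return {
        "mean": mean(window),
        "std": stdev(window),
        "range": max(window) - min(window),
    }

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def rank_features(samples, labels):
    """Rank features by |correlation| with a numeric emotion label."""
    names = samples[0].keys()
    scores = {n: abs(pearson([s[n] for s in samples], labels)) for n in names}
    return sorted(scores, key=scores.get, reverse=True)

# Toy data: four windows labelled low (0) vs high (1) arousal.
windows = [[0.1, 0.1, 0.2], [0.2, 0.3, 0.2], [0.5, 0.9, 0.7], [0.8, 1.2, 1.0]]
labels = [0, 0, 1, 1]
samples = [extract_features(w) for w in windows]
print(rank_features(samples, labels))
```

On small feature sets like this, such filter methods are cheap and interpretable, which is presumably why they suit the small- to medium-scale setting the abstract describes.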
A novel data visualisation approach called "Gingerbread Animation" is proposed, which converts physiological signals into images compatible with transfer learning methods. A novel stacked-ensemble deep learning model is also proposed to analyse large-scale physiological datasets. At the beginning of this research, two user studies were designed to collect physiological signals from people interacting with visual stimuli. The computational models showed high efficacy in detecting people's emotional reactions. These results motivated a third user study, in which the visual stimuli were combined with music stimuli. The results from that study showed a decline in recognition accuracy compared to the previous study. These three studies also yielded a key insight: people's physiological responses provide a stronger indicator of their emotional state than their verbal statements. Based on the outcomes of the first three user studies, three further studies were carried out to examine people's physiological responses to music stimuli alone, investigating three different music genres: classical, instrumental, and pop. Results from the studies showed that human emotion correlates strongly with different types of music, and that these correlations can be identified computationally from physiological responses. Findings from this research could motivate advanced wearable technologies, such as smartwatches or smart headphones, that provide personalised music recommendations based on an individual's physiological state. The computational approaches can also be used to distinguish music by its positive or negative effect on human mental health. The work can enhance existing music therapy techniques and lead to improvements in various areas of medical and affective computing research.
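The details of "Gingerbread Animation" are not given in this abstract, so the following is only a generic sketch of the underlying idea it names: rasterising a 1-D physiological signal into a fixed-size 2-D grid so that image-based transfer learning models can consume it. The grid size and amplitude-to-row mapping are illustrative assumptions.

```python
# Generic sketch (NOT the actual Gingerbread Animation method): map a
# 1-D signal onto a fixed-size 2-D raster, one set pixel per time column.

def signal_to_image(signal, width=8, height=8):
    """Rasterise a 1-D signal onto a height x width binary grid."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0  # avoid division by zero for flat signals
    image = [[0] * width for _ in range(height)]
    for col in range(width):
        # Pick the sample that falls in this time column.
        idx = col * (len(signal) - 1) // (width - 1)
        row = int((signal[idx] - lo) / span * (height - 1))
        image[height - 1 - row][col] = 1  # flip so the y-axis points up
    return image

# Toy ramp signal: rasterises to a rising diagonal of set pixels.
img = signal_to_image([i / 15 for i in range(16)])
for r in img:
    print("".join("#" if v else "." for v in r))
```

A real pipeline would emit grayscale or RGB arrays at the input resolution of the pretrained network (e.g. 224x224) rather than a binary 8x8 grid, but the signal-to-image framing is the part that makes transfer learning applicable.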