65 research outputs found

    Analyzing EEG patterns in young adults exposed to different acrophobia levels: a VR study

    Get PDF
    Introduction: The primary objective of this research is to examine acrophobia, a widely prevalent and severe phobia characterized by an overwhelming dread of heights, which affects a significant proportion of individuals worldwide. Our study aimed to develop a real-time, precise instrument for evaluating levels of acrophobia by utilizing electroencephalogram (EEG) signals.
    Methods: EEG data were gathered from a sample of 18 individuals diagnosed with acrophobia. A range of classifiers, namely Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), Random Forest (RF), Decision Tree (DT), AdaBoost, Linear Discriminant Analysis (LDA), Convolutional Neural Network (CNN), and Artificial Neural Network (ANN), were then employed in the analysis, encompassing both machine learning (ML) and deep learning (DL) techniques.
    Results: The CNN and ANN models demonstrated notable efficacy: the CNN achieved a training accuracy of 96% and a testing accuracy of 99%, whereas the ANN attained a training accuracy of 96% and a testing accuracy of 97%. These findings highlight the effectiveness of the proposed methodology in accurately categorizing real-time degrees of acrophobia from EEG data. Further investigation using correlation matrices for each level of acrophobia revealed substantial connections among EEG frequency bands: Beta and Gamma mean values correlated strongly, suggesting that cognitive arousal and acrophobic involvement could synchronize activity, and Beta and Gamma activity correlated strongly with acrophobia, especially at higher levels.
    Discussion: The results underscore the promise of this innovative approach as a dependable and sophisticated method for evaluating acrophobia. This methodology has the potential to make a substantial contribution toward the comprehension and assessment of acrophobia, thereby facilitating the development of more individualized and efficacious therapeutic interventions.
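
    As a rough illustration of the classifier comparison described above, the sketch below trains the shallow-learning part of the suite (SVC, KNN, RF, DT, AdaBoost, LDA) with scikit-learn; the CNN and ANN are omitted for brevity. The band-power feature matrix, the three acrophobia levels, and the 80/20 split are placeholders, not the authors' data or pipeline.

        # Minimal sketch of the ML comparison; all data below is synthetic.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(540, 40))    # placeholder: 540 epochs x 40 band-power features
        y = rng.integers(0, 3, size=540)  # placeholder: 3 acrophobia levels

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)
        scaler = StandardScaler().fit(X_train)

        models = {
            "SVC": SVC(),
            "KNN": KNeighborsClassifier(),
            "RF": RandomForestClassifier(),
            "DT": DecisionTreeClassifier(),
            "AdaBoost": AdaBoostClassifier(),
            "LDA": LinearDiscriminantAnalysis(),
        }
        for name, model in models.items():
            model.fit(scaler.transform(X_train), y_train)
            acc = accuracy_score(y_test, model.predict(scaler.transform(X_test)))
            print(f"{name}: test accuracy = {acc:.2f}")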

    Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing

    Full text link
    Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments in controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential for transversal impact in many research areas, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues and provide guidelines for future research. This research was funded by the European Commission, grant number H2020-825585 HELIOS. Marín-Morales, J.; Llinares Millán, M. D. C.; Guixeres Provinciale, J.; Alcañiz Raya, M. L. (2020). Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors, 20(18), 1-26. https://doi.org/10.3390/s20185163

    Automatic emotion recognition in clinical scenario: a systematic review of methods

    Get PDF
    Automatic emotion recognition has powerful opportunities in the clinical field, but several critical aspects are still open, such as the heterogeneity of methodologies or technologies tested mainly on healthy people. This systematic review aims to survey automatic emotion recognition systems applied in real clinical contexts, to analyse clinical and technical aspects in depth, how they were addressed, and the relationships among them. The literature review was conducted on IEEEXplore, ScienceDirect, Scopus, PubMed, and ACM. Inclusion criteria were the presence of an automatic emotion recognition algorithm and the enrollment of at least 2 patients in the experimental protocol. The review process followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Moreover, the works were analysed according to a reference model to examine both clinical and technical topics in depth. 52 scientific papers passed the inclusion criteria. Most clinical scenarios involved neurodevelopmental, neurological and psychiatric disorders, with the aims of diagnosing, monitoring, or treating emotional symptoms. The most adopted signals are video and audio, while supervised shallow learning is mostly used for emotion recognition. Poor study design, tiny samples, and the absence of a control group emerged as methodological weaknesses. Heterogeneity of performance metrics, datasets and algorithms challenges the comparability, robustness, reliability and reproducibility of results. Pepa, Lucia; Spalazzi, Luca; Capecci, Marianna; Ceravolo, Maria Gabriella

    Voice Analysis for Stress Detection and Application in Virtual Reality to Improve Public Speaking in Real-time: A Review

    Full text link
    Stress during public speaking is common and adversely affects performance and self-confidence. Extensive research has been carried out to develop models that recognize emotional states. However, minimal research has been conducted to detect stress during public speaking in real time using voice analysis. In this context, the current review showed that the application of algorithms has not been properly explored, and it helped identify the main obstacles in creating a suitable testing environment while accounting for current complexities and limitations. In this paper, we present our main idea and propose a stress detection computational algorithmic model that could be integrated into a Virtual Reality (VR) application to create an intelligent virtual audience for improving public speaking skills. The developed model, when integrated with VR, will be able to detect excessive stress in real time by analysing voice features correlated with physiological parameters indicative of stress, and help users gradually control excessive stress and improve public speaking performance. Comment: 41 pages, 7 figures, 4 tables
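
    Since the paper proposes analysing voice features correlated with stress, a hedged sketch of such a feature-extraction front end is given below, using librosa to compute pitch, short-time energy, and MFCCs. The input file name, frame parameters, and feature set are assumptions for illustration, not the model the authors propose.

        # Illustrative only: prosodic/spectral features often linked to vocal stress.
        import numpy as np
        import librosa

        y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical input file

        f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                         fmax=librosa.note_to_hz("C7"), sr=sr)  # pitch contour
        rms = librosa.feature.rms(y=y)[0]                       # short-time energy
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # spectral envelope

        # Per-utterance summary statistics; a classifier would consume this vector.
        features = np.concatenate([
            [f0.mean(), f0.std()],    # raised, more variable pitch under stress
            [rms.mean(), rms.std()],
            mfcc.mean(axis=1),
        ])
        print(features.shape)  # (17,)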

    Measuring prefrontal cortex response to virtual reality exposure therapy in freely moving participants

    Get PDF
    Virtual Reality Exposure Therapy (VRET) has demonstrated efficacy in the treatment of phobias, yet little is known about its underlying neural mechanisms. Neuroimaging studies have demonstrated that both traditional exposure therapy and VRET normalise brain activity within a prefrontal-amygdalar fear circuit after treatment. However, previous studies employed technologies that may compromise the ecological validity and naturalness of the experience. Moreover, there are no studies investigating what is happening in the brain within a virtual reality session. This PhD takes a multidisciplinary approach and draws upon the research areas of cognitive neuroscience, neuropsychology, and virtual reality. The approach is twofold: developmental and experimental. A key methodological objective was to maximise ecological validity by allowing freedom of movement and sight of one's own body. This was approached by combining wearable functional near-infrared spectroscopy (fNIRS) with Immersive Projection Technology (IPT). The stimulus was adapted from a classic virtual reality (VR) experiment, the Pit Room. The scope of this PhD includes three experiments. The first experiment was a pilot that tested the potential of combining a wearable fNIRS device (NIRSport) with a CAVE-like IPT system (Octave) as the VR display. The aim was to test the feasibility of the protocol in terms of the design, integration of technology, and signal-to-noise ratio in the Pit Room study, which involved measuring brain response during exposure to heights in virtual reality. The study demonstrated that brain activity could be measured in IPT without significant signal interference. Although there was no significant change in brain activity during exposure to virtual heights, the study found trends toward increased oxygenated haemoglobin (HbO) in the prefrontal cortex. The second study investigated the brain activity indicative of fear inhibition and cognitive reappraisal within a single session of VRET in healthy controls. Heart rate was also measured as an indicator of emotional arousal (fear response) during the VRET session. 27 healthy volunteers were exposed to heights in virtual reality. Changes in HbO concentration in the prefrontal cortex were measured in three blocks using a wireless fNIRS device, and heart rate was measured using a wireless psychophysiological monitor. Results revealed increased HbO concentration in the dorsolateral prefrontal cortex (DLPFC) and medial prefrontal cortex (MPFC) during exposure to the fear-evoking VR, consistent with the fear inhibition and cognitive reappraisal measured in previous neuroimaging studies that had not used VR. Within-session brain activity was measured at a much higher temporal resolution than in previous studies. Consistent with previous studies, a trend showed an increase of brain activity in the DLPFC, indicative of cognitive reappraisal, at the beginning of the session; the MPFC was then additionally activated, consistent with fear inhibition. Heart rate showed a trend towards a gradual decrease within a session. The aim of the third study was to investigate the neural basis of VRET in an acrophobic population. In particular, the study focused on measuring functional brain activity associated with both within- and between-session learning. Psychophysiological monitoring was also employed to measure levels of emotional arousal within and between sessions. 13 acrophobic volunteers took part in a three-session VRET for fear of heights.
    Changes in HbO in the prefrontal cortex were measured in three blocks to investigate within-session brain activity, and across three sessions to investigate between-session inhibitory learning. Results demonstrated that phobic participants had decreased activity in the DLPFC and MPFC at the beginning; however, after three sessions of VRET, activity in these brain areas increased towards the normal level measured in healthy controls. Although there was no within-session learning during the first and second sessions, the study found a significant increase in DLPFC activity at the beginning of a session, and during the second block the MPFC was additionally activated. The magnitude of brain activity in those regions was negatively correlated with the initial level of acrophobia. Due to technical difficulties, no significant results were found in the psychophysiological measures. However, subjective fear ratings decreased significantly within and between sessions. Moreover, participants who felt more present demonstrated stronger brain activity results at the end of VRET. This is the first project to investigate the neural correlates of fear inhibition and inhibitory learning by combining a VR display in which people can move around and see their body with wearable neural imaging that offers a reasonable compromise between spatial and temporal resolution. This project has an application in widening access to immersive neuroimaging across the understanding, diagnosis, assessment, and treatment of a range of mental disorders such as phobia, anxiety or post-traumatic stress disorder. An application that is receiving interest in the clinical community is repeatable, direct and quantifiable assessment within clinics, to diagnose, steer treatment and measure treatment outcome.
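
    To make the block-wise HbO analysis concrete, here is a toy sketch: baseline-correct each exposure block and compare early versus late block means with a paired t-test across channels. The sampling rate, array shapes, and block timings are invented for illustration; the thesis used NIRSport recordings and its own processing pipeline.

        # Toy example with synthetic data; not the thesis pipeline.
        import numpy as np
        from scipy import stats

        fs = 7.8                                  # assumed fNIRS sampling rate (Hz)
        hbo = np.random.randn(16, int(fs * 600))  # placeholder: 16 channels x 10 min

        def block_mean(signal, start_s, dur_s, base_s=10):
            """Mean HbO in a block, relative to the preceding baseline window."""
            s, e = int(start_s * fs), int((start_s + dur_s) * fs)
            baseline = signal[:, int((start_s - base_s) * fs):s].mean(axis=1)
            return signal[:, s:e].mean(axis=1) - baseline

        block1 = block_mean(hbo, start_s=60, dur_s=120)   # early exposure block
        block3 = block_mean(hbo, start_s=420, dur_s=120)  # late exposure block

        # Did prefrontal HbO change between the first and last block?
        t, p = stats.ttest_rel(block1, block3)
        print(f"t = {t:.2f}, p = {p:.3f}")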

    Brain-Computer Interfaces for Non-clinical (Home, Sports, Art, Entertainment, Education, Well-being) Applications

    Get PDF
    HCI researchers' interest in BCI is increasing because the technology industry is expanding into application areas where efficiency is not the main concern. Domestic or public-space use of information and communication technology raises awareness of the importance of affect, comfort, family, community, or playfulness, rather than efficiency. Therefore, in addition to non-clinical BCI applications that require efficiency and precision, this Research Topic also addresses the use of BCI for various types of domestic, entertainment, educational, sports, and well-being applications. These applications can relate to an individual user as well as to multiple cooperating or competing users. We also see a renewed interest among artists in making use of such devices to design interactive art installations that know about the brain activity of an individual user or the collective brain activity of a group of users, for example, an audience. Hence, this Research Topic also addresses how BCI technology influences artistic creation and practice, and the use of BCI technology to manipulate and control sound, video, and virtual and augmented reality (VR/AR).

    Automatic cybersickness detection by deep learning of augmented physiological data from off-the-shelf consumer-grade sensors

    Get PDF
    Cybersickness is still a prominent risk factor potentially affecting the usability of virtual reality applications. Automated real-time detection of cybersickness promises to support a better general understanding of the phenomenon and to avoid and counteract its occurrence. It could be used to facilitate application optimization, that is, to systematically link potential causes (technical development and conceptual design decisions) to cybersickness in closed-loop user-centered development cycles. In addition, it could be used to monitor, warn, and hence safeguard users against any onset of cybersickness during a virtual reality exposure, especially in healthcare applications. This article presents a novel real-time-capable cybersickness detection method by deep learning of augmented physiological data. In contrast to related preliminary work, we explore a unique combination of mid-immersion ground truth elicitation, an unobtrusive wireless setup, and moderate training performance requirements. We developed a proof-of-concept prototype to compare (combinations of) convolutional neural networks, long short-term memory, and support vector machines with respect to detection performance. We demonstrate that the use of a conditional generative adversarial network-based data augmentation technique increases detection performance significantly, and showcase the feasibility of real-time cybersickness detection in a genuine application example. Finally, a comprehensive performance analysis demonstrates that a four-layered bidirectional long short-term memory network with the developed data augmentation delivers superior performance (91.1% F1-score) for real-time cybersickness detection. To encourage replicability and reuse in future cybersickness studies, we have publicly released the code and the dataset.
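
    As a hedged sketch of the shape of the winning model, the snippet below builds a four-layer bidirectional LSTM over windows of physiological samples with a binary cybersickness head in PyTorch. The hidden size, window length, and feature count are guesses; the authors' released code is the authoritative reference.

        # Architecture sketch only; hyperparameters are assumptions.
        import torch
        import torch.nn as nn

        class BiLSTMDetector(nn.Module):
            def __init__(self, n_features=4, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, num_layers=4,
                                    bidirectional=True, batch_first=True)
                self.head = nn.Linear(2 * hidden, 1)  # 2x for both directions

            def forward(self, x):             # x: (batch, time, features)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])  # cybersickness logit, last step

        model = BiLSTMDetector()
        window = torch.randn(8, 128, 4)  # 8 windows, 128 samples, 4 signals
        print(model(window).shape)       # torch.Size([8, 1])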

    Multimodal assessment of emotional responses by physiological monitoring: novel auditory and visual elicitation strategies in traditional and virtual reality environments

    Get PDF
    This doctoral thesis explores novel strategies to quantify emotions and listening effort through the monitoring of physiological signals. Emotions are a complex aspect of the human experience, playing a crucial role in our survival and adaptation to the environment. The study of emotions fosters important applications, such as Human-Computer and Human-Robot Interaction or the clinical assessment and treatment of mental health conditions such as depression, anxiety, stress, chronic anger, and mood disorders. Listening effort is also an important area of study, as it provides insight into listeners' challenges that are usually not identified by traditional audiometric measures. The research is divided into three lines of work, each with a unique emphasis on the methods of emotion elicitation and the stimuli that are most effective in producing emotional responses, with a specific focus on auditory stimuli. The research fostered the creation of three experimental protocols, as well as the use of an available online protocol, for studying emotional responses, including monitoring of both peripheral and central physiological signals, such as skin conductance, respiration, pupil dilation, electrocardiogram, blood volume pulse, and electroencephalography. An emotional protocol was created for the study of listening effort using a speech-in-noise test designed to be short and not to induce fatigue. The results revealed that listening effort is a complex problem that cannot be studied with a univariate approach, thus necessitating the use of multiple physiological markers to capture its different physiological dimensions. Specifically, the findings demonstrate a strong association between the level of auditory exertion and the amount of attention and involvement directed towards the stimuli, which differed between stimuli that are readily comprehensible and those that demand greater exertion. Continuing with the auditory domain, peripheral physiological signals were studied in order to discriminate four emotions elicited in a subject who listened to music for 21 days, using a previously designed and publicly available protocol. Surprisingly, the processed physiological signals were able to clearly separate the four emotions at the physiological level, demonstrating that music, which is not typically studied extensively in the literature, can be an effective stimulus for eliciting emotions. Following these results, a flat-screen protocol was created to compare physiological responses to purely visual, purely auditory, and combined audiovisual emotional stimuli. The results show that auditory stimuli are more effective in separating emotions at the physiological level, and the subjects were found to be much more attentive during the audio-only phase. To overcome the limitations of emotional protocols carried out in a laboratory environment, which may elicit fewer emotions because it is an unnatural setting for the subjects under study, a final emotional elicitation protocol was created using virtual reality. Scenes similar to reality were created to elicit four distinct emotions, and at the physiological level this environment was noted to be more effective in eliciting emotions. To our knowledge, this is the first protocol specifically designed for virtual reality that elicits diverse emotions. Furthermore, even in terms of classification, the use of virtual reality has been shown to be superior to traditional flat-screen protocols, opening the door to virtual reality for the study of conditions related to emotional control.
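
    For the classification step mentioned at the end, a minimal sketch is shown below: windowed features from peripheral physiological signals fed to a cross-validated SVM for the four elicited emotions. The feature matrix and labels are synthetic placeholders, not the thesis data or pipeline.

        # Four-class emotion classification sketch with synthetic features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(240, 12))    # placeholder: 240 windows x 12 features
        y = rng.integers(0, 4, size=240)  # placeholder: 4 elicited emotions

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"mean CV accuracy: {scores.mean():.2f}")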