597 research outputs found

    Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses

    Full text link
    Thesis by compendium of publications. In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head-mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction. Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification. Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments. The main objective of this thesis is, using psychophysiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and a virtual environment, to validate immersive VR as an emotion elicitation tool. To do so we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition. This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, it analyses the validity of VR by performing a direct comparison using a highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transverse impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, and thus open new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
    Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
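    To make the kind of pipeline described above concrete, here is a minimal, hypothetical sketch (not the thesis's actual code): per-trial physiological features such as heart rate and skin conductance are used to train a classifier that predicts a binary arousal label. All feature names, data and model choices below are illustrative assumptions.

```python
# Hypothetical sketch of a physiological emotion classifier (not the thesis pipeline).
# Assumes one feature vector per stimulus/trial and binary self-reported arousal labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 60
# Toy features: [mean_hr_bpm, hrv_rmssd_ms, scl_microsiemens, scr_count]
X = rng.normal(size=(n_trials, 4))
y = rng.integers(0, 2, size=n_trials)          # 0 = low arousal, 1 = high arousal

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

    With real recordings, the random arrays would be replaced by features extracted from ECG/EDA segments time-locked to each VR stimulus; the cross-validation scheme would also need to respect participant boundaries.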

    From locomotion to dance and back : exploring rhythmic sensorimotor synchronization

    Full text link
    Rhythms are central to human behaviours spanning from locomotion to music performance. In dance, self-sustaining and dynamically adapting neural oscillations entrain to a regular auditory input, the musical beat. This entrainment leads to anticipation of forthcoming sensory events, which in turn allows synchronization of movements to the perceived environment. This dissertation develops novel technical approaches to investigate neural rhythms that are not strictly periodic, such as those underlying naturally tempo-varying locomotion and the rhythms of music. It studies neural responses that reflect the discordance between what the nervous system anticipates and the actual timing of events, and that are critical for synchronizing movements to a changing environment. It also shows how the neural activity elicited by a musical rhythm is shaped by the movements synchronized to it. Finally, it investigates such neural rhythms in patients with gait or consciousness disorders.
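    As a hypothetical illustration of the synchronization measures this line of work builds on (not the dissertation's own analysis), the sketch below computes the signed asynchrony between invented tap times and metronome beats, plus a circular phase-locking index.

```python
# Hypothetical sketch: quantifying sensorimotor synchronization to a beat.
# Beat and tap times are invented for illustration.
import numpy as np

beat_period = 0.6                                   # s (100 beats per minute)
beats = np.arange(20) * beat_period                 # metronome beat times
rng = np.random.default_rng(1)
taps = beats + rng.normal(-0.02, 0.03, size=beats.size)   # slightly anticipatory taps

# Signed asynchrony: negative values mean the tap precedes the beat (typical anticipation).
asynchrony = taps - beats
print(f"Mean asynchrony: {asynchrony.mean() * 1000:.1f} ms")

# Circular measure: map each tap onto beat phase and take the resultant vector length.
phases = 2 * np.pi * ((taps % beat_period) / beat_period)
R = np.abs(np.mean(np.exp(1j * phases)))            # 1 = perfect phase locking, 0 = none
print(f"Phase-locking (resultant vector length): {R:.2f}")
```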

    Infants' perception of sound patterns in oral language play

    Get PDF

    Utilising Emotion Monitoring for Developing Music Interventions for People with Dementia: A State-of-the-Art Review

    Get PDF
    The demand for smart solutions to support people with dementia (PwD) is increasing. These solutions are expected to assist PwD with their emotional, physical, and social well-being. At the moment, state-of-the-art works allow for the monitoring of physical well-being; however, much less attention has been devoted to monitoring the emotional and social well-being of PwD. Research on emotion monitoring can be combined with research on the effects of music on PwD, given its promising effects. More specifically, knowledge of the emotional state allows music interventions to alleviate negative emotions by eliciting positive emotions in PwD. In this direction, the paper conducts a state-of-the-art review on two aspects: (i) the effect of music on PwD and (ii) both wearable and non-wearable sensing systems for emotional state monitoring. After outlining the application of musical interventions for PwD, including emotion monitoring sensors and algorithms, multiple challenges are identified. The main findings include a need for rigorous research approaches for the development of adaptable solutions that can tackle the dynamic changes caused by the diminishing cognitive abilities of PwD, with a focus on privacy and adoption aspects. By addressing these requirements, advancements can be made in harnessing music and emotion monitoring for PwD, thereby facilitating the creation of more resilient and scalable solutions to aid caregivers and PwD.
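    The closed loop the review envisions can be sketched in a deliberately simplified, hypothetical form: a monitored valence/arousal estimate drives the selection of a music intervention category. The thresholds, categories and state estimate below are invented; a real system would rely on validated sensing and clinically guided playlists.

```python
# Hypothetical closed-loop sketch: map a monitored emotional state to a music intervention.
# Thresholds and categories are illustrative only.
from dataclasses import dataclass

@dataclass
class EmotionalState:
    valence: float   # -1 (negative) .. +1 (positive)
    arousal: float   #  0 (calm)     ..  1 (agitated)

def choose_intervention(state: EmotionalState) -> str:
    """Very coarse rule-based mapping from an estimated state to a music category."""
    if state.valence < 0 and state.arousal > 0.6:
        return "slow, familiar, low-tempo music to reduce agitation"
    if state.valence < 0:
        return "personally meaningful music to lift mood"
    if state.arousal < 0.2:
        return "moderately stimulating music to encourage engagement"
    return "maintain current playlist"

print(choose_intervention(EmotionalState(valence=-0.4, arousal=0.8)))
```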

    The neurobiology of cortical music representations

    Get PDF
    Music is undeniably one of humanity's defining traits: it has been documented since the earliest days of mankind, is present in all known cultures, and is perceivable by nearly all humans alike. Intrigued by its omnipresence, researchers of all disciplines began investigating music's mystical relationship with, and tremendous significance to, humankind several hundred years ago. Comparatively recently, the immense advancement of neuroscientific methods has also enabled the examination of the cognitive processes related to the processing of music. Within this neuroscience of music, the vast majority of research has focused on how music, as an auditory stimulus, reaches the brain and how it is initially processed, as well as on the tremendous effects it has on, and can evoke through, the human brain. The intermediate steps, that is, how the human brain transforms incoming signals into a seemingly specialized and abstract representation of music, have received less attention. Aiming to address this gap, the thesis presented here targeted these transformations, the processes possibly underlying them, and how both could potentially be explained through computational models. To this end, four projects were conducted. The first two comprised the creation and implementation of two open-source toolboxes to, first, tackle problems inherent to auditory neuroscience, which also affect neuroscientific music research, and, second, provide the basis for further advancement through standardization and automation. More precisely, this entailed the deteriorated hearing thresholds and abilities in MRI settings and the aggravated localization and parcellation of the human auditory cortex as the core structure involved in auditory processing. The third project focused on the human brain's apparent tuning to music by investigating functional and organizational principles of the auditory cortex and network with regard to the processing of different auditory categories of comparable social importance, more precisely whether the perception of music evokes a distinct and specialized pattern. In order to provide an in-depth characterization of the respective patterns, both the segregation and the integration of auditory cortex regions were examined. In the fourth and final project, a highly multimodal approach that included fMRI, EEG, behaviour and models of varying complexity was used to evaluate how the aforementioned music representations are generated along the cortical hierarchy of auditory processing and how they are influenced by bottom-up and top-down processes. The results of projects 1 and 2 demonstrated the necessity of further advancing MRI settings and defining working models of the auditory cortex, as hearing thresholds and abilities seem to vary as a function of the data acquisition protocol used, and the localization and parcellation of the human auditory cortex diverge drastically depending on the approach they are based on. Project 3 revealed that the human brain is indeed tuned for music by means of a specialized representation, as music evoked a bilateral network with a right-hemispheric weighting that was not observed for the other included categories. The result of this specialized and hierarchical recruitment of anterior and posterior auditory cortex regions was an abstract music component situated in anterior regions of the superior temporal gyrus that preferentially encodes music, regardless of whether it is sung or instrumental.
    The outcomes of project 4 indicated that, even though the entire auditory cortex, again with a right-hemispheric weighting, is involved in the complex processing of music, anterior regions in particular yielded an abstract representation that varied substantially over time and could not be sufficiently explained by any of the tested models. The specialized and abstract properties of this representation were furthermore underlined by the predictive ability of the tested models, as models based either on high-level features, such as behavioural representations and concepts, or on complex acoustic features always outperformed models based on single or simpler acoustic features. Additionally, factors known to influence auditory and thus music processing, such as musical training, apparently did not alter the observed representations. Together, the results of the projects suggest that the specialized and stable cortical representation of music is the outcome of sophisticated transformations of incoming sound signals along the cortical hierarchy of auditory processing, which generate a music component in anterior regions of the superior temporal gyrus by means of top-down processes that interact with acoustic features and guide their processing.
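    The model comparison described for project 4 can be illustrated with a hypothetical encoding-model sketch: two feature spaces, a simple acoustic one and a richer "high-level" one, predict a simulated response time course via cross-validated ridge regression. The data and feature spaces below are fabricated purely for illustration and do not reproduce the thesis analysis.

```python
# Hypothetical encoding-model sketch: compare how well two feature spaces predict
# a simulated response time course using cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(42)
n_timepoints = 300
acoustic = rng.normal(size=(n_timepoints, 5))                             # e.g., envelope, spectral features
high_level = np.hstack([acoustic, rng.normal(size=(n_timepoints, 10))])   # + abstract/behavioural features

# Simulated response driven by the full "high-level" space plus noise.
true_weights = rng.normal(size=high_level.shape[1])
response = high_level @ true_weights + rng.normal(scale=2.0, size=n_timepoints)

cv = KFold(n_splits=5, shuffle=False)   # keep temporal order within folds
for name, X in [("acoustic-only", acoustic), ("high-level", high_level)]:
    r2 = cross_val_score(Ridge(alpha=10.0), X, response, cv=cv, scoring="r2")
    print(f"{name:13s} cross-validated R^2: {r2.mean():.2f}")
```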

    Extraction and Investigation of Biosignal Features for Visual Discomfort Assessment (Biosignalų požymių regos diskomfortui vertinti išskyrimas ir tyrimas)

    Get PDF
    Comfortable stereoscopic perception continues to be an essential area of research. The growing interest in virtual reality content and the increasing market for head-mounted displays (HMDs) still raise the issue of balancing depth perception against comfortable viewing. Stereoscopic views stimulate binocular cues, one of several types of human visual depth cue, which become conflicting cues when stereoscopic displays are used. Depth perception from binocular cues is based on matching image features from one retina with the corresponding features from the other retina. It is known that our eyes can tolerate small amounts of retinal defocus, a tolerance also known as depth of focus; when the magnitudes are larger, visual discomfort arises. The research object of the doctoral dissertation is the level of visual discomfort. This work aimed at the objective evaluation of visual discomfort based on physiological signals. Different levels of disparity and different numbers of details in stereoscopic views can in some cases make it difficult to quickly find the focus point for comfortable depth perception. During this investigation, a tendency towards differences in single-sensor electroencephalographic (EEG) signal activity at specific frequencies was found. Additionally, changes in gaze signals collected with an eye tracker were also found. A dataset of EEG and gaze signal recordings from 28 subjects was collected and used for further evaluation. The dissertation consists of an introduction, three chapters and general conclusions. The first chapter reviews ways of measuring visual discomfort based on objective and subjective methods. The second chapter presents the results of theoretical research, which investigated methods that use physiological signals to detect changes in the level of the sense of presence. The results of the experimental research are presented in the third chapter; this research aimed to find differences in the collected physiological signals when the level of visual discomfort changes. An experiment with 28 subjects was conducted to collect these signals. The results of the thesis were published in six scientific publications: three in peer-reviewed scientific journals and three in conference proceedings. Additionally, the results of the research were presented at 8 conferences.
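    A minimal, hypothetical sketch of the kind of features mentioned above: band power from a single EEG channel (via a Welch power spectral density) and a simple gaze statistic. The synthetic signal, band limits and fixation durations are assumptions for illustration, not the dissertation's exact feature set.

```python
# Hypothetical sketch: single-channel EEG band power plus a simple gaze feature,
# of the kind that could be compared across visual-comfort conditions.
import numpy as np
from scipy.signal import welch

fs = 256                                   # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(7)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.normal(size=t.size)  # 10 Hz alpha + noise

def band_power(signal, fs, lo, hi):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

alpha = band_power(eeg, fs, 8, 12)
beta = band_power(eeg, fs, 13, 30)
print(f"alpha/beta power ratio: {alpha / beta:.1f}")

# Toy gaze feature: mean fixation duration from invented fixation durations (seconds).
fixation_durations = np.array([0.21, 0.35, 0.18, 0.42])
print(f"mean fixation duration: {fixation_durations.mean() * 1000:.0f} ms")
```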

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    Get PDF
    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement, as measured by brain-computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic indicators of brain activity) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on a screen, a spelling device or a game). These tools have the advantage of providing real-time access to the ongoing brain activity of the individual, which can give insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to "think outside the lab". The integration of technological solutions, artificial intelligence and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. The clinical and everyday uses are described with the aim of inviting readers to open their minds to imagine potential further developments.
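    The signal-to-command translation described above can be illustrated with a toy, hypothetical sketch: one EEG epoch is reduced to a relative alpha-power feature and mapped to a discrete application command. The rule, threshold and command names are invented; real BCIs use trained classifiers on validated features.

```python
# Hypothetical sketch of the BCI idea: translate a feature from one EEG epoch into a command.
import numpy as np

def epoch_to_command(epoch: np.ndarray, fs: int) -> str:
    """Map one EEG epoch (channels x samples) to a command via a toy relative alpha-power rule."""
    spectrum = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1 / fs)
    alpha = spectrum[:, (freqs >= 8) & (freqs <= 12)].mean()
    broadband = spectrum.mean()
    # High relative alpha stands in here for a "relaxed / disengaged" mental state.
    return "pause_game" if alpha / broadband > 3.0 else "continue_game"

rng = np.random.default_rng(3)
fs, n_channels, n_samples = 128, 8, 256
epoch = rng.normal(size=(n_channels, n_samples))    # one synthetic 2-second epoch
print(epoch_to_command(epoch, fs))
```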

    Psychophysiology-based QoE assessment: a survey

    Get PDF
    We present a survey of psychophysiology-based assessment of quality of experience (QoE) in advanced multimedia technologies. We provide a classification of methods relevant to QoE and describe the related psychological processes, experimental design considerations, and signal analysis techniques. We summarize multimodal techniques and discuss several important aspects of psychophysiology-based QoE assessment, including its synergies with psychophysical assessment and the need for standardized experimental design. This survey is not intended to be exhaustive, but serves as a guideline for those interested in further exploring this emerging field of research.
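    One analysis pattern such surveys typically cover, shown here as a hypothetical sketch with invented numbers: relating a per-stimulus physiological feature (e.g., mean skin-conductance response amplitude) to subjective quality ratings via a rank correlation.

```python
# Hypothetical sketch: correlate a physiological feature with subjective quality ratings.
# All values are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

mos = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5], dtype=float)                   # mean opinion scores
scr_amp = np.array([0.9, 0.8, 0.85, 0.6, 0.65, 0.5, 0.45, 0.3, 0.35, 0.25])   # microsiemens

rho, p = spearmanr(mos, scr_amp)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```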