
    Emotional State Recognition Based on Physiological Signals

    Emotional state recognition is a crucial task for achieving a new level of Human-Computer Interaction (HCI). Machine Learning applications penetrate more and more spheres of everyday life. Recent studies show promising results in analyzing physiological signals (EEG, ECG, GSR) with Machine Learning to assess emotional state. Commonly, a specific emotion is invoked by playing affective videos or sounds; however, there is no canonical way to interpret the emotional state. In this study, we classified affective physiological signals, with labels obtained from two emotional state estimation approaches, using machine learning algorithms and heuristic formulas. Comparison of the methods showed that the highest accuracy was achieved by a Random Forest classifier on spectral features from the EEG records; a combination of features from the peripheral physiological signals also showed relatively high classification performance. However, the heuristic formulas and a novel approach to ECG signal classification using a recurrent neural network ultimately failed. Data were taken from the MAHNOB-HCI dataset, a multimodal database collected from 27 subjects, each shown 20 emotional movie fragments. We obtained an unexpected result: describing emotional states with the discrete Ekman paradigm gives better classification results than the contemporary dimensional model, which represents emotions by mapping them onto a Cartesian plane with valence and arousal axes. Our study shows the importance of label selection in the emotion recognition task. Moreover, the dataset has to be suitable for Machine Learning algorithms. The acquired results may help to select proper physiological signals and emotional labels for further dataset creation and post-processing.
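The winning approach in the abstract above, a Random Forest on spectral (band-power) features of EEG, can be sketched as follows. This is a minimal illustration, not the MAHNOB-HCI pipeline: the sampling rate, band edges, and the synthetic two-class data are all assumptions made for the example.

```python
# Sketch: EEG band-power features + Random Forest, assuming a 256 Hz
# sampling rate and canonical theta/alpha/beta bands (illustrative choices).
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256  # assumed EEG sampling rate, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Mean spectral power of one single-channel EEG epoch per band."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

rng = np.random.default_rng(0)
# Toy "dataset": 40 four-second epochs; class 1 gets extra 10 Hz (alpha) power.
t = np.arange(FS * 4) / FS
X, y = [], []
for i in range(40):
    label = i % 2
    epoch = rng.normal(size=t.size) + label * 2.0 * np.sin(2 * np.pi * 10 * t)
    X.append(band_powers(epoch))
    y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:30], y[:30])
acc = clf.score(X[30:], y[30:])
print(acc)
```

In a real setting the features would be computed per channel and per band across all EEG electrodes, and the labels would come from the chosen emotion-annotation scheme.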

    Prerequisites for Affective Signal Processing (ASP)

    Although emotions are embraced by science, their recognition has not reached a satisfying level. Through a concise overview of affect, its signals, features, and classification methods, we provide understanding for the problems encountered. Next, we identify the prerequisites for successful Affective Signal Processing: validation (e.g., mapping of constructs on signals), triangulation, a physiology-driven approach, and contributions of the signal processing community. Using these directives, a critical analysis of a real-world case is provided. This illustrates that the prerequisites can become a valuable guide for Affective Signal Processing (ASP).

    Characterization of the autonomic nervous system response under emotional stimuli through linear and non-linear analysis of physiological signals

    This dissertation presents linear and non-linear methodologies applied to physiological signals, with the purpose of characterizing the autonomic nervous system response under emotional stimuli. The study is motivated by the need to develop a tool that identifies emotions based on their effect on cardiac activity, since such a tool may have a potential impact on clinical practice for diagnosing psycho-neural illnesses. The hypotheses of this doctoral thesis are that emotions induce notable changes in the autonomic nervous system and that these changes can be captured through the analysis of physiological signals, in particular the joint analysis of heart rate variability (HRV) and respiration. The analyzed database contains simultaneous recordings of the electrocardiogram and respiration of 25 subjects elicited with video-induced emotions, including joy, fear, sadness and anger. Two methodological studies are described.
In the first study, a method based on linear HRV analysis guided by respiration is proposed. The method redefines the high frequency (HF) band, not only centering it at the respiratory frequency but also giving it a bandwidth that depends on the respiratory spectrum. The method was first validated with simulated HRV signals, yielding minimal estimation errors compared with the classical HF band definition and even with an HF band centered at the respiratory frequency but with a constant bandwidth, independently of the values of the sympathovagal ratio. The proposed method was then applied to a database of video-induced emotion elicitation to discriminate between emotions. Not only did the proposed redefined HF band outperform the other HF band definitions in emotional discrimination, but the maximum correlation between the HRV and respiration spectra also discriminated joy vs. relaxation, joy vs. each negative-valence emotion, and fear vs. sadness with p-value ≤ 0.05 and AUC ≥ 0.70.
In the second study, non-linear techniques, the Auto Mutual Information Function (AMIF) and the Cross Mutual Information Function (CMIF), are also proposed for human emotion recognition. The AMIF technique was applied to HRV signals to study complex interdependencies, and the CMIF technique was used to quantify the complex coupling between the HRV and respiration signals. Both algorithms were adapted to short-duration RR time series. The RR series were filtered in the low and high frequency bands, and RR series filtered in a respiration-based bandwidth were also investigated. The results revealed that the AMIF technique applied to the RR series filtered in the redefined HF band was able to discriminate relaxation vs. joy and fear, joy vs. each negative valence, and finally fear vs. sadness and anger, all with statistical significance (p-value ≤ 0.05, AUC ≥ 0.70). Moreover, the parameters derived from AMIF and CMIF characterized the low complexity that the signal presented during fear compared with any other studied emotional state.
Finally, the linear and non-linear features that discriminate between pairs of emotions and between emotional valences are investigated by means of a linear classifier, to determine which parameters differentiate the groups and how many of them are needed to achieve the best possible classification. The results of this chapter suggest that the following can be classified through HRV analysis: relaxation vs. joy, positive valence vs. all negative valences, joy vs. fear, joy vs. sadness, joy vs. anger, and fear vs. sadness. The joint analysis of HRV and respiration increases the discriminatory power of HRV, with the maximum correlation between the HRV and respiration spectra being one of the best indices for emotion discrimination. Mutual information analysis, even on short-duration signals, adds relevant information to the linear indices for emotion discrimination.
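The auto mutual information function (AMIF) used in this dissertation can be illustrated with a minimal sketch: the mutual information between an RR series and a lagged copy of itself, here estimated with a simple plug-in histogram density. The bin count, lag range, and toy RR series are illustrative assumptions, not the thesis parameters.

```python
# Sketch: histogram-based AMIF over a short RR series (assumed parameters).
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in histogram estimate of mutual information (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def amif(rr, max_lag=10, bins=8):
    """AMIF: MI between rr[:-lag] and rr[lag:] for lag = 1..max_lag."""
    return [mutual_information(rr[:-lag], rr[lag:], bins) for lag in range(1, max_lag + 1)]

rng = np.random.default_rng(1)
# Toy RR series (seconds): slow oscillation (structure) plus noise.
n = 500
rr = 0.8 + 0.05 * np.sin(2 * np.pi * np.arange(n) / 25) + 0.01 * rng.normal(size=n)
curve = amif(rr, max_lag=5)
print(curve)
```

A structured series keeps high mutual information over the first lags, while an unpredictable one decays quickly; parameters of that decay are the kind of complexity indices the thesis derives from AMIF and CMIF.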

    VIF: Virtual Interactive Fiction (with a twist)

    Nowadays computer science can create digital worlds that deeply immerse users; it can also process brain activity in real time to infer their inner states. What marvels can we achieve with such technologies? Go back to displaying text. And unfold a story that follows and molds users as never before. (Comment: Pervasive Play, CHI '16 Workshop, May 2016, San Jose, United States)

    Human emotion characterization by heart rate variability analysis guided by respiration

    © 2019 IEEE. Developing a tool which identifies emotions based on their effect on cardiac activity may have a potential impact on clinical practice, since it may help in the diagnosis of psycho-neural illnesses. In this study, a method based on the analysis of heart rate variability (HRV) guided by respiration is proposed. The method redefines the high frequency (HF) band, not only to be centered at the respiratory frequency, but also to have a bandwidth dependent on the respiratory spectrum. The method was first tested using simulated HRV signals, yielding the minimum estimation errors compared with the classical definition and with an HF band merely centered at the respiratory frequency, independently of the values of the sympathovagal ratio. Then, the proposed method was applied to discriminate emotions in a database of video-induced elicitation. Five emotional states were considered: relax, joy, fear, sadness and anger. The maximum correlation between the HRV and respiration spectra discriminated joy vs. relax, joy vs. each negative-valence emotion, and fear vs. sadness with p-value ≤ 0.05 and AUC ≥ 0.70. Based on these results, human emotion characterization may be improved by adding respiratory information to HRV analysis.
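The core idea of the paper, centering the HF band at the respiratory spectral peak rather than using the fixed 0.15–0.4 Hz band, can be sketched as below. The sampling rate, the fixed half-bandwidth, and the synthetic signals are illustrative assumptions; the actual method derives the bandwidth from the respiratory spectrum itself.

```python
# Sketch: HRV HF power in a band centred at the respiratory peak frequency.
# FS and half_bw are assumed values for illustration only.
import numpy as np
from scipy.signal import welch

FS = 4.0  # assumed sampling rate of evenly resampled HRV/respiration, Hz

def hf_power(hrv, resp, half_bw=0.05):
    """HRV power in an HF band centred at the respiratory spectral peak."""
    f_r, p_r = welch(resp, fs=FS, nperseg=256)
    f_resp = f_r[np.argmax(p_r)]              # respiratory peak frequency
    f_h, p_h = welch(hrv, fs=FS, nperseg=256)
    band = (f_h >= f_resp - half_bw) & (f_h <= f_resp + half_bw)
    df = f_h[1] - f_h[0]
    return f_resp, float(p_h[band].sum() * df)

rng = np.random.default_rng(2)
t = np.arange(0, 120, 1 / FS)                 # two minutes of data
resp = np.sin(2 * np.pi * 0.3 * t)            # breathing at 0.3 Hz
hrv = 0.02 * np.sin(2 * np.pi * 0.3 * t) + 0.005 * rng.normal(size=t.size)
f_resp, power = hf_power(hrv, resp)
print(f_resp, power)
```

Because the band follows the subject's actual breathing rate, respiratory sinus arrhythmia power is captured even when breathing falls outside the classical fixed HF limits.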

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies are a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than other wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.

    Investigation of Methods to Create Future Multimodal Emotional Data for Robot Interactions in Patients with Schizophrenia : A Case Study

    Rapid progress in humanoid robot research offers possibilities for improving the competencies of people with social disorders, although the use of humanoid robots remains unexplored for people with schizophrenia. Methods for creating future multimodal emotional data for robot interactions were studied in this case study of a 40-year-old male patient with disorganized schizophrenia without comorbidities. The collected data included heart rate variability (HRV), video-audio recordings, and field notes. HRV, a Haar cascade classifier (HCC), and the Empath API© were evaluated during conversations between the patient and the robot. Two expert nurses and one psychiatrist evaluated the patient's facial expressions. The research hypothesis asked whether HRV, the HCC, and the Empath API© are useful for creating future multimodal emotional data about robot–patient interactions. The HRV analysis showed persistent sympathetic dominance, matching the human–robot conversational situation. The HCC results agreed with the human observations when the experts reached a rough consensus; when the experts disagreed about what they observed, the HCC result also differed. However, emotional assessments by experts using the Empath API© were also found to be inconsistent. We believe that with further investigation, a clearer identification of methods for multimodal emotional data for robot interactions can be achieved for patients with schizophrenia.