Recognition of Affective States from Biomedical Signals
Emotions are a fundamental part of individuals, influencing their daily communication, decision-making, and focus of attention. The incorporation of emotions into technology has advanced in recent years, from exploratory studies of responses to stimuli to commercial applications in human-machine interfaces. One source for identifying emotional states is the physiological response, recorded through biomedical signals. Using these signals would enable the development of minimally invasive devices, such as a wristband, that can record signals continuously, under different conditions, while preserving users' privacy. There are numerous approaches to affect recognition, with different signals, signal processing techniques, and machine learning methods. Among them, the combination of multiple signals has been widely used to improve recognition rates, but it is impractical because of its invasiveness. Current challenges call for classifiers that can operate in real time, in interactive applications, and with greater comfort for the user.

This doctoral thesis addresses the challenge of affective state recognition in several respects. The properties of each physiological signal are reviewed in terms of their practicality and potential. A method is proposed to adapt a classifier to new users by estimating baseline physiological parameters. Two original methods to improve recognition rates are then presented. The first is a supervised method based on self-organizing maps (sSOM), which represents the physiological feature spaces and emotional models, allowing the relationships in the data to be analysed. The other is based on extreme learning machines (ELM), a novel family of artificial neural networks with great generalization power that can be trained with little data.
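To illustrate the ELM idea mentioned above, here is a minimal sketch in Python/NumPy: the hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares. The function names, the tanh activation, and the hyperparameters are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train a basic Extreme Learning Machine.

    The input weights W and biases b are drawn at random and never
    updated; only the output weights beta are fitted, via the
    Moore-Penrose pseudoinverse (a least-squares solution).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: project through the random hidden layer, then beta."""
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one matrix pseudoinverse, the fit is fast and needs comparatively few samples, which is what makes the family attractive for this setting.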
The methods were evaluated and compared with the state of the art on realistic, openly available corpora. The results show advances over the state of the art for this problem. The adaptation method improves real-time recognition rates from only a few seconds of data, approximating the results that could be obtained afterwards on the complete recordings. Using a single cardiovascular activity signal, in particular heart rate variability (HRV), promising advances were achieved, with significant differences with respect to the results obtained by state-of-the-art methods. ELMs obtained excellent results at low computational cost, which would make them useful for mobile applications. The sSOM achieves similar results, with the advantage of also providing a tool to represent and analyse the complex spaces of physiology and emotions in a compact form.
Fil: Bugnon, Leandro Ariel. Universidad Nacional del Litoral; Argentina
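The HRV signal mentioned above is commonly summarised by time-domain statistics over the series of RR intervals. A minimal sketch, assuming the standard SDNN and RMSSD definitions (this is a generic illustration, not the thesis code):

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Basic time-domain HRV features from RR intervals in milliseconds.

    SDNN (standard deviation of RR intervals) reflects overall
    variability; RMSSD (root mean square of successive differences)
    reflects short-term, beat-to-beat variability.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)  # successive RR differences
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),
        "rmssd": np.sqrt(np.mean(diff ** 2)),
    }
```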
Fear Classification using Affective Computing with Physiological Information and Smart-Wearables
International Mention in the doctoral degree
Among the 17 Sustainable Development Goals proposed within the 2030 Agenda
and adopted by all of the United Nations member states, the fifth SDG is a call
for action to effectively turn gender equality into a fundamental human right and
an essential foundation for a better world. It includes the eradication of all types
of violence against women. Focusing on the technological perspective, the range of
available solutions intended to prevent this social problem is very limited. Moreover,
most of the solutions are based on a panic button approach, leaving aside
the usage and integration of current state-of-the-art technologies, such as the Internet
of Things (IoT), affective computing, cyber-physical systems, and smart-sensors.
Thus, the main purpose of this research is to provide new insight into the design and
development of tools to prevent and combat Gender-based Violence risk situations
and even aggressions from a technological perspective, without leaving aside
the different sociological considerations directly related to the problem. To achieve
such an objective, we rely on the application of affective computing from a realist
point of view, i.e. targeting the generation of systems and tools capable of being implemented
and used nowadays or within an achievable time-frame. This pragmatic
vision is channelled through: 1) an exhaustive study of the existing technological
tools and mechanisms oriented to fighting Gender-based Violence, 2) the proposal
of a new smart-wearable system intended to deal with some of the currently encountered
technological limitations, 3) a novel fear-related emotion classification approach
to disentangle the relation between emotions and physiology, and 4) the definition
and release of a new multi-modal dataset for emotion recognition in women.
Firstly, different fear classification systems using a reduced set of physiological signals are explored and designed. This is done by employing open datasets together
with the combination of time, frequency and non-linear domain techniques. This
design process is encompassed by trade-offs between both physiological considerations
and embedded capabilities. The latter is of paramount importance due to
the edge-computing focus of this research. Two results are highlighted in this first
task: the fear classification system designed on the DEAP dataset, which achieved
an average AUC of 81.60% and Gmean of 81.55% in a subject-independent
approach using only two physiological signals; and the fear
classification system designed on the MAHNOB dataset, which achieved an average AUC
of 86.00% and Gmean of 73.78% in a subject-independent approach,
using only three physiological signals and a Leave-One-Subject-Out configuration. A detailed
comparison with other emotion recognition systems proposed in the literature
is presented, showing that the obtained metrics are in line with the state-of-the-art.
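The Gmean reported above is the geometric mean of sensitivity and specificity, a balanced summary for imbalanced binary problems. A minimal sketch of how it can be computed from the confusion counts (an illustrative helper, not the thesis code):

```python
import numpy as np

def gmean_score(y_true, y_pred):
    """Geometric mean of sensitivity and specificity for binary labels.

    Unlike plain accuracy, Gmean penalises a classifier that performs
    well on the majority class but poorly on the minority class.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return np.sqrt(sensitivity * specificity)
```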
Secondly, Bindi is presented. This is an end-to-end autonomous multimodal system
leveraging affective IoT through auditory and physiological commercial off-the-shelf
smart-sensors, hierarchical multisensorial fusion, and secured server architecture
to combat Gender-based Violence by automatically detecting risky situations
based on a multimodal intelligence engine and then triggering a protection protocol.
Specifically, this research focuses on the hardware and software design of one of
the two edge-computing devices within Bindi. This is a bracelet integrating three
physiological sensors, actuators, power monitoring integrated chips, and a System-
On-Chip with wireless capabilities. Within this context, different embedded design
space explorations are presented: embedded filtering evaluation, online physiological
signal quality assessment, feature extraction, and power consumption analysis.
The reported results of all these processes are successfully validated and, for some
of them, even compared against standard physiological measurement equipment.
Amongst the different results obtained regarding the embedded design and implementation
of the Bindi bracelet, it should be highlighted that its low power
consumption yields a battery life of approximately 40 hours when using a 500
mAh battery.
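The battery-life figure follows from the usual idealised estimate, capacity divided by average current draw; the sketch below implies an average draw of about 12.5 mA for the stated numbers. This simplification ignores discharge non-linearities, temperature effects, and regulator losses.

```python
def battery_life_hours(capacity_mah, avg_current_ma):
    """Idealised battery-life estimate: capacity / average current draw."""
    return capacity_mah / avg_current_ma

# A 500 mAh battery lasting ~40 h corresponds to an average draw of:
avg_current_ma = 500 / 40  # = 12.5 mA
```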
Finally, the particularities of our use case and the scarcity of open multimodal
datasets dealing with emotional immersive technology, a labelling methodology considering
the gender perspective, a balanced stimuli distribution regarding the target
emotions, and recovery processes based on the volunteers' physiological signals
to quantify and isolate the emotional activation between stimuli, led us to define
and build the Women and Emotion Multi-modal Affective Computing
(WEMAC) dataset. This is a multimodal dataset in which 104 women who had never
experienced Gender-based Violence viewed different emotion-related stimuli
in a laboratory environment. The previous fear binary classification
systems were improved and applied to this novel multimodal dataset. For instance,
the proposed multimodal fear recognition system using this dataset reports up to
60.20% and 67.59% for ACC and F1-score, respectively. These values represent a
competitive result in comparison with state-of-the-art works dealing with similar
multi-modal use cases.
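The ACC and F1-score figures above can likewise be derived from confusion counts; F1 is the harmonic mean of precision and recall. A minimal sketch (an illustrative helper, not the thesis code):

```python
def accuracy_f1(y_true, y_pred):
    """Accuracy and binary F1-score for 0/1 labels.

    F1 = 2 * precision * recall / (precision + recall), i.e. the
    harmonic mean of precision and recall for the positive class.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return acc, f1
```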
In general, this PhD thesis has opened a new research line within the research group
under which it has been developed. Moreover, this work has established a solid base
from which to expand knowledge and continue research targeting the generation of
both mechanisms to help vulnerable groups and socially oriented technology.
Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid
President: David Atienza Alonso. Secretary: Susana Patón Álvarez. Committee member: Eduardo de la Torre Arnan