Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses
Thesis by compendium.
[EN] In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head-mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction.
Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification.
Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. To our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments.
The main objective of this thesis is, using psycho-physiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and virtual environment, to validate immersive VR as an emotion elicitation tool. To do so we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly-realistic 3D virtualization of the same art exhibition.
This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, this thesis analyses the validity of VR by performing a direct comparison using a highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transversal impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, opening new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
Product Validation in Creative Processes: A Gender Perspective in Industrial Design Projects
[EN] Design education and practice are continuously evolving. Educational institutions must address intellectual complexities and introduce new curricula to support good design education; the future of design education draws on multidisciplinary knowledge, teaching innovation and employment needs. This paper describes a methodology centered on product validation with industrial design students, focusing on the students' experience during project execution and, in particular, on female design students' perception of the methodology and process. The academic project was the design of a novel tool board, which the students developed over a period of eight weeks. Sixteen students participated in this research. The methodology consisted of eight phases spanning from project brief to project conclusion, introducing two phases focused on validation exercises for the elements created on the way to the tool-board solution. At the end of the two validation phases, the students answered two surveys, with multiple-choice and Likert-scale items, covering their previous experience in design education and three dimensions for assessing the implementation of the methodology: utility, novelty and relevance. The survey findings revealed relevant information about the validation phases of the project: the students reflected on their previous experience developing projects and on how the tool-board project integrated important phases such as validation, and they rated the utility, novelty and relevance of the developed project positively. The most important finding, however, was the perception of the female students compared with that of the male students: their assessment of novelty and relevance increased during project implementation, with novelty perceived to a greater degree than among the male students.
These results allowed us to discover more about female students' experience with creative and validation processes.
The authors would also like to acknowledge the financial support provided by the NOVUS grant ID: N20-158-41 (Validación científica como herramienta educativa en proyectos de carácter creativo), as well as the support of the Writing Lab and TecLabs at Tecnologico de Monterrey, Mexico, throughout the production of this work.
Rojas, J.; Higuera-Trujillo, JL.; Muniz, G.; Marín-Morales, J. (2021). Product Validation in Creative Processes: A Gender Perspective in Industrial Design Projects. IEEE. 760-765. https://doi.org/10.1109/EDUCON46332.2021.9453948
Development and Calibration of an Eye-Tracking Fixation Identification Algorithm for Immersive Virtual Reality
[EN] Fixation identification is an essential task in the extraction of relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject's head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that dispersion thresholds between 1 and 1.6 degrees and time windows between 0.25 and 0.4 s are the acceptable parameter ranges, with 1 degree and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye tracking integrated into head-mounted displays, together with guidelines for calibrating fixation identification algorithms.
We thank Pepe Roda Belles for the development of the virtual reality environment and the integration of the HMD with the Unity platform. We also thank Masoud Moghaddasi for useful discussions and recommendations.
Llanes-Jurado, J.; Marín-Morales, J.; Guixeres Provinciale, J.; Alcañiz Raya, ML. (2020). Development and Calibration of an Eye-Tracking Fixation Identification Algorithm for Immersive Virtual Reality. Sensors. 20(17):1-15. https://doi.org/10.3390/s20174956
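The dispersion-threshold identification (I-DT) scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sample layout and function names are assumptions, and the default thresholds are simply the optimum values reported above (1 degree, 0.25 s).

```python
def dispersion_deg(points):
    """Dispersion = (max_x - min_x) + (max_y - min_y), in degrees of visual angle."""
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion=1.0, min_duration=0.25):
    """Classify gaze samples into fixations with a dispersion-threshold algorithm.

    samples: chronologically ordered (t_seconds, x_deg, y_deg) tuples.
    Returns one (t_start, t_end, centroid_x, centroid_y) per detected fixation.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window spanning the minimum fixation duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion_deg(samples[i:j + 1]) <= max_dispersion:
            # Expand the window while dispersion stays within the threshold.
            while j + 1 < n and dispersion_deg(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1  # samples outside fixations are treated as saccades
        else:
            i += 1
    return fixations
```

With an eye tracker integrated into an HMD, x and y would first be derived from the gaze ray relative to the head pose; that projection step is omitted here.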
Methodological bases for a new platform for the measurement of human behaviour in virtual environments
[EN] Human behaviour is analysed to evaluate the functionality and efficiency of a public space. It has classically been measured through surveys and observation, but these measurements have limitations: they are subjective valuations, influenced by the interviewer and/or the observer, and observation requires the evaluation to be made a posteriori, once the project has been executed. Nowadays, virtual reality solves these problems thanks to its capacity to represent scenarios in a realistic, immersive and interactive way, allowing human behaviour to be analysed before projects are executed, at low cost and under controlled conditions.
This article presents the methodological bases for a new platform for measuring human behaviour in virtual environments, which will assist decision-making through the pre-evaluation of spaces before they are built. An applicable methodology using current technology is described, from which metrics are derived to optimise the functionality and efficiency of spaces to be built or of existing spaces to be remodelled. The platform is cross-cutting, as it can be applied to any project in which the movement of people is a central element, whether commercial, cultural, public-amenity or leisure spaces, and several examples of practical application are presented.
The present research has been financed by the Ministry of Economy and Competitiveness, Spain (project TIN2013-45736-R).
Marín-Morales, J.; Torrecilla-Moreno, C.; Guixeres Provinciale, J.; Llinares Millán, MDC. (2017). Methodological bases for a new platform for the measurement of human behaviour in virtual environments. DYNA: Ingeniería e Industria. 92(1):34-38. https://doi.org/10.6036/7963
Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing
[EN] Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments in controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential for a transversal impact on many research areas, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues and provide guidelines for future research.
This research was funded by the European Commission, grant number H2020-825585 HELIOS.
Marín-Morales, J.; Llinares Millán, MDC.; Guixeres Provinciale, J.; Alcañiz Raya, ML. (2020). Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors. 20(18):1-26. https://doi.org/10.3390/s20185163
The background music-content congruence of TV advertisements: A neurophysiological study
[EN] Music affects viewers' responses to advertisements. In this study we present the findings of an experiment that investigates the emotional and cognitive reactions of subjects' brains during exposure to television advertisements with music congruent, and incongruent, with the advertisement content. We analyze the electroencephalography signals and eye-tracking behaviors of a group of 90 women watching six TV advertisements. The study's findings suggested that incongruent music generates higher levels of attention and advertisement recall. On the other hand, frontal asymmetry measured through electroencephalography was shown to be higher with congruent music. Cognitive workload was higher when the music was incongruent with the advertisement content. No significant differences were found in terms of advertisement likeability based on incongruent versus congruent music. The results demonstrated the validity of neurophysiological techniques for assessing the effects of levels of music congruence in advertisements.
Ausin, JM.; Bigné, E.; Marín-Morales, J.; Guixeres Provinciale, J.; Alcañiz Raya, ML. (2021). The background music-content congruence of TV advertisements: A neurophysiological study. European Research on Management and Business Economics. 27(2):1-14. https://doi.org/10.1016/j.iedeen.2021.100154
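The frontal asymmetry mentioned in the abstract is commonly computed as the difference of log-transformed alpha power between homologous frontal electrodes. Below is a minimal sketch of that standard index; the abstract does not specify the exact electrodes or derivation used in the study, so the F3/F4 pairing is illustrative.

```python
import math

def frontal_alpha_asymmetry(alpha_power_left, alpha_power_right):
    """Standard frontal alpha asymmetry index: ln(right) - ln(left) alpha power,
    e.g. over the homologous pair F4 (right) and F3 (left). Because alpha power
    is inversely related to cortical activity, positive values are usually read
    as relatively greater left-frontal activation (approach motivation).
    """
    return math.log(alpha_power_right) - math.log(alpha_power_left)
```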
Alzheimer Disease Classification through ASR-based Transcriptions: Exploring the Impact of Punctuation and Pauses
Alzheimer's Disease (AD) is the world's leading neurodegenerative disease, which often results in communication difficulties. Analysing speech can serve as a diagnostic tool for identifying the condition. The recent ADReSS challenge provided a dataset for AD classification and highlighted the utility of manual transcriptions. In this study, we used the new state-of-the-art Automatic Speech Recognition (ASR) model Whisper to obtain the transcriptions, which also include automatic punctuation. The classification models achieved test accuracy scores of 0.854 and 0.833, combining the pretrained FastText word embeddings and recurrent neural networks, on manual and ASR transcripts respectively. Additionally, we explored the influence of including pause information and punctuation in the transcriptions. We found that punctuation only yielded minor improvements in some cases, whereas pause encoding aided AD classification for both manual and ASR transcriptions across all approaches investigated.
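The pause encoding described above can be illustrated by inserting pause tokens between words according to inter-word gaps in a time-aligned transcript. The token names and thresholds below are assumptions for illustration, not the study's actual scheme:

```python
def encode_pauses(aligned_words, short_gap=0.5, long_gap=2.0):
    """Insert pause tokens into a transcript based on inter-word silence.

    aligned_words: (token, start_s, end_s) tuples from an ASR aligner.
    Gaps >= long_gap become "<LONG_PAUSE>"; gaps >= short_gap become "<PAUSE>".
    """
    out, prev_end = [], None
    for token, start, end in aligned_words:
        if prev_end is not None:
            gap = start - prev_end
            if gap >= long_gap:
                out.append("<LONG_PAUSE>")
            elif gap >= short_gap:
                out.append("<PAUSE>")
        out.append(token)
        prev_end = end
    return out
```

The resulting token sequence could then be embedded (e.g. with FastText vectors plus learned embeddings for the pause tokens) and fed to a recurrent classifier.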
Validation of a low-cost driving simulator for the design of conventional roads
[ES] The number of road-safety studies based on driving simulators is growing steadily. In this context, the Universitat Politècnica de València (UPV) has developed a low-cost driving simulator: SE2RCO (Simulador para la Evaluación, Entrenamiento y Rehabilitación de Conductores). The main objective of this research is to validate the simulator so that it can be used in studies on road safety and the geometric design of roads that incorporate the human factor. The validation was based on field observation of the continuous speed profiles developed by 28 volunteers driving their own vehicles along a 30 km stretch of conventional two-lane road. The same volunteers later drove that same stretch, reconstructed in a virtual environment, on the driving simulator. A total of 79 curves and 52 tangents were analysed. Comparing the speeds observed on the real road with those recorded during the simulation established the objective validity of the driving simulator. The results showed that the mean speed in the simulator and on the road was similar when the simulated speed was below 87.3 km/h; above that value, the mean speed on the road was lower than in the simulator. Regarding operating speed, the real speed was approximately 5 km/h lower than the simulated one. Finally, these results were supported by the drivers' perception: most of them rated the quality of the simulated environment and the degree of similarity between the real and simulated driving tasks as medium or high, thereby establishing the subjective validity of the driving simulator.

The authors would like to thank the Universitat Politècnica de València (UPV), which funded the research project "CONSIM - Desarrollo de un Modelo para la Evaluación de la Consistencia del Diseño Geométrico de Carreteras Convencionales mediante Simuladores de Conducción" (PAID 05-2012). Thanks are also due to the Ministerio de Economía y Competitividad and the European Social Fund, which funded the research project "CASEFU - Estudio experimental de la funcionalidad y seguridad de las carreteras convencionales" (TRA2013-42578-P), of which this study forms part.

Llopis-Castelló, D.; Camacho Torregrosa, FJ.; Pérez Zuriaga, AM.; Marín-Morales, J.; García García, A.; Dols Ruiz, JF. (2016). Validación de un simulador de conducción de bajo coste para el diseño de carreteras convencionales. En XII Congreso de Ingeniería del Transporte, 7-9 de junio, Valencia (España). Editorial Universitat Politècnica de València. 1866-1879. https://doi.org/10.4995/CIT2016.2016.3444
Automatic artifact recognition and correction for electrodermal activity based on LSTM-CNN models
[EN] Researchers increasingly use electrodermal activity (EDA) to assess emotional states, developing novel applications that include disorder recognition, adaptive therapy, and mental health monitoring systems. However, movement can produce major artifacts that affect EDA signals, especially in uncontrolled environments where users can freely walk and move their hands. This work develops a fully automatic pipeline for recognizing and correcting motion-related EDA artifacts, exploring the suitability of long short-term memory (LSTM) networks and convolutional neural networks (CNN). First, we constructed the EDABE dataset, comprising 74 h of EDA signals from 43 subjects collected during an immersive virtual reality task and manually corrected by two experts to provide a ground truth. The LSTM-1D CNN model produces the best performance, recognizing 72% of artifacts with 88% accuracy and outperforming two state-of-the-art methods in sensitivity, AUC, and kappa on the test set. Subsequently, we developed a polynomial regression model to correct the detected artifacts automatically. Evaluation of the complete pipeline demonstrates that the automatically and manually corrected signals do not differ in their phasic components, supporting their use in place of expert manual correction. In addition, the EDABE dataset represents the first public benchmark for comparing the performance of EDA correction models. This work provides a pipeline to automatically correct EDA artifacts under uncontrolled conditions, enabling the development of intelligent devices that recognize human emotional states without human intervention.

This work was supported by the European Commission [RHUMBO H2020-MSCA-ITN-2018-813234]; the Generalitat Valenciana, Spain [REBRAND PROMETEU/2019/105]; the MCIN/AEI, Spain [PID2021-127946OB-I00]; and the Universitat Politècnica de València, Spain [PAID-10-20].

Llanes-Jurado, J.; Carrasco-Ribelles, LA.; Alcañiz Raya, ML.; Soria-Olivas, E.; Marín-Morales, J. (2023). Automatic artifact recognition and correction for electrodermal activity based on LSTM-CNN models. Expert Systems with Applications. 230. https://doi.org/10.1016/j.eswa.2023.120581
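The correction stage of such a pipeline, fitting a polynomial to the clean samples around each detected artifact segment and replacing the artifact with the fitted values, might be sketched as below. The segment grouping, polynomial order, and context margin are illustrative assumptions, not the paper's exact model; the artifact mask would come from the LSTM-CNN detector.

```python
import numpy as np

def correct_artifacts(eda, artifact_mask, order=3, margin=32):
    """Replace artifact samples with a polynomial fitted to the clean
    samples surrounding each contiguous artifact segment."""
    eda = np.asarray(eda, dtype=float).copy()
    mask = np.asarray(artifact_mask, dtype=bool)
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return eda
    # group contiguous artifact indices into segments
    splits = np.flatnonzero(np.diff(idx) > 1) + 1
    for seg in np.split(idx, splits):
        lo = max(seg[0] - margin, 0)
        hi = min(seg[-1] + margin + 1, eda.size)
        # clean context on both sides of the segment
        ctx = np.r_[np.arange(lo, seg[0]), np.arange(seg[-1] + 1, hi)]
        if ctx.size <= order:
            continue  # not enough clean context to fit reliably
        coef = np.polyfit(ctx, eda[ctx], order)
        eda[seg] = np.polyval(coef, seg)
    return eda
```

In a full pipeline, the detector would emit the mask from sliding windows over the raw signal, and the corrected output would then feed tonic/phasic decomposition.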
An Online Attachment Style Recognition System Based on Voice and Machine Learning
[EN] Attachment styles are known to have significant associations with mental and physical health. Specifically, insecure attachment puts individuals at higher risk of suffering from mental disorders and chronic diseases. The aim of this study is to develop an attachment recognition model that can distinguish between secure and insecure attachment styles from voice recordings, exploring the importance of acoustic features while also evaluating gender differences. A total of 199 participants recorded their responses to four open questions intended to trigger their attachment system using a web-based interrogation system. The recordings were processed to obtain the standard acoustic feature set eGeMAPS, and recursive feature elimination was applied to select the relevant features. Different supervised machine learning models were trained to recognize attachment styles using both gender-dependent and gender-independent approaches. The gender-independent model achieved a test accuracy of 58.88%, whereas the gender-dependent models obtained 63.88% and 83.63% test accuracy for women and men respectively, indicating a strong influence of gender on attachment style recognition and the need to model genders separately in further studies. These results also demonstrate the potential of acoustic properties for remote assessment of attachment style, enabling fast and objective identification of this health risk factor, and thus supporting the implementation of large-scale mobile screening systems.

This work was supported in part by the Generalitat Valenciana under Grant ACIF/2021/187 and its funded project Mixed reality and brain decision - REBRAND under Grant PROMETEO/2019/105, and in part by the Universitat Politècnica de València under Grants PAID-10-20 and PAID-PD-22.

Gómez-Zaragozá, L.; Marín-Morales, J.; Parra Vargas, E.; Chicchi Giglioli, IA.; Alcañiz Raya, ML. (2023). An Online Attachment Style Recognition System Based on Voice and Machine Learning. IEEE Journal of Biomedical and Health Informatics. 27(11):5576-5587. https://doi.org/10.1109/JBHI.2023.3304369
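A gender-dependent recognition model of this kind (acoustic features, recursive feature elimination, then a supervised classifier) could be sketched with scikit-learn as below. The estimator choices and the number of selected features are illustrative assumptions, not the paper's reported configuration; the input would be one eGeMAPS feature vector (88 features) per recording.

```python
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def attachment_classifier(n_features=20):
    """Pipeline: standardize eGeMAPS features, select a subset via
    recursive feature elimination (ranked by a linear SVM), then
    classify secure vs. insecure attachment with an RBF SVM.
    For the gender-dependent approach, fit one model per gender."""
    return make_pipeline(
        StandardScaler(),
        RFE(SVC(kernel="linear"), n_features_to_select=n_features),
        SVC(kernel="rbf"),
    )
```

Fitting one such pipeline per gender, as the study's results suggest, keeps the selected feature subsets and decision boundaries independent between the two groups.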