
    Emotion elicitation and capture among real couples in the lab

    Couples’ relationships affect partners’ mental and physical well-being. Automatic recognition of couples’ emotions will not only help to better understand the interplay of emotions, intimate relationships, and health and well-being, but will also provide crucial clinical insights into protective and risk factors of relationships, and can ultimately guide interventions. However, several works develop emotion recognition algorithms using data from actors in artificial dyadic interactions, and these algorithms are unlikely to perform well on real couples. We are developing emotion recognition methods using data from real couples and, in this paper, we describe two studies we ran in which we collected emotion data from real couples: Dutch-speaking couples in Belgium and German-speaking couples in Switzerland. We discuss our approach to eliciting and capturing emotions and make five recommendations based on their relevance for developing well-performing emotion recognition systems for couples.

    IoT Applied to Irrigation Systems in Agriculture: A Usability Analysis

    The Internet of Things favors the use of technological tools in rural environments thanks to the ability of devices to connect to the Internet and facilitate daily tasks. This research aims to evaluate the usability of AgroRIEGO, a decision-support system for irrigation in agriculture, through the development of an IoT-based device. The sponsors of this project were the Ministry of Information and Communication Technologies and the Center of Excellence in Internet of Things Appropriation (CEA-IoT) in Colombia. Among the methods used is the heuristic evaluation technique, structured into 15 categories and 62 subcategories of assessment. This analysis was complemented by the contribution of a group of experts in the design and development of IoT applications and devices and in agriculture, who assessed the system's attributes.
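The aggregation step in a heuristic evaluation like the one described can be sketched in a few lines: each expert rates each category, and scores are averaged per category and overall. The category names and the 1-5 rating scale below are assumptions for illustration; the abstract only states that 15 categories and 62 subcategories were used.

```python
from statistics import mean

# Hypothetical ratings: one 1-5 score per expert for each category.
# Real evaluations would have 15 categories and 62 subcategories.
ratings = {
    "visibility_of_status": [4, 5, 3],
    "error_prevention":     [2, 3, 3],
    "connectivity":         [5, 4, 4],
}

# Average across experts per category, then across categories.
category_scores = {cat: mean(scores) for cat, scores in ratings.items()}
overall = mean(category_scores.values())
```

Reporting per-category means alongside the overall score helps pinpoint which heuristics (e.g. error prevention above) need design attention.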

    A Short Survey on Deep Learning for Multimodal Integration: Applications, Future Perspectives and Challenges

    Deep learning has achieved state-of-the-art performance in many research applications: from computer vision to bioinformatics, from object detection to image generation. In the context of these deep-learning approaches, we can define the concept of multimodality. The objective of this research field is to implement methodologies that can use several modalities as input features to perform predictions. There is a strong analogy here with human cognition, since we rely on several different senses to make decisions. In this article, we present a short survey on multimodal integration using deep-learning methods. We review the concept of multimodality comprehensively, describing it from a two-dimensional perspective: first, we provide a taxonomical description of the multimodality concept; second, we define the second multimodality dimension as the one describing fusion approaches in multimodal deep learning. Finally, we describe four applications of multimodal deep learning in the following fields of research: speech recognition, sentiment analysis, forensic applications, and image processing.
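The fusion dimension described above can be illustrated with a toy sketch contrasting early (feature-level) and late (decision-level) fusion, the two ends of the fusion-approach spectrum such surveys typically cover. The modality names, dimensions, and random linear scorers below are purely illustrative, not from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy modalities for the same 4 samples: e.g. 3-dim audio
# descriptors and 5-dim visual descriptors (dimensions assumed).
audio = rng.normal(size=(4, 3))
video = rng.normal(size=(4, 5))

# Early fusion: concatenate modality features into one vector,
# then feed a single predictor.
early = np.concatenate([audio, video], axis=1)  # shape (4, 8)

# Late fusion: each modality gets its own (here untrained, random)
# linear scorer; per-modality class scores are averaged afterwards.
n_classes = 2
w_audio = rng.normal(size=(3, n_classes))
w_video = rng.normal(size=(5, n_classes))
late_scores = (audio @ w_audio + video @ w_video) / 2
pred = late_scores.argmax(axis=1)
```

Intermediate fusion schemes, which merge learned representations inside the network rather than raw features or final scores, sit between these two extremes.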

    Evaluation of Machine Learning Algorithms for Emotions Recognition using Electrocardiogram

    In recent studies, researchers have focused on using various modalities to recognize emotions for different applications. A major challenge is identifying emotions correctly with only electrocardiograms (ECG) as the modality. The main objective is to reduce costs by using single-modality ECG signals to predict human emotional states. This paper presents an emotion recognition approach utilizing heart rate variability features obtained from ECG, with feature selection techniques (exhaustive feature selection (EFS) and Pearson’s correlation) used to train the classification models. Seven machine learning (ML) models are used to classify emotional state: multi-layer perceptron (MLP), Support Vector Machine (SVM), Decision Tree (DT), Gradient Boosting Decision Tree (GBDT), Logistic Regression, AdaBoost, and Extra Tree classifier. Two public datasets, DREAMER and SWELL, are used for evaluation. The results show that no particular ML model works best for all data. For DREAMER with EFS, the best models to predict valence, arousal, and dominance are Extra Tree (74.6%), MLP and DT (74.6%), and GBDT and DT (69.8%), respectively. Extra Tree with Pearson’s correlation is the best method for the ECG SWELL dataset, providing 100% accuracy. The use of the Extra Tree classifier and feature selection techniques contributes to the improvement of model accuracy. Moreover, the Friedman test showed that Extra Tree is as good as the other classification models for predicting human emotional state and ranks highest. DOI: 10.28991/ESJ-2023-07-01-011
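The two selection schemes named in the abstract can be sketched on synthetic data: a Pearson-correlation filter keeps features that correlate with the label, while exhaustive feature selection scores every feature subset with a classifier and keeps the best. The data, threshold, and the nearest-centroid stand-in classifier below are assumptions for illustration; the paper uses real HRV features and the seven ML models listed.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for HRV features and a binary arousal label;
# feature 0 is made informative on purpose.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 5))
X[:, 0] += y  # shift feature 0 by the class label

# Pearson-correlation filter: keep features whose |r| with the
# label exceeds a (here arbitrary) threshold.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.where(np.abs(r) > 0.3)[0]

def accuracy(cols):
    """Training accuracy of a toy nearest-class-centroid rule
    restricted to the given feature columns."""
    Xc = X[:, list(cols)]
    mu0, mu1 = Xc[y == 0].mean(axis=0), Xc[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xc - mu1, axis=1)
            < np.linalg.norm(Xc - mu0, axis=1)).astype(int)
    return (pred == y).mean()

# Exhaustive feature selection: evaluate every non-empty subset
# of the 5 features and keep the highest-scoring one.
best = max((c for k in range(1, 6)
            for c in itertools.combinations(range(5), k)),
           key=accuracy)
```

Exhaustive search is only feasible for small feature counts (2^n - 1 subsets); the correlation filter scales linearly and is often used as a cheap pre-filter.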

    Improving the accuracy of automatic facial expression recognition in speaking subjects with deep learning

    When automatic facial expression recognition is applied to video sequences of speaking subjects, the recognition accuracy has been noted to be lower than with video sequences of still subjects. This effect, known as the speaking effect, arises during spontaneous conversations: along with affective expressions, the speech articulation process influences facial configurations. In this work, we question whether, aside from facial features, other cues related to the articulation process would increase emotion recognition accuracy when added as input to a deep neural network model. We develop two neural networks that classify facial expressions of speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-cell RNN. They are first trained on facial features only, and afterwards on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that with DNNs the addition of articulation-related features increases classification accuracy by up to 12%, the increase being greater when more consecutive frames are provided as input to the model.
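The input pipeline implied above, concatenating articulation cues with facial features per frame and varying the number of consecutive frames, can be sketched as follows. The feature dimensions and sequence length are assumptions; the actual descriptors come from face analysis and a lip-reading model as described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-frame descriptors for one 30-frame utterance:
# facial features (64-dim) and articulation cues from a lip-reading
# model (32-dim); both dimensionalities are assumed.
T, F_FACE, F_ARTIC = 30, 64, 32
face = rng.normal(size=(T, F_FACE))
artic = rng.normal(size=(T, F_ARTIC))

# Frame-level fusion: concatenate the two cue streams so a
# spatio-temporal model sees both at every timestep.
fused = np.concatenate([face, artic], axis=1)  # shape (30, 96)

def windows(x, n):
    """Split a sequence into overlapping inputs of n consecutive
    frames, the hyperparameter varied in the study."""
    return np.stack([x[i:i + n] for i in range(len(x) - n + 1)])

batch = windows(fused, 8)  # shape (23, 8, 96), fed to a CNN or GRU
```

Larger windows give the temporal model more articulation context per decision, at the cost of fewer (and more correlated) training examples per utterance.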