8,774 research outputs found

    Towards a comprehensive 3D dynamic facial expression database

    Human faces play an important role in everyday life, including the expression of personal identity, emotion and intentionality, along with a range of biological functions. The human face has also become the subject of considerable research effort, and there has been a shift towards understanding it using increasingly realistic stimulus formats. In the current work, we outline progress made in the production of a database of facial expressions in arguably the most realistic format, 3D dynamic. A suitable architecture for capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear, disgust, anger, happiness, surprise, sadness and pain) by 10 actors at 3 levels of intensity (mild, normal and extreme). We also present details of a psychological experiment that was used to formally evaluate the accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers and practitioners. The goal is to scale up the work with more actors and expression types.
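    The recording protocol above defines a factorial stimulus grid. A minimal sketch (the actor identifiers are invented for illustration) enumerates the combinations and confirms the database size implied by the design:

    ```python
    from itertools import product

    # Factors taken from the abstract: 7 expressions x 10 actors x 3 intensities
    expressions = ["fear", "disgust", "anger", "happiness", "surprise", "sadness", "pain"]
    actors = [f"actor_{i:02d}" for i in range(1, 11)]  # hypothetical actor IDs
    intensities = ["mild", "normal", "extreme"]

    # Each recording session is one (expression, actor, intensity) combination.
    sessions = list(product(expressions, actors, intensities))
    print(len(sessions))  # 7 * 10 * 3 = 210 sequences
    ```

    Enumerating the grid this way also gives a natural file-naming scheme for the captured sequences.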

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Py-Feat: Python Facial Expression Analysis Toolbox

    Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly, open-source software that provides a comprehensive set of tools and functions to support facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models, and for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research.
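    As an illustration of the kind of post-detection analysis such a toolbox supports, here is a minimal, library-free sketch. It is not Py-Feat's actual API; the per-frame emotion scores and labels are invented, standing in for what a facial expression detector might emit for a short video:

    ```python
    # Hypothetical per-frame emotion probabilities for one video clip
    # (one dict per frame; all values are invented for illustration).
    frames = [
        {"happiness": 0.7, "sadness": 0.1, "anger": 0.2},
        {"happiness": 0.6, "sadness": 0.3, "anger": 0.1},
        {"happiness": 0.8, "sadness": 0.1, "anger": 0.1},
    ]

    def dominant_emotion(frames):
        """Average each emotion's score across frames and return the highest-scoring one."""
        totals = {}
        for frame in frames:
            for emotion, score in frame.items():
                totals[emotion] = totals.get(emotion, 0.0) + score
        means = {emotion: total / len(frames) for emotion, total in totals.items()}
        return max(means, key=means.get)

    print(dominant_emotion(frames))  # happiness
    ```

    In practice a toolbox like Py-Feat would produce such frame-wise scores from video input; the aggregation step above is the sort of downstream summary a social science workflow typically needs.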

    Towards affective computing that works for everyone

    Missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions impacting facial expressions and speech, or age-related changes in facial appearance and health. Our analysis of existing affective computing datasets highlights a disconcerting lack of diversity regarding race, sex/gender, age, and (mental) health representation. By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, this paper provides recommendations and calls for greater attention to inclusivity and consideration of societal consequences in affective computing research to promote ethical and accurate outcomes in this emerging field. (8 pages; 2023 11th International Conference on Affective Computing and Intelligent Interaction, ACII.)

    Individual Differences in Performance Speed Are Associated With a Positivity/Negativity Bias. An ERP and Behavioral Study

    There is a current dispute over the origins, incidence, and development of Positivity Bias, i.e., preferential processing of positive relative to negative information. We addressed this question using a multi-method combination of behavioral, psychometric and event-related potential (ERP) measures in a lexical decision task (LDT). Twenty-four university students (11 female) participated (age range 18–26), but four were omitted owing to data issues. Participants were classified as Positivity Biased (PB) if their LDT responses to positive words were faster than their responses to negative words, and as Negativity Biased (NB) in the opposite case, yielding a group of 11 PB participants and a group of 9 NB participants. Interestingly, the PB group was significantly faster overall than the NB group and had significantly shorter P2 component ERP latencies in the left occipital region. Furthermore, the PB group had significantly higher scores for expressive suppression (ES), together with higher scores for Crystallized Knowledge and for cognitive reappraisal (CR). These results suggest that around 55% of the students had a Positivity Bias, and that these students were more efficient in processing information and had better emotion regulation abilities than those with a Negativity Bias.
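    The behavioral classification rule described above is a simple comparison of mean response times. A minimal sketch (the response times below are invented, not the study's data) makes the rule explicit:

    ```python
    def bias_group(mean_rt_positive_ms, mean_rt_negative_ms):
        """Classify a participant from lexical-decision response times:
        faster responses to positive words -> Positivity Biased (PB),
        otherwise -> Negativity Biased (NB)."""
        return "PB" if mean_rt_positive_ms < mean_rt_negative_ms else "NB"

    # Two hypothetical participants (mean RTs in milliseconds, invented)
    print(bias_group(540.0, 580.0))  # PB: faster to positive words
    print(bias_group(610.0, 575.0))  # NB: faster to negative words
    ```

    Applying this rule to the 20 retained participants is what produced the 11 PB / 9 NB split reported in the abstract.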

    Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review

    Background: Human Emotion Recognition (HER) has been a popular field of study in recent years. Despite the great progress made so far, relatively little attention has been paid to the use of HER in autism. People with autism are known to face problems with daily social communication and the prototypical interpretation of emotional responses, which are most frequently expressed via facial expressions. This poses significant practical challenges to the application of regular HER systems, which are normally developed for and by neurotypical people. Objective: This study reviews the literature on the use of HER systems in autism, particularly with respect to sensing technologies and machine learning methods, so as to identify existing barriers and possible future directions. Methods: We conducted a systematic review of articles published between January 2011 and June 2023 according to the 2020 PRISMA guidelines. Manuscripts were identified by searching the Web of Science and Scopus databases. Manuscripts were included when they related to emotion recognition, used sensors and machine learning techniques, and involved children, young people, or adults with autism. Results: The search yielded 346 articles. A total of 65 publications met the eligibility criteria and were included in the review. Conclusions: Studies predominantly used facial expression techniques as the emotion recognition method. Consequently, video cameras were the most widely used devices across studies, although a growing trend in the use of physiological sensors has been observed recently. Happiness, sadness, anger, fear, disgust, and surprise were the most frequently addressed emotions. Classical supervised machine learning techniques were primarily used, rather than unsupervised approaches or more recent deep learning models. Studies focused on autism in a broad sense, but limited efforts have been directed towards more specific disorders of the spectrum.
    Privacy and security issues were seldom addressed and, when they were, only at an insufficient level of detail.

    A Framework for Students Profile Detection

    Some of the biggest problems facing Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, socio-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. The problem is worsened by a shortage of educational resources that could bridge the communication gap between faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data on an array of emotions, affects and behaviours, acquired either through human observation, such as by a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye-tracking devices, emotion detection through facial expression recognition software, automatic gait and posture detection, and others. The framework provides guidance for compiling the gathered data into an ontology, enabling the extraction of patterns and outliers via machine learning, which assists in profiling students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, real-time alerts can be set when these profile conditions are detected, so that appropriate experts can verify the situation and apply effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students' problems, a faster personalized response to help the student is enabled, allowing academic performance improvements.
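    The real-time alerting step described above amounts to checking profiled indicator scores against thresholds. A minimal sketch (student names, indicator labels, scores, and the threshold are all invented assumptions, not part of the dissertation) illustrates the idea:

    ```python
    # Hypothetical per-student indicator scores in [0, 1], as the framework's
    # human observers or sensors might supply them (all values invented).
    students = {
        "student_a": {"disengagement": 0.82, "attention_deficit": 0.30},
        "student_b": {"disengagement": 0.15, "attention_deficit": 0.20},
    }

    ALERT_THRESHOLD = 0.75  # assumed cutoff for flagging a critical profile

    def critical_alerts(students, threshold=ALERT_THRESHOLD):
        """Return (student, indicator) pairs whose score reaches the threshold,
        so an expert can be notified to verify the situation."""
        return [
            (name, indicator)
            for name, scores in students.items()
            for indicator, score in scores.items()
            if score >= threshold
        ]

    print(critical_alerts(students))  # [('student_a', 'disengagement')]
    ```

    In the full framework the scores would come from the ontology-backed machine learning stage rather than being supplied directly.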