9,892 research outputs found

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Facial Expression Recognition


    Schizotypy, Alexithymia and Affect as predictors of Facial Emotion Recognition Capability using static and dynamic images

    The main purpose of the present study was to investigate the capacity of schizotypy and alexithymia traits, in combination with affectivity, to predict facial emotion recognition capability in a sample of nonclinical adults. Consecutive healthy participants (N = 98) were assessed using the Toronto Alexithymia Scale-20 (TAS-20), the Oxford-Liverpool Inventory of Feelings and Experiences-Reduced Version (O-LIFE-R), and the Positive and Negative Affect Schedule (PANAS). A validated set of photographs (static images) and virtual faces (dynamic images) presenting the basic emotions was used to assess emotion recognition. Pearson correlations were applied to investigate the relationships between the study variables; the amount of variance in emotion recognition capability predicted by the O-LIFE-R, TAS-20 and PANAS was estimated using a linear regression model. Results showed that alexithymia was strongly associated with schizotypy and negative affect; furthermore, alexithymia and negative affect made a significant contribution to the prediction of emotion recognition capability, in particular of the errors committed in the facial recognition task. The predictive model held for both types of presentation (photographs and virtual faces). The inclusion of virtual faces responds to the need to consider computer characters as new assessment and treatment material for research and therapy in psychology.
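    The analysis pipeline this abstract describes (Pearson correlations among questionnaire scores, then a linear regression predicting recognition performance from TAS-20, O-LIFE-R and PANAS scores) can be sketched as follows. This is a minimal illustration: the variable names, coefficients and synthetic data are invented for the sketch and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 98  # sample size reported in the abstract

# Hypothetical questionnaire scores; in the study these come from the scales.
tas20 = rng.normal(50, 10, n)       # alexithymia (TAS-20)
olife = rng.normal(20, 5, n)        # schizotypy (O-LIFE-R)
neg_affect = rng.normal(18, 6, n)   # negative affect (PANAS-NA)
# Outcome: errors on the facial emotion recognition task (synthetic).
errors = 0.3 * tas20 + 0.2 * neg_affect + rng.normal(0, 5, n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

print("r(TAS-20, PANAS-NA) =", round(pearson_r(tas20, neg_affect), 2))

# Multiple linear regression: errors ~ intercept + TAS-20 + O-LIFE-R + PANAS-NA
X = np.column_stack([np.ones(n), tas20, olife, neg_affect])
beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
pred = X @ beta
# Proportion of variance in recognition errors explained by the predictors.
r2 = 1 - np.sum((errors - pred) ** 2) / np.sum((errors - errors.mean()) ** 2)
print("R^2 =", round(r2, 2))
```

    In the study's terms, the fitted coefficients correspond to each trait's contribution to predicting recognition errors, and R² to the amount of variance explained by the model.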

    CGAMES'2009


    Virtual reality facial emotion recognition in social environments:An eye-tracking study

    BACKGROUND: Virtual reality (VR) enables the administration of realistic and dynamic stimuli within a social context for the assessment and training of emotion recognition. We tested a novel VR emotion recognition task by comparing emotion recognition across VR, video and photo tasks, investigating covariates of recognition and exploring visual attention in VR. METHODS: Healthy individuals (n = 100) completed three emotion recognition tasks: a photo, a video and a VR task. During the VR task, participants rated the emotions of virtual characters (avatars) in a VR street environment while their eye movements were tracked. RESULTS: Recognition accuracy in VR (75% overall) was comparable to the photo and video tasks. However, there were some differences: disgust and happiness had lower accuracy rates in VR, whereas surprise and anger were recognized more accurately in VR than in the video task. Participants spent more time identifying disgust, fear and sadness than surprise and happiness. In general, attention was directed longer to the eye and nose areas than to the mouth. DISCUSSION: Immersive VR tasks can be used for the training and assessment of emotion recognition. VR enables easily controllable avatars within environments relevant to daily life. Validated emotional expressions and tasks will be of relevance for clinical applications.
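    As a rough illustration of how results like these are summarized, the sketch below computes per-emotion recognition accuracy and mean dwell time per face region from a trial log. The log format, region names and numbers are invented for the example, not taken from the study.

```python
from collections import defaultdict

# Hypothetical trial log: (displayed emotion, response, dwell time per region in ms).
trials = [
    ("anger",     "anger",     {"eyes": 820, "nose": 410, "mouth": 260}),
    ("disgust",   "anger",     {"eyes": 700, "nose": 520, "mouth": 300}),
    ("surprise",  "surprise",  {"eyes": 900, "nose": 380, "mouth": 240}),
    ("happiness", "happiness", {"eyes": 650, "nose": 300, "mouth": 420}),
]

def per_emotion_accuracy(trials):
    """Fraction of correct responses for each displayed emotion."""
    hits, total = defaultdict(int), defaultdict(int)
    for truth, response, _ in trials:
        total[truth] += 1
        hits[truth] += truth == response
    return {emotion: hits[emotion] / total[emotion] for emotion in total}

def mean_dwell(trials):
    """Mean fixation time per face region, averaged across all trials."""
    sums = defaultdict(float)
    for _, _, regions in trials:
        for region, ms in regions.items():
            sums[region] += ms
    return {region: total / len(trials) for region, total in sums.items()}

print(per_emotion_accuracy(trials))
print(mean_dwell(trials))
```

    Aggregations of this shape would reproduce the paper's two headline measures: accuracy broken down by emotion, and relative attention to the eye, nose and mouth areas.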

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The conferences, European research projects and research publications presented illustrate the recent increase of interest in AC within the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few of the areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The amount of work and the projects reviewed in this paper attest to an ambitious and optimistic synergetic future for the affective medicine field.