
    Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task

    Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capability of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot which allows the specification of a primary task (a waving gesture, transporting an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A mapping from these features to the joints of the robot is presented. A user study was conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are conveyed very well, calm is conveyed moderately well, and fear is not conveyed well. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach can be regarded as a promising means of conveying emotions as a lower-priority task.
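
    The core of this approach, executing the emotional motion in the null space of the primary task's Jacobian, can be sketched in a few lines. The following minimal numpy sketch (the function name and toy Jacobian are illustrative, not taken from the paper) projects a secondary joint-velocity command through the null-space projector so it cannot disturb the primary Cartesian task:

        import numpy as np

        def task_priority_velocities(J, x_dot_primary, q_dot_emotion):
            """Joint velocities for a primary Cartesian task, with a secondary
            joint-space motion (e.g. emotion-conveying posture changes) projected
            into the Jacobian null space so it cannot disturb the primary task."""
            J_pinv = np.linalg.pinv(J)        # Moore-Penrose pseudoinverse
            n = J.shape[1]                    # number of joints
            N = np.eye(n) - J_pinv @ J        # null-space projector of J
            return J_pinv @ x_dot_primary + N @ q_dot_emotion

        # Toy example: 3 task DoF, 7 joints (a redundant arm).
        J = np.random.default_rng(0).standard_normal((3, 7))
        x_dot = np.array([0.1, 0.0, 0.0])     # primary task: end-effector velocity
        q_dot_emotion = 0.2 * np.ones(7)      # secondary: slow, low-arousal sway
        q_dot = task_priority_velocities(J, x_dot, q_dot_emotion)
        print(np.allclose(J @ q_dot, x_dot))  # True: the sway leaves the task intact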

    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters (VCs) is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is a mechanism that is easy to use and flexible, but also validated. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested the recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blended emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of VC distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subjects setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and the confusion patterns of the basic emotions are compatible with findings in psychology.
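
    The abstract does not give the report's exact emotion-to-Action-Unit tables, so the sketch below substitutes commonly cited FACS associations as placeholders. It illustrates how basic emotions could be encoded as Action Unit activations and blended at varying intensity:

        # A minimal sketch of FACS-style expression generation. The AU sets below
        # are common emotion-to-Action-Unit associations from the FACS literature,
        # not necessarily the mappings used in the report.
        BASIC_EMOTIONS = {
            "joy":      {6: 1.0, 12: 1.0},                  # cheek raiser, lip corner puller
            "anger":    {4: 1.0, 5: 1.0, 7: 1.0, 23: 1.0},  # brow lowerer, lid/lip tightener
            "sadness":  {1: 1.0, 4: 1.0, 15: 1.0},          # inner brow raiser, lip corner depressor
            "surprise": {1: 1.0, 2: 1.0, 5: 1.0, 26: 1.0},  # brow/lid raisers, jaw drop
            "disgust":  {9: 1.0, 15: 1.0},                  # nose wrinkler, lip corner depressor
            "fear":     {1: 1.0, 2: 1.0, 4: 1.0, 20: 1.0, 26: 1.0},
        }

        def expression(emotion_weights, intensity=1.0):
            """Blend one or more emotions into a single AU-activation dict."""
            aus = {}
            for emotion, weight in emotion_weights.items():
                for au, activation in BASIC_EMOTIONS[emotion].items():
                    aus[au] = aus.get(au, 0.0) + weight * activation * intensity
            return aus

        # A "frustrated" blend could mix anger and sadness at half intensity:
        print(expression({"anger": 0.6, "sadness": 0.4}, intensity=0.5))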

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
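
    The abstract does not detail the combination mechanism, but the idea of layering motions from different animation techniques on different body parts, and concatenating them over time, can be illustrated as follows. The joint layout and motion sources are hypothetical, and a real system would blend joint rotations with quaternion slerp rather than linear interpolation:

        import numpy as np

        # Hypothetical joint layout: 0-2 spine, 3-6 left arm, 7-10 right arm.
        BODY_PARTS = {"spine": range(0, 3), "left_arm": range(3, 7), "right_arm": range(7, 11)}
        N_JOINTS = 11

        def combine(poses_by_part):
            """Compose one pose from per-body-part sources, e.g. a keyframed wave
            on the right arm layered over procedural idle motion elsewhere."""
            pose = np.zeros(N_JOINTS)
            for part, source_pose in poses_by_part.items():
                idx = list(BODY_PARTS[part])
                pose[idx] = source_pose[idx]
            return pose

        def crossfade(pose_a, pose_b, t):
            """Concatenate two motions by blending over a transition, t in [0, 1]."""
            return (1.0 - t) * pose_a + t * pose_b

        idle = np.zeros(N_JOINTS)                                  # e.g. procedural idle
        wave = np.random.default_rng(1).standard_normal(N_JOINTS)  # e.g. motion capture
        pose = combine({"spine": idle, "left_arm": idle, "right_arm": wave})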

    Impact of Human Likeness on Ethical Decision Making about Medical Dilemmas

    Humans are often represented in computer interfaces as graphical characters. These characters, or embodied agents, are used to increase people’s comfort level and humanize the interaction. While the impact of these characters has been studied in various ways, their influence on the ability of humans to make decisions of ethical consequence has yet to be explored. Interface designers have to make decisions in the design process that greatly influence how people interact with a system. If a seemingly insignificant design decision could have a significant impact on how a human reacts to the system, then that warrants exploration. This study presents online participants with an ethical dilemma delivered by a female conversational character, and explores the differences in the decisions made based on the motion quality and human likeness of the character. In the five conditions, which vary in motion quality and human likeness, participants showed no significant difference in the ethical decision. However, the data indicated that male participants were significantly more likely to rule against the character when the motion quality was jerky or when the character was represented by a computer-generated figure instead of a real woman. These findings extend previous work on interpersonal judgment, indicating that a virtual person’s appearance can influence supposedly impartial ethical decisions.

    On Emotion and GRACE (Toward a Unified Computational Model of Emotions)

    Emotion, as psychologists such as A. Damasio, K. R. Scherer and P. Ekman argue, is an essential factor for human beings in decision making, in learning and creativity, and in social interaction. On this basis, researchers in human-machine interaction have been interested in adding emotional abilities to their applications. With the same goal, we propose GRACE, a generic model of emotions for computational applications. The model builds in particular on the psychological theory of Klaus R. Scherer, who seeks to produce a theory of emotional processes that can be modelled and computed. The relevance of our model has been verified and validated through a comparison with existing computational models of emotions in informatics and robotics. While GRACE is generic, we show that it can be instantiated in a particular context, namely human-robot interaction using the musical modality, by developing its Cognitive Interpretation and Expression components. For Cognitive Interpretation, we worked on extracting the emotional content of musical excerpts; the main contribution is a reduced set of musical features, validated via a learning system on a large database designed by an expert in musicology. For Expression, we designed emotionally expressive movements for a mobile robot; we showed experimentally that a robot with very limited expressive capabilities (displacement, camera movements) could nevertheless convey a reduced set of basic emotions (happiness, sadness, anger, serenity) satisfactorily through dance-inspired motions.
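
    The abstract does not list the reduced feature set, but a minimal sketch of the music-analysis side, using common arousal and valence proxies (tempo, energy, spectral brightness) computed with the librosa library, might look like the following. The features and thresholds are illustrative assumptions, not the thesis's actual ones:

        import numpy as np
        import librosa

        def music_emotion_features(path):
            """Illustrative reduced feature set: tempo and energy as arousal
            proxies, spectral brightness as a rough valence proxy."""
            y, sr = librosa.load(path, mono=True)
            tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
            return {
                "tempo": float(np.atleast_1d(tempo)[0]),
                "energy": float(np.mean(librosa.feature.rms(y=y))),
                "brightness": float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))),
            }

        def quadrant(feats, tempo_mid=110.0, bright_mid=2000.0):
            """Map features to a coarse emotion quadrant (thresholds are made up)."""
            arousal = feats["tempo"] > tempo_mid    # fast music reads as high arousal
            valence = feats["brightness"] > bright_mid
            return {(True, True): "happiness", (True, False): "anger",
                    (False, True): "serenity", (False, False): "sadness"}[(arousal, valence)]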