6 research outputs found

    iGrace – Emotional Computational Model for EmI Companion Robot.

    Chapter 4. In this chapter we discuss research in the field of emotional interaction, aimed at maintaining non-verbal interaction with children aged 4 to 8. This work is part of the EmotiRob project, whose goal is to comfort vulnerable and/or hospitalized children with an emotional companion robot. Since the use of robots in hospitals is still limited, we decided to favor a simple robot architecture and, consequently, to emphasize emotional expression: in this context, an overly complex or bulky robot must be avoided. After a survey of current research on emotional perception and synthesis, it was important to determine the most appropriate way to express emotions in order to achieve a recognition rate acceptable to our target audience. An experiment on this subject allowed us to determine the degrees of freedom the robot needs to express the six primary emotions. The second step was the definition and description of our emotional model. In order to obtain a wide range of expressions while respecting the number of degrees of freedom, we use the concept of emotional experiences, which provides nearly two hundred different behaviors for the model; as a first step, however, we decided to limit ourselves to fifty behaviors. This diversification is made possible by a mixing of emotions tied to the dynamics of emotion. With this theoretical model established, we began various experiments with a variety of audiences in order to validate, first, its relevance and the emotion recognition rate. The first experiment was performed using a simulator for speech capture and for the emotional and behavioral synthesis of the robot. It validated the model assumptions that will be integrated into EmI – the Emotional Model of Interaction. Future phases of the project will evaluate the robot both in its expressiveness and in the comfort it provides to children. We describe the protocols used and present the results for EmI.
These experiments will allow us to adjust and adapt the model. We finish this chapter with a brief description of the robot's architecture and the improvements to be made for the second version of EmI.
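The abstract above describes generating a wide range of expressive behaviors from six primary emotions by mixing them over a small number of degrees of freedom. The sketch below illustrates that idea only in spirit: the emotion-to-actuator mapping, the degree-of-freedom names, and the blending rule are all invented for illustration and are not the iGrace/EmI model.

```python
# Hypothetical mapping from the six primary emotions to actuator
# set-points for a few degrees of freedom (eyebrows, mouth, head tilt).
# Values in [-1, 1] are illustrative assumptions, not the EmotiRob model.
PRIMARY_EMOTIONS = {
    "joy":      {"brows": 0.3,  "mouth": 0.9,  "head": 0.2},
    "sadness":  {"brows": -0.6, "mouth": -0.7, "head": -0.5},
    "anger":    {"brows": -0.9, "mouth": -0.4, "head": 0.1},
    "fear":     {"brows": 0.7,  "mouth": -0.5, "head": -0.3},
    "surprise": {"brows": 0.9,  "mouth": 0.6,  "head": 0.4},
    "disgust":  {"brows": -0.4, "mouth": -0.8, "head": -0.1},
}

DOFS = ("brows", "mouth", "head")

def blend(weights):
    """Blend primary emotions into one expression by weighted average."""
    total = sum(weights.values())
    pose = {dof: 0.0 for dof in DOFS}
    if total <= 0:
        return pose  # neutral expression when no emotion is active
    for emotion, w in weights.items():
        for dof, value in PRIMARY_EMOTIONS[emotion].items():
            pose[dof] += w * value
    return {dof: v / total for dof, v in pose.items()}

# A "bittersweet" mix: mostly joy with a trace of sadness.
mix = blend({"joy": 0.7, "sadness": 0.3})
```

A mixing scheme like this yields many distinct behaviors from few actuators, which is the motivation the abstract gives for limiting the robot's mechanical complexity.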

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Analyse du contenu expressif des gestes corporels

    Nowadays, research dealing with gesture analysis suffers from a lack of unified mathematical models. On the one hand, gesture formalizations from the human sciences remain purely theoretical and do not lend themselves to quantification. On the other hand, commonly used motion descriptors are generally purely intuitive and limited to the visual aspects of the gesture. In the present work, we retain Laban Movement Analysis (LMA – originally designed for the study of dance movements) as a framework for building our own gesture descriptors, based on expressivity. Two datasets are introduced: the first, called ORCHESTRE-3D, is composed of pre-segmented orchestra conductors' gestures annotated with the help of a lexicon of musical emotions; the second, HTI 2014-2015, comprises sequences of multiple daily actions. In a first experiment, we define a global feature vector based on the expressive indices of our model and dedicated to characterizing the whole gesture. This descriptor is used for action recognition and to discriminate the different emotions in our orchestra conductors' dataset. In a second approach, the different elements of our expressive model are used as a frame descriptor (i.e., describing the gesture at a given instant). The feature space provided by such local characteristics is used to extract key poses of the motion. With the help of such poses, we obtain a per-frame sub-representation of body motions suitable for real-time action recognition.
    Today, research on gesture lacks generic models. Gesture specialists must oscillate between an excessively conceptual formalization and a purely visual description of movement. We take up the concepts developed by the choreographer Rudolf Laban for the analysis of classical and contemporary dance, and propose their extension in order to build a generic model of gesture based on its expressive elements. We also present two corpora of 3D gestures that we have assembled. The first, ORCHESTRE-3D, is composed of pre-segmented gestures of orchestra conductors recorded in rehearsal; its annotation with musical emotions is intended for the study of the emotional content of musical conducting. The second corpus, HTI 2014-2015, offers sequences of varied everyday actions. In a first, so-called "global" recognition approach, we define a descriptor that relates to the gesture as a whole. This type of characterization allows us to discriminate various actions, as well as to recognize the different musical emotions carried by the conductors' gestures in our ORCHESTRE-3D corpus. In a second, so-called "dynamic" approach, we define a per-frame gesture descriptor (i.e., defined for every instant of the gesture). The frame descriptors are used to extract key poses of the motion, so as to obtain at every instant a simplified representation usable for recognizing actions on the fly. We test our approach on several gesture datasets, including our own HTI 2014-2015 corpus.
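The abstract above describes computing a per-frame expressive descriptor and using it to extract key poses of a motion. The sketch below is a loose illustration of that pipeline under strong simplifying assumptions: it tracks a single joint in 2D, uses speed and acceleration magnitude as stand-ins for Laban Effort-like qualities, and selects key poses as the frames with the largest acceleration. None of this is the thesis's actual descriptor.

```python
import math

def frame_descriptors(traj):
    """Per-frame (speed, acceleration-magnitude) features for the
    interior frames of a trajectory of (x, y) joint positions.
    Speed and acceleration are crude proxies for expressive qualities."""
    feats = []
    for t in range(1, len(traj) - 1):
        (x0, y0), (x1, y1), (x2, y2) = traj[t - 1], traj[t], traj[t + 1]
        vx, vy = x2 - x0, y2 - y0                      # central-difference velocity
        ax, ay = x0 - 2 * x1 + x2, y0 - 2 * y1 + y2    # second difference
        feats.append((math.hypot(vx, vy) / 2.0, math.hypot(ax, ay)))
    return feats

def key_poses(feats, k):
    """Pick the k interior frames with the largest acceleration magnitude,
    a toy stand-in for key-pose extraction from the feature space."""
    ranked = sorted(range(len(feats)), key=lambda i: feats[i][1], reverse=True)
    return sorted(ranked[:k])

# Synthetic trajectory: steady motion along x, then an abrupt turn upward.
traj = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
feats = frame_descriptors(traj)
poses = key_poses(feats, 1)  # the turn should stand out as the key pose
```

With such per-frame features, a sequence can be reduced to a handful of key poses, which is what makes on-the-fly ("à la volée") action recognition tractable in the approach the abstract outlines.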