
    ClassRoom VR-Motion Capture (CVR-MC): a VR game to improve corporal expression in secondary-school teachers

    Final degree project (Trabajo de Fin de Grado) in Video Game Development and Software Engineering, Facultad de Informática UCM, Departamento de Ingeniería del Software e Inteligencia Artificial, academic year 2020/2021.

    Having qualified and experienced teachers in school classrooms is a high priority in any society. The reality, however, is that most teachers report not having received enough practical training to manage unfamiliar situations, such as classroom conflict. The lack of safe environments in which to practise these skills creates a pressing problem that must be addressed as soon as possible. When facing conflict, it is extremely important to be assertive and to express with the body what we mean with our words. Effective communication is a challenge for most teachers at the start of their professional careers; fortunately, it can be taught, and it improves conflict-resolution skills.

    In this project, we present the implementation and evaluation of ClassRoom VR-Motion Capture (CVR-MC), an application that allows new teachers to experience disruptive classroom situations and react to them in a safe environment. During the simulation, data related to non-verbal language are collected: voice tone variations, gestures and postures, distance between interlocutors (proxemics), and keywords. From these data, we estimate the emotion conveyed by the user's non-verbal language. After their performance, users receive feedback on the decisions they made to resolve the conflict, together with an analysis of the non-verbal language used and the estimated emotions. After developing the application, we carried out an experiment with 14 education professionals from Barcelona. This document describes the test carried out with ClassRoom VR-Motion Capture to answer two research questions. First, is it feasible to use the CVR-MC system in teacher training to help develop communication skills for classroom climate management? Second, is it possible to capture the non-verbal language and the associated emotions that participants express during the simulation, and do they match those expressed in real environments? From this test we conclude that CVR-MC is a friendly, safe, and feasible environment for preparing future teachers. However, we also observed that participants' non-verbal language, and therefore the emotions they transmit, do not match between real and virtual environments.
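    The abstract does not detail CVR-MC's estimation model, so the following Python sketch only illustrates the general idea of mapping captured non-verbal features to an emotion label via nearest-centroid matching; every feature name, scale, and centroid value here is invented for illustration.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical feature vector for one observation window; the real CVR-MC
# feature set (voice tone, gestures, proxemics, keywords) is not published
# in this abstract, so names and scales here are illustrative only.
@dataclass
class NonverbalFeatures:
    pitch_variation: float        # normalized variance of voice pitch
    gesture_energy: float         # normalized amount of hand/arm movement
    interlocutor_distance: float  # metres between teacher and student avatar
    keyword_valence: float        # -1..1 sentiment of detected keywords

# Illustrative per-emotion centroids in the same feature space.
CENTROIDS = {
    "calm":    np.array([0.2, 0.2, 1.2, 0.3]),
    "nervous": np.array([0.7, 0.4, 1.8, -0.1]),
    "angry":   np.array([0.8, 0.9, 0.6, -0.7]),
}

def estimate_emotion(f: NonverbalFeatures) -> str:
    """Nearest-centroid estimate of the emotion conveyed by one window."""
    x = np.array([f.pitch_variation, f.gesture_energy,
                  f.interlocutor_distance, f.keyword_valence])
    return min(CENTROIDS, key=lambda e: np.linalg.norm(x - CENTROIDS[e]))

print(estimate_emotion(NonverbalFeatures(0.75, 0.85, 0.5, -0.6)))  # -> "angry"
```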

    Automatic Recognition and Generation of Affective Movements

    Body movements are an important non-verbal communication medium through which the affective states of the demonstrator can be discerned. For machines, the capability to recognize affective expressions of their users and to generate actuated responses with recognizable affective content has the potential to improve their life-like attributes and to create an engaging, entertaining, and empathic human-machine interaction. This thesis develops approaches to systematically identify the movement features most salient to affective expressions and to exploit these features in computational models for the automatic recognition and generation of affective movements. The proposed approaches enable 1) identifying which features of movement convey affective expressions, 2) automatically recognizing affective expressions from movements, 3) understanding the impact of kinematic embodiment on the perception of affective movements, and 4) adapting pre-defined motion paths in order to "overlay" specific affective content.

    Statistical learning and stochastic modeling approaches are leveraged, extended, and adapted to derive a concise representation of the movements that isolates the features salient to affective expressions and enables efficient and accurate affective movement recognition and generation. In particular, the thesis presents two new approaches to fixed-length affective movement representation based on 1) functional feature transformation and 2) stochastic feature transformation (Fisher scores). The resulting representations are then exploited for the recognition of affective expressions in movements and for salient movement feature identification. For the functional representation, the thesis adapts dimensionality reduction techniques (namely, principal component analysis (PCA), Fisher discriminant analysis, and Isomap) to functional datasets and applies the resulting techniques to extract a minimal set of features along which affect-specific movements are best separable. Furthermore, the centroids of affect-specific clusters of movements in the resulting functional PCA subspace, along with the inverse mapping of functional PCA, are used to generate prototypical movements for each affective expression.

    The functional discriminative modeling is, however, limited to cases where affect-specific movements also have similar kinematic trajectories, and it does not address the interpersonal and stochastic variations inherent to bodily expressions of affect. To account for these variations, the thesis presents a novel affective movement representation in terms of stochastically-transformed features referred to as Fisher scores. The Fisher scores are derived from affect-specific hidden Markov model encodings of the movements and exploited to discriminate between different affective expressions using support vector machine (SVM) classification. Furthermore, the thesis presents a new approach for the systematic identification of a minimal set of movement features most salient to discriminating between different affective expressions. The salient features are identified by mapping the Fisher scores to a low-dimensional subspace in which the dependencies between the movements and their affective labels are maximized; this is done by maximizing the Hilbert-Schmidt independence criterion (HSIC) between the Fisher score representation of the movements and their affective labels.
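    As a rough illustration of this HSIC-based salient-feature identification step, the sketch below computes a biased empirical HSIC and performs greedy per-dimension selection; this is a deliberate simplification of the thesis's subspace optimization, and all data in the example are synthetic.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix for row vectors in X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, y):
    """Biased empirical HSIC between features X (n x d) and labels y (n,)."""
    n = len(y)
    K = rbf_kernel(X)
    L = (y[:, None] == y[None, :]).astype(float)  # delta kernel on labels
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def select_salient_features(X, y, k):
    """Greedy forward selection of k feature dimensions maximizing HSIC;
    a simplified stand-in for the subspace mapping described above."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: hsic(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: 60 "Fisher score" vectors with 5 dims, labels in {0, 1, 2};
# only dimensions 0 and 3 actually depend on the label here.
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, 5))
X[:, 0] += y
X[:, 3] -= y
print(select_salient_features(X, y, 2))  # likely [0, 3] (order may vary)
```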
    The resulting subspace forms a suitable basis for affective movement recognition using nearest-neighbour classification and retains the high recognition rates achieved by SVM classification in the Fisher score space. The dimensions of the subspace form a minimal set of salient features and are used to explore the kinematic and dynamic movement cues that connote affective expressions.

    Furthermore, the thesis proposes the use of movement notation systems from the dance community (specifically, the Laban system) for the abstract coding and computational analysis of movement. A quantification approach for Laban Effort and Shape is proposed and used to develop a new computational model for affective movement generation. Using the Laban Effort and Shape components, the proposed generation approach searches a labeled dataset for movements that are kinematically similar to a desired motion path and convey a target emotion. A hidden Markov model of the identified movements is obtained and used with the desired motion path in Viterbi state estimation. The estimated state sequence is then used to generate a novel movement that is a version of the desired motion path, modulated to convey the target emotion. Various affective human movement corpora are used to evaluate and demonstrate the efficacy of the developed approaches for the automatic recognition and generation of affective expressions in movements.

    Finally, the thesis assesses the human perception of affective movements and the impact of the display embodiment and the observer's gender on that perception via user studies in which participants rate the expressivity of synthetically-generated and human-generated affective movements animated on anthropomorphic and non-anthropomorphic embodiments. The user studies show that the human perception of affective movements is mainly shaped by the intended emotions, and that the display embodiment and the observer's gender can significantly impact the perception of affective movements.
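    The generation pipeline (an HMM fit to emotion-specific movements, Viterbi decoding of the desired path, modulation toward the target emotion) can be sketched as follows. The `blend` parameter and the use of state means as per-frame targets are simplifying assumptions for illustration, not the thesis's published formulation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def overlay_affect(desired_path, emotion_clips, n_states=8, blend=0.5):
    """Sketch of Laban-inspired affect overlay: fit an HMM to movements that
    convey the target emotion, run Viterbi on the desired motion path, and
    pull the path toward the decoded state means. `blend` trades path
    fidelity against affective content.
    desired_path: (T, d) trajectory; emotion_clips: list of (Ti, d) arrays."""
    X = np.vstack(emotion_clips)
    lengths = [len(c) for c in emotion_clips]
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
    model.fit(X, lengths)
    states = model.predict(desired_path)   # Viterbi state sequence
    target = model.means_[states]          # per-frame emotional "pose"
    return (1 - blend) * desired_path + blend * target

# Toy usage with synthetic 3-D trajectories standing in for mocap features.
rng = np.random.default_rng(1)
clips = [rng.normal(size=(100, 3)).cumsum(axis=0) for _ in range(5)]
path = np.linspace(0, 1, 80)[:, None] * np.ones((1, 3))
print(overlay_affect(path, clips).shape)  # (80, 3)
```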

    Analyse et synthèse de mouvements théâtraux expressifs

    This thesis addresses the analysis and generation of expressive movements for virtual human characters. Based on previous results from three different research areas (perception of emotions and biological motion, automatic recognition of affect, and computer character animation), a low-dimensional motion representation is proposed. This representation consists of the spatio-temporal trajectories of the end-effectors (head, hands, and feet) and the pelvis. We argue that this representation is both suitable and sufficient for characterizing the underlying expressive content of human motion and for controlling the generation of expressive whole-body movements. To support these claims, this thesis proposes:

    (i) a new motion capture database inspired by physical theatre, containing several categories of movement (periodic movements, functional movements, spontaneous movements, and theatrical movement sequences) performed by several actors with distinct emotional states (joy, sadness, relaxation, stress, and neutral);

    (ii) a perceptual study and an automatic classification framework designed to qualitatively and quantitatively assess the amount of emotional information conveyed and encoded in the proposed representation; although slight performance differences were found relative to using the whole body, the representation preserves most of the motion cues salient to the expression of affect and emotions;

    (iii) a motion generation system that can both reconstruct whole-body movements from the proposed low-dimensional representation and produce novel expressive end-effector trajectories (including the pelvis trajectory).

    A quantitative and qualitative evaluation of the generated whole-body motions shows that these motions are as expressive as movements recorded from human actors.
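    Extracting the proposed low-dimensional representation from a full-body capture amounts to selecting the end-effector and pelvis trajectories. Below is a minimal sketch, assuming a hypothetical 22-joint layout with illustrative joint indices; the real database's skeleton definition is not given in the abstract.

```python
import numpy as np

# Hypothetical joint layout for a full-body capture: each frame is a flat
# vector of (x, y, z) positions for N joints; indices below are illustrative.
JOINTS = {"pelvis": 0, "head": 5, "l_hand": 9, "r_hand": 13,
          "l_foot": 17, "r_foot": 21}
END_EFFECTORS = ["head", "l_hand", "r_hand", "l_foot", "r_foot", "pelvis"]

def to_low_dim(motion, n_joints=22):
    """Reduce a (T, n_joints*3) full-body motion to the (T, 18) end-effector
    + pelvis representation described above (6 markers x 3 coordinates)."""
    frames = motion.reshape(len(motion), n_joints, 3)
    idx = [JOINTS[name] for name in END_EFFECTORS]
    return frames[:, idx, :].reshape(len(motion), -1)

# Toy usage: 240 frames of a 22-joint capture.
motion = np.random.default_rng(2).normal(size=(240, 22 * 3))
print(to_low_dim(motion).shape)  # (240, 18)
```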

    The Effect of Posture and Dynamics on the Perception of Emotion

    Figure 1: Characteristic frame for each emotion from the clips with the best recognition rates.

    Motion capture remains a popular and widely used method for animating virtual characters. However, all practical applications of motion capture rely on motion editing techniques to increase the reusability and flexibility of captured motions. Because humans are proficient at detecting and interpreting subtle details in human motion, understanding the perceptual consequences of motion editing is essential. In this work, we therefore perform three experiments to gain a better understanding of how motion editing might affect the emotional content of a captured performance, particularly through changes in posture and dynamics, two factors shown to be important perceptual indicators of bodily emotion. In these studies, we analyse the properties (angles and velocities) and the perception (recognition rates and perceived intensities) of a varied set of full-body motion clips representing the six emotions anger, disgust, fear, happiness, sadness, and surprise. We find that emotions are mostly conveyed through the upper body, that the perceived intensity of an emotion can be reduced by blending with a neutral motion, and that posture changes can alter the perceived emotion, whereas subtle changes in dynamics only alter its intensity.
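    The finding that blending with a neutral motion reduces perceived intensity corresponds to a simple linear interpolation between time-aligned clips. A minimal sketch follows; the channel layout is hypothetical, and a real pipeline would use quaternion slerp for joint rotations rather than plain linear blending.

```python
import numpy as np

def blend_with_neutral(emotional, neutral, alpha):
    """Linearly blend an emotional clip toward a time-aligned neutral clip.
    alpha=1.0 keeps the full emotional performance; lowering alpha reduces
    the perceived intensity, per the finding above. Both clips are (T, d)
    arrays of pose channels and are assumed to be time-aligned."""
    assert emotional.shape == neutral.shape
    return alpha * emotional + (1.0 - alpha) * neutral

# Toy usage: tone down an "anger" clip to 40% intensity.
rng = np.random.default_rng(3)
anger = rng.normal(size=(120, 60))   # 120 frames, 60 joint-angle channels
neutral = np.zeros_like(anger)       # stand-in for a captured neutral clip
toned_down = blend_with_neutral(anger, neutral, alpha=0.4)
print(toned_down.shape)  # (120, 60)
```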