3 research outputs found

    Embodied Cognitive Science of Music. Modeling Experience and Behavior in Musical Contexts

    Recently, the role of corporeal interaction has gained wide recognition within cognitive musicology. This thesis reviews evidence from several directions in music research supporting the importance of body-based processes for understanding music-related experience and behaviour. Stressing the synthetic focus of cognitive science, the cognitive science of music is discussed as a modeling approach that takes these processes into account and can theoretically be embedded within the theory of dynamic systems. In particular, arguments are presented for the use of robotic devices as tools for investigating the processes underlying human music-related capabilities (musical robotics).
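    As a rough illustration of the dynamic-systems framing mentioned above (not taken from the thesis itself), the sketch below shows a single phase oscillator entraining to a periodic musical pulse, a standard toy model of body-based synchronization to a beat. All parameter values and variable names are illustrative assumptions.

    # Minimal sketch (not from the thesis): a phase oscillator entraining to a
    # periodic musical pulse, a common dynamic-systems toy model of
    # sensorimotor synchronization. All parameter values are illustrative.
    import math

    DT = 0.01          # integration step (s)
    COUPLING = 2.0     # coupling strength to the external pulse
    BEAT_HZ = 2.0      # stimulus tempo: 120 BPM
    NATURAL_HZ = 1.7   # oscillator's preferred tempo

    phase = 0.0            # oscillator phase (radians)
    stimulus_phase = 0.0   # phase of the periodic pulse

    for step in range(int(10.0 / DT)):  # simulate 10 seconds
        # Kuramoto-style update: drift at the natural frequency while being
        # pulled toward the stimulus phase.
        phase += DT * (2 * math.pi * NATURAL_HZ
                       + COUPLING * math.sin(stimulus_phase - phase))
        stimulus_phase += DT * 2 * math.pi * BEAT_HZ
        if step % 100 == 0:
            # A relative phase settling toward a constant indicates entrainment.
            rel = math.atan2(math.sin(stimulus_phase - phase),
                             math.cos(stimulus_phase - phase))
            print(f"t={step * DT:4.1f}s  relative phase={rel:+.3f} rad")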

    On Emotion and GRACE (Towards a Unified Computational Model of Emotions)

    Emotion, as psychologists such as A. Damasio, K. R. Scherer, and P. Ekman argue, is an essential factor for human beings in decision making, learning, creativity, and social interaction. On this basis, researchers in human-machine interaction have become interested in adding emotional abilities to their applications. With the same goal, we propose GRACE, a generic model of emotions for computational applications. The model is grounded in the work of the psychologist Klaus R. Scherer, who seeks a theory of emotional processes that can be modeled and computed. We demonstrate the relevance of our model by comparing it with existing computational models of emotion in informatics and robotics. While GRACE is generic, we also show that it can be instantiated in a particular context, namely human-robot interaction through the musical modality, by developing its Cognitive Interpretation and Expression components. For Cognitive Interpretation, we worked on extracting the emotional content of musical excerpts: the main contribution is a reduced set of musical features, validated with a learning system on a large database designed by an expert musicologist. For Expression, we designed emotionally expressive movements for a mobile robot: we showed experimentally that, through very limited means (movements in space, camera movements) and dance-inspired motions, the robot could satisfactorily convey a reduced set of basic emotions (joy, anger, sadness, serenity).
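    To make the shape of such an emotion-from-music module concrete, here is a hedged sketch, not the thesis's actual system: it classifies an excerpt into one of the four emotions above from a reduced feature set, here (tempo, mode, mean sound level). The feature choice, centroid values, and the nearest-centroid rule are illustrative assumptions; the thesis validates its own feature set with a learned model on a musicologist-built database.

    # Hedged sketch, not the thesis's module: nearest-centroid classification
    # of a musical excerpt into one of four emotions from a reduced feature
    # set. Centroid values below are illustrative assumptions, loosely
    # following common findings that fast/loud/major tends toward joy and
    # slow/quiet/minor toward sadness.
    import math

    # Hypothetical per-emotion centroids in
    # (tempo in BPM, mode: 1=major/0=minor, normalized sound level) space.
    CENTROIDS = {
        "joy":      (140.0, 1.0, 0.8),
        "anger":    (150.0, 0.0, 0.9),
        "sadness":  ( 60.0, 0.0, 0.3),
        "serenity": ( 70.0, 1.0, 0.4),
    }
    SCALE = (180.0, 1.0, 1.0)  # rough feature ranges, used for normalization

    def classify(features):
        """Return the emotion whose centroid is nearest in normalized space."""
        def dist(c):
            return math.sqrt(sum(((f - c[i]) / SCALE[i]) ** 2
                                 for i, f in enumerate(features)))
        return min(CENTROIDS, key=lambda e: dist(CENTROIDS[e]))

    print(classify((135.0, 1.0, 0.75)))  # -> joy
    print(classify(( 55.0, 0.0, 0.25)))  # -> sadness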

    From acoustic cues to an expressive agent

    This work proposes a new way of providing feedback on expressivity in music performance. Starting from studies of expressive music performance, we developed a system in which visual feedback is given to the user through a graphical representation of a human face. The first part of the system, previously developed by researchers at KTH Stockholm and the University of Uppsala, performs real-time extraction and analysis of acoustic cues from the performance: sound level, tempo, articulation, attack time, and spectrum energy. From these cues the system derives a high-level interpretation of the performer's emotional intention, classified as one basic emotion such as happiness, sadness, or anger. We implemented an interface between that system and the embodied conversational agent Greta, developed at the University of Rome "La Sapienza" and the University of Paris 8. We model the expressivity of the agent's facial animation with a set of six dimensions that characterize the manner of behavior execution. In this paper we first describe a mapping between the acoustic cues and the expressivity dimensions of the face. We then show how to determine the facial expression corresponding to the emotional intention resulting from the acoustic analysis, using sound level and tempo to control the intensity and the temporal variation of muscular activation.
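    The sketch below mirrors only the shape of this pipeline: acoustic cues, then a basic-emotion label, then expressivity parameters for a facial agent. The cue thresholds, the toy classification rules, and the two output parameters ("intensity", "temporal_variation") are illustrative assumptions; only the cue list and the idea of driving intensity by sound level and temporal variation by tempo come from the abstract.

    # Hedged sketch of the cue -> emotion -> expressivity pipeline shape,
    # not the paper's actual mapping. Thresholds and rules are invented.

    def classify_emotion(sound_level, tempo, articulation):
        """Toy rules over cues the paper extracts (level, tempo, articulation)."""
        if sound_level > 0.7 and tempo > 120:
            # Loud and fast: staccato playing reads angrier, legato happier.
            return "anger" if articulation < 0.5 else "happiness"
        if sound_level < 0.4 and tempo < 90:
            return "sadness"
        return "happiness"

    def expressivity_params(emotion, sound_level, tempo):
        """Map the detected emotion and cues onto facial-animation controls:
        louder playing -> stronger muscular activation, faster tempo ->
        faster temporal variation (both normalized to [0, 1])."""
        return {
            "emotion": emotion,
            "intensity": min(1.0, sound_level),           # activation strength
            "temporal_variation": min(1.0, tempo / 180),  # animation speed
        }

    cues = {"sound_level": 0.85, "tempo": 140, "articulation": 0.3}
    emotion = classify_emotion(**cues)
    print(expressivity_params(emotion, cues["sound_level"], cues["tempo"]))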