
    Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities

    In this paper we propose a multimodal approach to distinguishing between movements displaying three different expressive qualities: fluid, fragmented, and impulsive movements. Our approach is based on the Event Synchronization algorithm, which is applied to compute the amount of synchronization between two low-level features extracted from multimodal data. In more detail, we use the energy of the audio respiration signal captured by a standard microphone placed near the mouth, and the whole-body kinetic energy estimated from motion capture data. The method was evaluated on 90 movement segments performed by 5 dancers. Results show that fragmented movements display higher average synchronization than fluid and impulsive movements.
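
    The Event Synchronization step can be sketched as follows. This is a minimal illustration assuming peak events extracted from the two energy signals and the formulation of Quian Quiroga et al. (2002), not the authors' exact implementation; the `energy_peaks` helper, the threshold, and `tau` are assumptions:

```python
import numpy as np

def energy_peaks(x, threshold):
    """Indices of local maxima of an energy signal above a threshold."""
    core = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > threshold)
    return np.where(core)[0] + 1

def event_synchronization(tx, ty, tau):
    """Event Synchronization score (after Quian Quiroga et al., 2002).

    tx, ty: sorted arrays of event indices; tau: coincidence window in
    samples. A score near 1 means almost every event in one series is
    matched by a nearby event in the other.
    """
    def c(a, b):  # events in a that follow an event in b within tau samples
        total = 0.0
        for t in a:
            d = t - b
            if np.any(d == 0):
                total += 0.5          # simultaneous events count half per direction
            elif np.any((d > 0) & (d <= tau)):
                total += 1.0
        return total
    norm = np.sqrt(len(tx) * len(ty))
    return (c(tx, ty) + c(ty, tx)) / norm if norm else 0.0

# Toy usage with stand-in signals (real inputs: respiration energy from the
# microphone and whole-body kinetic energy from motion capture)
rng = np.random.default_rng(0)
resp_energy = np.abs(rng.standard_normal(1000))
kin_energy = np.abs(rng.standard_normal(1000))
q = event_synchronization(energy_peaks(resp_energy, 1.5),
                          energy_peaks(kin_energy, 1.5), tau=10)
print(f"ES = {q:.3f}")
```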

    Automated detection of impulsive movements in HCI

    This paper introduces an algorithm for automatically measuring impulsivity. Impulsivity can be used as a major expressive movement feature in the development of systems for real-time analysis of emotion expression from human full-body movement, a research area which has received increased attention in the affective computing community. In particular, our algorithm is developed in the framework of the EU H2020-ICT Project DANCE, which investigates techniques for sensory substitution in blind people, in order to enable perception of and participation in non-verbal, artistic whole-body experiences. The algorithm was tested by applying it to a reference archive of short dance performances. The archive includes a collection of both impulsive and fluid movements. Results show that our algorithm can reliably distinguish impulsive from fluid performances.
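
    The abstract does not detail the algorithm itself. As a rough, hypothetical illustration of the underlying idea (an impulsive movement as a sudden rise of kinetic energy from near-stillness, without preparation), one could compute something like the heuristic below; the function name, windows, and ratio measure are all assumptions, not the DANCE project's method:

```python
import numpy as np

def impulsivity_index(kinetic_energy, fps=100, rise_window=0.25, rest_window=0.5):
    """Hypothetical impulsivity heuristic (NOT the paper's algorithm).

    Scores a segment by the sharpest rise of whole-body kinetic energy
    relative to the stillness that precedes it: impulsive onsets start
    from rest and peak quickly, without visible preparation.

    kinetic_energy: 1-D numpy array sampled at fps frames per second.
    """
    rise = max(1, int(rise_window * fps))   # samples the burst may take
    rest = max(1, int(rest_window * fps))   # samples of required pre-stillness
    eps = 1e-6
    best = 0.0
    for t in range(rest, len(kinetic_energy) - rise):
        before = kinetic_energy[t - rest:t].mean() + eps
        after = kinetic_energy[t:t + rise].max()
        best = max(best, after / before)    # burst height over preceding rest
    return best
```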

    From motions to emotions: Classification of Affect from Dance Movements using Deep Learning

    This work investigates the classification of emotions from MoCap full-body data using Convolutional Neural Networks (CNNs). Rather than addressing regular day-to-day activities, we focus on a more complex type of full-body movement: dance. For this purpose, a new dataset was created which contains short excerpts of performances by professional dancers who interpreted four emotional states: anger, happiness, sadness, and insecurity. Fourteen minutes of motion capture data are used to explore different CNN architectures and data representations. The results of the four-class classification task reach an F1 score of up to 0.79 on test data from other performances by the same dancers. Hence, through deep learning, this paper proposes a novel and effective method of emotion classification which can be exploited in affective interfaces.
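
    A minimal sketch of such a classifier, assuming PyTorch and treating a MoCap segment as a multichannel 1-D signal (three coordinates per joint over time); the architecture and sizes are illustrative, not the configurations explored in the paper:

```python
import torch
import torch.nn as nn

class MocapEmotionCNN(nn.Module):
    """Illustrative 1D-CNN for 4-class emotion recognition from MoCap.

    Input shape: (batch, channels, frames), with channels = 3 * n_joints.
    """
    def __init__(self, n_channels=60, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size embedding
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

# Example: a batch of 8 two-second segments at 50 fps with 20 joints
model = MocapEmotionCNN(n_channels=60, n_classes=4)
logits = model(torch.randn(8, 60, 100))   # -> (8, 4) class scores
```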

    Movement Fluidity Analysis Based on Performance and Perception

    In this work we present a framework and an experimental approach to investigate human body movement qualities (i.e., the expressive components of non-verbal communication) in HCI. We first define a candidate movement quality conceptually, with the involvement of experts in the field (e.g., dancers, choreographers). Next, we collect a dataset of performances and we evaluate the perception of the chosen quality. Finally, we propose a computational model to detect the presence of the quality in a movement segment, and we compare the outcomes of the model with the evaluation results. In this ongoing work, we apply the approach to a specific quality of movement: fluidity. The proposed methods and models may have several applications, e.g., in emotion detection from full-body movement, interactive training of motor skills, and rehabilitation.
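
    The paper's computational model of fluidity is not given here. As a hedged sketch of one common proxy, movement smoothness can be quantified via the mean squared jerk of a trajectory, with lower jerk reading as more fluid; the function name and constants below are assumptions:

```python
import numpy as np

def fluidity_score(positions, fps=100):
    """Illustrative fluidity proxy (not necessarily the paper's model).

    Fluid movement is often characterized as smooth, low-jerk motion, so
    this returns the negative log of the mean squared jerk of a joint
    trajectory: higher values mean smoother, hence more fluid, motion.

    positions: (frames, 3) array of one joint's 3-D positions.
    """
    dt = 1.0 / fps
    jerk = np.diff(positions, n=3, axis=0) / dt**3   # third time derivative
    msj = np.mean(np.sum(jerk**2, axis=1))
    return -np.log(msj + 1e-12)
```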

    Analysis of intrapersonal synchronization in full-body movements displaying different expressive qualities

    Intrapersonal synchronization of limb movements is a relevant feature for assessing coordination of motor behavior. In this paper, we show that it can also distinguish between full-body movements performed with different expressive qualities, namely rigidity, fluidity, and impulsivity. For this purpose, we collected a dataset of movements performed by professional dancers, and annotated the perceived movement qualities with the help of a group of experts in expressive movement analysis. We computed intrapersonal synchronization by applying the Event Synchronization algorithm to the time-series of the speed of arms and hands. Results show that movements performed with different qualities display a significantly different amount of intrapersonal synchronization: impulsive movements are the most synchronized, the fluid ones show the lowest values of synchronization, and the rigid ones lie in between.
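
    A compact sketch of this pipeline, assuming speed time-series derived from 3-D joint positions and a simplified coincidence count in place of the full Event Synchronization formulation; the names, `tau`, and the peak-event definition are assumptions:

```python
import numpy as np

def joint_speed(positions, fps=100):
    """Speed time-series of one joint from its (frames, 3) positions."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

def intrapersonal_sync(speeds, tau=10):
    """Mean pairwise synchronization of speed peaks across limbs.

    speeds: list of 1-D speed arrays (e.g., left/right arm and hand).
    Uses a simplified coincidence count: the fraction of peaks in one
    series matched within tau samples in the other, averaged over both
    directions and all limb pairs.
    """
    def peaks(x):
        return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    def match(a, b):
        if len(a) == 0 or len(b) == 0:
            return 0.0
        return np.mean([np.abs(b - t).min() <= tau for t in a])
    ev = [peaks(s) for s in speeds]
    pairs = [(i, j) for i in range(len(ev)) for j in range(i + 1, len(ev))]
    return float(np.mean([(match(ev[i], ev[j]) + match(ev[j], ev[i])) / 2
                          for i, j in pairs]))
```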

    Does embodied training improve the recognition of mid-level expressive movement qualities sonification?

    This research is part of a broader project exploring how movement qualities can be recognized by means of the auditory channel: can we perceive an expressive full-body movement quality by means of its interactive sonification? The paper presents a sonification framework and an experiment to evaluate whether embodied sonic training (i.e., experiencing interactive sonification of one's own body movements) increases the recognition of such qualities through the auditory channel only, compared to a non-embodied sonic training condition. We focus on the sonification of two mid-level movement qualities: fragility and lightness. We base our sonification models, described in the first part, on the assumption that specific compounds of spectral features of a sound can contribute to the cross-modal perception of a specific movement quality. The experiment, described in the second part, involved 40 participants divided into two groups (embodied sonic training vs. no training). Participants were asked to report the level of lightness and fragility they perceived in 20 audio stimuli generated with the proposed sonification models. Results show that (1) both expressive qualities were correctly recognized from the audio stimuli, and (2) a positive effect of embodied sonic training was observed for fragility but not for lightness. The paper concludes with a description of the artistic performance that took place in 2017 in Genoa, Italy, in which the outcomes of the presented experiment were exploited.
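
    The actual sonification models are described in the paper; purely as a toy illustration of the stated assumption (spectral compounds of a sound cueing a movement quality), one might map lightness to a brighter, softer tone and fragility to envelope discontinuities. Everything below is hypothetical:

```python
import numpy as np

def sonify_quality(lightness, fragility, dur=1.0, sr=22050):
    """Toy sonification sketch (NOT the paper's models).

    Assumption for illustration only: lightness raises pitch and lowers
    amplitude (a brighter, softer tone); fragility inserts short random
    dropouts that fracture the envelope. Both inputs are in [0, 1].
    Returns a mono float array in [-1, 1].
    """
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    f0 = 220 + 880 * lightness                    # lighter -> higher pitch
    tone = np.sin(2 * np.pi * f0 * t) * (1.0 - 0.5 * lightness)
    env = np.ones_like(t)
    rng = np.random.default_rng(0)
    for _ in range(int(10 * fragility)):          # more cracks when fragile
        s = rng.integers(0, len(t) - sr // 50)
        env[s:s + sr // 50] *= 0.1                # ~20 ms dropout
    return np.clip(tone * env, -1, 1)
```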

    The dancer in the eye: Towards a multi-layered computational framework of qualities in movement

    This paper presents a conceptual framework for the analysis of expressive qualities of movement. Our perspective is to model an observer of a dance performance. The conceptual framework is made of four layers, ranging from the physical signals that sensors capture to the qualities that movement communicates (e.g., in terms of emotions). The framework aims to provide a conceptual background that the development of computational systems can build upon, with particular reference to systems that analyze a vocabulary of expressive movement qualities and translate them to other sensory channels, such as the auditory modality. Such systems enable their users to "listen to a choreography" or to "feel a ballet", in a new kind of cross-modal mediated experience.
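
    One hypothetical way to render the four-layer structure in code, with each layer transforming the output of the layer below; the layer names and placeholder transforms are illustrative, not the paper's definitions:

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class Layer:
    """One level of a layered movement-analysis framework (sketch)."""
    name: str
    transform: Callable[[Any], Any]   # maps the layer below to this layer

def analyze(raw_signals: Any, layers: Sequence[Layer]) -> Any:
    """Push captured sensor data up through the layers, bottom to top."""
    data = raw_signals
    for layer in layers:
        data = layer.transform(data)
    return data

# Illustrative pipeline mirroring the four layers described above
pipeline = [
    Layer("physical signals", lambda raw: raw),                     # sensor capture
    Layer("low-level features", lambda s: {"kinetic_energy": s}),   # e.g., energy
    Layer("mid-level qualities", lambda f: {"fluidity": 0.8}),      # e.g., fluidity
    Layer("communicated qualities", lambda q: "calm"),              # e.g., emotion
]
```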

    Computational Commensality: from theories to computational models for social food preparation and consumption in HCI

    Food and eating are inherently social activities taking place, for example, around the dining table at home, in restaurants, or in public spaces. Enjoying eating with others, often referred to as “commensality,” positively affects mealtimes in terms of, among other factors, food intake, food choice, and food satisfaction. In this paper we discuss the concept of “Computational Commensality,” that is, technology which computationally addresses various social aspects of food and eating. In the past few years, Human-Computer Interaction research has started to address how interactive technologies can improve mealtimes. However, the main focus so far has been on improving the individual's experience, rather than considering the inherently social nature of food consumption. In this survey, we first present research from the field of social psychology on the social relevance of Food- and Eating-related Activities (F&EA). Then, we review existing computational models and technologies that can contribute, in the near future, to achieving Computational Commensality. We also discuss the related research challenges and indicate future applications of such new technology, which can potentially improve F&EA from the commensality perspective.