
    Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance

    Dance, as a complex expressive form of motion, conveys emotion, meaning and social idiosyncrasies, opening channels for non-verbal communication and promoting rich cross-modal interactions with music and the environment. As such, realistic dancing characters may incorporate cross-modal information and the variability of dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles previously captured with a motion capture system. The model is based on Topological Gesture Analysis (TGA), which conveys a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions, according to classes of the musical meter. The methodology for synthesizing the modeled dance traces the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. In order to assess the relevance and flexibility of each parameter in feasibly reproducing the style of the captured dance, we correlated captured and synthesized trajectories of samba dancing sequences in relation to the level of compression of the model used, and we report on a subjective evaluation over a set of six tests. The results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
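
    A minimal sketch of the synthesis idea described above: each metrical class stores a spherical distribution (centre and radius) of observed gesture points, beat-synchronous targets are sampled stochastically from those spheres, and the trajectory between targets is interpolated under a crude kinematic constraint. The sphere data, the uniform in-volume sampling and the per-frame displacement cap are illustrative assumptions, not the paper's actual model or parameters.

    # Illustrative sketch (not the authors' code): resynthesising one joint
    # trajectory from a TGA-like model. All values below are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical model: one sphere per metrical class (4 beats per bar).
    spheres = [
        {"centre": np.array([0.0, 1.0, 0.2]), "radius": 0.10},   # beat 1
        {"centre": np.array([0.3, 1.1, 0.0]), "radius": 0.15},   # beat 2
        {"centre": np.array([0.0, 0.9, -0.2]), "radius": 0.10},  # beat 3
        {"centre": np.array([-0.3, 1.1, 0.0]), "radius": 0.15},  # beat 4
    ]

    def sample_sphere(centre, radius):
        """Draw a point uniformly from a solid sphere (stochastic variability)."""
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        return centre + radius * rng.random() ** (1 / 3) * v  # uniform in volume

    def synthesise(n_bars, frames_per_beat=24, max_step=0.05):
        """Chain beat-synchronous targets into a continuous trajectory."""
        targets = [sample_sphere(s["centre"], s["radius"])
                   for _ in range(n_bars) for s in spheres]
        traj = [targets[0]]
        for nxt in targets[1:]:
            start = traj[-1]
            for k in range(1, frames_per_beat + 1):
                p = start + (nxt - start) * (k / frames_per_beat)
                step = p - traj[-1]
                norm = np.linalg.norm(step)
                if norm > max_step:  # crude stand-in for kinematic limits
                    p = traj[-1] + step / norm * max_step
                traj.append(p)
        return np.array(traj)

    print(synthesise(n_bars=2).shape)  # (frames, 3) trajectory for one joint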

    Emotional remapping of music to facial animation

    We propose a method to extract the emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and on our parametric behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e. the translation of affective content (e.g. emotions and mood) from one communication form to another. We report on the results of our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set. © ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 2006 ACM SIGGRAPH Symposium on Videogames, 143-149. Boston, Massachusetts: ACM. doi:10.1145/1183316.118333
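
    As a hedged illustration of such a remapping pipeline, the sketch below maps coarse musical features (tempo, mode, mean MIDI velocity) to emotion weights, and those weights to a few facial control parameters. The feature names, thresholds and output parameters are assumptions chosen for clarity; they are not the MusicFace rule set.

    # Hypothetical remapping rule set in the spirit of affective remapping:
    # musical features -> emotion weights -> facial animation parameters.

    def music_to_emotion(tempo_bpm, mode, mean_velocity):
        """Map coarse musical features to normalised emotion weights (assumed rules)."""
        arousal = min(1.0, max(0.0, (tempo_bpm - 60) / 120))  # faster -> more aroused
        valence = 0.8 if mode == "major" else 0.3             # major -> happier
        intensity = min(1.0, mean_velocity / 127)             # MIDI velocity is 0-127
        return {
            "happiness": valence * arousal * intensity,
            "sadness": (1 - valence) * (1 - arousal),
            "anger": (1 - valence) * arousal * intensity,
        }

    def emotion_to_face(emotions):
        """Translate emotion weights into illustrative facial control parameters."""
        return {
            "mouth_corner_raise": emotions["happiness"] - emotions["sadness"],
            "brow_lower": emotions["anger"],
            "eye_openness": 0.5 + 0.5 * emotions["happiness"] - 0.3 * emotions["sadness"],
        }

    print(emotion_to_face(music_to_emotion(tempo_bpm=140, mode="major", mean_velocity=96)))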

    Representing ideas by animated digital models in architectural competitions

    This paper presents the results of research, and of the related didactic activity, on digital representation in contemporary architectural competitions. 3D digital models, and the more recently requested animations, are a powerful tool for improving evaluation by jury members as well as knowledge and comprehension by the general public. The high complexity of creating and animating 3D digital models entails an unusual separation of jobs and responsibilities between atelier activities and rendering work. The research constitutes one topic of a course taught in the first-cycle degree in Architecture Sciences (Polytechnic of Turin, Italy) and also involves continuous updating on software capabilities. The aim of the didactic activity is to provide students with critical and operative tools that give them full mastery of the synthetic representation of their design ideas. In future architectural competitions we foresee the adoption of 4D representation, also in light of its progress and applications in other media, such as cinema and entertainment.

    Using music and motion analysis to construct 3D animations and visualisations

    This paper presents a study into music analysis, motion analysis, and the integration of music and motion to form natural, creative human motion in a virtual environment. Motion capture data is extracted to generate a motion library, which places the digital motion model at a fixed posture. The first step in this process is to configure the motion path curve for the database and to calculate, with a computational algorithm, the probability that two motions are sequential. Every motion is then analysed for the next possible smooth movement to connect to, and at the same time an interpolation method is used to create the transitions between motions so that the digital motion models move fluently. Lastly, a searching algorithm sifts for possible successive motions from the motion path curve according to the music tempo. It was concluded that the higher the rescaling ratio of a transition, the lower the degree of naturalness of the motion.
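
    A minimal sketch of the pipeline just described, assuming a simple clip data structure: the likelihood that two motions are sequential is derived from the distance between end and start poses, transitions are created by linear interpolation, and the successor is chosen with a tempo-aware search. The scores, weights and data layout are assumptions, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    class Clip:
        def __init__(self, name, poses, tempo_bpm):
            self.name = name
            self.poses = poses    # (frames, dof) array of joint angles
            self.tempo = tempo_bpm

    def transition_score(a, b, sigma=0.5):
        """Likelihood that clip b can smoothly follow clip a (pose distance)."""
        return float(np.exp(-(np.linalg.norm(a.poses[-1] - b.poses[0]) / sigma) ** 2))

    def blend(a, b, n=10):
        """Linearly interpolate from a's last pose to b's first pose."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        return (1 - t) * a.poses[-1] + t * b.poses[0]

    def next_clip(current, library, music_tempo, w_tempo=0.02):
        """Pick the smoothest successor whose native tempo matches the music."""
        def score(c):
            return transition_score(current, c) - w_tempo * abs(c.tempo - music_tempo)
        return max((c for c in library if c is not current), key=score)

    library = [Clip(f"clip{i}", rng.normal(size=(30, 20)), tempo_bpm=90 + 10 * i)
               for i in range(5)]
    cur = library[0]
    nxt = next_clip(cur, library, music_tempo=120)
    print(cur.name, "->", nxt.name, "with", len(blend(cur, nxt)), "blend frames")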

    Affective communication remapping in MusicFace System

    This paper addresses the issue of affective communication remapping, i.e. the translation of affective content from one communication form to another. We propose a method to extract the affective data from a piece of music and then use it to animate a face. The method is based on studies of the emotional aspects of music and on our behavioural head model for face animation.

    Voices' inter-animation detection with ReaderBench. Modelling and assessing polyphony in CSCL chats as voice synergy

    Starting from dialogism, in which every act is perceived as a dialogue, we shift the perspective towards multi-participant chat conversations from Computer Supported Collaborative Learning, in which ideas, points of view or, more generally, voices interact, inter-animate and generate the context of a conversation. Within this perspective of discourse analysis, we introduce an implemented framework, ReaderBench, for modeling and automatically evaluating the polyphony that emerges as an overlap or synergy of voices. Moreover, multiple evaluation factors were analyzed to quantify the importance of a voice, and various functions were tested to best reflect the synergic effect of co-occurring voices in modeling the underlying discourse structure.
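
    The notion of voice synergy can be illustrated with a toy model in which each voice is the set of utterances where a thread of ideas surfaces, and polyphony is scored by how strongly distinct voices overlap inside a sliding window of the chat. The window size, the coverage weighting and the product aggregation are assumptions, not ReaderBench's published formulas.

    def voice_synergy(voices, n_utterances, window=5):
        """Score each utterance window by the overlap of co-occurring voices."""
        scores = []
        for start in range(n_utterances - window + 1):
            span = set(range(start, start + window))
            active = {name: len(span & hits) for name, hits in voices.items()
                      if span & hits}
            overlap = len(active)  # number of distinct voices present
            coverage = sum(active.values()) / (window * max(1, len(voices)))
            scores.append((start, overlap * coverage))  # synergy heuristic
        return scores

    # Toy chat: three voices (threads of ideas) spread over 12 utterances.
    voices = {
        "collaboration": {0, 1, 4, 5, 9},
        "technology": {2, 4, 6, 7},
        "assessment": {5, 6, 10, 11},
    }
    for start, s in voice_synergy(voices, n_utterances=12):
        print(f"utterances {start}-{start + 4}: synergy {s:.2f}")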