4 research outputs found

    Dance-the-music : an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform called “Dance-the-Music” is presented that can be used in a dance-education context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher’s models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student’s performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
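    The abstract's template-matching idea — scoring a student's performance against a teacher's stored step model — can be sketched roughly as follows. This is a minimal illustration only, assuming templates are arrays of per-frame joint coordinates; the function name `match_template` and the distance-to-score mapping are hypothetical, not the paper's actual metric.

    ```python
    import numpy as np

    def match_template(student, teacher):
        """Score how closely a student's motion template matches the teacher's.

        Both templates are (frames, features) arrays of joint coordinates
        sampled at the same rate. The mean per-frame Euclidean distance is
        mapped to a score in (0, 1], where 1.0 is a perfect match.
        (Illustrative sketch; the paper's exact matching method may differ.)
        """
        student = np.asarray(student, dtype=float)
        teacher = np.asarray(teacher, dtype=float)
        # Mean Euclidean distance between corresponding frames.
        dist = np.linalg.norm(student - teacher, axis=1).mean()
        return 1.0 / (1.0 + dist)
    ```

    A real-time monitor could call such a function on a sliding window of captured frames and flag the student whenever the score drops below a threshold.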

    Perceptually motivated automatic dance motion generation for music

    In this paper, we describe a novel method to automatically generate synchronized dance motion that is perceptually matched to a given musical piece. The proposed method extracts thirty musical features from musical data as well as thirty-seven motion features from motion data. A matching process is then performed between the two feature spaces, considering both the correspondence of relative changes in each feature space and the correlations between musical and motion features. Similarity matrices are introduced to match the amount of relative change in both feature spaces, and correlation coefficients are used to establish the correlations between musical and motion features by measuring the strength of correlation between each pair of musical and motion features. By doing this, the progressions of musical and dance motion patterns, and the perceptual changes between two consecutive musical and motion segments, are matched. To demonstrate the effectiveness of the proposed approach, we designed and carried out a user opinion study to assess its perceived quality. The statistical analysis of the user study results showed that the proposed approach generated results that were significantly better than those produced using a random walk through the dance motion database. The suggested approach can be applied to a number of application areas, including film, TV commercials, virtual reality applications, computer games and entertainment systems.
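    The core matching step — comparing the *relative changes* across consecutive segments in the music feature space against those in the motion feature space — can be sketched as below. This is a simplified illustration under stated assumptions: both inputs are (segments × features) matrices over aligned segments, each change profile is reduced to a per-step magnitude, and the names `relative_changes` and `change_similarity` are hypothetical, not from the paper.

    ```python
    import numpy as np

    def relative_changes(features):
        """Per-step change vectors between consecutive segments.

        features: (segments, n_features) array; returns (segments-1, n_features).
        """
        return np.diff(np.asarray(features, dtype=float), axis=0)

    def change_similarity(music_feats, motion_feats):
        """Cosine similarity between the change-magnitude profiles of an
        aligned music segment sequence and a candidate motion sequence.
        A value near 1.0 means the two sequences change by proportional
        amounts at the same points. (Illustrative sketch only.)
        """
        dm = np.linalg.norm(relative_changes(music_feats), axis=1)
        dd = np.linalg.norm(relative_changes(motion_feats), axis=1)
        denom = np.linalg.norm(dm) * np.linalg.norm(dd)
        return float(dm @ dd / denom) if denom else 0.0
    ```

    A generator could score every candidate clip sequence from the motion database this way and pick the highest-scoring one, rather than taking a random walk.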

    Example Based Caricature Synthesis

    The likeness of a caricature to the original face image is an essential and often overlooked part of caricature production. In this paper we present an example-based caricature synthesis technique consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions which facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions rule, which characterizes the facial features in proportion form. Finally, we introduce a similarity metric as the likeness metric, based on the Modified Hausdorff Distance (MHD), which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.
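    The Modified Hausdorff Distance named in the abstract is a standard point-set distance (Dubuisson & Jain, 1994): the maximum of the two mean nearest-neighbour distances between the sets. A minimal sketch, assuming facial elements are represented as 2-D landmark point sets (the function name is ours, not the paper's):

    ```python
    import numpy as np

    def modified_hausdorff(A, B):
        """Modified Hausdorff Distance between two 2-D point sets.

        MHD(A, B) = max( mean_a min_b ||a - b||, mean_b min_a ||a - b|| ),
        i.e. the larger of the two directed mean nearest-neighbour distances.
        Lower values mean the configurations are more alike.
        """
        A = np.asarray(A, dtype=float)
        B = np.asarray(B, dtype=float)
        # Pairwise distance matrix: d[i, j] = ||A[i] - B[j]||.
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())
    ```

    An optimizer for likeness could then adjust the exaggerated facial-element positions to minimize this distance to the original face's landmarks, subject to the T-Shape and proportion constraints.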