4 research outputs found

    ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit

    Full text available
    Dance and music are two highly correlated artistic forms, and synthesizing dance motion from music has attracted much attention recently. Most previous works perform music-to-dance synthesis via a direct mapping from music to human skeleton keypoints. In contrast, human choreographers design dance motion from music in a two-stage manner: they first devise multiple choreographic action units (CAUs), each consisting of a series of dance motions, and then arrange the CAU sequence according to the rhythm, melody, and emotion of the music. Inspired by this, we systematically study this two-stage choreography approach and construct a dataset that incorporates such choreography knowledge. Based on the constructed dataset, we design a two-stage music-to-dance synthesis framework, ChoreoNet, that imitates the human choreography procedure. Our framework first devises a CAU prediction model to learn the mapping between music and CAU sequences. It then devises a spatial-temporal inpainting model to convert the CAU sequence into continuous dance motion. Experimental results demonstrate that the proposed ChoreoNet outperforms baseline methods (0.622 in terms of CAU BLEU score and 1.59 in terms of user study score).
    Comment: 10 pages, 5 figures, accepted by ACM MM 2020
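    The abstract specifies only the two-stage interface (music features → CAU sequence → continuous motion), not the internal architectures. The PyTorch sketch below illustrates how such a pipeline could be wired; the module names (CAUPredictor, MotionInpainter), layer choices, and all dimensions (number of CAU classes, hidden sizes, joint dimensionality) are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class CAUPredictor(nn.Module):
    """Stage 1 (hypothetical sketch): map per-frame music features to a
    sequence of CAU class logits. A GRU encoder with a linear head stands
    in for whatever model ChoreoNet actually uses."""
    def __init__(self, music_dim=40, hidden=256, num_caus=62):
        super().__init__()
        self.encoder = nn.GRU(music_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_caus)

    def forward(self, music):                # (B, T, music_dim)
        h, _ = self.encoder(music)
        return self.head(h)                  # (B, T, num_caus) CAU logits

class MotionInpainter(nn.Module):
    """Stage 2 (hypothetical sketch): embed CAU ids and decode a smooth
    keypoint trajectory, mirroring the CAU-to-continuous-motion step."""
    def __init__(self, num_caus=62, hidden=256, joint_dim=63):
        super().__init__()
        self.embed = nn.Embedding(num_caus, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, joint_dim)

    def forward(self, cau_ids):              # (B, T) integer CAU ids
        h, _ = self.decoder(self.embed(cau_ids))
        return self.out(h)                   # (B, T, joint_dim) keypoints

# Wiring the two stages together, mirroring the two-step choreography idea:
music = torch.randn(1, 200, 40)              # dummy music feature sequence
cau_logits = CAUPredictor()(music)
cau_ids = cau_logits.argmax(dim=-1)          # greedy CAU sequence
motion = MotionInpainter()(cau_ids)          # continuous dance motion
print(motion.shape)                          # torch.Size([1, 200, 63])
```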

    Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks

    No full text
    In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of the perceptual qualities of a short sound object through body motion. The model is trained on a dataset of single-point sound-tracings with multimodal input data and learns to generate novel tracings. We use a second neural-network classifier to show that the input sound can be identified from the generated tracings. This is part of an ongoing research effort to examine the complex correlations between sound and movement and the possibility of modelling these relationships using deep learning.
    This work was partially supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262762.
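    For readers unfamiliar with mixture density recurrent networks, the sketch below shows the general MDRNN pattern the abstract refers to: an RNN conditioned on sound features emits the parameters of a Gaussian mixture over the next motion point, trained by negative log-likelihood and sampled at generation time. The 2-D motion assumption, layer sizes, and mixture count are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class MDRNN(nn.Module):
    """Hypothetical MDRNN sketch: an LSTM consumes sound features plus the
    previous motion point and predicts a Gaussian mixture (with diagonal
    covariance) over the next motion point."""
    def __init__(self, sound_dim=20, motion_dim=2, hidden=128, k=5):
        super().__init__()
        self.k, self.motion_dim = k, motion_dim
        self.rnn = nn.LSTM(sound_dim + motion_dim, hidden, batch_first=True)
        # Mixture logits, plus a mean and log-scale per component and axis.
        self.params = nn.Linear(hidden, k + 2 * k * motion_dim)

    def forward(self, sound, prev_motion):
        h, _ = self.rnn(torch.cat([sound, prev_motion], dim=-1))
        p = self.params(h)
        logits = p[..., :self.k]
        mu = p[..., self.k:self.k + self.k * self.motion_dim]
        mu = mu.reshape(*mu.shape[:-1], self.k, self.motion_dim)
        sigma = torch.exp(p[..., self.k + self.k * self.motion_dim:])
        sigma = sigma.reshape(*sigma.shape[:-1], self.k, self.motion_dim)
        mix = D.Categorical(logits=logits)
        comp = D.Independent(D.Normal(mu, sigma), 1)
        return D.MixtureSameFamily(mix, comp)

# Training minimizes negative log-likelihood of recorded tracings; generation
# samples from the predicted mixture (a real generator would feed each sample
# back in as prev_motion, step by step):
model = MDRNN()
sound = torch.randn(1, 100, 20)              # dummy sound feature sequence
prev = torch.zeros(1, 100, 2)                # dummy previous motion points
dist = model(sound, prev)
target = torch.randn(1, 100, 2)              # dummy recorded tracing
nll = -dist.log_prob(target).mean()          # training objective
tracing = dist.sample()                      # (1, 100, 2) generated motion
```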