4,371 research outputs found

    Generative Autoregressive Networks for 3D Dancing Move Synthesis from Music

    This paper proposes a framework that generates a sequence of three-dimensional human dance poses for a given piece of music. The proposed framework consists of three components: a music feature encoder, a pose generator, and a music genre classifier. We focus on integrating these components to generate realistic 3D human dance moves from music, which can be applied to artificial agents and humanoid robots. The trained dance pose generator, a generative autoregressive model, is able to synthesize a dance sequence longer than 5,000 pose frames. Experimental results on dance sequences generated from various songs show how the proposed method produces human-like dance moves for a given piece of music. In addition, a generated 3D dance sequence is applied to a humanoid robot, showing that the proposed framework can make a robot dance just by listening to music.
    Comment: 8 pages, 10 figures
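
    To make the generation loop concrete, the following is a minimal sketch, not the authors' implementation, of a generative autoregressive pose model conditioned on music features. The GRU backbone, the feature size (40 MFCC-like coefficients), and the pose size (24 joints x 3 coordinates) are all illustrative assumptions.

```python
# Minimal sketch of an autoregressive pose generator conditioned on music.
# Dimensions and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class AutoregressivePoseGenerator(nn.Module):
    def __init__(self, music_dim=40, pose_dim=72, hidden_dim=512):
        super().__init__()
        # Consume the current music frame together with the previous pose.
        self.rnn = nn.GRU(music_dim + pose_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, pose_dim)

    def forward(self, music_feats, init_pose):
        # music_feats: (batch, time, music_dim); init_pose: (batch, pose_dim)
        poses, pose, h = [], init_pose, None
        for t in range(music_feats.size(1)):
            step = torch.cat([music_feats[:, t], pose], dim=-1).unsqueeze(1)
            out, h = self.rnn(step, h)
            pose = self.head(out.squeeze(1))   # predicted pose is fed back in
            poses.append(pose)
        return torch.stack(poses, dim=1)       # (batch, time, pose_dim)

# Because each output is fed back as the next input, the rollout length is
# bounded only by the music, which is how sequences of 5,000+ frames arise.
model = AutoregressivePoseGenerator()
dance = model(torch.randn(1, 100, 40), torch.zeros(1, 72))
```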

    Training Physics-based Controllers for Articulated Characters with Deep Reinforcement Learning

    In this thesis, two applications of machine learning techniques for training coordinated motion controllers for arbitrary characters, in the absence of motion capture data, are discussed. The methods highlight the usefulness of physical simulation for generating synthetic, generic motion data from which various targeted skills can be learned. First, we present an unsupervised method for learning locomotion skills in virtual characters from a low-dimensional latent space that captures the coordination between multiple joints. We use a technique called motor babble, wherein a character interacts with its environment by actuating its joints through uncoordinated, low-level (motor) excitation, resulting in a corpus of motion data from which a latent manifold space can be extracted (as sketched below). Using reinforcement learning, we then train the character to learn locomotion (such as walking or running) in the low-dimensional latent space instead of the full-dimensional joint action space. The thesis also presents an end-to-end automated framework for training physics-based characters to dance rhythmically to user-input songs. A generative adversarial network (GAN) architecture is proposed that learns to generate physically stable dance moves through repeated interactions with the environment. These moves are then used to construct a dance network that can be used for choreography. Using deep reinforcement learning (DRL), the character is then trained to perform these moves, without losing balance or rhythm, in the presence of physical forces such as gravity and friction.
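
    A minimal sketch of the motor babble data-collection step, assuming a Gym-style simulated character and using plain PCA as a stand-in for whatever manifold-extraction method the thesis actually uses; the environment name, sample count, and latent size are illustrative.

```python
# Motor babble sketch: drive the joints with uncoordinated random torques,
# record the commands, and extract a low-dimensional latent action space.
import numpy as np
import gymnasium as gym                       # assumes MuJoCo extras installed
from sklearn.decomposition import PCA

env = gym.make("Humanoid-v4")                 # stand-in articulated character
obs, _ = env.reset(seed=0)

corpus = []
for _ in range(10_000):
    action = env.action_space.sample()        # uncoordinated motor excitation
    obs, _, terminated, truncated, _ = env.step(action)
    corpus.append(action)
    if terminated or truncated:               # character fell; start over
        obs, _ = env.reset()

latent = PCA(n_components=8).fit(np.array(corpus))
# An RL policy can now act in this 8-D space; latent.inverse_transform maps
# its outputs back to full-dimensional joint actuations.
```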

    Granular Dance

    ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit

    Dance and music are two highly correlated artistic forms, and synthesizing dance motion has attracted much attention recently. Most previous works conduct music-to-dance synthesis by directly mapping music to human skeleton keypoints. Human choreographers, by contrast, design dance motions from music in a two-stage manner: they first devise multiple choreographic action units (CAUs), each comprising a series of dance motions, and then arrange the CAU sequence according to the rhythm, melody and emotion of the music. Inspired by this, we systematically study the two-stage choreography approach and construct a dataset that incorporates such choreography knowledge. Based on the constructed dataset, we design a two-stage music-to-dance synthesis framework, ChoreoNet, to imitate the human choreography procedure. Our framework first devises a CAU prediction model to learn the mapping between music and CAU sequences. Afterwards, we devise a spatial-temporal inpainting model to convert the CAU sequence into continuous dance motions (a toy version of this two-stage pipeline is sketched below). Experimental results demonstrate that the proposed ChoreoNet outperforms baseline methods (0.622 in terms of CAU BLEU score and 1.59 in terms of user study score).
    Comment: 10 pages, 5 figures. Accepted by ACM MM 2020.
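
    A toy version of the two-stage pipeline described above, not the authors' code: stage 1 maps music features to a sequence of CAU ids, and stage 2 expands those ids into continuous motion. The CAU library, all dimensions, and the linear cross-fade standing in for the learned spatial-temporal inpainting model are assumptions.

```python
# Two-stage music-to-dance sketch in the spirit of ChoreoNet.
import torch
import torch.nn as nn

NUM_CAUS, MUSIC_DIM, POSE_DIM, UNIT_LEN = 100, 40, 72, 32

class CAUPredictor(nn.Module):
    """Stage 1: per-frame CAU classification from music features."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(MUSIC_DIM, 256, batch_first=True)
        self.cls = nn.Linear(256, NUM_CAUS)

    def forward(self, music):                 # (batch, time, MUSIC_DIM)
        h, _ = self.rnn(music)
        return self.cls(h)                    # (batch, time, NUM_CAUS) logits

def expand_caus(cau_ids, cau_library, blend=8):
    # Stage 2 stand-in: concatenate stored unit motions and linearly
    # cross-fade `blend` frames at each boundary (ChoreoNet instead trains
    # a spatial-temporal inpainting model for these transitions).
    motion = cau_library[cau_ids[0]].clone()
    for cid in cau_ids[1:]:
        unit = cau_library[cid]
        w = torch.linspace(0, 1, blend).unsqueeze(1)
        motion[-blend:] = (1 - w) * motion[-blend:] + w * unit[:blend]
        motion = torch.cat([motion, unit[blend:]], dim=0)
    return motion                             # (frames, POSE_DIM)

library = torch.randn(NUM_CAUS, UNIT_LEN, POSE_DIM)   # toy CAU motion bank
logits = CAUPredictor()(torch.randn(1, 64, MUSIC_DIM))
ids = logits.argmax(-1)[0, ::UNIT_LEN].tolist()       # one CAU per unit length
dance = expand_caus(ids, library)                      # (frames, POSE_DIM)
```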