
    Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks

    In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of perceptual qualities of short sound objects through body motion. The model is trained on a dataset of single-point sound-tracings with multimodal input data and learns to generate novel tracings. We use a second neural network classifier to show that the input sound can be identified from generated tracings. This is part of an ongoing research effort to examine the complex correlations between sound and movement and the possibility of modelling these relationships using deep learning. This work was partially supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262762.
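    A mixture density output layer, as used in the MDRNN described above, maps a recurrent hidden state to the parameters of a Gaussian mixture and samples a motion frame from it. The sketch below illustrates that sampling step only; the dimensions, random weights, and two-dimensional motion output are illustrative assumptions, not details from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: RNN hidden width, number of mixture components,
    # and output dimensionality (e.g. 2-D motion coordinates).
    HIDDEN, K, OUT = 32, 5, 2

    # Random projections standing in for a trained MDN output layer.
    W_pi = rng.normal(0, 0.1, (HIDDEN, K))        # mixture weight logits
    W_mu = rng.normal(0, 0.1, (HIDDEN, K * OUT))  # component means
    W_sigma = rng.normal(0, 0.1, (HIDDEN, K))     # component spreads (log-scale)

    def mdn_sample(h):
        """Sample one motion frame from the mixture defined by hidden state h."""
        logits = h @ W_pi
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()                        # softmax over mixture weights
        mu = (h @ W_mu).reshape(K, OUT)       # per-component mean vectors
        sigma = np.exp(h @ W_sigma)           # exp keeps std deviations positive
        k = rng.choice(K, p=pi)               # pick a component by its weight
        return rng.normal(mu[k], sigma[k])    # draw from that Gaussian

    frame = mdn_sample(rng.normal(size=HIDDEN))
    print(frame.shape)  # one 2-D motion frame
    ```

    At generation time, each sampled frame would be fed back as part of the next input so the network produces a full tracing autoregressively.
    
    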