Recurrent Neural Networks for Modeling Motion Capture Data

Abstract

This thesis introduces a Recurrent Neural Network (RNN) framework as a generative model for synthesizing human motion capture data. The data are represented with a complex human skeleton model of 64 joints, for a total of 192 degrees of freedom, making our data nearly three times as complex as in previous approaches applying neural networks to motion capture data. The RNN model generates novel, good-quality human motion sequences that can, at times, be difficult to visually distinguish from real motion capture data, demonstrating the ability of RNNs to analyze long and very high-dimensional sequences. Additionally, the synthesized motion sequences show strong inter-joint correspondence and extend up to 250 frames. The quality and accuracy of the motion are analyzed quantitatively through various metrics that evaluate inter-joint relationships and temporal joint correspondence.
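As a minimal sketch of the generative setup described above (not the thesis's actual architecture or trained weights), a vanilla RNN can autoregressively roll out pose frames of 192 degrees of freedom, feeding each predicted frame back as the next input; all weights and dimensions here are illustrative assumptions:

```python
import numpy as np

# Hypothetical illustration: a vanilla RNN generating 192-dimensional
# pose frames (64 joints, 192 DOF total), rolled out for 250 frames.
rng = np.random.default_rng(0)
DOF, HIDDEN = 192, 128

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(0, 0.01, (HIDDEN, DOF))
W_hh = rng.normal(0, 0.01, (HIDDEN, HIDDEN))
W_hy = rng.normal(0, 0.01, (DOF, HIDDEN))

def generate(seed_frame, n_frames=250):
    """Roll the RNN forward, feeding each output back as the next input."""
    h = np.zeros(HIDDEN)
    x = seed_frame
    frames = []
    for _ in range(n_frames):
        h = np.tanh(W_xh @ x + W_hh @ h)  # recurrent hidden-state update
        x = W_hy @ h                      # predicted next pose vector
        frames.append(x)
    return np.stack(frames)

motion = generate(rng.normal(size=DOF))
print(motion.shape)  # (250, 192)
```

The autoregressive feedback loop is what lets a trained model produce extended sequences with coherent inter-joint structure, since every joint's next value is conditioned on the full previous pose.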