Graph Edge Convolutional Neural Networks for Skeleton-Based Action Recognition

Abstract

Body joints, directly obtained from a pose estimation model, have proven effective for action recognition. Existing works focus on analyzing the dynamics of human joints. However, besides joints, humans also rely on the motions of limbs to understand actions. Given this observation, we investigate the dynamics of human limbs for skeleton-based action recognition. Specifically, we represent an edge in a graph of a human skeleton by integrating its spatial neighboring edges (to encode the cooperation between different limbs) and its temporal neighboring edges (to capture the consistency of movements within an action). Based on this new edge representation, we devise a graph edge convolutional neural network. Considering the complementarity between graph node convolution and edge convolution, we further construct two hybrid networks by introducing different shared intermediate layers to integrate graph node- and edge-convolutional neural networks. Our contributions are twofold: graph edge convolution, and hybrid networks that integrate the proposed edge convolution with conventional node convolution. Experimental results on the Kinetics and NTU-RGB+D datasets demonstrate that our graph edge convolution is effective at capturing the characteristics of actions and that our graph edge convolutional neural network significantly outperforms existing state-of-the-art skeleton-based action recognition methods.
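The abstract describes updating each edge (bone) feature from its spatial neighbors (edges sharing a joint) and its temporal neighbors (the same edge in adjacent frames). The following is a minimal illustrative sketch of that idea, not the paper's actual implementation; the skeleton, feature shapes, normalization, and the function `edge_conv` are all assumptions introduced for illustration.

```python
import numpy as np

# Toy skeleton: 4 joints, 3 edges (bones), features over T frames.
# All shapes and names here are illustrative assumptions.
edges = [(0, 1), (1, 2), (1, 3)]             # bones as joint pairs
T, C = 5, 8                                  # frames, feature channels
rng = np.random.default_rng(0)
X = rng.standard_normal((T, len(edges), C))  # per-frame edge features

# Spatial adjacency: two edges are neighbors if they share a joint.
E = len(edges)
A = np.zeros((E, E))
for i in range(E):
    for j in range(E):
        if i != j and set(edges[i]) & set(edges[j]):
            A[i, j] = 1.0
A += np.eye(E)                               # include the edge itself
A /= A.sum(axis=1, keepdims=True)            # row-normalize

W = rng.standard_normal((C, C)) * 0.1        # learnable weights (fixed here)

def edge_conv(X, A, W):
    """One spatial-temporal edge-convolution step (illustrative)."""
    # Spatial part: mix each edge with its adjacent edges.
    spatial = np.einsum('ef,tfc->tec', A, X)
    # Temporal part: average the same edge over frames t-1, t, t+1.
    temporal = np.zeros_like(X)
    for t in range(X.shape[0]):
        lo, hi = max(0, t - 1), min(X.shape[0], t + 2)
        temporal[t] = X[lo:hi].mean(axis=0)
    # Shared linear map over channels.
    return (spatial + temporal) @ W

Y = edge_conv(X, A, W)
print(Y.shape)  # → (5, 3, 8)
```

A full network would stack several such layers (with nonlinearities and learned adjacency weights) and pool over frames and edges before classification.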
