Cross-Modal Learning with 3D Deformable Attention for Action Recognition
An important challenge in vision-based action recognition is embedding spatiotemporal features from two or more heterogeneous modalities into a single feature. In this study, we propose a new 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme. The 3D deformable transformer consists of three attention modules: 3D deformable attention, local joint stride attention, and temporal stride attention. The two cross-modal tokens are fed into the 3D deformable attention module to create a cross-attention token that reflects their spatiotemporal correlation.
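No code accompanies this abstract; purely as an illustration, a single-head 3D deformable attention step could be sketched in PyTorch as below. The module name, the number of sampling points, and the (t, y, x) reference-point convention are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): single-head 3D deformable
# attention that samples key/value features from a (T, H, W) token volume
# at learned offsets around each query.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Deformable3DAttention(nn.Module):
    def __init__(self, dim, num_points=4):
        super().__init__()
        self.num_points = num_points
        self.offset_net = nn.Linear(dim, 3 * num_points)  # (t, y, x) offset per point
        self.q_proj = nn.Linear(dim, dim)
        self.kv_proj = nn.Linear(dim, 2 * dim)
        self.out_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, queries, volume, ref_points):
        # queries:    (B, N, C)        query tokens (e.g., cross-modal tokens)
        # volume:     (B, C, T, H, W)  spatiotemporal feature volume
        # ref_points: (B, N, 3)        reference (t, y, x) in [-1, 1] per query
        B, N, C = queries.shape
        # Each query predicts 3D offsets, added to its reference point.
        offsets = self.offset_net(queries).view(B, N, self.num_points, 3).tanh()
        loc = (ref_points.unsqueeze(2) + offsets).clamp(-1, 1)   # (B, N, P, 3)
        # grid_sample expects (x, y, t) coordinate order for a (T, H, W) volume.
        grid = loc.flip(-1).view(B, N, self.num_points, 1, 3)
        sampled = F.grid_sample(volume, grid, align_corners=True)  # (B, C, N, P, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 3, 1)          # (B, N, P, C)
        k, v = self.kv_proj(sampled).chunk(2, dim=-1)
        q = self.q_proj(queries).unsqueeze(2)                      # (B, N, 1, C)
        attn = (q * k).sum(-1, keepdim=True) * self.scale          # (B, N, P, 1)
        attn = attn.softmax(dim=2)
        out = (attn * v).sum(dim=2)                                # (B, N, C)
        return self.out_proj(out)

# Example: 4 cross-modal query tokens sampling from an 8x14x14 feature volume.
vol = torch.randn(2, 64, 8, 14, 14)
q = torch.randn(2, 4, 64)
ref = torch.zeros(2, 4, 3)  # all queries referenced at the volume center
y = Deformable3DAttention(64)(q, vol, ref)  # (2, 4, 64)
```

The point of this construction is that each query learns where in the space-time volume to sample, so the receptive field adapts per token instead of being fixed.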
Local joint stride attention spatially combines the attention and pose tokens. Temporal stride attention reduces the number of input tokens along the temporal axis, supporting the learning of temporal representations without using all tokens simultaneously. The deformable transformer block is iterated L times, and the final cross-modal token is used for classification.
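Again as an assumed sketch rather than the authors' code, the token-reduction idea behind temporal stride attention can be illustrated by letting every token attend only to a temporally strided subset of key/value tokens; the stride value and tensor shapes below are illustrative.

```python
# Hypothetical sketch of temporal stride attention: queries attend only to a
# temporally strided subset of tokens, so the key/value length (and thus the
# attention cost) shrinks by the stride factor.
import torch
import torch.nn as nn

class TemporalStrideAttention(nn.Module):
    def __init__(self, dim, num_heads=8, stride=2):
        super().__init__()
        self.stride = stride
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (B, T, J, C) -- T frames, J joint tokens per frame
        B, T, J, C = tokens.shape
        q = tokens.reshape(B, T * J, C)
        # Keep every `stride`-th frame for keys/values: (B, (T//stride)*J, C).
        kv = tokens[:, ::self.stride].reshape(B, -1, C)
        out, _ = self.attn(q, kv, kv)
        return out.view(B, T, J, C)

# Example: 32 frames x 25 joints -> key/value tokens reduced from 800 to 400.
x = torch.randn(2, 32, 25, 256)
y = TemporalStrideAttention(256)(x)
```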
The proposed 3D deformable transformer was tested on the NTU60, NTU120, FineGYM, and Penn Action datasets, and achieved results better than or comparable to pre-trained state-of-the-art methods, even without any pre-training. In addition, by visualizing the important joints and correlations during action recognition through spatial joint and temporal stride attention, we show the model's potential for explainable action recognition.

Comment: 10 pages, 8 figures