The recent increase in remote work, online meetings, and tele-operation tasks has made
people realize that gestures for avatars and communication robots are more important
than previously thought. Gesture is one of the key factors in achieving smooth and
natural communication between humans and AI systems and has been intensively
researched.
researched. Current gesture generation methods are mostly based on deep neural
network using text, audio and other information as the input, however, they
generate gestures mainly based on audio, which is called a beat gesture.
Although beat gestures account for more than 70% of actual human
gestures, content-based gestures sometimes play an important role in making
avatars more realistic and human-like. In this paper, we propose
attention-based contrastive learning for text-to-gesture (ACT2G), in which the
generated gestures represent the content of the text by estimating an attention
weight for each word in the input text. In this method, text and gesture
features computed with these attention weights are mapped to a shared latent space
by contrastive learning, so that, given text as input, the network outputs a
feature vector from which gestures related to the content can be generated.
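To make this concrete, the following is a minimal sketch of how attention-weighted text encoding and contrastive alignment with gesture features could look in PyTorch. The module names, layer sizes, pose dimensionality, and the symmetric InfoNCE loss are illustrative assumptions, not the exact ACT2G architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Pools word embeddings into one text feature via learned per-word
    attention weights (dimensions are illustrative assumptions)."""
    def __init__(self, word_dim=300, latent_dim=128):
        super().__init__()
        self.attn = nn.Linear(word_dim, 1)       # one attention score per word
        self.proj = nn.Linear(word_dim, latent_dim)

    def forward(self, words):                    # words: (batch, n_words, word_dim)
        weights = F.softmax(self.attn(words).squeeze(-1), dim=-1)   # (batch, n_words)
        pooled = (weights.unsqueeze(-1) * words).sum(dim=1)         # attention-weighted sum
        return F.normalize(self.proj(pooled), dim=-1), weights

class GestureEncoder(nn.Module):
    """Encodes a pose sequence into the same latent space
    (a GRU is an assumed choice of sequence encoder)."""
    def __init__(self, pose_dim=57, latent_dim=128):
        super().__init__()
        self.gru = nn.GRU(pose_dim, latent_dim, batch_first=True)

    def forward(self, poses):                    # poses: (batch, n_frames, pose_dim)
        _, h = self.gru(poses)
        return F.normalize(h.squeeze(0), dim=-1)

def contrastive_loss(text_z, gesture_z, temperature=0.07):
    """Symmetric InfoNCE: pulls matched text/gesture pairs together and
    pushes apart mismatched pairs within the batch."""
    logits = text_z @ gesture_z.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(text_z.size(0), device=text_z.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

At inference time, only the text branch would be needed: the text feature retrieved from the shared space selects or conditions gesture generation, and editing the per-word attention weights changes which content words drive the gesture.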
A user study confirmed that the gestures generated by ACT2G were rated higher than
those produced by existing methods. In addition, we demonstrated that a wide variety
of gestures can be generated from the same text by having creators adjust the
attention weights.