2 research outputs found

    Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks

    Linking human whole-body motion and natural language is of great interest for generating semantic representations of observed human behaviors as well as for generating robot behaviors from natural language input. While there is a large body of research in this area, most existing approaches require a symbolic representation of motions (e.g. in the form of motion primitives), which must be defined a priori or require complex segmentation algorithms. In contrast, recent advances in neural networks, and especially deep learning, have shown that sub-symbolic representations learned end-to-end usually outperform more traditional approaches for applications such as machine translation. In this paper, we propose a generative model that learns a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks (RNNs) and sequence-to-sequence learning. Our approach requires no segmentation or manual feature engineering and learns a distributed representation that is shared across all motions and descriptions. We evaluate our approach on 2,846 human whole-body motions and 6,187 natural language descriptions thereof from the KIT Motion-Language Dataset. Our results clearly demonstrate the effectiveness of the proposed model: it generates a wide variety of realistic motions from single-sentence descriptions alone and, conversely, produces correct and detailed natural language descriptions from human motions.
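    The motion-to-text direction of such a sequence-to-sequence model can be illustrated with a minimal sketch. The code below assumes a PyTorch GRU encoder-decoder; the joint dimensionality, vocabulary size, and layer sizes are placeholders for illustration and are not the architecture reported in the paper.

        # Minimal sketch of a sequence-to-sequence mapping from motion frames to words.
        # Dimensions and layer choices are hypothetical, not the authors' actual model.
        import torch
        import torch.nn as nn

        class MotionEncoder(nn.Module):
            def __init__(self, joint_dim=44, hidden_dim=256):
                super().__init__()
                self.rnn = nn.GRU(joint_dim, hidden_dim, batch_first=True)

            def forward(self, motion):           # motion: (batch, frames, joint_dim)
                _, h = self.rnn(motion)          # h: (1, batch, hidden_dim)
                return h                         # shared distributed representation

        class LanguageDecoder(nn.Module):
            def __init__(self, vocab_size=3000, hidden_dim=256, embed_dim=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, vocab_size)

            def forward(self, tokens, h):        # tokens: (batch, words)
                x = self.embed(tokens)
                y, _ = self.rnn(x, h)
                return self.out(y)               # logits over the vocabulary

        # Training step: encode a motion, decode its description, minimize cross-entropy.
        encoder, decoder = MotionEncoder(), LanguageDecoder()
        motion = torch.randn(8, 120, 44)         # 8 clips, 120 frames, 44 joint values
        tokens = torch.randint(0, 3000, (8, 12)) # 8 descriptions, 12 word ids each
        logits = decoder(tokens[:, :-1], encoder(motion))
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, 3000), tokens[:, 1:].reshape(-1))
        loss.backward()

    The text-to-motion direction would mirror this setup, with a word-sequence encoder and a frame-sequence decoder sharing the same hidden representation.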

    TuD11.1 Interactive Topology Formation of Linguistic Space and Motion Space

    A hierarchical model incorporating motion patterns, proto symbols, and words is proposed. The proto symbols abstract motion patterns, while the words are stochastically associated with the proto symbols. This paper describes the construction of a word space in which words are located in a multidimensional space based on dissimilarities among them. The dissimilarity between two words can be calculated from the probabilities with which each word generates the motion proto symbols. The word space encapsulates relations among the words, such as which pairs are similar or dissimilar, and also allows motion recognition based on words. The validity of the constructed word space is demonstrated on a motion capture database. Moreover, adding the word associations is found to change the conventional proto symbol space so that discrimination among the proto symbols is improved.
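    The construction of the word space can be sketched as follows: each word is represented by its vector of association probabilities over the proto symbols, pairwise dissimilarities are computed between these vectors, and the words are placed in a low-dimensional space that preserves those dissimilarities. The word list, probability values, Euclidean metric, and the use of multidimensional scaling below are assumptions for illustration, not details taken from the paper.

        # Sketch: word space from word-to-proto-symbol association probabilities.
        # Words, probabilities, the metric, and the MDS embedding are illustrative.
        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from sklearn.manifold import MDS

        words = ["walk", "run", "kick", "punch"]   # hypothetical vocabulary
        # P[w, s]: probability that word w generates proto symbol s (rows sum to 1).
        P = np.array([[0.70, 0.20, 0.05, 0.05],
                      [0.25, 0.65, 0.05, 0.05],
                      [0.05, 0.05, 0.60, 0.30],
                      [0.05, 0.05, 0.35, 0.55]])

        # Dissimilarity between two words: distance between their association
        # probability vectors (Euclidean here; a divergence would also fit).
        D = squareform(pdist(P, metric="euclidean"))

        # Place the words in a 2-D word space that preserves the dissimilarities.
        word_space = MDS(n_components=2, dissimilarity="precomputed",
                         random_state=0).fit_transform(D)
        for word, coords in zip(words, word_space):
            print(f"{word:6s} -> {coords}")

    In this toy example, "walk" and "run" end up close together because their association probability vectors are similar, while "kick" and "punch" form a separate neighborhood.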