
    MSA-GCN: Multiscale Adaptive Graph Convolution Network for Gait Emotion Recognition

    Gait emotion recognition plays a crucial role in intelligent systems. Most existing methods recognize emotions by focusing on local actions over time, but they ignore that different emotions have different effective distances in the time domain and that local actions during walking are quite similar. Emotions should therefore be represented by global states rather than indirect local actions. To address these issues, a novel Multiscale Adaptive Graph Convolution Network (MSA-GCN) is presented in this work, which constructs dynamic temporal receptive fields and aggregates multiscale information to recognize emotions. In our model, an adaptive selective spatial-temporal graph convolution is designed to select the convolution kernel dynamically and obtain the soft spatio-temporal features of different emotions. Moreover, a Cross-Scale mapping Fusion Mechanism (CSFM) is designed to construct an adaptive adjacency matrix that enhances information interaction and reduces redundancy. Compared with previous state-of-the-art methods, the proposed method achieves the best performance on two public datasets, improving mAP by 2%. We also conduct extensive ablation studies to show the effectiveness of the different components of our method.
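    As a rough illustration of the kernel-selection idea described in this abstract, the sketch below (PyTorch; hypothetical module and parameter names, not the authors' released code) applies temporal convolutions of several kernel sizes in parallel and fuses them with learned attention weights, so that the temporal receptive field adapts per sample; the cross-scale fusion mechanism is omitted.

    # Minimal sketch of an "adaptive selective" spatial-temporal graph convolution:
    # parallel temporal branches with different kernel sizes are fused via learned
    # softmax weights. Hypothetical names; not the MSA-GCN reference implementation.
    import torch
    import torch.nn as nn

    class SelectiveSTGC(nn.Module):
        def __init__(self, in_ch, out_ch, adjacency, kernel_sizes=(3, 9)):
            super().__init__()
            self.register_buffer("A", adjacency)                 # (V, V) joint adjacency matrix
            self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.temporal = nn.ModuleList([
                nn.Conv2d(out_ch, out_ch, kernel_size=(k, 1), padding=(k // 2, 0))
                for k in kernel_sizes
            ])
            self.gate = nn.Sequential(                           # attention over temporal branches
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(out_ch, len(kernel_sizes), kernel_size=1),
                nn.Softmax(dim=1),
            )

        def forward(self, x):                                    # x: (N, C, T, V) joint features
            x = self.spatial(x)
            x = torch.einsum("nctv,vw->nctw", x, self.A)         # propagate over the skeleton graph
            branches = torch.stack([conv(x) for conv in self.temporal], dim=1)  # (N, K, C, T, V)
            w = self.gate(x).view(x.size(0), -1, 1, 1, 1)        # per-sample branch weights
            return (w * branches).sum(dim=1)                     # soft selection of temporal scale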

    Social Perception of Pedestrians and Virtual Agents Using Movement Features

    In many tasks, such as navigation in a shared space, humans explicitly or implicitly estimate social information related to the emotions, dominance, and friendliness of other humans around them. This social perception is critical in predicting others’ motions or actions and deciding how to interact with them. Therefore, modeling social perception is an important problem for robotics, autonomous vehicle navigation, and VR and AR applications. In this thesis, we present novel, data-driven models for the social perception of pedestrians and virtual agents based on their movement cues, including gait, gestures, gaze, and trajectories. We use deep learning techniques (e.g., LSTMs) along with biomechanics to compute the gait features and combine them with local motion models to compute the trajectory features. Furthermore, we compute the gesture and gaze representations using psychological characteristics. We describe novel mappings between these computed gait, gesture, gaze, and trajectory features and the various components of social perception (emotions, dominance, friendliness, approachability, and deception). Our resulting data-driven models can identify the dominance, deception, and emotion of pedestrians from videos with an accuracy of more than 80%. We also release new datasets to evaluate these methods. We apply our data-driven models to socially-aware robot navigation and to the navigation of autonomous vehicles among pedestrians. Our method generates robot movement based on pedestrians’ dominance levels, resulting in higher rapport and comfort. We also apply our data-driven models to simulate virtual agents with desired emotions, dominance, and friendliness. We perform user studies and show that our data-driven models significantly increase the user’s sense of social presence in VR and AR environments compared to the baseline methods.
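    Purely as an illustration of the gait branch of such models, the sketch below uses a small LSTM over 3D joint trajectories; the layer sizes, class set, and input layout are assumptions, not the thesis implementation.

    # Hedged sketch of an LSTM classifier over pose sequences (assumed shapes and classes).
    import torch
    import torch.nn as nn

    class GaitEmotionLSTM(nn.Module):
        def __init__(self, num_joints=25, hidden=128, num_classes=4):
            super().__init__()
            self.lstm = nn.LSTM(num_joints * 3, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)           # e.g. happy/sad/angry/neutral

        def forward(self, poses):                                # poses: (N, frames, num_joints * 3)
            _, (h, _) = self.lstm(poses)
            return self.head(h[-1])                              # emotion logits from the last hidden state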

    Identifying Emotions from Non-Contact Gaits Information Based on Microsoft Kinects

    Automatic emotion recognition from gait information is discussed in this paper; it has been investigated widely in the fields of human-machine interaction, psychology, psychiatry, behavioral science, etc. The gait information is non-contact, collected with Microsoft Kinects, and contains the 3-dimensional coordinates of 25 joints per person, which vary over time. Using the discrete Fourier transform and statistical methods, time-frequency features related to the neutral, happy, and angry emotions are extracted and used to build a classification model that identifies these three emotions. Experimental results show that this model works very well and that time-frequency features are effective in characterizing and recognizing emotions from this non-contact gait data. In particular, with the optimization algorithm, the recognition accuracy can be further improved by about 13.7 percent on average.
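    A minimal sketch of the kind of pipeline this abstract describes, assuming NumPy and scikit-learn, a fixed number of low-frequency DFT bins, and an SVM as one off-the-shelf classifier choice (the paper's exact features, classifier, and optimization algorithm may differ):

    # Time-frequency gait features from Kinect joint trajectories (illustrative only).
    import numpy as np
    from sklearn.svm import SVC

    def gait_features(joints, n_freq=8):
        # joints: (frames, 25, 3) array of 3-D joint coordinates over time
        flat = joints.reshape(joints.shape[0], -1)                     # (frames, 75)
        spectrum = np.abs(np.fft.rfft(flat, axis=0))[:n_freq]          # low-frequency DFT magnitudes
        stats = np.concatenate([flat.mean(axis=0), flat.std(axis=0)])  # simple per-coordinate statistics
        return np.concatenate([spectrum.ravel(), stats])               # one feature vector per gait clip

    # clips: list of (frames, 25, 3) arrays; labels: neutral / happy / angry
    # X = np.stack([gait_features(c) for c in clips])
    # clf = SVC().fit(X, labels)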

    Workshop, Long and Short Paper, and Poster Proceedings from the Fourth Immersive Learning Research Network Conference (iLRN 2018 Montana), 2018.

    iLRN 2018: international conference held in Montana, 24-29 June 2018. Workshop, short paper, and long paper proceedings.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal under consideration.
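    The statistical evaluation mentioned above can be read in the spirit of a lens-model analysis; the sketch below is a loose illustration with assumed variable names and plain Pearson correlation, measuring how well an internal state is externalized into the gaze cue and how well the observer's attribution recovers that state.

    # Illustrative lens-model style statistics (assumed formulation, not the paper's exact tools).
    import numpy as np

    def lens_model_stats(true_state, gaze_cue, attributed_state):
        # Each argument: 1-D array with one value per interaction episode.
        corr = lambda a, b: np.corrcoef(a, b)[0, 1]
        return {
            "externalization": corr(true_state, gaze_cue),              # state -> expressed cue
            "attribution": corr(gaze_cue, attributed_state),            # cue -> observer's judgment
            "functional_validity": corr(true_state, attributed_state),  # overall recognition effectiveness
        }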