
    Human motion modeling and simulation by anatomical approach

    Instantly generating an unlimited range of realistic human motions remains a great challenge in virtual human simulation. In this paper, a novel emotion-affected motion classification and an anatomical motion classification are presented, along with motion capture and parameterization methods. The paper also describes a framework for a novel anatomical approach to modelling human motion in an HTR (Hierarchical Translations and Rotations) file format. This anatomical approach to human motion modelling has the potential to generate any desired human motion from a compact motion database. An architecture for the real-time generation of new motions is also proposed.
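
    As a rough illustration of the hierarchical translations-and-rotations idea, the sketch below models a skeleton as nested joints, each carrying a translation offset from its parent and an Euler-angle rotation. The joint names, data layout, and traversal are illustrative assumptions, not the paper's actual HTR schema.

```python
# Minimal sketch of a hierarchical skeleton with per-joint translation and
# rotation channels, loosely in the spirit of an HTR-style representation.
# All names and the layout here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    translation: tuple  # (x, y, z) offset from the parent joint
    rotation: tuple     # Euler angles in degrees
    children: list = field(default_factory=list)

def walk(joint: Joint, depth: int = 0) -> None:
    """Traverse the hierarchy depth-first, printing one line per joint."""
    print("  " * depth + f"{joint.name} t={joint.translation} r={joint.rotation}")
    for child in joint.children:
        walk(child, depth + 1)

# One frame of a toy two-joint chain.
root = Joint("hips", (0.0, 1.0, 0.0), (0.0, 5.0, 0.0),
             [Joint("spine", (0.0, 0.1, 0.0), (2.0, 0.0, 0.0))])
walk(root)
```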

    Emotion sensing from head motion capture

    Computational analysis of emotion from verbal and non-verbal behavioral cues is critical for human-centric intelligent systems. Among non-verbal cues, head motion has received relatively little attention, although its importance has been noted in several studies. We propose a new approach for emotion recognition from head motion captured with Motion Capture (MoCap). Our approach is motivated by the well-known kinesics-phonetic analogy, which advocates that, analogous to human speech being composed of phonemes, head motion is composed of kinemes, i.e., elementary motion units. We discover a set of kinemes from head motion in an unsupervised manner by projecting the motion onto a learned basis domain and subsequently clustering the projections. This transforms any head motion into a sequence of kinemes. Next, we learn the temporal latent structures within the kineme sequence pertaining to each emotion. For this purpose, we explore two separate approaches: one using a Hidden Markov Model and another using an artificial neural network. This class-specific, kineme-based representation of head motion is used to perform emotion recognition on the popular IEMOCAP database. We achieve high recognition accuracy (61.8% for the three-class task) for various emotion recognition tasks using head motion alone. This work adds to our understanding of head motion dynamics and has applications in emotion analysis and in head motion animation and synthesis.
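
    A minimal sketch of the kineme-discovery step described above: short windows of Euler-angle head motion are projected onto a learned basis (PCA is used here as one plausible choice; the paper's exact basis-learning method may differ) and the projections are clustered into discrete kinemes. The window length, basis dimension, and cluster count are illustrative guesses, and the random series stands in for real head-pose tracks.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
motion = rng.standard_normal((1000, 3))          # (frames, 3 Euler angles)

win = 20                                         # frames per window (assumed)
windows = np.stack([motion[i:i + win].ravel()
                    for i in range(0, len(motion) - win, win)])

basis = PCA(n_components=8).fit(windows)         # learned basis domain
coeffs = basis.transform(windows)                # project windows onto basis

# Cluster the projections: each cluster label acts as one discrete kineme.
kinemes = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coeffs)
print(kinemes[:20])   # head motion re-expressed as a sequence of kinemes
```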

    Emotion Transfer for Hand Animation

    We propose a new data-driven framework for synthesizing hand motion at different emotion levels. Specifically, we first capture high-quality hand motion using VR gloves. The hand motion data is then annotated with the emotion type, and a latent space is constructed from the motions to facilitate motion synthesis. By interpolating the latent representations of the hand motions, new hand animations with different levels of emotional strength can be generated. Experimental results show that our framework produces smooth and consistent hand motions at an interactive rate.
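
    A minimal sketch of the interpolation step, under the assumption that each motion has a latent code and a decoder maps codes back to pose parameters. The stand-in decoder and latent codes below are placeholders; the paper's actual latent-space construction is not specified here.

```python
import numpy as np

def decode(z: np.ndarray) -> np.ndarray:
    """Stand-in decoder: maps a latent code back to hand pose parameters."""
    return z @ np.full((8, 20), 0.1)             # latent dim -> pose dim

z_neutral = np.zeros(8)                          # latent code, weak emotion
z_intense = np.ones(8)                           # latent code, strong emotion

# Blend the two codes linearly to dial the emotion strength up or down.
for alpha in (0.0, 0.5, 1.0):
    z = (1 - alpha) * z_neutral + alpha * z_intense
    pose = decode(z)
    print(f"alpha={alpha:.1f} -> pose norm {np.linalg.norm(pose):.2f}")
```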

    On the role of head motion in affective expression

    Non-verbal behavioral cues, such as head movement, play a significant role in human communication and affective expression. Although facial expression and gestures have been extensively studied in the context of emotion understanding, head motion (which accompanies both) remains relatively less understood. This paper studies the significance of head movement in adults' affective communication using videos from movies. These videos are taken from the Acted Facial Expression in the Wild (AFEW) database and are labeled with seven basic emotion categories: anger, disgust, fear, joy, neutral, sadness, and surprise. Considering the human head as a rigid body, we estimate the head pose at each video frame in terms of the three Euler angles, and obtain a time-series representation of head motion. First, we investigate the importance of the energy of angular head motion dynamics (displacement, velocity and acceleration) in discriminating among emotions. Next, we analyze the temporal variation of head motion by fitting an autoregressive model to the head motion time series. We observe that head motion carries sufficient information to distinguish any one emotion from the rest with high accuracy, and that this information is complementary to facial expression, as it helps improve emotion recognition accuracy.
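
    A minimal sketch of the two analyses described above: (1) the energy of head motion dynamics from a single Euler-angle time series, and (2) fitting a simple autoregressive model to that series. The synthetic series, the energy measure (sum of squares), and the AR order are illustrative stand-ins for the paper's real head-pose data and model choices.

```python
import numpy as np

rng = np.random.default_rng(1)
yaw = np.cumsum(rng.standard_normal(300)) * 0.5  # one Euler angle over time

vel = np.gradient(yaw)                           # angular velocity
acc = np.gradient(vel)                           # angular acceleration
for name, sig in [("displacement", yaw), ("velocity", vel), ("acceleration", acc)]:
    print(f"{name} energy: {np.sum(sig ** 2):.1f}")

# Fit an AR(p) model by least squares: yaw[t] ~ sum_k a_k * yaw[t-1-k].
p = 3                                            # AR order (assumed)
X = np.column_stack([yaw[p - k - 1:len(yaw) - k - 1] for k in range(p)])
coefs, *_ = np.linalg.lstsq(X, yaw[p:], rcond=None)
print("AR coefficients:", np.round(coefs, 3))
```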

    Elderly Motion Analysis to Estimate Emotion: A Systematic Review

    This paper presents a systematic review of motion analysis-based emotion estimation in the elderly. Addressing a critical concern, it highlights the challenge of effectively monitoring emotions in older adults and the serious disorders that can develop when emotional needs are neglected. The study underscores the importance of emotional well-being in care facilities, where the willingness of elderly individuals to receive care is closely tied to their emotional state. Health practitioners often encounter difficulties when elderly individuals resist care due to emotional dissatisfaction, which makes monitoring changes in emotional state essential and necessitates comprehensive care records. Through an exhaustive examination of the existing literature, the paper suggests that motion-based emotion recognition shows promise in addressing this challenge. Following the PRISMA protocol, the study conducts a qualitative analysis of the impact of motion analysis on emotion estimation. It outlines the methodologies currently employed in research and reveals a significant correlation between body motion cues and emotional states in the elderly. Furthermore, it positions motion-based emotion estimation as a viable approach to supporting emotional well-being in older adults and offers guidelines for researchers interested in this area. To our knowledge, this is the first review of its kind on motion-based emotion estimation for the elderly, providing insights into potential advancements in addressing emotional well-being in this demographic.

    Types of middle voice in Indonesian language (Tipe-tipe diatesis medial dalam Bahasa Indonesia)

    As the national language, Indonesian has often been used as an object of linguistic study by both local and foreign linguists. This study is concerned with the types and morphological structure of verbs in the Indonesian middle voice. Data were gathered through interviews with speakers of Indonesian and supplemented with examples taken from the daily newspaper Bali Post. The analysis shows that the middle voice in Indonesian can be divided into lexical, morphological, and periphrastic middles. The lexical middle is constructed only with zero-derived intransitive verbs, and the action performed by the ACTOR refers back to the ACTOR. The morphological middle results from the affixes (ber-) and (ber-/-an) attached to verb and noun bases. The periphrastic middle may be derived from the morphological middle; the affixes commonly used to produce it are the transitive affixes (meN-), (meN-/-kan), and (meN-/-i), whose surface forms vary with the initial phoneme of the base. Semantically, the middle voice in Indonesian is classified into ten types, following Kemmer: (1) grooming or body action middle, (2) change in body posture middle, (3) non-translational motion middle, (4) translational motion middle, (5) indirect middle, (6) emotion middle, (7) cognitive middle, (8) spontaneous middle, (9) reciprocal situation middle, and (10) middle of action of emotion.

    Adults' and Children's Identification of Faces and Emotions from Isolated Motion Cues

    Faces communicate a wealth of information, including cues to others' internal emotional states. Face processing is often studied using static stimuli; in real life, however, faces are dynamic. The current project examines face detection and emotion recognition from isolated motion cues. Across two studies, facial motion is presented in point-light displays (PLDs), in which moving white dots against a black screen correspond to dynamic regions of the face. In Study 1, adults were asked to identify the upright facial motion of five basic emotional expressions (e.g., surprise) and five neutral non-rigid movements (e.g., yawning) versus inverted and scrambled distractors. Prior work with static stimuli finds that certain cues, including the addition of motion information, the spatial arrangement of elements, and the emotional significance of stimuli, affect face detection. This study found significant effects involving each of these factors using facial PLDs. Notably, face detection was most accurate in response to face-like arrangements, and motion information was useful in response to unusual point configurations. These results suggest that similar processes underlie the processing of static face images and isolated facial motion cues. In Study 2, children and adults were asked to match PLDs of emotional expressions to their corresponding labels (e.g., match a smiling PLD with the word "happy"). Prior work with face images finds that emotion recognition improves with age, but that the developmental trajectory depends critically on the emotion to be recognized. Emotion recognition in response to PLDs improved with age, with different trajectories across the five emotions tested. Overall, this dissertation contributes to the understanding of the influence of motion information in face processing and emotion recognition by demonstrating that there are similarities in how people process full-featured static faces and isolated facial motion cues in PLDs (which lack features). The finding that even young children can detect emotions from isolated facial motion indicates that features are not needed for making these types of social judgments. PLD stimuli hold promise for future interventions with atypically developing populations.
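
    A minimal sketch of what a point-light display reduces a face to: a handful of dot coordinates whose motion tracks facial regions, with all appearance features removed. The dot layout and the "surprise"-like trajectory below are invented for illustration; real PLDs are derived from tracked facial motion.

```python
import numpy as np

points = np.array([[-1.0, 1.0], [1.0, 1.0],      # brows
                   [-1.0, 0.0], [1.0, 0.0],      # eyes
                   [0.0, -1.0]])                 # mouth

frames = []
for t in np.linspace(0, 1, 30):                  # brows raise, mouth opens
    frame = points.copy()
    frame[:2, 1] += 0.3 * t                      # brow raise
    frame[4, 1] -= 0.4 * t                       # jaw drop
    frames.append(frame)

print(f"{len(frames)} frames of {len(points)} white dots on a black screen")
```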