14 research outputs found

    Piavca: a framework for heterogeneous interactions with virtual characters

    This paper presents a virtual character animation system for real-time multimodal interaction in an immersive virtual reality setting. Human-to-human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gestures, and gaze. This multimodality means that, in order to simulate social interaction, our characters must be able to handle many different types of interaction, and many different types of animation, simultaneously. Our system is based on a model of animation that represents different types of animation as instantiations of an abstract function representation. This makes it easy to combine different types of animation. It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations. The model has been implemented in Piavca, an open-source character animation system.
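    The functional model the abstract describes, where each animation is an instantiation of an abstract function and behaviors are built by composition, can be sketched as follows. This is a minimal illustration of the idea, not the actual Piavca API; all function names and the float-valued "poses" are hypothetical simplifications.

```python
def constant(pose):
    """An animation that always returns the same pose."""
    return lambda t: pose

def blend(anim_a, anim_b, weight):
    """Linearly blend two animations (poses are plain floats in this sketch)."""
    return lambda t: (1 - weight) * anim_a(t) + weight * anim_b(t)

def sequence(anim_a, anim_b, switch_time):
    """Play anim_a until switch_time, then anim_b (with time rebased)."""
    return lambda t: anim_a(t) if t < switch_time else anim_b(t - switch_time)

# Building a new behavior out of basic building blocks:
nod = lambda t: 0.2 * t            # a toy "gesture" animation
idle = constant(0.0)               # a toy idle pose
behavior = sequence(blend(idle, nod, 0.5), idle, 1.0)
print(behavior(0.5))  # halfway blend of idle and nod at t = 0.5
```

    Because every animation shares the same time-to-pose signature, combinators such as `blend` and `sequence` compose freely, which is what makes it easy to assemble new behaviors for novel situations.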

    Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

    We present an autoencoder-based semi-supervised approach to classify perceived human emotions from walking styles obtained from videos or motion-captured data and represented as sequences of 3D poses. Given the motion of each joint in the pose at each time step, extracted from the 3D pose sequences, we hierarchically pool these joint motions in a bottom-up manner in the encoder, following the kinematic chains in the human body. We also constrain the latent embeddings of the encoder to contain the space of psychologically motivated affective features underlying the gaits. We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings. For the annotated data, we also train a classifier to map the latent embeddings to emotion labels. Our semi-supervised approach achieves a mean average precision of 0.84 on the Emotion-Gait benchmark dataset, which contains both labeled and unlabeled gaits collected from multiple sources. We outperform current state-of-the-art algorithms for both emotion recognition and action recognition from 3D gaits by 7%--23% absolute. More importantly, we improve the average precision by 10%--50% absolute on classes that each make up less than 25% of the labeled part of the Emotion-Gait benchmark dataset. Comment: In proceedings of the 16th European Conference on Computer Vision, 2020.
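    The bottom-up pooling along kinematic chains can be sketched as a leaf-to-root traversal of the skeleton tree. This is a simplified stand-in for the paper's learned hierarchical attention pooling: it uses a plain mean instead of attention weights, and the 5-joint chain and feature shapes are hypothetical.

```python
import numpy as np

# Hypothetical 5-joint kinematic chain: parent index per joint (-1 = root).
parents = [-1, 0, 1, 2, 3]

def hierarchical_pool(joint_feats, parents):
    """Pool per-joint motion features bottom-up along the kinematic chain.

    joint_feats: (num_joints, feat_dim) array of per-joint features.
    Each joint's pooled feature is the mean of its own feature and its
    children's already-pooled features (a single leaf-to-root pass).
    Assumes parents[j] < j, i.e. joints are listed in topological order.
    """
    n = len(parents)
    pooled = joint_feats.astype(float).copy()
    children = {j: [] for j in range(n)}
    for j, p in enumerate(parents):
        if p >= 0:
            children[p].append(j)
    # Visit joints from leaves toward the root.
    for j in reversed(range(n)):
        if children[j]:
            stack = np.vstack([pooled[j]] + [pooled[c] for c in children[j]])
            pooled[j] = stack.mean(axis=0)
    return pooled[0]  # the root embedding summarizes the whole chain

feats = np.arange(10, dtype=float).reshape(5, 2)  # toy per-joint features
print(hierarchical_pool(feats, parents))
```

    In the paper the pooled root embedding plays the role of the latent code that the decoder unrolls top-down and the classifier maps to emotion labels.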

    Exemplar-Based Human Action Recognition with Template Matching from a Stream of Motion Capture


    Ultrasound Image Dataset for Image Analysis Algorithms Evaluation

    The use of ultrasound (US) imaging as an alternative for real-time computer-assisted interventions is increasing. Growing usage of US occurs despite its lower imaging quality compared to other techniques and the difficulty of using it with image analysis algorithms. On the other hand, it is still difficult to find sufficient data to develop and assess solutions for navigation, registration and reconstruction at the medical research level. At present, the manually acquired datasets that are available present significant usability obstacles due to their lack of control of acquisition conditions, which hinders the study and correction of algorithm design parameters. To address these limitations, we present a database of robotically acquired sequences of US images from medical phantoms, ensuring trajectory, pose and force control of the probe. The acquired dataset is publicly available, and it is especially useful for designing and testing registration and volume reconstruction algorithms.

    Parametrization and Range of Motion of the Ball-and-Socket Joint

    The ball-and-socket joint model is used to represent articulations with three rotational degrees of freedom (DOF), such as the human shoulder and the hip. The goal of this paper is to discuss two related problems: the parametrization and the definition of realistic joint boundaries for ball-and-socket joints. Doing this accurately is difficult, yet important for motion generators (such as inverse kinematics and dynamics engines) and for motion manipulators (such as motion retargeting), since the resulting motions should satisfy anatomic constraints. The difficulty mainly comes from the complex nature of 3D orientations and of human articulations. The underlying question of parametrization must be addressed before realistic and meaningful boundaries can be defined over the set of 3D orientations. In this paper, we review and compare several known methods, and advocate the use of the swing-and-twist parametrization, which partitions an arbitrary orientation into two meaningful components. The related problem of induced twist is discussed. Finally, we review some joint boundary representations based on this decomposition, and show an example.
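    The swing-and-twist partition the abstract advocates can be computed with the standard swing-twist decomposition of a quaternion: project the rotation's vector part onto the twist axis to get the twist, then recover the swing as the remainder. The sketch below uses the (w, x, y, z) quaternion convention; the function names are illustrative, not from the paper.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def swing_twist(q, twist_axis):
    """Decompose a unit quaternion q into q = swing * twist.

    twist is the rotation about the unit vector twist_axis; swing is the
    remaining rotation, whose axis is perpendicular to twist_axis.
    """
    w, v = q[0], np.asarray(q[1:], dtype=float)
    a = np.asarray(twist_axis, dtype=float)
    proj = np.dot(v, a) * a                 # vector part along the twist axis
    twist = np.array([w, *proj])
    norm = np.linalg.norm(twist)
    if norm < 1e-9:                         # 180-degree swing: twist undefined
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist /= norm
    # swing = q * conjugate(twist), since twist is unit length
    swing = quat_mul(q, np.array([twist[0], *(-twist[1:])]))
    return swing, twist

# A 90-degree rotation about z decomposed with z as the twist axis:
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
swing, twist = swing_twist(q, np.array([0.0, 0.0, 1.0]))
# the twist recovers the full rotation and the swing is the identity
```

    Joint limits can then be imposed separately on each component, e.g. a cone boundary on the swing and an angle interval on the twist, which is what makes this parametrization convenient for anatomic constraints.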

    Video-Based People Tracking

    Vision-based human pose tracking promises to be a key enabling technology for myriad applications, including the analysis of human activities for perceptive environments and novel man-machine interfaces. While progress toward that goal has been exciting, and limited applications have been demonstrated, the recovery of human