14 research outputs found

    Motion constraint

    In this paper, we propose a hybrid postural control approach that combines the strengths of data-driven and goal-oriented methods while overcoming their limitations. In particular, we exploit the latent space characterizing a given motion database. We introduce a motion constraint operating in this latent space to benefit from its much smaller dimension compared to the joint space, which allows its transparent integration into a Prioritized Inverse Kinematics framework. If its priority is high, the constraint may restrict the solution to lie within the motion database space. We are more interested in the alternative case of an intermediate priority level that channels the postural control through a spatiotemporal pattern representative of the motion database while achieving a broader range of goals. We illustrate this concept with a sparse database of large-range full-body reach motions.
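
    As a rough illustration of the idea, the sketch below performs one prioritized IK iteration in which an end-effector reach task has priority and a latent-space attraction toward the motion database acts only in its nullspace. The PCA latent model, task Jacobians, and damped pseudo-inverse solver are illustrative assumptions, not the paper's exact formulation.

    # Hypothetical sketch of the latent-space motion constraint as a lower-priority
    # task in prioritized inverse kinematics. The PCA latent model, task Jacobians,
    # and damped pseudo-inverse are illustrative assumptions, not the paper's method.
    import numpy as np

    def damped_pinv(J, lam=1e-2):
        """Damped least-squares pseudo-inverse: J^T (J J^T + lam^2 I)^-1."""
        JJt = J @ J.T
        return J.T @ np.linalg.inv(JJt + (lam ** 2) * np.eye(JJt.shape[0]))

    def prioritized_ik_step(q, J_reach, e_reach, W, mu, alpha=0.1):
        """One IK iteration: high-priority reach task, lower-priority latent attraction.

        q       : current joint angles, shape (n,)
        J_reach : Jacobian of the reach task, shape (m, n)
        e_reach : reach task error, shape (m,)
        W, mu   : PCA basis (n, k) and mean pose of the motion database
        """
        # Primary task: reduce the end-effector error.
        J1_pinv = damped_pinv(J_reach)
        dq1 = J1_pinv @ e_reach

        # Nullspace projector of the primary task.
        N1 = np.eye(len(q)) - J1_pinv @ J_reach

        # Secondary task expressed through the low-dimensional latent space:
        # pull the pose toward its reconstruction from latent coordinates,
        # i.e. toward the subspace spanned by the motion database.
        z = W.T @ (q - mu)          # project the pose into latent space
        q_db = mu + W @ z           # closest pose within the database subspace
        dq2 = alpha * (q_db - q)

        # The lower-priority constraint only acts where the reach task leaves freedom.
        return q + dq1 + N1 @ dq2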

    From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text

    This paper presents a novel concept: a graphical representation of human emotion extracted from text sentences. The major contributions of this paper are the following. First, we present a pipeline that extracts, processes, and renders the emotion of a 3D virtual human (VH). The extraction of emotion is based on data-mining statistics of large cyberspace databases. Second, we propose methods to optimize this computational pipeline so that real-time virtual reality rendering can be achieved on common PCs. Third, we use the Poisson distribution to transfer database-extracted lexical and language parameters into coherent intensities of valence and arousal, the parameters of Russell's circumplex model of emotion. The last contribution is a practical color interpretation of emotion that influences the emotional aspect of rendered VHs. To test our method's efficiency, computational statistics related to classical or atypical cases of emotion are provided. To evaluate our approach, we applied our method to diverse areas such as cyberspace forums, comics, and theater dialogs.
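
    The sketch below shows one plausible way a Poisson weighting could map per-word corpus statistics onto valence/arousal intensities in the spirit of the pipeline described above; the lexicon format, baseline rates, and weighting scheme are assumptions rather than the paper's implementation.

    # Illustrative sketch only: one plausible way to turn per-word corpus statistics
    # into valence/arousal intensities with a Poisson weighting. The lexicon entries,
    # baseline rates, and weighting scheme are assumptions, not the paper's pipeline.
    import math

    # Hypothetical affective lexicon: word -> valence/arousal on Russell's circumplex,
    # plus an expected occurrence rate per sentence estimated from a large corpus.
    LEXICON = {
        "happy": {"valence": 0.8, "arousal": 0.5, "rate": 0.02},
        "angry": {"valence": -0.6, "arousal": 0.8, "rate": 0.01},
        "calm":  {"valence": 0.4, "arousal": -0.6, "rate": 0.015},
    }

    def poisson_pmf(k, lam):
        """P(X = k) for a Poisson distribution with mean lam."""
        return (lam ** k) * math.exp(-lam) / math.factorial(k)

    def sentence_emotion(tokens):
        """Aggregate weighted valence and arousal over a tokenized sentence."""
        valence = arousal = total_w = 0.0
        for word, entry in LEXICON.items():
            k = tokens.count(word)
            if k == 0:
                continue
            # Weight each observed word by how unlikely its count is under the
            # word's baseline Poisson rate (surprising counts weigh more).
            w = 1.0 - poisson_pmf(k, entry["rate"])
            valence += w * entry["valence"]
            arousal += w * entry["arousal"]
            total_w += w
        if total_w == 0.0:
            return 0.0, 0.0     # neutral sentence
        return valence / total_w, arousal / total_w

    print(sentence_emotion("i feel so happy and calm today".split()))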

    Keep on Moving! Exploring Anthropomorphic Effects of Motion during Idle Moments

    In this paper, we explored the effect of a robot's subconscious gestures made during idle moments (also called adaptor gestures) on the anthropomorphic perceptions of five-year-old children. We developed and sorted a set of adaptor motions based on their intensity. We designed an experiment involving 20 children, in which they played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements as more human and friendly, and these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, adaptor movements were also found not to be disruptive to the task. These findings corroborate the fact that adaptor movements improve the affective aspect of child-robot interaction (CRI) without interfering with the child's performance in the task, making them suitable for CRI in educational contexts.

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or 'natural') and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts), and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
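
    A minimal sketch of the kind of combination and concatenation mechanisms described above, assuming poses are simple per-joint dictionaries: one motion source drives the upper body while another drives the rest, and clips are joined with a short linear cross-fade. The data layout and joint masking are illustrative, not the authors' system.

    # Minimal sketch, not the authors' system: combining two animation sources on
    # different body parts with a per-joint mask, and concatenating clips with a
    # short cross-fade. Poses are dicts of joint name -> angle; names are made up.
    def combine(pose_a, pose_b, upper_body_joints):
        """Use pose_b for the listed (upper-body) joints and pose_a everywhere else."""
        return {j: (pose_b[j] if j in upper_body_joints else pose_a[j])
                for j in pose_a}

    def crossfade(clip_a, clip_b, blend_frames):
        """Concatenate two clips, linearly blending the overlap to avoid a visible pop."""
        out = clip_a[:-blend_frames]
        for i in range(blend_frames):
            t = (i + 1) / blend_frames                    # 0 -> 1 across the overlap
            fa, fb = clip_a[-blend_frames + i], clip_b[i]
            out.append({j: (1 - t) * fa[j] + t * fb[j] for j in fa})
        out.extend(clip_b[blend_frames:])
        return out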

    Piavca: a framework for heterogeneous interactions with virtual characters

    This paper presents a virtual character animation system for real-time multimodal interaction in an immersive virtual reality setting. Human-to-human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gestures, and gaze. This multimodality means that, in order to simulate social interaction, our characters must be able to handle many different types of interaction, and many different types of animation, simultaneously. Our system is based on a model of animation that represents different types of animations as instantiations of an abstract function representation. This makes it easy to combine different types of animation. It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations. The model has been implemented in Piavca, an open source character animation system.
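
    The functional model can be sketched as follows: every animation is a function from time to a pose, so blends and sequences are themselves animations built from simpler ones. Class and method names below are illustrative and do not reflect Piavca's actual API.

    # Hedged sketch of the "animation as a function" idea: every animation maps time
    # to a pose, so blends and sequences are themselves animations built from simpler
    # ones. Class and method names are illustrative, not Piavca's actual API.
    class Animation:
        def pose(self, t):
            """Return the pose (a dict of joint -> value) at time t, in seconds."""
            raise NotImplementedError

    class Keyframed(Animation):
        """A recorded clip: picks the stored frame closest to time t."""
        def __init__(self, frames, fps=30):
            self.frames, self.fps = frames, fps
        def pose(self, t):
            i = min(int(t * self.fps), len(self.frames) - 1)
            return self.frames[i]

    class Blend(Animation):
        """A weighted combination of two animations, itself an Animation."""
        def __init__(self, a, b, w):
            self.a, self.b, self.w = a, b, w
        def pose(self, t):
            pa, pb = self.a.pose(t), self.b.pose(t)
            return {j: (1 - self.w) * pa[j] + self.w * pb[j] for j in pa}

    # Because every combinator returns an Animation, new behaviors can be assembled
    # from basic building blocks, e.g. Blend(wave_gesture, head_nod, 0.3).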
