5 research outputs found

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or 'natural') and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts), and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
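    The per-body-part combination the abstract mentions can be illustrated with a minimal sketch. Everything here (the `blend_poses` function, the joint names, the dict-based pose representation) is illustrative and not taken from the paper; it simply shows the general idea of blending motions from two techniques with per-joint weights.

```python
# Illustrative sketch (not the paper's method): blend two poses, each
# possibly generated by a different animation technique, using per-joint
# weights so that different body parts can come from different sources.

def blend_poses(pose_a, pose_b, part_weights):
    """Blend two poses (dicts of joint name -> angle in degrees).

    part_weights maps a joint to the weight given to pose_b
    (0.0 = keep pose_a, 1.0 = take pose_b). Joints absent from
    part_weights default to pose_a, so e.g. the arms can follow a
    controlled procedural motion while the legs keep motion capture.
    """
    blended = {}
    for joint in pose_a:
        w = part_weights.get(joint, 0.0)
        blended[joint] = (1.0 - w) * pose_a[joint] + w * pose_b[joint]
    return blended

mocap = {"hip": 10.0, "knee": 45.0, "shoulder": 20.0}
procedural = {"hip": 10.0, "knee": 45.0, "shoulder": 80.0}
# Take the shoulder entirely from the procedural (high-control) motion:
pose = blend_poses(mocap, procedural, {"shoulder": 1.0})
```

    In a real system the blend would operate on quaternions (with slerp) rather than scalar angles, but the weighting structure is the same.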

    A Global Framework for Motion Capture

    Human motion is so complex that model-based animation techniques still do not produce sufficiently realistic trajectories. Consequently, techniques based on real trajectories are commonly used in several application fields to animate human-like figures. Nevertheless, using motion capture systems remains difficult when the movements become complex (such as gymnastic figures) or must be adapted to fit geometric constraints. We propose a software system that overcomes several of these drawbacks and makes it possible to apply captured trajectories directly to virtual actors. This system deals with the loss of data encountered in all optical systems, the various skeleton morphologies, and the blending of several motions together in real time.
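    One of the drawbacks named above, marker data lost to occlusion in optical systems, is commonly repaired by interpolating over the gaps. The sketch below is an assumed, simplified approach (linear interpolation over a 1-D marker coordinate; the function name and representation are hypothetical), not the paper's actual reconstruction method.

```python
# Illustrative gap filling for an occluded optical marker: None marks a
# lost sample; gaps are filled by linear interpolation between the
# nearest known samples. Assumes the first and last samples are known.

def fill_marker_gaps(samples):
    """Return a copy of samples with None runs linearly interpolated."""
    filled = list(samples)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            # Find the end of this run of missing samples.
            j = i
            while j < n and filled[j] is None:
                j += 1
            prev, nxt = filled[i - 1], filled[j]
            span = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / span
                filled[k] = prev + t * (nxt - prev)
            i = j
        else:
            i += 1
    return filled
```

    Production systems typically use smoother schemes (splines, or inference from rigid-body constraints between markers), but the principle of reconstructing lost samples from their temporal neighbours is the same.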

    Modelling multimodal expression of emotion in a virtual agent

    Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours: our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach in which a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex, so only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion: it is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.
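    The componential approach described above can be sketched as combining facial areas drawn from different base expressions. The base expressions, area names, and `compose_expression` function below are hypothetical illustrations of the general idea, not the agent system's actual representation.

```python
# Illustrative componential composition (assumed representation): a new
# facial expression is assembled by taking each facial area from a
# chosen base expression.

BASE_EXPRESSIONS = {
    "anger":   {"brows": "lowered", "eyes": "narrowed", "mouth": "pressed"},
    "sadness": {"brows": "inner_raised", "eyes": "downcast",
                "mouth": "corners_down"},
}

def compose_expression(sources):
    """sources maps each facial area to the base expression it comes from."""
    return {area: BASE_EXPRESSIONS[expr][area]
            for area, expr in sources.items()}

# A "masked" expression: sad brows and eyes over an angry mouth.
masked = compose_expression({"brows": "sadness",
                             "eyes": "sadness",
                             "mouth": "anger"})
```

    The temporal extension described in the abstract would then replace a single such composition with a time-ordered sequence of them (and of other multimodal signals), one per phase of the emotion's evolution.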