
    Intuitive Generation of Realistic Motions for Articulated Human Characters

    A long-standing goal in computer graphics is to create and control realistic motion for virtual human characters. Despite the progress made over the last decade, it remains challenging to design a system that allows a novice user to intuitively create and control life-like human motion. This dissertation explores theory, algorithms, and applications that enable novice users to quickly and easily create and control natural-looking motions, including both full-body movement and hand articulation, for human characters. More specifically, the goals of this research are (1) to investigate generative statistical models and physics-based dynamic models that precisely predict how humans move, and (2) to demonstrate the utility of these motion models in a wide range of applications, including motion analysis, synthesis, editing, and acquisition. We have developed two novel generative statistical models from prerecorded motion data and show their promising applications in real-time motion editing, online motion control, offline animation design, and motion data processing. In addition, we have explored how to model subtle contact phenomena for dexterous hand grasping and manipulation using physics-based dynamic models. We show for the first time how to capture physically realistic hand-manipulation data from ambiguous image data obtained by video cameras.
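    The abstract does not specify the form of its generative statistical models, so the following is only a minimal illustrative sketch of one common pattern for learning such a model from prerecorded motion data: fit a low-dimensional Gaussian (PCA-style) over flattened motion clips and sample new motions from it. The data, dimensions, and function names here are all hypothetical, not the dissertation's actual method.

```python
import numpy as np

# Hypothetical motion database: each row is one prerecorded clip,
# flattened into a fixed-length vector of joint angles over time.
rng = np.random.default_rng(1)
num_clips, dim = 200, 600
X = rng.standard_normal((num_clips, dim))  # stand-in for real mocap data

# Fit a low-dimensional Gaussian via PCA on the centered data.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10                            # latent dimension (assumed)
B = Vt[:k].T                      # basis of principal motion variations
sigma = S[:k] / np.sqrt(num_clips)  # per-component standard deviations

def sample_motion():
    """Draw a new motion by sampling latent weights and decoding."""
    z = rng.standard_normal(k) * sigma
    return mean + B @ z

x_new = sample_motion()
print(x_new.shape)  # (dim,)
```

    In practice, such a model is what enables the real-time editing and online control applications the abstract mentions: edits are performed in the low-dimensional latent space and decoded back to full poses.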

    Synthesis and editing of personalized stylistic human motion

    [Figure 1: Motion style synthesis and retargeting: (top) after observing an unknown actor performing one walking style, we can synthesize other walking styles for the same actor; (bottom) we can transfer the walking style from one actor to another.]

    This paper presents a generative human motion model for synthesis, retargeting, and editing of personalized human motion styles. We first record a human motion database from multiple actors performing a wide variety of motion styles for particular actions. We then apply multilinear analysis techniques to construct a generative motion model of the form x = g(a, e) for particular human actions, where the parameters a and e control the “identity” and “style” variations of the motion x, respectively. This modular representation naturally supports motion generalization to new actors and/or styles. We demonstrate the power and flexibility of the multilinear motion models by synthesizing personalized stylistic human motion and transferring stylistic motions from one actor to another. We also show the effectiveness of our model by editing stylistic motion in style and/or identity space.
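    The abstract gives the model only as x = g(a, e). As a minimal sketch of how a multilinear model of that form can be built, the code below runs an N-mode SVD (Tucker-style decomposition) on a hypothetical complete actors-by-styles database of flattened motion vectors, yielding a core tensor that generates a motion from an identity vector a and a style vector e. All shapes and names are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical data tensor: one flattened motion vector per
# (actor, style) pair. Shape: (num_actors, num_styles, motion_dim).
rng = np.random.default_rng(0)
num_actors, num_styles, motion_dim = 5, 4, 300
D = rng.standard_normal((num_actors, num_styles, motion_dim))

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

# N-mode SVD: factor the actor (identity) and style modes.
U_actor, _, _ = np.linalg.svd(unfold(D, 0), full_matrices=False)
U_style, _, _ = np.linalg.svd(unfold(D, 1), full_matrices=False)

# Core tensor Z couples the two factor spaces:
# D ≈ Z x1 U_actor x2 U_style (mode products).
Z = fold(U_actor.T @ unfold(D, 0), 0,
         (U_actor.shape[1], num_styles, motion_dim))
Z = fold(U_style.T @ unfold(Z, 1), 1,
         (U_actor.shape[1], U_style.shape[1], motion_dim))

def g(a, e):
    """Generate a motion x = g(a, e) from identity weights a, style weights e."""
    return np.einsum('ase,a,s->e', Z, a, e)

# Style retargeting sketch: combine actor 0's identity with actor 1's
# style coordinates to synthesize a new motion.
x_new = g(U_actor[0], U_style[1])
print(x_new.shape)  # (motion_dim,)
```

    The key property the sketch illustrates is the modularity the abstract describes: because identity and style live in separate factor spaces, a and e can be recombined freely, which is what makes generalization to new actors and styles natural.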