    From motion capture to interactive virtual worlds: towards unconstrained motion-capture algorithms for real-time performance-driven character animation

    This dissertation takes performance-driven character animation as a representative application and advances motion-capture algorithms and animation methods to meet its high demands. Existing approaches either have coarse resolution and a restricted capture volume, require expensive and complex multi-camera systems, or use intrusive suits and controllers. For motion capture, set-up time is reduced by using fewer cameras, accuracy is increased despite occlusions and general environments, initialization is automated, and free roaming is enabled by egocentric cameras. For animation, increased robustness enables the use of low-cost sensor input, custom control-gesture definition is guided to support novice users, and animation expressiveness is increased. The main contributions are: 1) an analytic and differentiable visibility model for pose optimization under strong occlusions, 2) a volumetric contour model for automatic actor initialization in general scenes, 3) a method to annotate and augment image-pose databases automatically, 4) the utilization of unlabeled examples for character control, and 5) the generalization and disambiguation of cyclical gestures for faithful character animation. In summary, the whole process of human motion capture, processing, and application to animation is advanced. These advances over the state of the art have the potential to improve many interactive applications, within and outside virtual reality.
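
    To make contribution 1 concrete, here is a minimal numpy sketch of visibility-aware pose optimization: a soft, depth-based visibility weight replaces the hard z-test, so occlusion participates smoothly in a gradient-based pose fit. The linear joint model, the exponential weighting, and all dimensions are illustrative placeholders, not the dissertation's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
J, P = 6, 4                           # toy setup: 6 joints, 4 pose parameters
BASIS = rng.normal(size=(J, 3, P))    # fixed linear joint model (illustrative)
targets = rng.normal(size=(J, 2))     # observed 2D joint detections

def soft_visibility(depths, sharpness=8.0):
    # Differentiable stand-in for a hard z-test: the joint nearest the camera
    # gets weight 1 and deeper (occluded) joints fade smoothly toward 0.
    return np.exp(-sharpness * (depths - depths.min()))

def energy(pose):
    joints = BASIS @ pose                   # (J, 3) joint positions from pose
    vis = soft_visibility(joints[:, 2])     # per-joint visibility in (0, 1]
    residual = joints[:, :2] - targets      # orthographic projection error
    return np.sum(vis[:, None] * residual**2)

def grad(pose, eps=1e-5):
    # Central finite differences; an analytic model would give exact gradients.
    g = np.zeros_like(pose)
    for i in range(pose.size):
        d = np.zeros_like(pose); d[i] = eps
        g[i] = (energy(pose + d) - energy(pose - d)) / (2 * eps)
    return g

pose = np.zeros(P)
for _ in range(500):                        # plain gradient descent on the pose
    pose -= 0.005 * grad(pose)
print("optimized energy:", energy(pose))
```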

    Synthesis and Editing of Human Motion from a Small Number of User Inputs

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: Jehee Lee. An ideal 3D character animation system can easily synthesize and edit human motion and provides an efficient user interface for the animator. However, despite advancements in animation systems, building effective systems for synthesizing and editing realistic human motion remains a difficult problem. In the case of a single character, the human body is a significantly complex structure with hundreds of degrees of freedom, and an animator must manually adjust the many joints of the human body from user inputs. In a crowd scene, many individuals must respond to user inputs when an animator wants a given crowd to fit a new environment. The main goal of this thesis is to improve the interaction between a user and an animation system. Because 3D character animation systems are usually driven by low-dimensional inputs, there is no direct way for a user to generate a high-dimensional character animation. To address this problem, we propose a data-driven mapping model built from motion data obtained from a full-body motion capture system, crowd simulation, and a data-driven motion synthesis algorithm. With this mapping model in hand, we can transform low-dimensional user inputs into character animation, because the motion data help infer the missing parts of the system inputs. Because motion capture data are rich in detail and convey the realism of human movement, they make it far easier to generate realistic character animation. To demonstrate the generality and strengths of our approach, we developed two animation systems that allow the user to synthesize a single-character animation in real time and to edit crowd animation interactively via low-dimensional inputs. The first system controls a virtual avatar using a small set of three-dimensional (3D) motion sensors (see the sketch after this entry); the second manipulates large-scale crowd animation consisting of hundreds of characters with a small number of user constraints. Examples show that our system is much less laborious and time-consuming than previous animation systems, and thus is much better suited to generating the desired character animation.
    Contents: 1 Introduction (1.1 Motivation; 1.2 Approach; 1.3 Thesis Overview); 2 Background (2.1 Performance Animation: 2.1.1 Performance-based Interfaces for Character Animation, 2.1.2 Statistical Models for Motion Synthesis, 2.1.3 Retrieval of Motion Capture Data; 2.2 Crowd Animation: 2.2.1 Crowd Simulation, 2.2.2 Motion Editing, 2.2.3 Geometry Deformation); 3 Realtime Performance Animation Using Sparse 3D Motion Sensors (3.1 Overview; 3.2 System Overview; 3.3 Sensor Data and Calibration; 3.4 Motion Synthesis: 3.4.1 Online Local Model, 3.4.2 Kernel CCA-based Regression, 3.4.3 Motion Post-processing; 3.5 Experimental Results; 3.6 Discussion); 4 Interactive Manipulation of Large-Scale Crowd Animation (4.1 Overview; 4.2 Crowd Model; 4.3 Cage-based Interface: 4.3.1 Cage Construction, 4.3.2 Cage Representation; 4.4 Editing Crowd Animation: 4.4.1 Spatial Manipulation, 4.4.2 Temporal Manipulation; 4.5 Collision Avoidance; 4.6 Experimental Results; 4.7 Discussion); 5 Conclusion; Bibliography.
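
    A minimal sketch of the first system's mapping idea: an online local model that, for each incoming sensor frame, regresses a full-body pose from only the k nearest database frames. Plain ridge regression stands in here for the thesis's kernel CCA-based regression, and the random arrays stand in for a real motion-capture database; k, the regularizer, and all dimensions are illustrative.

```python
import numpy as np

def predict_pose(sensor, db_sensors, db_poses, k=20, reg=1e-3):
    # Online local model: fit an affine regressor on the k database frames
    # whose sensor readings are nearest the current input, then map the
    # sparse sensor vector to a full-body pose.
    d = np.linalg.norm(db_sensors - sensor, axis=1)
    idx = np.argsort(d)[:k]                             # k nearest neighbours
    X = np.hstack([db_sensors[idx], np.ones((k, 1))])   # local affine model
    Y = db_poses[idx]
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return np.append(sensor, 1.0) @ W                   # full pose estimate

# Toy data: 6 sensor channels (e.g. two 3D trackers) mapped to a 60-DOF pose.
rng = np.random.default_rng(1)
db_sensors = rng.normal(size=(5000, 6))
db_poses = rng.normal(size=(5000, 60))
pose = predict_pose(db_sensors[0] + 0.01, db_sensors, db_poses)
print(pose.shape)  # (60,)
```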

    A critical assessment of the animator's artistic ownership over motion captured performances

    This research report critically assesses, and theoretically expands upon, the increasingly contentious area of character performance in computer-generated (CG) animation for feature films that use motion capture technology. Specifically, it investigates whether the use of motion capture in live-action visual effects, in the pursuit of CG characters that are as realistic as possible, has eroded the artistic autonomy of character animators and their artistic ownership of these characters' performances. Through the analysis and comparison of pertinent case studies, it should become apparent that this perception is not absolute and depends largely on the kinds of characters to be portrayed and the kind of film they are to be portrayed in. It will be shown that motion capture can be a very effective collaborative tool, not only between directors and actors but also between animators and actors under the creative supervision of directors.

    Investigating facial animation production through artistic inquiry

    Studies of dynamic facial expressions tend to rely on experimental methods based on objectively manipulated stimuli. New techniques for displaying increasingly realistic facial movement and methods for measuring observer responses are typical of facial-expression research in computer animation and psychology. However, few projects focus on the artistic nature of performance production; most concentrate instead on the naturalistic appearance of posed or acted expressions. In this paper, the authors discuss a method for exploring the creative process of emotional facial expression animation and ask whether anything can be learned about authentic dynamic expressions through artistic inquiry.

    Capture, Learning, and Synthesis of 3D Speaking Styles

    Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input, even speech in languages other than English, and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e., head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de. (To appear in CVPR 2019.)
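
    A rough sketch of the subject conditioning described above: per-frame audio features are concatenated with a one-hot subject label before decoding per-vertex displacements, so changing the label switches speaking style at inference time. This is not the authors' released model; the layer sizes, the 29-dimensional audio feature, and the 5023-vertex mesh are assumptions for illustration, and the random weights produce no meaningful animation.

```python
import numpy as np

rng = np.random.default_rng(2)
N_VERTS, AUDIO_DIM, N_SUBJECTS, HID = 5023, 29, 12, 128  # assumed dimensions
W1 = rng.normal(scale=0.1, size=(AUDIO_DIM + N_SUBJECTS, HID))  # placeholder
W2 = rng.normal(scale=0.1, size=(HID, N_VERTS * 3))             # placeholder

def animate_frame(audio_feat, subject_id, template_verts):
    # One frame of VOCA-style animation: encode the audio window together
    # with a one-hot subject label (the conditioning that carries speaking
    # style), decode per-vertex offsets, and add them to the template mesh.
    style = np.zeros(N_SUBJECTS); style[subject_id] = 1.0
    h = np.tanh(np.concatenate([audio_feat, style]) @ W1)
    offsets = (h @ W2).reshape(N_VERTS, 3)
    return template_verts + offsets

frame = animate_frame(rng.normal(size=AUDIO_DIM), subject_id=3,
                      template_verts=np.zeros((N_VERTS, 3)))
print(frame.shape)  # (5023, 3)
```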

    Semi-Supervised Facial Animation Retargeting

    This paper presents a system for facial animation retargeting that allows learning a high-quality mapping between motion capture data and arbitrary target characters. We address one of the main challenges of existing example-based retargeting methods, the need for a large number of accurate training examples to define the correspondence between source and target expression spaces. We show that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e. facial expressions in the source or target space without corresponding poses. In contrast to labeled samples that require time-consuming and error-prone manual character posing, unlabeled samples are easily obtained as frames of motion capture recordings or existing animations of the target character. Our system exploits this information by learning a shared latent space between motion capture and character parameters in a semi-supervised manner. We show that this approach is resilient to noisy input and missing data and significantly improves retargeting accuracy. To demonstrate its applicability, we integrate our algorithm in a performance-driven facial animation system.
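
    A linear sketch of the semi-supervised idea: abundant unlabeled frames fix each space's low-dimensional expression subspace via PCA, so the scarce labeled pairs only have to determine a small latent-to-latent map rather than the full source-to-target mapping. The paper's actual shared latent space is learned differently; the dimensions, the 15 labeled pairs, and the random data here are all illustrative.

```python
import numpy as np

def fit_retargeting(src_unlab, tgt_unlab, src_pair, tgt_pair, d=10):
    # PCA bases come from plentiful unlabeled frames in each space; the few
    # labeled (source, target) pairs then align the two d-dim subspaces.
    def pca(X):
        mu = X.mean(0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:d]
    mu_s, P_s = pca(src_unlab)            # source basis from unlabeled mocap
    mu_t, P_t = pca(tgt_unlab)            # target basis from unlabeled rig poses
    Zs = (src_pair - mu_s) @ P_s.T        # labeled pairs in latent coordinates
    Zt = (tgt_pair - mu_t) @ P_t.T
    A, *_ = np.linalg.lstsq(Zs, Zt, rcond=None)  # small latent-to-latent map
    def retarget(src):
        return ((src - mu_s) @ P_s.T @ A) @ P_t + mu_t
    return retarget

rng = np.random.default_rng(3)
retarget = fit_retargeting(rng.normal(size=(2000, 50)),  # unlabeled mocap
                           rng.normal(size=(2000, 40)),  # unlabeled rig poses
                           rng.normal(size=(15, 50)),    # 15 labeled pairs
                           rng.normal(size=(15, 40)))
print(retarget(rng.normal(size=50)).shape)  # (40,)
```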
    • 

    corecore