
    Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis

    We present a method for easily drafting expressive character animation by playing with instrumented rigid objects. We parse the input 6D trajectories (position and orientation over time), called spatial motion doodles, into sequences of actions and convert them into detailed character animations using a dataset of parameterized motion clips which are automatically fitted to the doodles in terms of global trajectory and timing. Moreover, we capture the expressiveness of user manipulation by analyzing Laban effort qualities in the input spatial motion doodles and transferring them to the synthetic motions we generate. We validate the ease of use of our system and the expressiveness of the resulting animations through a series of user studies, demonstrating the value of our approach for interactive digital storytelling applications aimed at children and non-expert users, as well as for providing fast drafting tools for animators.
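    As a rough illustration of the effort analysis described above, the sketch below computes crude proxies for two Laban effort qualities from a sampled position trajectory. The function name and the choice of proxies are assumptions made for this example, not the paper's actual method.

    # Illustrative sketch only: crude proxies for two Laban effort qualities
    # ("Time" and "Weight") computed from a sampled 3D position trajectory.
    # The paper's actual effort analysis is more involved.
    import numpy as np

    def laban_effort_proxies(positions, dt):
        """positions: (T, 3) array of samples; dt: sampling interval in seconds."""
        vel = np.gradient(positions, dt, axis=0)   # finite-difference velocity
        acc = np.gradient(vel, dt, axis=0)         # finite-difference acceleration
        speed = np.linalg.norm(vel, axis=1)
        # Time effort (sudden vs. sustained): mean acceleration magnitude.
        time_effort = float(np.mean(np.linalg.norm(acc, axis=1)))
        # Weight effort (strong vs. light): peak kinetic-energy-like term.
        weight_effort = float(np.max(speed ** 2))
        return {"time": time_effort, "weight": weight_effort}

    # A smooth doodle scores lower on the Time proxy than a jittery one would.
    t = np.linspace(0.0, 1.0, 120)
    smooth = np.stack([t, np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
    print(laban_effort_proxies(smooth, dt=1.0 / 120.0))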

    Robust Motion In-betweening

    In this work we present a novel, robust transition generation technique that can serve as a new tool for 3D animators, based on adversarial recurrent neural networks. The system synthesizes high-quality motions that use temporally sparse keyframes as animation constraints. This is reminiscent of the job of in-betweening in traditional animation pipelines, in which an animator draws motion frames between provided keyframes. We first show that a state-of-the-art motion prediction model cannot be easily converted into a robust transition generator simply by adding conditioning information about future keyframes. To solve this problem, we then propose two novel additive embedding modifiers that are applied at each timestep to latent representations encoded inside the network's architecture. One modifier is a time-to-arrival embedding that allows variations of the transition length with a single model. The other is a scheduled target noise vector that allows the system to be robust to target distortions and to sample different transitions given fixed keyframes. To qualitatively evaluate our method, we present a custom MotionBuilder plugin that uses our trained model to perform in-betweening in production scenarios. To quantitatively evaluate performance on transitions and generalization to longer time horizons, we present well-defined in-betweening benchmarks on a subset of the widely used Human3.6M dataset and on LaFAN1, a novel high-quality motion capture dataset that is more appropriate for transition generation. We are releasing this new dataset along with this work, with accompanying code for reproducing our baseline results. (Published at SIGGRAPH 2020.)
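    The two additive modifiers can be illustrated with a short sketch. The code below is a minimal, assumed version of a sinusoidal time-to-arrival embedding and a scheduled target-noise term added to a latent vector; the dimensions, encoding scheme, and noise schedule are placeholders, not the published implementation.

    # Minimal sketch of the two additive latent modifiers (assumed formulation).
    import numpy as np

    def tta_embedding(tta, dim):
        """Encode the number of frames remaining before the target keyframe."""
        i = np.arange(dim // 2)
        freqs = 1.0 / (10000.0 ** (2.0 * i / dim))
        angles = tta * freqs
        return np.concatenate([np.sin(angles), np.cos(angles)])

    def modified_latent(h, frame, transition_length, noise_scale=0.0, rng=None):
        """Apply the additive modifiers to a latent vector h of even dimension."""
        tta = transition_length - frame            # frames left until the keyframe
        z = h + tta_embedding(tta, h.shape[-1])
        if noise_scale > 0.0:                      # scheduled target noise term
            rng = rng if rng is not None else np.random.default_rng(0)
            z = z + noise_scale * rng.standard_normal(h.shape[-1])
        return z

    h = np.zeros(256)
    print(modified_latent(h, frame=10, transition_length=30)[:4])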

    Semantic Analysis for Human Motion Generation (사람 동작 생성을 위한 의미 분석)

    Doctoral dissertation, Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2017. Advisor: Jehee Lee.

    One of the main goals of computer-generated character animation is to reduce the cost of creating animated scenes. Using recorded human motion makes it easier to animate characters, so motion capture is used as a standard technique. However, it is difficult to get the desired motion because capture requires a large space, high-performance cameras, actors, and a significant amount of post-processing work. Data-driven character animation comprises a set of techniques that make effective use of captured motion data. In this thesis, I introduce methods that analyze the semantics of motion data to enhance its utilization. To accomplish this, various techniques from other fields are integrated so that we can understand the semantics of a unit motion clip, the implicit structure of a motion sequence, and a natural-language description of movements. Based on that understanding, we build new animation systems. The first animation system in this thesis allows the user to generate an animation of basketball play from a tactics board. In order to handle the complex basketball rules that players must follow, we use context-free grammars for motion representation (a toy sketch follows the contents below). Our motion grammar enables the user to define implicit and explicit rules of human behavior and generates valid movements of basketball players. Interactions between players, or between players and the environment, are represented with semantic rules, which results in plausible animation. When we compose motion sequences, we rely on a motion corpus storing prepared motion clips and the transitions between them. Constructing a good motion corpus is important for creating natural and rich animations, but it requires expert effort. We introduce a semi-supervised learning technique for the automatic generation of a motion corpus. Stacked autoencoders are used to find latent features for large amounts of motion capture data, and these features are used to effectively discover worthwhile motion clips. The other animation system uses natural language processing to understand the meaning of the animated scene that the user wants to make. Specifically, the script of an animated scene is used to synthesize the movements of characters. Like a sketch interface, scripts are a very sparse input source. Understanding motion allows the system to interpret abstract user input and generate scenes that meet user needs.

    Contents:
    1 Introduction
    2 Background: 2.1 Representation of Human Movements; 2.2 Motion Annotation; 2.3 Motion Grammars; 2.4 Natural Language Processing
    3 Motion Grammar: 3.1 Overview; 3.2 Motion Grammar (3.2.1 Instantiation, Semantics, and Plausibility; 3.2.2 A Simple Example); 3.3 Basketball Tactics Board; 3.4 Motion Synthesis; 3.5 Results; 3.6 Discussion
    4 Motion Embedding: 4.1 Overview; 4.2 Motion Data; 4.3 Autoencoders (4.3.1 Stacked Autoencoders); 4.4 Motion Corpus (4.4.1 Training; 4.4.2 Finding Motion Clips); 4.5 Results; 4.6 Discussion
    5 Text to Animation: 5.1 Overview; 5.2 Understanding Semantics; 5.3 Action Chains (5.3.1 Word Embedding; 5.3.2 Motion Plausibility); 5.4 Scene Generation; 5.5 Results; 5.6 Discussion
    6 Conclusion
    Bibliography
    Abstract (in Korean)
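    To make the motion-grammar idea concrete, here is a toy sketch that expands a high-level basketball play into a sequence of motion-clip labels using hand-written production rules. The rule set, symbol names, and random expansion strategy are hypothetical and far simpler than the thesis' grammar, which also attaches semantic and plausibility constraints.

    # Toy sketch of a context-free motion grammar: production rules expand a
    # high-level "PLAY" nonterminal into terminal motion-clip labels.
    # The rules and clip names below are hypothetical examples.
    import random

    RULES = {
        "PLAY":    [["DRIBBLE", "PASS", "SHOOT"], ["DRIBBLE", "SHOOT"]],
        "DRIBBLE": [["dribble_clip"]],
        "PASS":    [["chest_pass_clip"], ["bounce_pass_clip"]],
        "SHOOT":   [["jump_shot_clip"], ["layup_clip"]],
    }

    def expand(symbol, rng):
        """Recursively expand a grammar symbol into a list of motion clip names."""
        if symbol not in RULES:                    # terminal: an actual motion clip
            return [symbol]
        production = rng.choice(RULES[symbol])     # pick one production rule
        clips = []
        for s in production:
            clips.extend(expand(s, rng))
        return clips

    print(expand("PLAY", random.Random(7)))        # e.g. ['dribble_clip', 'chest_pass_clip', ...]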