15 research outputs found

    Principal Geodesic Dynamics

    Get PDF
    This paper presents a new integration of a data-driven approach using dimension reduction with physically-based simulation for real-time character animation. We exploit Lie group statistical analysis techniques (Principal Geodesic Analysis, PGA) to approximate the pose manifold of a motion capture sequence by a reduced set of pose geodesics. We integrate this kinematic parametrization into a physically-based animation approach for virtual characters by using the PGA-reduced parametrization directly as generalized coordinates of a Lagrangian formulation of mechanics. To achieve real-time performance without sacrificing stability, we derive an explicit time integrator by approximating existing variational integrators. Finally, we test our approach in task-space motion control. By formulating both the physical simulation and inverse kinematics time-stepping schemes as two quadratic programs, we propose a feature-based control algorithm that interpolates between the two metrics. This allows for an intuitive trade-off between realistic physical simulation and controllable kinematic manipulation.
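The interpolation between the two quadratic programs can be illustrated with a small sketch. This is a hypothetical, unconstrained simplification (the paper's actual QPs include constraints): assuming both metrics are quadratics E(x) = ½xᵀAx − bᵀx, the blended minimizer has a closed form.

```python
import numpy as np

def blended_step(A_sim, b_sim, A_ik, b_ik, alpha):
    """Minimize alpha*E_sim + (1-alpha)*E_ik for quadratics
    E(x) = 0.5 x^T A x - b^T x (unconstrained sketch)."""
    A = alpha * A_sim + (1.0 - alpha) * A_ik
    b = alpha * b_sim + (1.0 - alpha) * b_ik
    return np.linalg.solve(A, b)  # stationary point: A x = b

# alpha=1 recovers the pure simulation step, alpha=0 the pure IK step
b_sim = np.array([1.0, 0.0])
b_ik = np.array([0.0, 1.0])
x_mid = blended_step(np.eye(2), b_sim, np.eye(2), b_ik, 0.5)
```

Sweeping `alpha` between 0 and 1 gives exactly the kind of continuous trade-off between physical plausibility and kinematic controllability the abstract describes.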

    DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics

    Full text link
    Synthesizing realistic human movements, dynamically responsive to the environment, is a long-standing objective in character animation, with applications in computer vision, sports, and healthcare, for motion prediction and data augmentation. Recent kinematics-based generative motion models offer impressive scalability in modeling extensive motion data, albeit without an interface to reason about and interact with physics. While simulator-in-the-loop learning approaches enable highly physically realistic behaviors, the challenges in training often affect scalability and adoption. We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics. DROP can be viewed as a highly stable, minimalist physics-based human simulator that interfaces with a kinematics-based generative motion prior. Utilizing projective dynamics, DROP allows flexible and simple integration of the learned motion prior as one of the projective energies, seamlessly incorporating control provided by the motion prior with Newtonian dynamics. Serving as a model-agnostic plug-in, DROP enables us to fully leverage recent advances in generative motion models for physics-based motion synthesis. We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.
    Comment: SIGGRAPH Asia 2023. Video: https://youtu.be/tF5WW7qNMLI. Website: https://stanford-tml.github.io/drop
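The projective-dynamics backbone that DROP builds on alternates a local constraint projection with a global linear solve. A minimal mass-spring sketch (hypothetical names; DROP's actual projective energies also include the learned motion prior):

```python
import numpy as np

def pd_step(x, v, m, edges, rest, w, h, iters=10):
    """One projective-dynamics step for a 2D mass-spring system.
    x, v: (n, 2) positions/velocities; m: (n,) masses;
    edges: list of (i, j) pairs; rest: rest lengths; w: constraint weight."""
    s = x + h * v                                # inertial prediction (no external forces)
    # Constant global matrix: M/h^2 + w * graph Laplacian (acts per coordinate)
    A = np.diag(m / h**2).astype(float)
    for (i, j) in edges:
        A[i, i] += w; A[j, j] += w
        A[i, j] -= w; A[j, i] -= w
    x_new = s.copy()
    for _ in range(iters):
        rhs = (m / h**2)[:, None] * s
        for (i, j), r in zip(edges, rest):
            d = x_new[i] - x_new[j]
            p = r * d / np.linalg.norm(d)        # local step: project onto rest length
            rhs[i] += w * p
            rhs[j] -= w * p
        x_new = np.linalg.solve(A, rhs)          # global step: linear solve
    v_new = (x_new - x) / h
    return x_new, v_new
```

Each projective energy contributes one projection in the local step; in DROP's formulation the generative motion prior is plugged in as one more such energy alongside the mechanical ones.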

    Computer-Assisted Choreography of Multiple Actors' Motions

    Get PDF
    Ph.D. thesis, Department of Electrical and Computer Engineering, Seoul National University, August 2017. Advisor: Jehee Lee. Choreographing motion is the process of converting written stories or messages into the real movements of actors. In performances or movies, directors spend considerable time and effort on it because it is the primary factor on which audiences concentrate. If multiple actors are present in a scene, choreography becomes even more challenging. The fundamental difficulty is that the coordination between actors must be adjusted precisely. Spatio-temporal coordination is the first requirement that must be satisfied, and causality and mood are other important forms of coordination. Directors use assistant tools such as storyboards or roughly crafted 3D animations, which can visualize the flow of movements, to organize ideas or to explain them to actors. However, these tools are difficult to use because artistry and considerable training are required, and they cannot offer suggestions or feedback. Finally, the amount of manual labor increases exponentially as the number of actors increases. In this thesis, we propose computational approaches to choreographing the motion of multiple actors. The ultimate goal is to enable novice users to generate motions for multiple actors without substantial effort. We first show an approach to generating motions for shadow theatre, where actors must carefully collaborate to achieve a shared goal. The results are comparable to those made by professional actors. Next, we present an interactive animation system for pre-visualization, in which users exploit an intuitive graphical interface for scene description. Given a description, the system can generate motions for the characters in the scene that match the description. Finally, we propose two controller designs (combining regression with trajectory optimization, and evolutionary deep reinforcement learning) for physically simulated actors, which guarantee the physical validity of the resultant motions.
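The evolutionary flavor of the second controller design can be sketched with a generic evolution-strategies loop: perturb policy parameters, evaluate, and recombine the best candidates. This is a toy stand-in, not the thesis's evolutionary deep Q-learning, and the fitness function here is hypothetical.

```python
import numpy as np

def evolve(fitness, theta0, pop=32, elite=8, sigma=0.1, gens=60, seed=0):
    """Simple (mu, lambda)-style evolution strategy on a parameter vector."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(gens):
        noise = rng.normal(0.0, sigma, size=(pop, theta.size))
        cands = theta + noise                        # perturbed policies
        scores = np.array([fitness(c) for c in cands])
        best = cands[np.argsort(scores)[-elite:]]    # keep the elite
        theta = best.mean(axis=0)                    # recombine
    return theta

# Toy fitness: negative distance to a hypothetical "target behavior" vector
target = np.array([0.5, -1.0, 2.0])
theta_star = evolve(lambda th: -np.sum((th - target) ** 2), np.zeros(3))
```

In a motion-control setting the fitness would be a rollout score of the simulated actor rather than this closed-form objective.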

    Framework for embedding physical systems into virtual experiences

    Get PDF
    We present an immersive Virtual Reality (VR) experience through a combination of technologies including a physical rig; a gamelike experience; and a refined physics model with control. At its heart, the core technology introduces the concept of physics-based communication, which allows force-driven interaction to be shared between the player and game entities in the virtual world. Because the framework is generic and extendable, the application supports a myriad of interaction modes, constrained only by the limitations of the physical rig (see Figure 1). To showcase the technology, we demonstrate a locomoting robot placed in an immersive gamelike setting.

    Physics-Based Reconstruction and Analysis of Human Motion in Video

    Get PDF
    Ph.D. thesis, Department of Computer Science and Engineering, Seoul National University, February 2021. Advisor: Jehee Lee. In computer graphics, simulating and analyzing human movement have been active research topics since the 1960s. Even so, simulating realistic human movement in a 3D virtual world remains challenging. Motion capture techniques are generally used: motion capture data guarantees realistic, high-quality results, but capturing motion requires a lot of equipment and a complicated process. Recently, techniques for estimating 3D human pose from 2D video have developed remarkably, and researchers in computer graphics and computer vision have attempted to reconstruct various human motions from video data. However, existing methods cannot robustly estimate dynamic actions and do not work on videos filmed with a moving camera. In this thesis, we propose methods to reconstruct dynamic human motions from in-the-wild videos and to control those motions. First, we developed a framework that reconstructs motion from videos using prior physics knowledge. For dynamic motions such as a backspin, the poses estimated by a state-of-the-art method are incomplete: they include an unreliable root trajectory or lack intermediate poses. We designed a reward function using poses and hints extracted from videos in a deep reinforcement learning controller and learned a policy that simultaneously reconstructs the motion and controls a virtual character. Second, we simulated figure skating movements from video. Skating sequences consist of fast and dynamic movements on ice, hindering the acquisition of motion data. Thus, we extracted 3D key poses from a video and then successfully replicated several figure skating movements using trajectory optimization and a deep reinforcement learning controller. Third, we devised an algorithm for gait analysis from video of patients with movement disorders. After acquiring the patients' joint positions from 2D video processed by a deep learning network, we estimated the 3D absolute coordinates and calculated gait parameters such as gait velocity, cadence, and step length. Additionally, we analyzed the optimization criteria of human walking by using a 3D musculoskeletal humanoid model and physics-based simulation. For two criteria, the minimization of muscle activation and of joint torque, we compared simulation data with real human data. To demonstrate the effectiveness of the first two research topics, we verified the reconstruction of dynamic human motions from 2D videos using physics-based simulations. For the last two research topics, we evaluated our results against real human data.
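The gait parameters named above follow directly from detected heel-strike events. A minimal sketch with a hypothetical interface (the thesis first derives joint positions from a deep-learning pose estimator):

```python
import numpy as np

def gait_parameters(strike_times, strike_positions):
    """Estimate gait velocity (m/s), cadence (steps/min), and mean step
    length (m) from successive heel-strike times and 2D ground positions."""
    t = np.asarray(strike_times, dtype=float)
    p = np.asarray(strike_positions, dtype=float)
    step_lengths = np.linalg.norm(np.diff(p, axis=0), axis=1)
    duration = t[-1] - t[0]
    n_steps = len(t) - 1
    return {
        "velocity": step_lengths.sum() / duration,
        "cadence": n_steps / duration * 60.0,
        "step_length": step_lengths.mean(),
    }

# Synthetic walk: a heel strike every 0.5 s, advancing 0.6 m each step
params = gait_parameters([0.0, 0.5, 1.0, 1.5],
                         [[0, 0], [0.6, 0], [1.2, 0], [1.8, 0]])
# -> velocity 1.2 m/s, cadence 120 steps/min, step length 0.6 m
```

Real data would additionally need heel-strike detection and filtering of the noisy estimated joint trajectories before these formulas apply.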

    Pre-computation for controlling character behavior in interactive physical simulations

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 129-136). The development of advanced computer animation tools has allowed talented artists to create digital actors, or characters, in films and commercials that move in a plausible and compelling way. In interactive applications, however, the artist does not have total control over the scenarios the character will experience. Unexpected changes in the character's environment or unexpected interactions with dynamic elements of the virtual world can lead to implausible motions. This work investigates the use of physical simulation to automatically synthesize plausible character motions in interactive applications. We show how to simulate a realistic motion for a humanoid character by creating a feedback controller that tracks a motion capture recording. By applying the right forces at the right time, the controller is able to recover from a range of interesting changes to the environment and unexpected disturbances. Controlling physically simulated humanoid characters is non-trivial, as they are governed by non-linear, non-smooth, and high-dimensional equations of motion. We simplify the problem by using a linearized and simplified dynamics model near a reference trajectory. Tracking a reference trajectory is an effective way of getting a character to perform a single task. However, simulated characters need to perform many tasks from a variety of possible configurations. This work also describes a method for combining existing controllers by adding their output forces to perform new tasks, allowing existing controllers to be reused. A surprising fact is that combined controllers can perform optimally under certain conditions. These methods allow us to interactively simulate many interesting humanoid character behaviors in two and three dimensions. These characters have many more degrees of freedom than typical robot systems and move much more naturally. Simulation is fast enough that the controllers could soon be used to animate characters in interactive games. It is also possible that these simulations could be used to test robotic designs and biomechanical hypotheses. By Marco Jorge Tome da Silva. Ph.D.
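Tracking with a linearized dynamics model near a reference trajectory is, in essence, time-varying LQR. A minimal finite-horizon Riccati recursion in the generic textbook form (not the thesis's exact formulation), applied here to deviations from the reference:

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, horizon):
    """Backward Riccati recursion; returns feedback gains K_0..K_{T-1}
    for u_t = -K_t x_t, minimizing sum(x^T Q x + u^T R u) + x_T^T Qf x_T."""
    P = Qf
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Double integrator: drive position/velocity deviations to zero
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Ks = lqr_gains(A, B, np.eye(2), np.array([[0.1]]), 10 * np.eye(2), 100)
x = np.array([1.0, 0.0])          # start 1 m off the reference
for K in Ks:
    x = A @ x - B @ (K @ x)       # closed-loop rollout
```

In the full character-control setting, `A` and `B` come from linearizing the articulated-body dynamics along the motion capture recording, so the gains vary along the trajectory.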

    Control of objects with a high degree of freedom

    Get PDF
    In this thesis, I present novel strategies for controlling objects with high degrees of freedom for the purposes of robotic control and computer animation, including articulated objects such as human bodies or robots and deformable objects such as ropes and cloth. Such control is required for common daily movements such as folding arms, tying ropes, wrapping objects, and putting on clothes. Although there is demand in computer graphics and animation for generating such scenes, little work has targeted these problems. The difficulty is due to two factors: (1) the complexity of the planning algorithms: the computational costs of currently available methods increase exponentially with the degrees of freedom of the objects, so they cannot be applied to full human-body structures, ropes, and clothes; and (2) the lack of abstract descriptors for complex tasks: models for quantitatively describing the progress of tasks such as wrapping and knotting are absent in animation generation. In this work, we employ the concept of a task-centric manifold to quantitatively describe complex tasks, and incorporate a bi-mapping scheme to bridge this manifold and the configuration space of the controlled objects, called an object-centric manifold. The control problem is solved by first projecting the controlled object onto the task-centric manifold, then obtaining the next ideal state of the scenario by local planning, and finally projecting that state back to the object-centric manifold to obtain the desired state of the controlled object. Using this scheme, complex movements that previously required global path planning can be synthesized by local path planning. Under this framework, we show applications in various fields. An interpolation algorithm for arbitrary postures of a human character is first proposed. Second, a control scheme is suggested for generating Furoshiki wraps in different styles. Finally, new models and planning methods are given for quantitatively controlling wrapping/unwrapping and dressing/undressing problems.
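The project/plan/back-project loop can be sketched generically. Here the task-centric manifold is the end-effector position of a toy two-link arm, and the back-projection uses a Jacobian pseudo-inverse; this is an illustrative stand-in for the thesis's bi-mapping scheme, with all names hypothetical:

```python
import numpy as np

def fk(q):
    """Task map: joint angles of a unit two-link arm -> end-effector position."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(f, q, eps=1e-6):
    """Numerical Jacobian of the task map."""
    t0 = f(q)
    J = np.zeros((t0.size, q.size))
    for k in range(q.size):
        dq = np.zeros_like(q); dq[k] = eps
        J[:, k] = (f(q + dq) - t0) / eps
    return J

def step(f, q, goal, alpha=0.5):
    t = f(q)
    t_next = t + alpha * (goal - t)                  # local plan on the task manifold
    dq = np.linalg.pinv(jacobian(f, q)) @ (t_next - t)
    return q + dq                                    # back-project to configuration space

q = np.array([0.5, 0.8])
goal = np.array([1.2, 0.6])
for _ in range(100):
    q = step(fk, q, goal)
```

For wrapping or dressing, the task map would instead measure abstract task progress (e.g., coverage of the wrapped object), but the local-planning structure is the same.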

    Data-Driven Sequential Monte Carlo Motion Synthesis

    Get PDF
    Animation in video games is composed of motion segments created by animators and of motion synthesis methods, which combine and extend those segments for emerging gameplay situations. Current video games typically synthesize motion kinematically with no regard for dynamics, causing immersion-breaking motion artifacts. By contrast, physically-based methods synthesize motion by simulating physics, which ensures physical correctness. This thesis extends sequential Monte Carlo motion synthesis, a physically-based method, to use animator-authored reference animations for guiding the synthesis. An offline component is developed that robustly tracks various types of kinematic reference animations by controlling a simulated physical character. The tracking results are gathered as a training set for a machine learning component, which directs the sequential Monte Carlo sampling used for online motion synthesis. For machine learning, the approximate nearest neighbors, locally weighted regression, mixture of regressors, and self-organizing map methods are implemented and compared. A product-distribution sampling scheme is developed to efficiently combine machine learning with optimization. Additionally, a factorized formulation of the learning problem is presented and implemented. The system is evaluated with an interactive locomotion test case. Given a single kinematic reference animation depicting running in a straight line, the system is able to synthesize physically valid motion for turning and running on uneven terrain.
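The sequential Monte Carlo sampling at the core of the method can be sketched as a generic particle step: propagate candidate states with noise, weight them against an objective, and resample. This is a toy 1D illustration, not the thesis's character controller:

```python
import numpy as np

def smc_step(particles, weight_fn, noise, rng):
    """One SMC iteration: propagate with noise, weight, systematic resample."""
    cands = particles + rng.normal(0.0, noise, size=particles.shape)
    w = weight_fn(cands)
    w = w / w.sum()
    # Systematic resampling: one stratified uniform draw per particle
    u = (rng.random() + np.arange(len(cands))) / len(cands)
    idx = np.searchsorted(np.cumsum(w), u)
    idx = np.minimum(idx, len(cands) - 1)  # guard against float round-off
    return cands[idx]

# Toy objective: prefer states near a "reference pose" value of 2.0
rng = np.random.default_rng(0)
parts = rng.normal(0.0, 1.0, size=500)
for _ in range(20):
    parts = smc_step(parts, lambda x: np.exp(-(x - 2.0) ** 2 / 0.1), 0.3, rng)
```

In the thesis's setting the particles are simulated character states, the weight combines physics and reference tracking, and the machine learning component biases where the propagation noise is drawn from.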