
    A Process for the Semi-Automated Generation of Life-Sized, Interactive 3D Character Models for Holographic Projection

    By mixing digital data into the real world, Augmented Reality (AR) can deliver potent immersive and interactive experiences to its users. In many application contexts, this requires the capability to deploy animated, high-fidelity 3D character models. In this paper, we propose a novel approach to efficiently transform an actor, via 3D scanning, into a photorealistic, animated character. The generated 3D assistant must be able to perform recorded motion-capture data and to deliver dialogue with lip sync so that it can interact naturally with users. The approach we propose for creating these virtual AR assistants combines photogrammetric scanning, motion capture, and free-viewpoint video, and integrates them in Unity. We deploy the Occipital Structure sensor to acquire static high-resolution textured surfaces, and a Vicon motion capture system to track a series of movements. The proposed capturing process consists of the following steps: scanning; reconstruction with Wrap 3 and Maya; editing texture maps in Photoshop to reduce artefacts; and rigging with Maya and MotionBuilder to make the models fit for animation and lip sync using LipSyncPro. We test the approach in Unity by scanning two human models with 23 captured animations each. Our findings indicate that the major factors affecting result quality are environment setup, lighting, and processing constraints.

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts), and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
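
    As a rough illustration of the kind of per-body-part layering and clip concatenation such techniques rely on (this is not the authors' implementation; the joint names, pose layout, and linear blending of rotation vectors are simplifying assumptions), here is a minimal Python sketch:

```python
import numpy as np

# Hypothetical skeleton: each pose is a dict of joint name -> rotation vector (3,).
UPPER_BODY = {"spine", "neck", "l_shoulder", "l_elbow", "r_shoulder", "r_elbow"}

def layer_poses(locomotion_pose, gesture_pose, upper_joints=UPPER_BODY):
    """Combine two animation techniques on different body parts:
    take the upper body from the gesture, the rest from the locomotion."""
    return {j: (gesture_pose[j] if j in upper_joints else locomotion_pose[j])
            for j in locomotion_pose}

def concatenate(clip_a, clip_b, blend_frames=10):
    """Concatenate two clips (lists of poses) with a linear crossfade
    over the last/first `blend_frames` frames."""
    out = list(clip_a[:-blend_frames])
    for i in range(blend_frames):
        w = (i + 1) / (blend_frames + 1)              # weight of clip_b
        a, b = clip_a[-blend_frames + i], clip_b[i]
        out.append({j: (1 - w) * a[j] + w * b[j] for j in a})
    out.extend(clip_b[blend_frames:])
    return out
```

    A production system would blend rotations with quaternion slerp and align root trajectories before concatenating; the sketch only conveys the layering and crossfade idea.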

    Synthesis and Editing of Human Motion from a Small Number of User Inputs

    Thesis (Ph.D.) -- Graduate School of Seoul National University: Department of Electrical and Computer Engineering, 2014. 8. Jehee Lee.
    An ideal 3D character animation system can easily synthesize and edit human motion and also provide an efficient user interface for the animator. However, despite advances in animation systems, building effective systems for synthesizing and editing realistic human motion remains a difficult problem. In the case of a single character, the human body is a significantly complex structure because it consists of as many as hundreds of degrees of freedom, so an animator must manually adjust many joints of the body from user inputs. In a crowd scene, many individuals have to respond to user inputs when an animator wants a given crowd to fit a new environment. The main goal of this thesis is to improve the interaction between a user and an animation system. As 3D character animation systems are usually driven by low-dimensional inputs, there is no direct way for a user to generate a high-dimensional character animation. To address this problem, we propose a data-driven mapping model built from motion data obtained with a full-body motion capture system, crowd simulation, and a data-driven motion synthesis algorithm. With this data-driven mapping model in hand, we can transform low-dimensional user inputs into character animation, because the motion data help to infer the missing parts of the system inputs. As motion capture data contain many details and convey the realism of human movement, it is easier to generate a realistic character animation with them than without. To demonstrate the generality and strengths of our approach, we developed two animation systems that allow the user to synthesize a single character animation in real time and to edit crowd animation interactively via low-dimensional user inputs. The first system controls a virtual avatar using a small set of three-dimensional (3D) motion sensors. The second system manipulates large-scale crowd animation consisting of hundreds of characters with a small number of user constraints. Examples show that our system is much less laborious and time-consuming than previous animation systems, and thus is much better suited for generating the desired character animation.
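
    As a minimal sketch of the general idea of a data-driven mapping from low-dimensional inputs to full-body poses, the following Python example uses a plain k-nearest-neighbour blend over a motion database in place of the thesis's online local model with kernel CCA regression; the data shapes and synthetic data are assumed purely for illustration:

```python
import numpy as np

class LocalPoseModel:
    """Map low-dimensional control inputs (e.g. a few sensor readings) to
    full-body poses using the k nearest examples in a motion database.
    Illustrative stand-in for an online local model with learned regression."""

    def __init__(self, sensor_features, full_poses, k=8):
        self.X = np.asarray(sensor_features)   # (N, d_low)  sensor features per frame
        self.Y = np.asarray(full_poses)        # (N, d_high) full-body pose per frame
        self.k = k

    def predict(self, x):
        d = np.linalg.norm(self.X - x, axis=1)   # distance to every example frame
        idx = np.argsort(d)[:self.k]             # k nearest frames
        w = 1.0 / (d[idx] + 1e-6)                # inverse-distance weights
        w /= w.sum()
        return w @ self.Y[idx]                   # weighted blend of example poses

# Usage with synthetic data: 1000 frames, 9-D sensor input, 60-D pose vector.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(1000, 9)), rng.normal(size=(1000, 60))
pose = LocalPoseModel(X, Y).predict(rng.normal(size=9))
```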

    Nonlinear dance motion analysis and motion editing using Hilbert-Huang transform

    Human motions, and dance motions in particular, are very noisy, which makes them hard to analyze and edit. To address this problem, we propose a new method to decompose and modify motions using the Hilbert-Huang transform (HHT). First, the HHT decomposes a chromatic signal into "monochromatic" signals, the so-called Intrinsic Mode Functions (IMFs), using Empirical Mode Decomposition (EMD) [6]. After applying the Hilbert transform to each IMF, the instantaneous frequencies of the "monochromatic" signals can be obtained. The HHT has an advantage over the FFT and wavelet transform in analyzing non-stationary and nonlinear signals such as human joint motions. In this paper, we propose a new framework to analyze and extract new features from the dances of the famous three-member Japanese pop group "Perfume", and compare them with Waltz and Salsa dances. Using the EMD, their dance motions can be decomposed into motion (choreographic) primitives, i.e. IMFs. We can then scale, combine, subtract, exchange, and modify those IMFs, and blend them into new dance motions self-consistently. Our analysis and framework can lead to a motion editing and blending method that creates a new dance motion from different dance motions.
    Comment: 6 pages, 10 figures, Computer Graphics International 2017, conference short paper
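
    A small sketch of the analysis step, assuming the third-party PyEMD package for the Empirical Mode Decomposition and SciPy's Hilbert transform for instantaneous frequencies; the "joint angle" signal below is synthetic and stands in for one motion-capture channel:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed dependency: the PyEMD package

# Synthetic "joint angle" signal standing in for one motion-capture channel.
fs = 120.0                                   # capture rate in Hz
t = np.arange(0, 5, 1 / fs)
signal = (np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 6.0 * t)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

# 1. Empirical Mode Decomposition into Intrinsic Mode Functions (IMFs).
imfs = EMD().emd(signal)

# 2. Hilbert transform of each IMF gives its instantaneous frequency.
for i, imf in enumerate(imfs):
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)
    print(f"IMF {i}: mean instantaneous frequency {inst_freq.mean():.2f} Hz")

# 3. Motion editing: re-weight IMFs (e.g. exaggerate one component)
#    and sum them back into an edited signal.
weights = np.ones(len(imfs))
weights[min(1, len(imfs) - 1)] = 1.5
edited = (weights[:, None] * imfs).sum(axis=0)
```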

    Splicing of concurrent upper-body motion spaces with locomotion

    In this paper, we present a motion splicing technique for generating concurrent upper-body actions that occur simultaneously with the evolution of a lower-body locomotion sequence. Specifically, we show that a layered interpolation motion model generates upper-body poses while assigning different actions to each upper-body part. Hence, with the proposed motion splicing approach, it is possible to increase the number of generated motions as well as the number of desired actions that virtual characters can perform. We also propose an iterative motion blending solution, inverse pseudo-blending, to maintain smooth and natural interaction between the virtual character and the virtual environment; inverse pseudo-blending is a constraint-based motion editing technique that blends the motions enclosed in a tetrahedron by minimising the distances between the end-effector positions of the actual and blended motions. To evaluate the proposed solution, we implemented an example-based application for interactive motion splicing based on specified constraints. Finally, the generated results show that the proposed solution can be beneficially applied to interactive applications where concurrent upper-body actions are desired.
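
    A simplified sketch of the tetrahedron-blending idea behind such constraint-based blending (not the paper's inverse pseudo-blending itself): solve for barycentric weights of four example motions so that their blended end-effector position matches a target constraint; the numeric values below are made up for illustration:

```python
import numpy as np

def tetrahedron_blend_weights(end_effectors, target):
    """Given end-effector positions (4, 3) of four example motions that
    enclose the target in a tetrahedron, solve for blend weights w with
    sum(w) = 1 such that w @ end_effectors ~= target (least squares)."""
    P = np.asarray(end_effectors, dtype=float)     # (4, 3)
    A = np.vstack([P.T, np.ones(4)])               # 3 position rows + affine row
    b = np.append(np.asarray(target, dtype=float), 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example: four candidate reach motions and a desired hand position.
examples = [[0.2, 1.0, 0.3], [0.5, 1.2, 0.1], [0.1, 0.8, 0.5], [0.4, 0.9, 0.4]]
target = [0.3, 1.0, 0.3]
w = tetrahedron_blend_weights(examples, target)
print(w, w.sum())   # weights sum to ~1 and reproduce the target when blended
```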

    A semantic feature for human motion retrieval

    With the explosive growth of motion capture data, it has become imperative in animation production to have an efficient search engine that can retrieve motions from a large motion repository. However, because of the high dimensionality of the data space and the complexity of matching methods, most existing approaches cannot return results in real time. This paper proposes a high-level semantic feature in a low-dimensional space that represents the essential characteristics of different motion classes. Based on statistical training of a Gaussian Mixture Model, this feature can effectively support motion matching at both the global clip level and the local frame level. Experimental results show that our approach can retrieve ranked similar motions from a large motion database in real time and can also annotate motions automatically on the fly. Copyright © 2013 John Wiley & Sons, Ltd.
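
    A rough sketch of how class-wise Gaussian Mixture Models can yield a compact, semantically meaningful descriptor for retrieval, using scikit-learn and synthetic features in place of real motion descriptors (an illustration of the general idea, not the paper's exact formulation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic per-frame features for two motion classes (e.g. "walk", "kick").
train = {
    "walk": rng.normal(0.0, 1.0, size=(500, 8)),
    "kick": rng.normal(2.0, 1.0, size=(500, 8)),
}

# Train one GMM per motion class.
models = {label: GaussianMixture(n_components=3, random_state=0).fit(X)
          for label, X in train.items()}

def semantic_feature(clip):
    """Represent a clip by its average log-likelihood under each class GMM,
    giving a low-dimensional, class-aware descriptor for matching/ranking."""
    return np.array([models[label].score(clip) for label in sorted(models)])

query = rng.normal(0.0, 1.0, size=(120, 8))      # an unlabeled query clip
print(dict(zip(sorted(models), semantic_feature(query))))
```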

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18

    An interactive and multi-level framework for summarising user generated videos

    We present an interactive, multi-level abstraction framework for user-generated video (UGV) summarisation that gives the user the flexibility to select a summarisation criterion from a number of methods provided by the system. First, a given raw video is segmented into shots, and each shot is further decomposed into sub-shots according to changes in the dominant camera motion. Secondly, principal component analysis (PCA) is applied to the colour representation of the collection of sub-shots, and a content map is created using the first few components. Each sub-shot is represented by a ``footprint'' on the content map, which reveals its content significance (coverage) and its most dynamic segment. The final stage of abstraction is devised in a user-assisted manner, whereby the user can specify a desired summary length, with options to interactively perform abstraction at different granularities of visual comprehension. The results obtained show the potential to significantly alleviate the burden of laborious user intervention associated with conventional video editing and browsing.
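
    A compact sketch of the PCA content-map step, assuming each sub-shot is summarised by a colour-histogram vector (the paper's actual colour representation and footprint definition may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Assume each of 40 sub-shots is described by a 64-bin colour histogram.
subshot_histograms = rng.random((40, 64))
subshot_histograms /= subshot_histograms.sum(axis=1, keepdims=True)

# Project onto the first two principal components to form a 2-D content map.
content_map = PCA(n_components=2).fit_transform(subshot_histograms)

# A sub-shot's "footprint": its distance from the map centre, used here as a
# crude proxy for how distinctive its content is relative to the whole video.
coverage = np.linalg.norm(content_map - content_map.mean(axis=0), axis=1)
ranked = np.argsort(coverage)[::-1]      # most distinctive sub-shots first
print("Candidate sub-shots for a short summary:", ranked[:5])
```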