11 research outputs found

    A fast numerical solver for local barycentric coordinates

    The local barycentric coordinates (LBC), proposed in Zhang et al. (2014), demonstrate good locality and can be used for local control on function value interpolation and shape deformation. However, LBC have no closed-form expression and must be computed by solving an optimization problem, which can be time-consuming, especially for high-resolution models. In this paper, we propose a new technique to compute LBC efficiently. The new solver is developed based on two key insights. First, we prove that the non-negativity constraints in the original LBC formulation are not necessary and can be removed without affecting the solution of the optimization problem. Furthermore, the removal of these constraints allows us to reformulate the computation of LBC as a convex constrained optimization for its gradients, followed by a fast integration to recover the coordinate values. The reformulated gradient optimization problem can be solved using ADMM, where each step is trivially parallelizable and does not involve global linear system solving, making it much more scalable and efficient than the original LBC solver. Numerical experiments verify the effectiveness of our technique on a large variety of models.
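    As a rough, generic illustration of solving a constrained least-squares problem with ADMM (not the paper's actual splitting — the sketch below keeps a non-negativity constraint and uses a per-iteration linear solve, whereas the paper removes that constraint and avoids global solves):

```python
import numpy as np

def admm_nonneg_lsq(A, b, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 subject to x >= 0 via ADMM.
    Illustrative only; names and the splitting are assumptions."""
    m, n = A.shape
    AtA = A.T @ A
    Atb = A.T @ b
    # Factor once; each iteration then only needs back-substitution.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (quadratic)
        z = np.maximum(x + u, 0.0)                         # z-update (projection)
        u = u + x - z                                      # dual update
    return z

# tiny usage example: unconstrained optimum is [1.5, -1.5],
# so the non-negativity constraint becomes active on x[1]
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, -2.0, 0.5])
x = admm_nonneg_lsq(A, b)
```

    The projection step is trivially parallel across coordinates, which hints at why an ADMM splitting with cheap per-step updates scales well.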

    A 3D+t Laplace operator for temporal mesh sequences

    The Laplace operator plays a fundamental role in geometry processing. Several discrete versions have been proposed for 3D meshes and point clouds, among others. We define here a discrete Laplace operator for temporally coherent mesh sequences, which allows mesh animations to be processed in a simple yet efficient way. This operator is a discretization of the Laplace-Beltrami operator using Discrete Exterior Calculus on CW complexes embedded in a four-dimensional space. A parameter is introduced to tune the influence of the motion with respect to the geometry. This enables straightforward generalization of existing Laplacian static mesh processing works to mesh sequences. An application to spacetime editing is provided as an example.
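    A minimal sketch of the spacetime idea, using a plain graph Laplacian with a tunable temporal weight in place of the paper's DEC-based construction (function name, unit spatial weights, and the role of `lam` are illustrative assumptions):

```python
import numpy as np

def spacetime_laplacian(edges, n_verts, n_frames, lam=1.0):
    """Combinatorial Laplacian over a mesh sequence. Node (v, t) maps to
    index t*n_verts + v. Spatial edges (within a frame) get weight 1;
    temporal edges linking vertex v in frames t and t+1 get weight `lam`,
    the parameter tuning motion influence vs. geometry."""
    N = n_verts * n_frames
    L = np.zeros((N, N))

    def add_edge(i, j, w):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w

    for t in range(n_frames):                 # spatial edges, same every frame
        off = t * n_verts
        for (a, b) in edges:
            add_edge(off + a, off + b, 1.0)
    for t in range(n_frames - 1):             # temporal edges
        for v in range(n_verts):
            add_edge(t * n_verts + v, (t + 1) * n_verts + v, lam)
    return L

# a triangle (3 vertices) animated over 2 frames, motion weight 0.5
L = spacetime_laplacian([(0, 1), (1, 2), (0, 2)], n_verts=3, n_frames=2, lam=0.5)
```

    Setting `lam` to 0 decouples the frames (purely spatial smoothing); larger values couple corresponding vertices across time, which is the behaviour static Laplacian mesh-processing tools inherit when generalized to sequences.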

    Free-form motion processing


    Example-based retargeting of human motion to arbitrary mesh models

    We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. a skeleton), our method dispenses with the need for such a structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
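    The volume-preserving squash-and-stretch behaviour can be illustrated by a global anisotropic scaling whose determinant is one (an idealized sketch under assumed conventions, not the paper's surface-based deformation system):

```python
import numpy as np

def squash_and_stretch(verts, s, axis=1):
    """Scale a vertex set by s along one axis and by 1/sqrt(s) along the
    other two, so the scaling has determinant 1 and volume is preserved —
    the classic squash-and-stretch compensation."""
    c = verts.mean(axis=0)                  # deform about the centroid
    scale = np.full(3, 1.0 / np.sqrt(s))
    scale[axis] = s
    return (verts - c) * scale + c

# stretch a unit cube's corners to twice their height (y-axis)
cube = np.array([[x, y, z] for x in (0.0, 1.0)
                           for y in (0.0, 1.0)
                           for z in (0.0, 1.0)])
out = squash_and_stretch(cube, 2.0)
```

    Doubling the height shrinks the two transverse axes by 1/sqrt(2) each, so the bounding-box volume stays at 1.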

    Example-based hair geometry synthesis


    Synthesis and Editing of Human Motion from a Small Number of User Inputs

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: Jehee Lee. An ideal 3D character animation system can easily synthesize and edit human motion and will also provide an efficient user interface for an animator. However, despite advancements in animation systems, building effective systems for synthesizing and editing realistic human motion still remains a difficult problem. In the case of a single character, the human body is a significantly complex structure because it consists of as many as hundreds of degrees of freedom. An animator must manually adjust many joints of the human body from user inputs. In a crowd scene, many individuals in a human crowd have to respond to user inputs when an animator wants a given crowd to fit a new environment. The main goal of this thesis is to improve interactions between a user and an animation system. As 3D character animation systems are usually driven by low-dimensional inputs, there is no method for a user to directly generate a high-dimensional character animation. To address this problem, we propose a data-driven mapping model built from motion data obtained from a full-body motion capture system, crowd simulation, and a data-driven motion synthesis algorithm. With the data-driven mapping model in hand, we can transform low-dimensional user inputs into character animation because motion data help to infer the missing parts of the system inputs. Because motion capture data contain many details and convey the realism of human movement, it is easier to generate a realistic character animation with them than without. To demonstrate the generality and strengths of our approach, we developed two animation systems that allow the user to synthesize a single character animation in real time and edit crowd animation interactively via low-dimensional user inputs. The first system entails controlling a virtual avatar using a small set of three-dimensional (3D) motion sensors.
    The second system manipulates large-scale crowd animation that consists of hundreds of characters with a small number of user constraints. Examples show that our system is much less laborious and time-consuming than previous animation systems, and thus is much more suitable for a user to generate desired character animation.
    Contents: 1 Introduction (Motivation; Approach; Thesis Overview). 2 Background (Performance Animation: performance-based interfaces for character animation, statistical models for motion synthesis, retrieval of motion capture data; Crowd Animation: crowd simulation, motion editing, geometry deformation). 3 Realtime Performance Animation Using Sparse 3D Motion Sensors (system overview; sensor data and calibration; motion synthesis: online local model, kernel CCA-based regression, motion post-processing; experimental results; discussion). 4 Interactive Manipulation of Large-Scale Crowd Animation (overview; crowd model; cage-based interface: cage construction, cage representation; editing crowd animation: spatial and temporal manipulation; collision avoidance; experimental results; discussion). 5 Conclusion. Bibliography.
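    The "online local model" idea in Chapter 3 can be sketched as local regression from low-dimensional sensor inputs to high-dimensional poses: fit a small affine model on the nearest training samples around the current input (the thesis actually uses kernel CCA-based regression; this nearest-neighbour affine fit is a simplified stand-in, and all names are illustrative):

```python
import numpy as np

def local_model_predict(X_train, Y_train, x, k=5):
    """Predict a high-dimensional pose y for a low-dimensional control
    input x by fitting an affine model on the k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                             # k nearest neighbours
    Xk = np.hstack([X_train[idx], np.ones((k, 1))])     # affine design matrix
    W, *_ = np.linalg.lstsq(Xk, Y_train[idx], rcond=None)
    return np.append(x, 1.0) @ W

# toy data: 2-D "sensor" inputs mapped to 6-D "poses" by a linear ground truth
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
M = rng.normal(size=(2, 6))
Y = X @ M
y = local_model_predict(X, Y, np.array([0.1, -0.2]))
```

    Because the model is refit around each query, it can track locally nonlinear input-to-pose mappings even though each fit is linear.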

    Human Motion Analysis Using Very Few Inertial Measurement Units

    Realistic character animation and human motion analysis have become major topics of research. In this doctoral research work, three different aspects of human motion analysis and synthesis have been explored. Firstly, on the level of better management of tens of gigabytes of publicly available human motion capture data sets, a relational database approach has been proposed. We show that organizing motion capture data in a relational database provides several benefits, such as centralized access to major freely available mocap data sets, fast search and retrieval of data, annotation-based retrieval of contents, and incorporation of data from non-mocap sensor modalities. Moreover, the same idea is also proposed for managing quadruped motion capture data. Secondly, a new method of full-body human motion reconstruction using a very sparse configuration of sensors is proposed. In this setup, two sensors are attached to the upper extremities and one sensor is attached to the lower trunk. The lower trunk sensor is used to estimate ground contacts, which are later used in the reconstruction process along with the low-dimensional inputs from the sensors attached to the upper extremities. The reconstruction results of the proposed method have been compared with those of existing approaches, and the proposed method generates lower average reconstruction errors. Thirdly, in the field of human motion analysis, a novel method for estimating human soft biometrics such as gender, height, and age from the inertial data of a simple human walk is proposed. The proposed method extracts several features from the time and frequency domains for each individual step. A random forest classifier is fed with the extracted features in order to estimate the soft biometrics of a human. The classification results show that it is possible to estimate the gender, height, and age of a human with high accuracy from the inertial data of a single step of his or her walk.
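    A per-step feature extraction of the kind described might look like the sketch below (the exact feature set, sampling rate, and function name are assumptions; the resulting feature vectors would then be fed to a random forest classifier):

```python
import numpy as np

def step_features(accel, fs=100.0):
    """Extract simple time- and frequency-domain features from the
    accelerometer samples of one walking step (illustrative feature set)."""
    feats = {
        "mean": float(np.mean(accel)),
        "std": float(np.std(accel)),
        "min": float(np.min(accel)),
        "max": float(np.max(accel)),
        "rms": float(np.sqrt(np.mean(accel ** 2))),
    }
    # frequency-domain features on the mean-removed signal
    spec = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    feats["dominant_freq"] = float(freqs[np.argmax(spec)])
    feats["spectral_energy"] = float(np.sum(spec ** 2) / len(accel))
    return feats

# synthetic 1-second "step" sampled at 100 Hz: gravity plus a 2 Hz component
t = np.arange(100) / 100.0
f = step_features(9.8 + 0.5 * np.sin(2 * np.pi * 2.0 * t))
```

    Each step yields one such feature vector, so a walk of N steps gives N training or test samples per subject.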

    Gradient domain editing of deforming mesh sequences

    Many graphics applications, including computer games and 3D animated films, make heavy use of deforming mesh sequences. In this paper, we generalize gradient domain editing to deforming mesh sequences. Our framework is keyframe based. Given sparse and irregularly distributed constraints at unevenly spaced keyframes, our solution first adjusts the meshes at the keyframes to satisfy these constraints, and then smoothly propagates the constraints and deformations at keyframes to the whole sequence to generate a new deforming mesh sequence. To achieve convenient keyframe editing, we have developed an efficient alternating least-squares method. It harnesses the power of subspace deformation and two-pass linear methods to achieve high-quality deformations. We have also developed an effective algorithm to define boundary conditions for all frames using handle trajectory editing. Our deforming mesh editing framework has been successfully applied to a number of editing scenarios with increasing complexity, including footprint editing, path editing, temporal filtering, handle-based deformation mixing, and spacetime morphing. © 2007 ACM.
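    The keyframe-to-sequence propagation step can be sketched, in a much simplified form, as interpolation of per-vertex edit offsets between keyframes (the paper propagates deformations in the gradient domain; the linear blending and names below are illustrative stand-ins):

```python
import numpy as np

def propagate_keyframe_edits(n_frames, key_edits):
    """Spread sparse keyframe edits to all frames. `key_edits` maps a
    keyframe index to a per-vertex offset array; in-between frames get a
    linear blend of the two surrounding keyframes, and frames outside the
    keyframe range hold the nearest keyframe's edit."""
    keys = sorted(key_edits)
    dim = key_edits[keys[0]].shape
    out = np.zeros((n_frames,) + dim)
    for f in range(n_frames):
        if f <= keys[0]:
            out[f] = key_edits[keys[0]]
        elif f >= keys[-1]:
            out[f] = key_edits[keys[-1]]
        else:
            j = np.searchsorted(keys, f)
            a, b = keys[j - 1], keys[j]
            w = (f - a) / (b - a)
            out[f] = (1 - w) * key_edits[a] + w * key_edits[b]
    return out

# two keyframe edits on a 10-frame sequence of 3 vertices in 3-D
edits = {2: np.zeros((3, 3)), 8: np.ones((3, 3))}
offsets = propagate_keyframe_edits(10, edits)
```

    Replacing the linear blend with a smoother falloff (or, as in the paper, solving for the in-between deformations) avoids velocity discontinuities at the keyframes.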

    Gradient Domain Editing of Deforming Mesh Sequences

    Figure 1: A straight run is adapted to a curved path on an uneven terrain. The original deforming mesh sequence moves along a straight line on a plane. We first make the HORSE move along a curve using path editing, and then adapt the sequence onto the terrain using footprint editing.
    † This work was done while Qifeng Tan was an intern at Microsoft Research Asia.