12 research outputs found
A Tool for Creating Expressive Control Over Fur and Feathers
The depiction of body fur and feathers has received considerable attention in the animation production environment, yet it continues to pose significant computational challenges. Tools that let animators control fur and feathers as an expressive characteristic have not been explored as fully as dynamic control systems. This thesis describes the research behind, and the development of, a control system for fur and feathers that enables animation authoring in an interactive software tool of the kind common in animation production environments. The results show a control system for fur and feathers that is as easy to use as conventional appendage controls for creating strong posing, silhouettes, and timing. The resulting tool enables more effective and efficient animation of characters that use fur and feathers for expressive communication, such as hedgehogs, birds, and cats.
Dynamic Rigging Using Expressions in Maya 3D
This research covers the creation and use of an automatic rigging tool in the Maya 3D application. Its goal is a dynamic-rigging tool for building rigs and controlling dynamic motion in animation. Dynamic rigging can degrade software performance, but Maya provides expressions that can compute motion in the MEL language, which improves performance. The research proceeds in stages: designing the dynamic-rig logic, scripts, and interface; implementing the scripts; and applying the resulting tool to 3D objects. A Python script generates the tool's UI and the dynamic-rig components, while a MEL script on an expression node generates the hair motion. Three objects are used: a tentacle, a chain, and hair. The dynamic motion is produced by evaluating basic physics formulas, namely velocity and acceleration, expressed in MEL on the expression node. Tests on the three objects successfully demonstrated the dynamic motion required for animation production.
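The per-frame velocity-and-acceleration update that such an expression node evaluates can be sketched in plain Python. The function name, spring gains, and frame rate below are illustrative assumptions, not the thesis's actual MEL code:

```python
# Minimal sketch of the per-frame velocity/acceleration update a Maya
# expression node might perform for dynamic secondary motion.
# Names and gains are illustrative, not the thesis's MEL script.

def dynamic_follow(target, pos, vel, stiffness=40.0, damping=8.0, dt=1.0 / 24.0):
    """Advance one frame: a damped spring pulls `pos` toward `target`."""
    accel = stiffness * (target - pos) - damping * vel  # F = -kx - cv (unit mass)
    vel = vel + accel * dt   # velocity from acceleration
    pos = pos + vel * dt     # position from velocity
    return pos, vel

# Simulate a hair joint lagging behind an animated control.
pos, vel = 0.0, 0.0
for frame in range(48):
    target = 1.0 if frame > 0 else 0.0  # control snaps to 1 at frame 1
    pos, vel = dynamic_follow(target, pos, vel)
```

Evaluated once per frame, this produces the lag and overshoot characteristic of dynamic secondary motion on hair, chains, or tentacles.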
Dynamic 3D Avatar Creation from Hand-held Video Input
We present a complete pipeline for creating fully rigged, personalized 3D facial avatars from hand-held video. Our system faithfully recovers facial expression dynamics of the user by adapting a blendshape template to an image sequence of recorded expressions using an optimization that integrates feature tracking, optical flow, and shape from shading. Fine-scale details such as wrinkles are captured separately in normal maps and ambient occlusion maps. From this user- and expression-specific data, we learn a regressor for on-the-fly detail synthesis during animation to enhance the perceptual realism of the avatars. Our system demonstrates that the use of appropriate reconstruction priors yields compelling face rigs even with a minimalistic acquisition system and limited user assistance. This facilitates a range of new applications in computer animation and consumer-level online communication based on personalized avatars. We present realtime application demos to validate our method.
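For context, a blendshape template of the kind adapted here represents any expression as the neutral mesh plus a weighted sum of expression offsets. The toy vertex data and weights below are stand-ins for illustration, not the paper's model:

```python
# Sketch of the blendshape representation underlying such face rigs:
# vertex positions = neutral + sum_i(weight_i * offset_i).
# Meshes and weights are toy stand-ins, not the paper's template.

def evaluate_blendshapes(neutral, deltas, weights):
    """Evaluate a blendshape rig: neutral plus weighted per-vertex offsets."""
    out = list(neutral)
    for w, delta in zip(weights, deltas):
        out = [x + w * d for x, d in zip(out, delta)]
    return out

neutral = [0.0, 0.0, 0.0, 1.0]   # flattened toy "mesh" coordinates
smile   = [0.2, 0.0, 0.1, 0.0]   # offset from neutral for a smile shape
jaw     = [0.0, -0.3, 0.0, 0.0]  # offset for an open-jaw shape
face = evaluate_blendshapes(neutral, [smile, jaw], [0.5, 1.0])
# face == [0.1, -0.3, 0.05, 1.0]
```

Fitting such a rig to video then amounts to solving for the weights (and offsets) that best explain each tracked frame.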
A Study on Easy, Trouble-Free Methods for the Construction, Editing, and Simulation of Virtual Garments
Thesis (Ph.D.) -- Seoul National University Graduate School, Computational Science program, 2016.
This dissertation presents new methods for the construction, editing, and simulation of virtual garments. First, we describe a construction method called TAGCON, which constructs three-dimensional (3D) virtual garments from tagged and packed panels. Tagging and packing are performed by the user and involve only simple labeling and two-dimensional (2D) manipulation of the panels; no 3D manipulation is required. TAGCON then constructs the garment automatically using algorithms that (1) position the panels at suitable locations around the body, and (2) find the matching seam lines and create the seams. We perform experiments using TAGCON to construct various types of garments. The proposed method significantly reduces construction time and effort.
Second, we propose a method to edit virtual garments with synced 2D and 3D modification. The presented methods of linear interpolation, extrapolation, and penetration detection help users edit a virtual garment interactively without losing 2D/3D synchronization.
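The linear interpolation and extrapolation used for synced editing reduce to the standard lerp formula. The seam-edit scenario below is a hypothetical illustration, not the dissertation's actual weighting scheme:

```python
# Hedged sketch: propagating a 2D panel edit by linear interpolation.
# The edit scenario and weights are hypothetical illustrations.

def lerp(a, b, t):
    """Linear interpolation: returns a at t=0 and b at t=1."""
    return a + t * (b - a)

# Hypothetical edit: a 2D seam endpoint moves from x=2.0 to x=3.0.
# A vertex halfway along the seam (t=0.5) receives half the edit;
# extrapolation (t > 1) extends the edit beyond the endpoint.
old_x, new_x = 2.0, 3.0
mid_vertex = lerp(old_x, new_x, 0.5)   # 2.5
extended   = lerp(old_x, new_x, 1.5)   # 3.5
```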
Next, we propose a method to model the non-elastic components of fabric stretch deformation in the context of developing a physically based fabric simulator. We find that the problem becomes tractable if we decompose the stretch deformation into immediate elastic, viscoelastic, and plastic components. For simulator development, the decomposition must be possible at any stage of deformation and at any occurrence of loading and unloading. Based on observations from various constant-force creep measurements, we assume that, within a particular fabric, the viscoelastic and plastic components are proportional to each other and that their ratio is invariant over time. Experimental results produced with the proposed method match general expectations and show that the method can represent non-elastic stretch deformation under arbitrary time-varying forces.
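A toy numeric sketch of that assumption: beyond the immediate elastic strain, the remaining creep strain splits into viscoelastic and plastic parts at a fixed, time-invariant ratio. The ratio and strain values below are illustrative, not measured fabric data:

```python
# Toy sketch of the assumed decomposition of fabric stretch:
# creep strain (beyond the immediate elastic part) splits into
# viscoelastic and plastic components at a constant ratio.
# Ratio and strains are illustrative, not measured data.

RATIO_VE_TO_PL = 2.0  # viscoelastic : plastic, assumed constant per fabric

def decompose_stretch(total_strain, elastic_strain):
    """Split a measured strain into elastic, viscoelastic, plastic parts."""
    creep = max(total_strain - elastic_strain, 0.0)
    plastic = creep / (1.0 + RATIO_VE_TO_PL)
    viscoelastic = creep - plastic
    return elastic_strain, viscoelastic, plastic

e, ve, pl = decompose_stretch(total_strain=0.09, elastic_strain=0.03)
# creep = 0.06 -> viscoelastic ~0.04, plastic ~0.02; after unloading and
# full recovery, only the plastic part remains.
```

Because the split depends only on the current strains and a per-fabric constant, it can be applied at any point of loading or unloading, which is the property the simulator needs.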
In addition, we present a method to represent stylish elements of garments such as pleats and lapels. Experimental results show that the proposed method is effective at resolving problems that are not easily handled by physically based cloth simulators.

Chapter 1 Introduction
1.1 Digital Clothing
1.2 Garment Modeling
1.3 Physical Cloth Simulation
1.4 Dissertation Overview
Chapter 2 Previous Work
2.1 Garment Modeling
2.2 Physical Cloth Simulation
Chapter 3 Automatic Garment Construction from Pattern Analysis
3.1 Panel Classification
3.1.1 Panel Tagging
3.1.2 Panel Packing
3.1.3 Tagging-and-Packing Process
3.2 Classification of Seam-Line
3.3 Seam Creation
3.3.1 Creating the Intra-pack Seams
3.3.2 Creating the Inter-pack Seams
3.3.3 Creating the Inter-layer Seams
3.3.4 Seam-creation Process
3.4 Experiments
3.5 Conclusion
Chapter 4 Synced Garment Editing
4.1 Introduction to Synced Garment Editing
4.2 Geometric Approaches vs. Sensitivity Analysis
4.3 Trouble-Free Synced Garment Editing
Chapter 5 Physically Based Non-Elastic Clothing Simulation
5.1 Classification of Deformation
5.2 Modeling Non-Elastic Deformations
5.2.1 Development of the Non-Elastic Model
5.2.2 Parameter Value Determination
5.3 Implementation
5.4 Experiments
Chapter 6 Tangle Avoidance with Pre-Folding
6.1 Problem of the First-Frame Tangle
6.2 Tangle Avoidance with Pre-Folding
Chapter 7 Conclusion
Appendix A Simplification in the Decomposition of Stretch Deformation
Bibliography
Abstract (in Korean)
Realtime Face Tracking and Animation
Capturing and processing human geometry, appearance, and motion is at the core of computer graphics, computer vision, and human-computer interaction. The high complexity of human geometry and motion dynamics, and the high sensitivity of the human visual system to variations and subtleties in faces and bodies, make the 3D acquisition and reconstruction of humans in motion a challenging task. Digital humans are often created through a combination of 3D scanning, appearance acquisition, and motion capture, leading to stunning results in recent feature films. However, these methods typically require complex acquisition systems and substantial manual post-processing. As a result, creating and animating high-quality digital avatars entails long turnaround times and substantial production costs. Recent technological advances in RGB-D devices, such as the Microsoft Kinect, have brought new hope for realtime, portable, and affordable systems that can capture facial expressions as well as hand and body motions. RGB-D devices typically capture an image and a depth map, which permits formulating the motion tracking problem as a 2D/3D non-rigid registration of a deformable model to the input data. We introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization, leading to unprecedented face tracking quality on a low-cost consumer-level device. The main drawback of this approach in the context of consumer applications is the need for offline user-specific training: robust and efficient tracking is achieved by building an accurate 3D expression model from scans of the user in a predefined set of facial expressions. We extended this approach to remove the need for user-specific training, calibration, or any other form of manual assistance by building a user-specific 3D dynamic face model online.
To complement the realtime face tracking and modeling algorithm, we developed a novel system for animation retargeting that learns a high-quality mapping between motion capture data and arbitrary target characters. We addressed one of the main challenges of existing example-based retargeting methods: the need for a large number of accurate training examples to define the correspondence between source and target expression spaces. We showed that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e., facial expressions in the source or target space without corresponding poses. Finally, we present a novel realtime physics-based animation technique that can simulate a wide range of deformable materials such as fat, flesh, hair, or muscles. This approach could be used to produce more lifelike animations by enhancing animated avatars with secondary effects. We believe that the realtime face tracking and animation pipeline presented in this thesis has the potential to inspire future research in computer-generated animation. Several ideas presented in this thesis have already been used successfully in industry, and this work gave birth to the startup company faceshift AG.
Physics-based Reconstruction and Animation of Humans
Creating digital representations of humans is of utmost importance for applications ranging from entertainment (video games, movies) to human-computer interaction and even psychiatric treatment. What makes building credible digital doubles difficult is that the human visual system is highly sensitive to complex expressivity and to potential anomalies in body structure and motion. This thesis presents several projects that tackle these problems from two different perspectives: lightweight acquisition and physics-based simulation. It starts by describing a complete pipeline that allows users to reconstruct fully rigged 3D facial avatars from video data captured with a handheld device (e.g., a smartphone). The avatars use a novel two-scale representation composed of blendshapes and dynamic detail maps, and are constructed through an optimization that integrates feature tracking, optical flow, and shape from shading. Continuing along the lines of accessible acquisition systems, we discuss a framework for simultaneous tracking and modeling of articulated human bodies from RGB-D data, and show how semantic information can be extracted from the scanned body shapes. In the second half of the thesis, we deviate from standard linear reconstruction and animation models and instead focus on physics-based techniques that can incorporate complex phenomena such as dynamics, collision response, and material incompressibility. The first approach we propose assumes that each 3D scan of an actor records the body in a physical steady state, and uses a process called inverse physics to extract a volumetric, physics-ready anatomical model of that actor. By using biologically inspired growth models for the bones, muscles, and fat, our method obtains realistic anatomical reconstructions that can later be animated using external tracking data, such as motion-capture marker trajectories.
This is then extended to a novel physics-based approach to facial reconstruction and animation. We propose a facial animation model that simulates biomechanical muscle contractions in a volumetric head model to create the facial expressions seen in the input scans. We then show how this approach opens new avenues for dynamic artistic control, simulation of corrective facial surgery, and interaction with external forces and objects.
Modelling and simulation of flexible instruments for minimally invasive surgical training in virtual reality
Improvements in quality and safety standards in surgical training, reductions in training hours, and constant technological advances have challenged the traditional apprenticeship model's ability to produce competent surgeons in a patient-safe way. As a result, pressure on training outside the operating room has increased. Interactive, computer-based Virtual Reality (VR) simulators offer a safe, cost-effective, controllable, and configurable training environment free from ethical and patient safety issues.
Two prototype, yet fully functional, VR simulator systems for minimally invasive procedures relying on flexible instruments were developed and validated. NOViSE is the first force-feedback-enabled VR simulator for Natural Orifice Transluminal Endoscopic Surgery (NOTES) training supporting a flexible endoscope. VCSim3 is a VR simulator for cardiovascular interventions using catheters and guidewires. The underlying mathematical model of flexible instruments in both simulator prototypes is based on an established theoretical framework, the Cosserat theory of elastic rods. The efficient implementation of the Cosserat rod model allows for accurate, real-time simulation of instruments at haptic-interactive rates on an off-the-shelf computer. The behaviour of the virtual tools and their computational performance were evaluated using quantitative and qualitative measures. The instruments exhibited near sub-millimetre accuracy compared to their real counterparts. The proposed GPU implementation further accelerated simulation performance by approximately an order of magnitude.
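The simulators implement a full Cosserat rod model; as a far simpler stand-in, the sketch below shows the general structure of a real-time rod discretization: a chain of points kept at fixed segment lengths by iterative position-based projection. It illustrates only the discretize-and-constrain structure, not the bending and twisting terms of Cosserat theory:

```python
# Much simplified stand-in for a real-time flexible-instrument model:
# a 2D point chain with segment lengths enforced by iterative projection
# (position-based dynamics). NOT the Cosserat rod model the simulators use.

def project_segment_lengths(points, rest_len, iters=10, fixed_head=True):
    """Iteratively enforce |p[i+1] - p[i]| == rest_len on a 2D point chain."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        for i in range(len(pts) - 1):
            dx = pts[i + 1][0] - pts[i][0]
            dy = pts[i + 1][1] - pts[i][1]
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9  # avoid division by zero
            corr = 0.5 * (d - rest_len) / d
            if fixed_head and i == 0:
                # Head is clamped (e.g. at the insertion point): move only p[1].
                pts[1][0] -= 2 * corr * dx; pts[1][1] -= 2 * corr * dy
            else:
                pts[i][0] += corr * dx;     pts[i][1] += corr * dy
                pts[i + 1][0] -= corr * dx; pts[i + 1][1] -= corr * dy
    return pts

# A 3-point "catheter" stretched too far relaxes back to unit segments.
rod = project_segment_lengths([[0, 0], [1.5, 0], [3.0, 0]], rest_len=1.0)
```

Real rod models add orientation frames per segment so that bending and torsion carry elastic energy; the constraint-projection loop above is only the skeleton such solvers share.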
The realism of the simulators was assessed by face, content, and, in the case of NOViSE, construct validity studies. The results indicate good overall face and content validity of both simulators and of the virtual instruments. NOViSE also demonstrated early signs of construct validity. VR simulation of flexible instruments in NOViSE and VCSim3 can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues, or requiring expensive animal or cadaver facilities. Moreover, in the context of an innovative and experimental technique such as NOTES, NOViSE could facilitate its development and contribute to its popularization by keeping practitioners up to date with this new minimally invasive technique.
Harnessing Simulated Data with Graphs
Physically accurate simulations allow for unlimited exploration of arbitrarily crafted environments. From a scientific perspective, digital representations of the real world are useful because they make it easy to validate ideas. Virtual sandboxes allow observations to be collected at will, without intricate measurement setups or waiting on the manufacturing, shipping, and assembly of physical resources. Simulation techniques can also be reused indefinitely to test a problem without expending costly materials or producing any waste.
Remarkably, this freedom to both experiment and generate data becomes even more powerful when considering the rising adoption of data-driven techniques across engineering disciplines. These are systems that aggregate over available samples to model behavior, and thus are better informed when exposed to more data. Naturally, the ability to synthesize limitless data promises to make approaches that benefit from datasets all the more robust and desirable.
However, the ability to readily and endlessly produce synthetic examples also introduces several new challenges. Data must be collected in an adaptive format that can capture the complete diversity of states achievable in arbitrary simulated configurations while also remaining amenable to downstream applications. The quantity and variety of observations must also straddle a range that prevents overfitting yet is descriptive enough to produce a robust approach. Pipelines that naively measure virtual scenarios can easily be overwhelmed by trying to sample an infinite set of available configurations. Variations observed across multiple dimensions can quickly lead to a daunting expansion of states, all of which must be processed and solved. These and several other concerns must first be addressed in order to safely leverage the potential of boundless simulated data.
In response to these challenges, this thesis proposes to use graphs to impose structure on digitally captured data and curb the growth of variables. The paradigm of pairing data with graphs introduced in this dissertation serves to enforce consistency, localize operators, and, crucially, factor out any combinatorial explosion of states. Results demonstrate the effectiveness of this methodology in three distinct areas, each offering unique challenges and practical constraints, and together showcasing the generality of the approach. Namely, state-of-the-art contributions in design for additive manufacturing, side-channel security threats, and large-scale physics-based contact simulation are collectively achieved by harnessing simulated datasets with graph algorithms.
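One concrete graph operation of the kind this pairing enables is neighborhood aggregation (message passing), where each node updates its value from its neighbors, keeping every operator local to the graph structure. The graph and feature values below are toy examples, not the thesis's datasets:

```python
# Minimal sketch of neighborhood aggregation (one message-passing round)
# on a graph pairing simulated observations with structure.
# Graph topology and features are toy values, not the thesis's data.

def aggregate_step(adj, feats):
    """Update each node to the mean of its own and its neighbors' features."""
    out = {}
    for node, nbrs in adj.items():
        vals = [feats[node]] + [feats[n] for n in nbrs]
        out[node] = sum(vals) / len(vals)
    return out

adj = {0: [1], 1: [0, 2], 2: [1]}    # a 3-node path graph
feats = {0: 0.0, 1: 3.0, 2: 6.0}     # per-node observations
smoothed = aggregate_step(adj, feats)
# smoothed == {0: 1.5, 1: 3.0, 2: 4.5}
```

Because each update touches only a node's neighborhood, the cost scales with edges rather than with the combinatorial number of global configurations, which is the locality property the thesis exploits.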