7 research outputs found

    From geometrical modelling to simulation of touch of textile products - open modelling issues

    The touch of textile products is a complex process that depends on the interaction between the human finger and the textile product. Evaluating the touch, or the so-called handle properties, is likewise complex, requiring samples, human testers, or special testing devices. A numerical evaluation of the surface has not been reported to date because of the complexity of textile products. This work presents the current state of 3D modelling of textile products at the yarn and fiber level and the additional steps required to make these models applicable to the numerical simulation of fabric touch. It covers only the aspects related to the textile representation and does not include modelling of the human finger as a mechanical and receptor system during the interaction.

    A Study on Simple and Trouble-Free Methods for the Construction, Editing, and Simulation of Virtual Garments

    Doctoral dissertation, Seoul National University Graduate School, Interdisciplinary Program in Computational Science, College of Natural Sciences, February 2016. Advisor: Hyeong-Seok Ko. This dissertation presents new methods for the construction, editing, and simulation of virtual garments. First, we describe a construction method called TAGCON, which constructs three-dimensional (3D) virtual garments from the given tagged and packed panels. Tagging and packing are performed by the user and involve simple labeling and two-dimensional (2D) manipulation of the panels; they do not involve any 3D manipulation. TAGCON then constructs the garment automatically by using algorithms that (1) position the panels at suitable locations around the body and (2) find the matching seam lines and create the seams. We perform experiments using TAGCON to construct various types of garments; the proposed method significantly reduces construction time and cumbersomeness. Second, we propose a method to edit virtual garments with synced 2D and 3D modification. The presented methods of linear interpolation, extrapolation, and penetration detection help users edit the virtual garment interactively without losing 2D-3D synchronization. We then propose a method to model the non-elastic components of fabric stretch deformation in the context of developing a physically based fabric simulator. We find that the problem becomes tractable if we decompose the stretch deformation into immediate elastic, viscoelastic, and plastic components. For simulator development, the decomposition must be possible at any stage of deformation and at any occurrence of loading and unloading. Based on observations from various constant-force creep measurements, we assume that, within a particular fabric, the viscoelastic and plastic components are proportional to each other and their ratio is invariant over time (a minimal illustrative sketch follows this entry). Experimental results produced with the proposed method match general expectations and show that the method can represent non-elastic stretch deformation under an arbitrary time-varying force. In addition, we present a method to represent stylistic elements of garments such as pleats and lapels. Experimental results show that the proposed method is effective at resolving problems that are not easily resolved using physically based cloth simulators.

    Table of contents: Chapter 1 Introduction (1.1 Digital Clothing, 1.2 Garment Modeling, 1.3 Physical Cloth Simulation, 1.4 Dissertation Overview); Chapter 2 Previous Work (2.1 Garment Modeling, 2.2 Physical Cloth Simulation); Chapter 3 Automatic Garment Construction from Pattern Analysis (3.1 Panel Classification: Panel Tagging, Panel Packing, Tagging-and-Packing Process; 3.2 Classification of Seam-Lines; 3.3 Seam Creation: Intra-pack, Inter-pack, and Inter-layer Seams, Seam-Creation Process; 3.4 Experiments; 3.5 Conclusion); Chapter 4 Synced Garment Editing (4.1 Introduction to Synced Garment Editing, 4.2 Geometric Approaches vs. Sensitivity Analysis, 4.3 Trouble-Free Synced Garment Editing); Chapter 5 Physically Based Non-Elastic Clothing Simulation (5.1 Classification of Deformation; 5.2 Modeling Non-Elastic Deformations: Development of the Non-Elastic Model, Parameter Value Determination; 5.3 Implementation; 5.4 Experiments); Chapter 6 Tangle Avoidance with Pre-Folding (6.1 Problem of the First-Frame Tangle, 6.2 Tangle Avoidance with Pre-Folding); Chapter 7 Conclusion; Appendix A Simplification in the Decomposition of Stretch Deformation; Bibliography; Abstract in Korean.
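    The decomposition described in the abstract above can be illustrated compactly. The following is a minimal sketch under stated assumptions, not the dissertation's actual model or code: the parameter names and values (k_elastic, tau, creep_rate, ratio_v_to_p) are invented, and the update only shows how a stretch strain could be split into immediate elastic, viscoelastic, and plastic parts while keeping the viscoelastic-to-plastic ratio fixed, as the abstract assumes.

```python
def step(strain_e, strain_v, strain_p, force, dt,
         k_elastic=50.0,     # immediate elastic stiffness (assumed value)
         tau=2.0,            # viscoelastic recovery time constant (assumed)
         creep_rate=0.01,    # combined non-elastic creep coefficient (assumed)
         ratio_v_to_p=3.0):  # invariant viscoelastic : plastic ratio (abstract's assumption)
    """Advance the decomposed stretch strain by one time step under a given force."""
    strain_e = force / k_elastic                    # immediate elastic response
    growth = creep_rate * force * dt                # non-elastic growth this step
    strain_v += growth * ratio_v_to_p / (1.0 + ratio_v_to_p)
    strain_p += growth / (1.0 + ratio_v_to_p)
    if force == 0.0:                                # unloading: viscoelastic part recovers,
        strain_v *= max(0.0, 1.0 - dt / tau)        # plastic part remains
    return strain_e, strain_v, strain_p
```

    Keeping the ratio fixed means only one extra state variable needs to be tracked, which is what makes the decomposition tractable at any stage of loading or unloading.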

    Virtual Garment Resizing and Capturing Based on the Parametrized Draft

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016. Advisor: Hyeong-Seok Ko. This dissertation presents novel frameworks for virtual garment resizing and capturing. In the clothing industry, ready-to-wear apparel is designed for a standard body and then resized to fit a specific body; this resizing job is called grading. Grading requires specialized tailoring techniques and is extremely time-consuming. We suggest a fast and simple grading technique for virtual clothing. Generating a virtual garment that corresponds to a real garment requires pattern-design and modeling knowledge, so we also propose a method that converts a real garment into a virtual one. Virtual-clothing grading and modeling methods are needed in animation and game production, since costume design is an important component of the process. To perform grading, we introduce a retargeting technique of the kind widely used in computer graphics. Retargeting requires a mediator and a correspondence function. For the mediator, we take our insight from the process of drawing the pattern-making draft: noting that the draft can be completely determined by supplying the primary body sizes and the garment type, we implemented a computer module that performs the draft-construction process, called the parameterized draft module. A barycentric coordinate system is a reasonable way to establish correspondence between garment drafts and panels in 2D; among the candidates, Mean Value Coordinates (MVC) are an excellent choice (a small illustrative sketch of MVC follows this entry). We call this grading method Draft-Space Warping. The proposed grading method can be performed instantly for any given body without user intervention. Our approach minimizes the designer's need for specialized know-how and saves time when grading real and virtual clothes. We also suggest compensation techniques to improve the quality of grading. Experimental results show that the new grading framework improves garment grading. We further investigated a method that creates a virtual garment from a single photograph of a real garment put on a mannequin. As in our resizing method, we use pattern-drafting theory to solve this problem and reuse the parameterized draft module introduced for draft-space warping. The capturing problem is then reduced to finding the garment type and the primary body sizes, which we determine by analyzing the silhouette of the garment with respect to the mannequin. The method works robustly and produces practically usable virtual clothes suitable for graphical coordination. Both methods are based on the pattern-making draft; since they perform resizing and modeling in 2D, computation time is reduced while still yielding plausible results.

    Table of contents: Chapter 1 Introduction (1.1 Virtual Clothing Techniques; 1.2 Motivation: Garment Resizing, Garment Creation from a Photograph; 1.3 Contribution; 1.4 Terminology); Chapter 2 Previous Work (2.1 Garment Resizing: Algorithms for Garment Resizing, Methods for Draft-Space Encoding; 2.2 Garment Modeling: Garment Creation, Clothes Classification); Chapter 3 Background (3.1 Introduction to Pattern Drafting, 3.2 Judging Quality in the Draft-Based Method); Chapter 4 Garment Resizing (4.1 Problem Description; 4.2 Overview; 4.3 Draft-Space Encoding and Decoding: Triangular Barycentric Coordinates, Coordinate Systems for Polygons, Comparison; 4.4 Linear Grading Using the Base Draft; 4.5 Dart Compensation; 4.6 Results: Generation of Target Drafts, Generation of Panels, Primary Body Sizes Analysis, Silhouette Analysis, Strain Analysis, Air-Gap Analysis, Redesign Using DSW; 4.7 Discussion; 4.8 Conclusion); Chapter 5 Garment Capture from a Photograph (5.1 Overview; 5.2 Garment Capture: Off-line Photographing Setup, Obtaining the Garment Silhouette, Identifying the Garment Type, Identifying the PBSs, Texture Extraction, Generating the Draft and Panels; 5.3 Results; 5.4 Discussion; 5.5 Conclusion); Chapter 6 Conclusion; Appendix A Implementing Local Coordinate Systems; Bibliography; Abstract in Korean.
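    As context for the draft-space correspondence mentioned above, the following is a small generic sketch of Mean Value Coordinates in 2D; it is not code from the dissertation, and the function names and the decode step are illustrative assumptions.

```python
import math

def _angle(px, py, ax, ay, bx, by):
    """Angle at (px, py) between the directions to (ax, ay) and (bx, by)."""
    v1x, v1y = ax - px, ay - py
    v2x, v2y = bx - px, by - py
    dot = v1x * v2x + v1y * v2y
    cross = v1x * v2y - v1y * v2x
    return math.atan2(abs(cross), dot)

def mean_value_coordinates(point, polygon):
    """Mean Value Coordinates of a 2D point with respect to a closed polygon.
    Assumes the point lies strictly inside the polygon."""
    px, py = point
    n = len(polygon)
    weights = []
    for i in range(n):
        x_prev, y_prev = polygon[i - 1]
        x_curr, y_curr = polygon[i]
        x_next, y_next = polygon[(i + 1) % n]
        r = math.hypot(x_curr - px, y_curr - py)
        a_prev = _angle(px, py, x_prev, y_prev, x_curr, y_curr)
        a_next = _angle(px, py, x_curr, y_curr, x_next, y_next)
        weights.append((math.tan(a_prev / 2.0) + math.tan(a_next / 2.0)) / r)
    total = sum(weights)
    return [w / total for w in weights]

def decode(weights, target_polygon):
    """Rebuild the point in a target draft by applying the stored weights
    to the target draft's corresponding vertices (illustrative)."""
    x = sum(w * vx for w, (vx, vy) in zip(weights, target_polygon))
    y = sum(w * vy for w, (vx, vy) in zip(weights, target_polygon))
    return (x, y)
```

    In a draft-space-warping setting, a panel point encoded against the vertices of the base draft could then be decoded against the corresponding vertices of a resized target draft.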

    Visual Prototyping of Cloth

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture appearance models of cloth, especially for computer-aided cloth design. Previous methods can produce highly realistic images; however, the possibilities for cloth editing are either restricted or require the measurement of large material databases to capture all variations of cloth samples. We propose a pipeline for designing the appearance of cloth directly based on those elements that can be changed within the production process: the optical properties of fibers, the geometric properties of yarns, and compositional elements such as weave patterns. We introduce a geometric yarn model that integrates state-of-the-art textile research. We further present an approach to reverse-engineer cloth and estimate parameters of a procedural cloth model from single images, including automatic estimation of yarn paths, yarn widths and their variation, and the weave pattern. We demonstrate that we can match the appearance of original cloth samples in an input photograph for several examples. The parameters of our model are fully editable, enabling intuitive appearance design. Unfortunately, such explicit fiber-based models can only be used to render small cloth samples because of their large storage requirements. Recently, bidirectional texture functions (BTFs) have become popular for efficient photo-realistic rendering of materials. We present a rendering approach that combines the strength of a procedural model of micro-geometry with the efficiency of BTFs, and we propose a method for computing synthetic BTFs using Monte Carlo path tracing of the micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions (ABRDFs). By exploiting this structural self-similarity, we can reduce rendering times by an order of magnitude, in a process we call non-local image reconstruction, inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may take only a few minutes for small BTFs. Finally, we propose a novel and general approach to physically accurate rendering of large cloth samples. By using a statistical volumetric model that approximates the distribution of yarn fibers, a prohibitively costly explicit geometric representation is avoided. As a result, accurate rendering of even large pieces of fabric becomes practical without sacrificing much generality compared to fiber-based techniques.
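    To make the non-local reconstruction idea above concrete, here is a minimal generic sketch in the spirit of non-local means applied to ABRDFs; it is not the thesis's algorithm, and the array layout, the bandwidth parameter h, and the brute-force all-pairs weighting are illustrative assumptions.

```python
import numpy as np

def nonlocal_reconstruct(abrdfs, h=0.1):
    """Reconstruct noisy, sparsely sampled ABRDFs by averaging similar ones,
    in the spirit of non-local means. 'abrdfs' is an (N, D) array:
    N texels, D view/light samples per texel."""
    diffs = abrdfs[:, None, :] - abrdfs[None, :, :]   # pairwise differences
    dists = np.linalg.norm(diffs, axis=2)             # (N, N) distance matrix
    weights = np.exp(-(dists ** 2) / (h ** 2))        # similarity weights
    weights /= weights.sum(axis=1, keepdims=True)     # normalize per texel
    return weights @ abrdfs                           # weighted average
```

    Averaging each texel's ABRDF with those of structurally similar texels is what allows a path-traced BTF to be computed from far fewer samples per texel.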

    Photo-Realistic Rendering of Fiber Assemblies

    In this thesis we introduce a novel uniform formalism for light scattering from filaments, the Bidirectional Fiber Scattering Distribution Function (BFSDF). Similar to the role of the Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) for surfaces, the BFSDF can be seen as a general approach for describing light scattering from filaments. Based on this theoretical foundation, approximations at various levels of abstraction are derived, allowing efficient and accurate rendering of fiber assemblies such as hair or fur. In this context, novel rendering techniques accounting for all prominent effects of local and global illumination are presented. Moreover, physically based analytical BFSDF models for human hair and other kinds of fibers are derived. Finally, using the model for human hair, we make a first step towards image-based BFSDF reconstruction, where the optical properties of a single strand are estimated from "synthetic photographs" (renderings) of a full hairstyle.
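    The BSSRDF analogy mentioned above can be made explicit with a short equation sketch. The notation below is assumed for illustration rather than taken from the thesis: $S$ is the BSSRDF over a surface patch $A$, while $f_{\mathrm{BFSDF}}$ relates radiance entering and leaving an enclosing cylinder $C$ around the fiber, parameterized by the position $s$ along the fiber and the offset $h$ across it.

```latex
% Surface case (BSSRDF): exitant radiance as an integral over incoming
% positions and directions on the surface patch A.
L_o(x_o,\omega_o) = \int_{A}\int_{\Omega}
    S(x_i,\omega_i;\,x_o,\omega_o)\,
    L_i(x_i,\omega_i)\,(\mathbf{n}\cdot\omega_i)\,
    \mathrm{d}\omega_i\,\mathrm{d}A(x_i)

% Fiber case (BFSDF, illustrative sketch): the same structure, with the
% integration domain replaced by the fiber's enclosing cylinder C.
L_o(s_o,h_o,\omega_o) = \int_{C}\int_{\Omega}
    f_{\mathrm{BFSDF}}(s_i,h_i,\omega_i;\,s_o,h_o,\omega_o)\,
    L_i(s_i,h_i,\omega_i)\,\cos\theta_i\,
    \mathrm{d}\omega_i\,\mathrm{d}s_i\,\mathrm{d}h_i
```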

    Realistic Visualization of Animated Virtual Cloth

    Photo-realistic rendering of real-world objects is a broad research area with applications in various fields, such as computer-generated films, entertainment, and e-commerce. Within photo-realistic rendering, the rendering of cloth is a subarea that involves many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation-sequence generation and compression. In this thesis, besides an introduction to the topic and a broad overview of related work, different methods to handle the major aspects of cloth rendering are described. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in making a material identifiable just by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of reflection properties. It captures effects caused by the mesostructure of a surface, such as roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering, and color bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which far exceeds the main-memory capacity of current personal computers. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow several different data sets to be stored in the main memory of consumer hardware while maintaining high rendering quality. Correct handling of complex illumination conditions plays another key role in the realistic appearance of cloth. Therefore, an extension of the BTF compression and rendering algorithm is described, which supports distant direct HDR (high-dynamic-range) illumination stored in environment maps. To further enhance the appearance, macroscopic self-shadowing has to be taken into account; for the visualization of folds and a life-like 3D impression, this kind of shadow is absolutely necessary. This work describes two methods to compute these shadows. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. The second method handles dynamic objects, using hardware-accelerated occlusion queries for visibility determination. Despite its simplicity, the presented algorithm is fast and produces fewer artifacts than other methods, and it also incorporates changeable distant direct high-dynamic-range illumination. The human perception system is the ultimate target of any computer graphics application and can also be treated as part of the rendering pipeline; rendering can therefore be optimized by analyzing how humans perceive certain visual aspects of the image. As part of this thesis, an experiment is introduced that evaluates human shadow perception in order to speed up shadow rendering and provide optimization approaches. Another subarea of cloth visualization in computer graphics is the animation of cloth and avatars for presentations. This work also describes two new methods for the automatic generation and compression of animation sequences.
The first method, which generates completely new, customizable animation sequences, is based on the concept of finding similarities among the animation frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence, which can be used to generate endless new sequences. Transmitting animated 3D data over bandwidth-limited channels, such as extended networks or to less powerful clients, requires efficient compression schemes. The second method in the animation field is a geometry-data compression scheme. Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates together with very exact reconstruction quality.
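    As a generic illustration of the PCA-based compression described in this entry (used for the BTF images and, combined with clustering, for the animated geometry), the sketch below treats the data as a matrix with one row per sample; it is not the thesis's actual pipeline, and the component count and array layout are assumptions.

```python
import numpy as np

def pca_compress(data, n_components=16):
    """Compress a data matrix (e.g. a BTF with one row per view/light image,
    or vertex trajectories with one row per frame) by projecting onto its
    leading principal components."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD yields the principal components without forming a covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # (k, columns)
    coeffs = centered @ basis.T          # (rows, k)
    return mean, basis, coeffs

def pca_decompress(mean, basis, coeffs):
    """Reconstruct an approximation of the original data matrix."""
    return coeffs @ basis + mean
```

    Storing only the mean, a few basis vectors, and per-row coefficients is what allows several BTF data sets to fit in main memory, while reconstructing a single entry reduces to one small dot product.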

    MaxControl: an object-oriented tool for the automatic creation of 3D computer-animation films and its integration into a professional 3D animation software package

    Thanks to growing computing power, 3D computer animations can display very complex scenes with equally complex temporal behaviour. However, besides the construction of a complex scene, its animation is correspondingly laborious if no automation can be used. MaxControl, the tool developed in this work, is a system for the automatic animation of non-interactive 3D animation films. This is achieved by simulating behaviours that are assigned to the 3D objects.
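    As a rough illustration of the behaviour-assignment idea described above (this is not MaxControl's actual API; all class and parameter names are invented for the example), a behaviour can be modelled as an object that updates the scene object it is attached to once per simulated frame, with the simulator recording the resulting keyframes.

```python
import math

class Behavior:
    """A behaviour updates the object it is assigned to once per simulated frame."""
    def update(self, obj, t, dt):
        raise NotImplementedError

class Oscillate(Behavior):
    """Illustrative behaviour: moves the object up and down over time."""
    def __init__(self, amplitude=1.0, frequency=0.5):
        self.amplitude, self.frequency = amplitude, frequency
    def update(self, obj, t, dt):
        obj.position[1] = self.amplitude * math.sin(2 * math.pi * self.frequency * t)

class SceneObject:
    def __init__(self, name):
        self.name = name
        self.position = [0.0, 0.0, 0.0]
        self.behaviors = []

def simulate(objects, frames, dt=1 / 30):
    """Run all assigned behaviours and record one keyframe per object per frame."""
    keyframes = []
    for frame in range(frames):
        t = frame * dt
        for obj in objects:
            for b in obj.behaviors:
                b.update(obj, t, dt)
        keyframes.append({obj.name: list(obj.position) for obj in objects})
    return keyframes

# Usage: a ball that bobs up and down over two seconds of animation.
ball = SceneObject("ball")
ball.behaviors.append(Oscillate(amplitude=0.5, frequency=1.0))
animation = simulate([ball], frames=60)
```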