
    A hybrid hair model using three dimensional fuzzy textures

    Human hair modeling and rendering have always been challenging topics in computer graphics. Techniques for human hair modeling include explicit geometric models as well as volume density models; recently, hybrid cluster models have also been successful. In this study, we present a novel three-dimensional texture model called 3D Fuzzy Textures and algorithms to generate them. We then use the developed model along with a cluster model to give human hair complex hairstyles, such as curly and wavy styles, with little user effort. With this study, we aim to eliminate the drawbacks of the volume density model and the cluster hair model using 3D fuzzy textures. A three-dimensional cylindrical texture mapping function is introduced for mapping purposes. Current-generation graphics hardware is utilized in the design of the rendering system, enabling high-performance rendering. (Aran, Medeni Erol, M.S. thesis)
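    The abstract does not spell out the cylindrical mapping itself. As a rough, hypothetical sketch of what a three-dimensional cylindrical texture mapping for a hair cluster can look like, the following maps a point near a cluster axis to (u, v, w) texture coordinates; all names and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def cylindrical_uvw(p, axis_origin, axis_dir, radius, length):
    """Map a 3D point near a hair cluster to cylindrical texture
    coordinates (u, v, w) in [0, 1]^3.
    u: angle around the cluster axis, v: height along the axis,
    w: normalized radial distance from the axis."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = np.asarray(p, dtype=float) - axis_origin
    h = np.dot(rel, d)                    # height along the axis
    radial = rel - h * d                  # component perpendicular to the axis
    # Build a stable frame (e1, e2) perpendicular to the axis.
    e1 = np.cross(d, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-6:         # axis nearly parallel to z
        e1 = np.cross(d, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    theta = np.arctan2(np.dot(radial, e2), np.dot(radial, e1))
    u = (theta + np.pi) / (2.0 * np.pi)
    v = np.clip(h / length, 0.0, 1.0)
    w = np.clip(np.linalg.norm(radial) / radius, 0.0, 1.0)
    return u, v, w
```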

    Image-Based Approaches to Hair Modeling

    Hair is a relevant characteristic of virtual characters; therefore, modeling plausible facial hair and hairstyles is an essential step in the generation of computer-generated (CG) avatars. However, the inherent geometric complexity of hair, together with the huge number of filaments on an average human head, makes modeling hairstyles a very challenging task. To date, this is commonly a manual process that requires artistic skill or very specialized and costly acquisition software. In this work we present an image-based approach to model facial hair (beard and eyebrows) and (head) hairstyles. Since facial hair is usually much shorter than the average head hair, two different methods are presented, adapted to the characteristics of the hair to be modeled. Facial hair is modeled using data extracted from facial texture images, and missing information is inferred by means of a database-driven prior model. Our hairstyle reconstruction technique employs images of the hair to be modeled taken with a thermal camera. The major advantage of our thermal image-based method over conventional image-based techniques lies in the fact that during data capture the hairstyle is "lit from the inside": the thermal camera captures heat irradiated by the head and actively re-emitted by the hair filaments almost isotropically. Following this approach we can avoid several issues of conventional image-based techniques, like shadowing or anisotropy in reflectance. The presented technique requires minimal user interaction and a simple acquisition setup. Several challenging examples demonstrate the potential of the proposed approach.

    Chain Shape Matching for Simulating Complex Hairstyles

    Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or as a continuum for efficiency; however, the visual quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating the dynamics of every strand of hair, it required a complicated setup of springs and suffered from high computational cost. In this paper, we base the animation of hair at such a fine scale on Lattice Shape Matching (LSM), which has been successfully used for simulating deformable objects. Our method regards each strand of hair as a chain of particles and computes geometrically derived forces for the chain based on shape matching. Each chain of particles is simulated as an individual strand of hair. Our method can easily handle complex hairstyles, such as curly or afro styles, in a numerically stable way. While our method is not physically based, our GPU-based simulator achieves visually plausible animations consisting of several tens of thousands of hair strands at interactive rates.
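    The abstract does not include the update equations. As a minimal sketch of the idea, assuming rigid shape matching over overlapping windows along each strand (in the spirit of Lattice Shape Matching), the following computes goal positions per window and blends particles toward them; the window size, stiffness, and explicit integration step are illustrative choices, not the paper's.

```python
import numpy as np

def shape_match_goals(x, x0):
    """Goal positions for one window of a particle chain via rigid
    shape matching: find the rotation best aligning rest to current."""
    c, c0 = x.mean(axis=0), x0.mean(axis=0)
    A = (x - c).T @ (x0 - c0)            # covariance of current vs rest shape
    U, _, Vt = np.linalg.svd(A)
    if np.linalg.det(U @ Vt) < 0:        # avoid reflections
        U[:, -1] *= -1
    R = U @ Vt                           # rotation part (polar decomposition)
    return (R @ (x0 - c0).T).T + c       # rest shape rotated into place

def step_chain(x, v, x0, dt=0.01, stiffness=0.8, half=2):
    """One explicit step for a strand: average the goal positions of
    overlapping windows along the chain and pull particles toward them."""
    n = len(x)
    goals, counts = np.zeros_like(x), np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        goals[lo:hi] += shape_match_goals(x[lo:hi], x0[lo:hi])
        counts[lo:hi] += 1
    goals /= counts[:, None]
    v = v + dt * np.array([0.0, -9.8, 0.0]) + stiffness * (goals - x) / dt
    x = x + dt * v
    x[0] = x0[0]                         # pin the root to the scalp
    return x, v
```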

    DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars

    We present DINAR, an approach for creating realistic rigged full-body avatars from single RGB images. Similar to previous work, our method uses neural textures combined with the SMPL-X body model to achieve photo-realistic avatars while keeping them easy to animate and fast to infer. To restore the texture, we use a latent diffusion model and show how such a model can be trained in the neural texture space. The use of the diffusion model allows us to realistically reconstruct large unseen regions, such as the back of a person, given the frontal view. The models in our pipeline are trained using 2D images and videos only. In the experiments, our approach achieves state-of-the-art rendering quality and good generalization to new poses and viewpoints. In particular, the approach improves on the state of the art on the SnapshotPeople public benchmark.
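    DINAR's exact conditioning scheme is not given in the abstract. One common way to inpaint with a trained diffusion model is to clamp the observed region to an appropriately noised copy of the known data at every reverse step (the pattern popularized by RePaint); the sketch below illustrates that pattern in texture space. The `denoise` callable and the noise schedule are assumed stand-ins for a trained model, not DINAR's actual components.

```python
import numpy as np

def inpaint_texture(known, mask, denoise, alphas_cum, T=1000):
    """Masked diffusion inpainting sketch.
    known: neural texture with observed texels filled in, shape (H, W, C)
    mask:  1 where texels were observed in the input image, 0 where unseen
    denoise(x, t): one reverse-diffusion step x_t -> x_{t-1} (trained model)
    alphas_cum[t]: cumulative noise schedule value in (0, 1]."""
    x = np.random.randn(*known.shape)               # start from pure noise
    for t in range(T - 1, -1, -1):
        a = alphas_cum[t]
        # Noise the observed texels to the current step's noise level...
        known_t = np.sqrt(a) * known + np.sqrt(1 - a) * np.random.randn(*known.shape)
        # ...and keep them fixed while the model fills the unseen region.
        x = mask * known_t + (1 - mask) * x
        x = denoise(x, t)
    return mask * known + (1 - mask) * x
```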

    Computational Aesthetics for Fashion

    The online fashion industry is growing fast, and with it the need for advanced systems able to solve different tasks automatically and accurately. With the rapid advance of digital technologies, Deep Learning has played an important role in Computational Aesthetics, an interdisciplinary area that tries to bridge fine art, design, and computer science. Specifically, Computational Aesthetics aims to automate human aesthetic judgments with computational methods. In this thesis, we focus on three applications of computer vision in fashion and discuss how Computational Aesthetics helps solve them accurately.

    Visual Prototyping of Cloth

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture appearance models of cloth, especially for computer-aided design of cloth. Previous methods can be used to produce highly realistic images; however, possibilities for cloth editing are either restricted or require the measurement of large material databases to capture all variations of cloth samples. We propose a pipeline for designing the appearance of cloth directly based on those elements that can be changed within the production process: optical properties of fibers, geometrical properties of yarns, and compositional elements such as weave patterns. We introduce a geometric yarn model, integrating state-of-the-art textile research. We further present an approach to reverse-engineer cloth and estimate parameters for a procedural cloth model from single images. This includes the automatic estimation of yarn paths, yarn widths, their variation, and a weave pattern. We demonstrate that we are able to match the appearance of original cloth samples in an input photograph for several examples. Parameters of our model are fully editable, enabling intuitive appearance design. Unfortunately, such explicit fiber-based models can only be used to render small cloth samples, due to large storage requirements. Recently, bidirectional texture functions (BTFs) have become popular for efficient photo-realistic rendering of materials. We present a rendering approach combining the strength of a procedural model of micro-geometry with the efficiency of BTFs. We propose a method for the computation of synthetic BTFs using Monte Carlo path tracing of micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions (ABRDFs). By exploiting structural self-similarity, we can reduce rendering times by one order of magnitude. This is done in a process we call non-local image reconstruction, which has been inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently take only a few minutes for small BTFs. We finally propose a novel and general approach to physically accurate rendering of large cloth samples. By using a statistical volumetric model approximating the distribution of yarn fibers, a prohibitively costly explicit geometric representation is avoided. As a result, accurate rendering of even large pieces of fabric becomes practical without sacrificing much generality compared to fiber-based techniques.
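    The thesis's non-local reconstruction is only summarized here. As a hedged illustration of the general idea, non-local-means-style weighting applied to ABRDFs, the sketch below completes each texel's ABRDF from a small set of fully path-traced reference texels, measuring similarity on cheap sparse samples that all texels share. The array layout, the assumption that the first rows of the sparse set correspond to the reference texels, and the bandwidth h are all illustrative, not the thesis's actual scheme.

```python
import numpy as np

def nonlocal_reconstruct(sparse, dense_ref, h=0.1):
    """Complete each texel's ABRDF from fully path-traced reference
    texels, weighted by similarity of shared sparse measurements.
    sparse:    (N, S) few cheap path-traced samples for every texel
    dense_ref: (M, D) full ABRDFs for a small reference subset,
               assumed to correspond to the first M rows of `sparse`."""
    ref_sparse = sparse[:len(dense_ref)]
    out = np.empty((len(sparse), dense_ref.shape[1]))
    for i, s in enumerate(sparse):
        d2 = np.sum((ref_sparse - s) ** 2, axis=1)   # distance in sparse space
        w = np.exp(-d2 / (h * h))                    # non-local-means weights
        out[i] = w @ dense_ref / w.sum()             # weighted ABRDF average
    return out
```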

    Realistic hair rendering in Autodesk Maya

    This thesis describes real-time hair rendering in the 3D animation and modeling software Autodesk Maya. The renderer is part of the Stubble project, a Maya plug-in for hair modeling. The presented renderer provides a high-quality interactive preview that allows fast hair modeling without the need for rendering in slow off-line renderers. The goal of this work is to create a renderer that generates images in real time that are as close as possible to the output of the 3Delight renderer, a Maya plug-in based on the RenderMan standards. (Department of Software and Computer Science Education, Faculty of Mathematics and Physics)

    Representation and Three-Dimensional Interpretation of Image Texture: An Integrated Approach

    Coordinated Science Laboratory (formerly known as Control Systems Laboratory). Air Force Office of Scientific Research / AFOSR 87-0100. Eastman Kodak.

    An investigation of hair modelling and rendering techniques with emphasis on African hairstyles

    Many computer graphics applications make use of virtual humans. Methods for modelling and rendering hair are needed so that hairstyles can be added to the virtual humans. Modelling and rendering hair is challenging due to the large number of hair strands and their geometric properties, the complex lighting effects that occur among the strands of hair, and the complexity and large variation of human hairstyles. While methods have been developed for generating hair, no methods exist for generating African hair, which differs from the hair of other ethnic groups. This thesis presents methods for modelling and rendering African hair. Existing hair modelling and rendering techniques are investigated, and the knowledge gained from the investigation is used to develop or enhance techniques that produce three different forms of hair commonly found in African hairstyles: naturally curly hair, straightened hair, and braids or twists of hair. The hair modelling techniques developed are implemented as plug-ins for the graphics program LightWave 3D. The plug-ins not only model the three identified forms of hair, but also add the modelled hair to a model of a head, and can be used to create a variety of African hairstyles; they significantly reduce the time spent on hair modelling. Tests performed show that increasing the number of polygons used to model hair increases the quality of the hair produced, but also increases the rendering time. However, there is usually an upper bound to the number of polygons needed to produce a reasonable hairstyle, making it feasible to add African hairstyles to virtual humans. The rendering aspects investigated include hair illumination, texturing, shadowing and anti-aliasing. An anisotropic illumination model is developed that considers the properties of African hair, including the colouring, opacity and narrow width of the hair strands. Texturing is used in several instances to create the effect of individual strands of hair. Results show that texturing is useful for representing many hair strands because the density of the hair in a texture map does not affect the rendering time. The importance of including a shadowing technique and applying an anti-aliasing method when rendering hair is demonstrated. The rendering techniques are implemented using the RenderMan Interface and Shading Language. A number of complete African hairstyles are shown, demonstrating that the techniques can be used to model and render African hair successfully.
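    The abstract names an anisotropic illumination model tailored to African hair but does not reproduce its equations. As a point of reference only, below is a minimal sketch of the classic Kajiya-Kay strand model, the usual starting point for anisotropic hair shading, in which lighting depends on the strand tangent rather than a surface normal; the function and its parameters are illustrative, not the thesis's model.

```python
import numpy as np

def kajiya_kay(tangent, light_dir, view_dir, diffuse, specular, shininess):
    """Kajiya-Kay anisotropic strand shading.
    diffuse term:  Kd * sin(T, L)
    specular term: Ks * (cos(T,L)cos(T,V) + sin(T,L)sin(T,V))^p
    All direction vectors are assumed normalized."""
    t_dot_l = np.clip(np.dot(tangent, light_dir), -1.0, 1.0)
    t_dot_v = np.clip(np.dot(tangent, view_dir), -1.0, 1.0)
    sin_tl = np.sqrt(1.0 - t_dot_l ** 2)
    sin_tv = np.sqrt(1.0 - t_dot_v ** 2)
    spec = max(t_dot_l * t_dot_v + sin_tl * sin_tv, 0.0) ** shininess
    return diffuse * sin_tl + specular * spec
```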