    A framework for real-time physically-based hair rendering

    Hair rendering has been a major challenge in computer graphics for many years due to the complex light interactions involved. The complexity stems mainly from two aspects: the sheer number of hair strands, and the resulting intricacy of their interaction with light. In general, theoretical approaches to realistic hair visualization aim to develop a proper scattering model at the per-strand level, which can in practice be extended to the whole hair volume with ray tracing, even though this is usually computationally expensive. Aiming at real-time hair rendering, in this work I analyze each contributing component from both theoretical and practical points of view. Most approaches, both real-time and offline, build on top of the Marschner scattering model, including recent efficient state-of-the-art techniques introduced in Unreal Engine and Frostbite, among others. Interactive applications cannot afford the complexity of ray tracing; they target efficiency by explicitly dealing with each component involved in single-strand and inter-strand light interactions, applying the simplifications necessary to match the time budget. I have further implemented a framework that separates the different components and combines aspects of these approaches towards the best possible quality and performance. The implementation achieves good-looking hair in real time, and its flexibility has allowed experiments on the performance, scalability, and contribution to quality of the different components.
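
    At the single-strand level, Marschner-style models factor scattering into R, TT, and TRT lobes, each a product of a longitudinal and an azimuthal term. The sketch below illustrates only that structure; the shift/width constants and the crude cosine azimuthal terms are placeholders, not the values used by this framework or by the cited engines.

```python
import math

def gaussian(beta, x):
    """Normalized Gaussian lobe of width beta centred at zero."""
    return math.exp(-0.5 * (x / beta) ** 2) / (beta * math.sqrt(2.0 * math.pi))

def single_strand_scattering(theta_i, theta_r, phi, alpha=-0.09, beta=0.1):
    """Very reduced Marschner-style sum of R, TT and TRT lobes.

    theta_i, theta_r : longitudinal angles of light and viewer (radians)
    phi              : relative azimuth between light and viewer (radians)
    alpha            : cuticle scale tilt (roughly -5 to -10 degrees)
    beta             : longitudinal roughness
    The constants and the crude azimuthal terms below are illustrative only.
    """
    theta_h = 0.5 * (theta_i + theta_r)                 # half longitudinal angle
    # Longitudinal terms M_p: shifted Gaussians following the usual
    # (alpha, -alpha/2, -3*alpha/2) shifts and (beta, beta/2, 2*beta) widths.
    m_r   = gaussian(beta,       theta_h - alpha)
    m_tt  = gaussian(beta * 0.5, theta_h + 0.5 * alpha)
    m_trt = gaussian(beta * 2.0, theta_h + 1.5 * alpha)
    # Azimuthal terms N_p: simple cosine lobes standing in for the full
    # physically based azimuthal scattering functions.
    n_r   = 0.25 * max(0.0, math.cos(0.5 * phi))
    n_tt  = max(0.0, math.cos(0.5 * (math.pi - phi))) ** 4
    n_trt = 0.5
    return m_r * n_r + m_tt * n_tt + m_trt * n_trt

# Example: grazing light, viewer near the specular cone, moderate azimuth.
print(single_strand_scattering(theta_i=0.3, theta_r=-0.25, phi=0.4))
```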

    GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians

    Hairstyle reflects culture and ethnicity at first glance. In the digital era, realistic human hairstyles are also critical to high-fidelity digital human assets for beauty and inclusivity. Yet realistic hair modeling and real-time rendering for animation remain a formidable challenge due to the sheer number of strands, the complicated geometric structure, and the sophisticated interaction with light. This paper presents GaussianHair, a novel explicit hair representation. It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities. At the heart of GaussianHair is the novel concept of representing each hair strand as a sequence of connected cylindrical 3D Gaussian primitives. This approach not only retains the hair's geometric structure and appearance but also allows for efficient rasterization onto a 2D image plane, facilitating differentiable volumetric rendering. We further enhance this model with the "GaussianHair Scattering Model", adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color in uniform lighting. Through extensive experiments, we substantiate that GaussianHair achieves breakthroughs in both geometric and appearance fidelity, transcending the limitations encountered in state-of-the-art methods for hair reconstruction. Beyond representation, GaussianHair extends to support editing, relighting, and dynamic rendering of hair, offering seamless integration with conventional CG pipeline workflows. Complementing these advancements, we have compiled an extensive dataset of real human hair, each with meticulously detailed strand geometry, to propel further research in this field.
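
    As a rough illustration of the representation described above (each strand as a chain of cylinder-like 3D Gaussian primitives), the sketch below builds such a chain from a strand polyline. The field names, parameterization, and default values are assumptions for illustration; the paper's primitives carry their own learned appearance parameters.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSegment:
    """One cylinder-like 3D Gaussian primitive along a strand (illustrative fields)."""
    center: np.ndarray    # (3,) segment midpoint
    axis: np.ndarray      # (3,) unit direction along the strand
    half_length: float    # extent of the Gaussian along `axis`
    radius: float         # thin isotropic extent perpendicular to `axis`
    color: np.ndarray     # (3,) base color
    opacity: float        # scalar opacity used during rasterization

def strand_to_segments(points, radius=1e-4, color=(0.35, 0.22, 0.12), opacity=0.9):
    """Turn an ordered polyline of strand points into connected Gaussian segments."""
    points = np.asarray(points, dtype=float)
    segments = []
    for a, b in zip(points[:-1], points[1:]):
        d = b - a
        length = float(np.linalg.norm(d))
        if length == 0.0:
            continue
        segments.append(GaussianSegment(
            center=(a + b) * 0.5,
            axis=d / length,
            half_length=0.5 * length,
            radius=radius,
            color=np.asarray(color, dtype=float),
            opacity=opacity,
        ))
    return segments

# Example: a short strand sampled at four points.
segs = strand_to_segments([[0, 0, 0], [0, 0.01, 0.002], [0, 0.02, 0.005], [0, 0.03, 0.009]])
print(len(segs), segs[0].axis)
```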

    Photo-Realistic Rendering of Fiber Assemblies

    In this thesis we introduce a novel uniform formalism for light scattering from filaments, the Bidirectional Fiber Scattering Distribution Function (BFSDF). Similar to the role of the Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) for surfaces, the BFSDF can be seen as a general approach for describing light scattering from filaments. Based on this theoretical foundation, approximations for various levels of abstraction are derived, allowing for efficient and accurate rendering of fiber assemblies such as hair or fur. In this context, novel rendering techniques accounting for all prominent effects of local and global illumination are presented. Moreover, physically-based analytical BFSDF models for human hair and other kinds of fibers are derived. Finally, using the model for human hair, we make a first step towards image-based BFSDF reconstruction, where the optical properties of a single strand are estimated from "synthetic photographs" (renderings) of a full hairstyle.
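
    By analogy with the BSSRDF transport integral for surfaces, the role the abstract ascribes to the BFSDF can be sketched as the outgoing-radiance integral below. This is a hedged paraphrase; the thesis's exact parameterization and notation may differ.

```latex
% Hedged sketch: the BFSDF as the fiber analogue of the BSSRDF transport
% integral. Points x_i, x_o lie on a bounding surface around the fiber
% (e.g. its enclosing cylinder); the thesis's exact notation may differ.
L_o(x_o, \omega_o) \;=\;
  \int_{\mathcal{F}} \int_{\Omega}
    f_{\mathrm{BFSDF}}(x_i, \omega_i;\, x_o, \omega_o)\,
    L_i(x_i, \omega_i)\, \lvert \cos\theta_i \rvert\,
    d\omega_i\, dA(x_i)
```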

    Relightable Neural Assets

    High-fidelity 3D assets with materials composed of fibers (including hair), complex layered material shaders, or fine scattering geometry are ubiquitous in high-end realistic rendering applications. Rendering such models is computationally expensive due to heavy shaders and long scattering paths. Moreover, implementing the shading and scattering models is non-trivial and has to be done not only in the 3D content authoring software (which is necessarily complex), but also in all downstream rendering solutions. For example, web and mobile viewers for complex 3D assets are desirable, but frequently cannot support the full shading complexity allowed by the authoring application. Our goal is to design a neural representation for 3D assets with complex shading that supports full relightability and full integration into existing renderers. We provide an end-to-end shading solution at the first intersection of a ray with the underlying geometry. All shading and scattering is precomputed and included in the neural asset; no multiple scattering paths need to be traced, and no complex shading models need to be implemented to render our assets, beyond a single neural architecture. We combine an MLP decoder with a feature grid: shading consists of querying a feature vector, followed by an MLP evaluation producing the final reflectance value. Our method provides high-fidelity shading, close to the ground-truth Monte Carlo estimate even at close-up views. We believe our neural assets could be used in practical renderers, providing significant speed-ups and simplifying renderer implementations.
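
    The shading query described above (look up a feature vector, then run a small MLP to obtain reflectance) can be sketched as follows. The grid resolution, feature width, MLP size, and the choice of inputs (here raw light and view directions) are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the actual grid resolution, feature width and MLP
# depth of the paper are not specified here and will differ.
GRID_RES, FEAT_DIM, HIDDEN = 32, 16, 64

feature_grid = rng.normal(size=(GRID_RES, GRID_RES, GRID_RES, FEAT_DIM)).astype(np.float32)
W1 = rng.normal(scale=0.1, size=(FEAT_DIM + 6, HIDDEN)).astype(np.float32)
b1 = np.zeros(HIDDEN, dtype=np.float32)
W2 = rng.normal(scale=0.1, size=(HIDDEN, 3)).astype(np.float32)
b2 = np.zeros(3, dtype=np.float32)

def sample_grid(p):
    """Trilinearly interpolate a feature vector at point p in [0,1]^3."""
    g = np.clip(p, 0.0, 1.0) * (GRID_RES - 1)
    i0 = np.floor(g).astype(int)
    i1 = np.minimum(i0 + 1, GRID_RES - 1)
    t = g - i0
    f = 0.0
    for dx, wx in ((0, 1 - t[0]), (1, t[0])):
        for dy, wy in ((0, 1 - t[1]), (1, t[1])):
            for dz, wz in ((0, 1 - t[2]), (1, t[2])):
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                f = f + wx * wy * wz * feature_grid[idx]
    return f

def shade(p, omega_i, omega_o):
    """Decode outgoing reflectance at surface point p for light/view directions."""
    x = np.concatenate([sample_grid(p), omega_i, omega_o])
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # RGB reflectance in [0,1]

print(shade(np.array([0.5, 0.2, 0.8]),
            np.array([0.0, 0.0, 1.0]),
            np.array([0.577, 0.577, 0.577])))
```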

    Interactive translucent volume rendering and procedural modeling

    Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. Many common objects and natural phenomena exhibit a visual quality that cannot be captured using simple lighting models, and cannot be rendered at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model which captures volumetric light attenuation effects to produce volumetric shadows and the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high-frequency detail for real and synthetic volumetric data.
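
    The abstract does not spell out the implementation; the sketch below only illustrates the underlying idea of volumetric light attenuation, using a brute-force secondary march toward the light to produce volumetric shadows and a translucency-like falloff. An interactive implementation would organize this quite differently, and all constants here are illustrative.

```python
import numpy as np

def march_with_shadows(density, ray_o, ray_d, light_dir, step=0.02,
                       sigma_t=30.0, albedo=np.array([0.9, 0.7, 0.6])):
    """Brute-force ray march that attenuates the light through the volume.

    `density` is a callable p -> scalar density in [0,1]; positions live in
    the unit cube. The constants (sigma_t, albedo, step) are illustrative.
    """
    ray_d = ray_d / np.linalg.norm(ray_d)
    light_dir = light_dir / np.linalg.norm(light_dir)
    color, transmittance = np.zeros(3), 1.0
    t = 0.0
    while t < 1.75 and transmittance > 1e-3:            # march through the cube
        p = ray_o + t * ray_d
        if np.all(p >= 0) and np.all(p <= 1):
            d = density(p)
            if d > 0:
                # Secondary march toward the light: optical depth -> shadowing.
                tau, s = 0.0, step
                q = p + s * light_dir
                while np.all(q >= 0) and np.all(q <= 1):
                    tau += density(q) * sigma_t * step
                    s += step
                    q = p + s * light_dir
                light = np.exp(-tau)                     # attenuated, translucent light
                a = 1.0 - np.exp(-d * sigma_t * step)    # local opacity of this sample
                color += transmittance * a * albedo * light
                transmittance *= 1.0 - a
        t += step
    return color

# Example: a soft spherical blob centred in the unit cube.
blob = lambda p: max(0.0, 1.0 - 6.0 * float(np.sum((p - 0.5) ** 2)))
print(march_with_shadows(blob, np.array([0.5, 0.5, -0.2]),
                         np.array([0.0, 0.0, 1.0]),
                         np.array([0.4, 1.0, 0.2])))
```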

    AirCode: Unobtrusive Physical Tags for Digital Fabrication

    We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object surface. These air pockets are easily produced during the fabrication process of the object, without any additional material or postprocessing. Meanwhile, the air pockets affect only the scattering light transport under the surface, and thus are hard to notice with the naked eye. By using a computational imaging method, however, the tags become detectable. We present a tool that automates the design of air pockets for the user to encode information. The AirCode system also allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications for metadata embedding, robotic grasping, and conveying object affordances. (ACM UIST 2017 Technical Paper)
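
    The abstract only names "a computational imaging method"; one standard way to isolate the subsurface (global) light transport in which such air pockets become visible is high-frequency direct/global separation. The sketch below is purely an assumption about what such a pipeline could look like, not the paper's actual method.

```python
import numpy as np

def separate_direct_global(captures):
    """Estimate direct and global components from images lit by shifted
    high-frequency half-on/half-off patterns (Nayar-style separation).

    `captures` has shape (N, H, W): N photos of the same scene, patterns
    shifted so every pixel is lit in some shots and unlit in others. This is
    an illustrative stand-in for whatever imaging pipeline the paper uses.
    """
    captures = np.asarray(captures, dtype=float)
    i_max = captures.max(axis=0)       # lit pixels: direct + half the global light
    i_min = captures.min(axis=0)       # unlit pixels: ~ half the global light
    global_comp = 2.0 * i_min          # assumes a 50% duty-cycle pattern
    direct_comp = i_max - i_min
    return direct_comp, global_comp

# Example with random stand-in captures (N=8 shifted patterns).
demo = np.random.rand(8, 4, 4)
direct, global_ = separate_direct_global(demo)
print(direct.shape, global_.shape)
# Subsurface air pockets would appear as dark regions of `global_comp`,
# since they block light from scattering back out beneath the surface.
```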

    A Multi-scale Yarn Appearance Model with Fiber Details

    Rendering realistic cloth has always been a challenge due to its intricate structure. Cloth is made up of fibers, plies, and yarns, and previous curve-based models, while detailed, were computationally expensive and inflexible for large pieces of cloth. To address this, we propose a simplified approach. We introduce a geometric aggregation technique that reduces ray-tracing computation by using fewer curves, focusing only on yarn curves. Our model generates ply and fiber shapes implicitly, compensating for the lack of explicit geometry with a novel shadowing component. We also present a shading model that simplifies light interactions among fibers by categorizing them into four components, accurately capturing specular and scattered light in both forward and backward directions. To render large cloth efficiently, we propose a multi-scale solution based on pixel coverage. Our yarn shading model outperforms previous methods, achieving rendering speeds 3-5 times faster with less memory in near-field views. Additionally, our multi-scale solution offers a 20% speed boost for distant cloth observation.
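
    The multi-scale selection based on pixel coverage can be illustrated with a simple level-of-detail switch: estimate the on-screen width of a yarn and pick a shading path accordingly. The thresholds, level names, and camera model below are assumptions for illustration, not values from the paper.

```python
import math

def projected_width_px(yarn_radius, distance, fov_y_deg, image_height_px):
    """Approximate on-screen width (in pixels) of a yarn of given radius."""
    pixels_per_radian = image_height_px / math.radians(fov_y_deg)
    return 2.0 * yarn_radius / distance * pixels_per_radian

def choose_shading_level(width_px, near_threshold=4.0, far_threshold=0.5):
    """Pick a shading level from pixel coverage (illustrative thresholds)."""
    if width_px >= near_threshold:
        return "near-field: implicit ply/fiber detail + shadowing term"
    if width_px >= far_threshold:
        return "mid-field: aggregated yarn shading (four-component model)"
    return "far-field: filtered/averaged yarn appearance"

# Example: the same yarn seen at three distances.
for dist in (0.05, 0.5, 5.0):
    w = projected_width_px(yarn_radius=0.0005, distance=dist,
                           fov_y_deg=45.0, image_height_px=1080)
    print(f"distance {dist:5.2f} m -> {w:6.2f} px -> {choose_shading_level(w)}")
```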