
    Learning Object-Centric Neural Scattering Functions for Free-viewpoint Relighting and Scene Composition

    Photorealistic object appearance modeling from 2D images is a long-standing topic in vision and graphics. While neural implicit methods (such as Neural Radiance Fields) have shown high-fidelity view synthesis results, they cannot relight the captured objects. More recent neural inverse rendering approaches have enabled object relighting, but they represent surface properties as simple BRDFs and therefore cannot handle translucent objects. We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from only images. OSFs not only support free-viewpoint object relighting, but can also model both opaque and translucent objects. While accurately modeling subsurface light transport for translucent objects can be highly complex and even intractable for neural methods, OSFs learn to approximate the radiance transfer from a distant light to an outgoing direction at any spatial location. This approximation avoids explicitly modeling complex subsurface scattering, making learning a neural implicit model tractable. Experiments on real and synthetic data show that OSFs accurately reconstruct appearances for both opaque and translucent objects, allowing faithful free-viewpoint relighting as well as scene composition. Project website: https://kovenyu.com/osf/. Journal extension of arXiv:2012.08503. The first two authors contributed equally to this work.
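    As a rough illustration of the stated idea (notation ours, not necessarily the paper's): an OSF can be pictured as a NeRF-like neural field that, instead of a fixed emitted color, predicts a transfer factor coupling a distant incoming light direction to an outgoing view direction at each point, composited along camera rays:

```latex
% Illustrative sketch only: f_Theta predicts density sigma and a
% light-to-view transfer factor rho; pixels are alpha-composited
% NeRF-style over samples k along the ray r.
f_\Theta(\mathbf{x}, \omega_i, \omega_o) = (\sigma, \rho), \qquad
L(\mathbf{r}, \omega_i) \approx \sum_k T_k \left(1 - e^{-\sigma_k \delta_k}\right) \rho_k \, L_{\mathrm{light}}(\omega_i),
\qquad T_k = \exp\Big(-\textstyle\sum_{j<k} \sigma_j \delta_j\Big)
```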

    Interactive Rendering of Scattering and Refraction Effects in Heterogeneous Media

    In this dissertation we investigate the problem of interactive and real-time visualization of single scattering, multiple scattering and refraction effects in heterogeneous volumes. Our proposed solutions span a variety of usage scenarios: from a very fast yet physically-based approximation to a physically accurate simulation of microscopic light transmission. We add to the state of the art by introducing a novel precomputation and sampling strategy, a system for efficiently parallelizing the computation of different volumetric effects, and a new and fast version of the Discrete Ordinates Method. Finally, we also present a complementary line of work on real-time 3D acquisition devices.
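    To make the baseline concrete, here is a deliberately brute-force sketch of the classic single-scattering estimator that such work accelerates; `sigma_t`, `sigma_s` and the vector arguments are hypothetical interfaces, and nothing here reflects the dissertation's actual precomputation or parallelization.

```python
import numpy as np

def single_scattering(sigma_t, sigma_s, light_dir, light_radiance,
                      origin, direction, t_max, n_steps=64):
    """Brute-force single scattering along one camera ray (reference only).

    sigma_t(x) / sigma_s(x): callables giving extinction and scattering
    coefficients of the heterogeneous medium at a 3D point.
    light_dir: unit vector toward a distant light of radiance light_radiance.
    """
    dt = t_max / n_steps
    phase = 1.0 / (4.0 * np.pi)          # isotropic phase function
    transmittance = 1.0                  # camera-to-sample attenuation
    radiance = 0.0
    for k in range(n_steps):
        x = origin + (k + 0.5) * dt * direction
        transmittance *= np.exp(-sigma_t(x) * dt)
        # Shadow march toward the light: optical depth tau along the path
        tau = sum(sigma_t(x + (j + 0.5) * dt * light_dir) * dt
                  for j in range(n_steps))
        radiance += (transmittance * sigma_s(x) * phase
                     * np.exp(-tau) * light_radiance * dt)
    return radiance
```

    The nested shadow march makes this quadratic in the step count, which is exactly the kind of cost that precomputation and sampling strategies aim to remove.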

    Visual Prototyping of Cloth

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture appearance models of cloth, especially for computer-aided design of cloth. Previous methods can produce highly realistic images; however, possibilities for cloth editing are either restricted or require the measurement of large material databases to capture all variations of cloth samples. We propose a pipeline for designing the appearance of cloth directly based on those elements that can be changed within the production process: the optical properties of fibers, the geometrical properties of yarns, and compositional elements such as weave patterns. We introduce a geometric yarn model, integrating state-of-the-art textile research. We further present an approach to reverse-engineer cloth and estimate parameters for a procedural cloth model from single images, including the automatic estimation of yarn paths, yarn widths, their variation, and a weave pattern. We demonstrate that we are able to match the appearance of original cloth samples in an input photograph for several examples. Parameters of our model are fully editable, enabling intuitive appearance design.

    Unfortunately, such explicit fiber-based models can only be used to render small cloth samples, due to large storage requirements. Recently, bidirectional texture functions (BTFs) have become popular for efficient photo-realistic rendering of materials. We present a rendering approach combining the strength of a procedural model of micro-geometry with the efficiency of BTFs, and we propose a method for computing synthetic BTFs using Monte Carlo path tracing of the micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions (ABRDFs). By exploiting this structural self-similarity, we can reduce rendering times by one order of magnitude, in a process we call non-local image reconstruction, inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently take only a few minutes for small BTFs.

    Finally, we propose a novel and general approach to physically accurate rendering of large cloth samples. By using a statistical volumetric model that approximates the distribution of yarn fibers, a prohibitively costly explicit geometric representation is avoided. As a result, accurate rendering of even large pieces of fabric becomes practical without sacrificing much generality compared to fiber-based techniques.
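    For readers unfamiliar with BTFs: a measured or synthesized BTF is essentially a stack of textures indexed by discretized light and view directions, so rendering reduces to table lookups. A minimal sketch, with names and array layout as our illustrative assumptions rather than the thesis's data format:

```python
import numpy as np

def sample_btf(btf, light_idx, view_idx, u, v):
    """Nearest-neighbor BTF lookup.

    btf: array of shape (n_lights, n_views, H, W, 3); the values of one
    texel (x, y) across all (light, view) pairs form that surface point's
    apparent BRDF (ABRDF).
    (u, v): texture coordinates in [0, 1).
    """
    img = btf[light_idx, view_idx]               # (H, W, 3) texture slice
    h, w, _ = img.shape
    return img[int(v * h) % h, int(u * w) % w]
```

    Roughly speaking, the non-local reconstruction mentioned above exploits the observation that many ABRDF slices are near-duplicates, so only a subset needs to be path-traced at full quality.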

    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever, and therefore convey an impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well.

    Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties; among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must run in real time. This condition rules out many of the methods used for movie production. The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light.

    This dissertation presents contributions addressing these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects which have not been demonstrated by contemporary publications until now.
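    For context, the Differential Rendering baseline the thesis replaces can be written in one line (Debevec-style; notation ours): render the reconstructed local scene twice, with and without the virtual object, and add only the difference to the real camera image, so real surfaces pick up virtual shadows and bounce light:

```latex
% Differential Rendering: two global-illumination renders per frame,
% which is the computational cost the thesis works to avoid.
L_{\mathrm{final}} = L_{\mathrm{camera}} + \left( L_{\mathrm{full}} - L_{\mathrm{empty}} \right)
```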

    An Artistic Approach for Intuitive Control of Light Transfer in Participating Media

    The sole purpose of every form of visual representation is to make something look believable. Even for abstract or conceptual representation, the purpose is to create something that, within the defined visual language, the audience will consider believable and accept. In the field of computer-generated representation, numerous visual languages have been developed throughout the years, attempting to solve different visualization or artistic problems. This thesis presents an alternative light transfer model for participating media focused on intuitive control of the illumination data and the artistic value of the resulting image. The purpose is not to accurately model light's physical behavior and its interaction with surfaces and elements. My thesis describes an artistic approach which aims to offer organic and intuitive control of the glow and temperature of participating-media effects and to direct value and hue across surfaces. The system described in the thesis approximates light transfer through a given volume by calculating the light contribution in the volume with discrete sampling and subsequently gathering these values to determine the diffuse scattering contribution for the volume. I also discuss the assumptions made to allow such approximations, as well as how the intuitive control offered by the approach and these approximations allows new forms of representation and artistic direction.
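    A minimal sketch of the two-stage structure described above, as we read it; all names, falloffs, and the gathering rule are our illustrative assumptions, not the thesis's model:

```python
import numpy as np

def bake_light_samples(sigma_t, lights, sample_pts, n_march=32):
    """Stage 1: discrete sampling of the light contribution in the volume.

    sigma_t(x): hypothetical callable for the extinction coefficient;
    lights: list of (position, power) pairs; sample_pts: (N, 3) array.
    """
    contrib = np.zeros(len(sample_pts))
    for i, x in enumerate(sample_pts):
        for l_pos, l_power in lights:
            d = l_pos - x
            r = np.linalg.norm(d)
            step, w = r / n_march, d / r
            # coarse transmittance march from the sample point to the light
            tau = sum(sigma_t(x + (j + 0.5) * step * w) * step
                      for j in range(n_march))
            contrib[i] += l_power * np.exp(-tau) / (r * r)
    return contrib

def gather_diffuse(view_pts, sample_pts, contrib, radius=0.5):
    """Stage 2: gather baked values as the diffuse scattering contribution."""
    out = np.zeros(len(view_pts))
    for i, x in enumerate(view_pts):
        near = np.linalg.norm(sample_pts - x, axis=1) < radius
        if near.any():
            out[i] = contrib[near].mean()
    return out
```

    In an artist-directed pipeline of this shape, controls for glow and temperature would naturally remap the baked values before the gather.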

    Towards Interactive Photorealistic Rendering


    Photo-Realistic Rendering of Fiber Assemblies

    In this thesis we introduce a novel uniform formalism for light scattering from filaments, the Bidirectional Fiber Scattering Distribution Function (BFSDF). Similar to the role of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for surfaces, the BFSDF can be seen as a general approach for describing light scattering from filaments. Based on this theoretical foundation, approximations for various levels of abstraction are derived, allowing for efficient and accurate rendering of fiber assemblies, such as hair or fur. In this context, novel rendering techniques accounting for all prominent effects of local and global illumination are presented. Moreover, physically-based analytical BFSDF models for human hair and other kinds of fibers are derived. Finally, using the model for human hair, we make a first step towards image-based BFSDF reconstruction, where the optical properties of a single strand are estimated from "synthetic photographs" (renderings) of a full hairstyle.
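    As a sketch of the role the BFSDF plays (our notation): in direct analogy to the BSSRDF for surfaces, outgoing radiance at a point on the fiber's enclosing cylinder is the incident radiance integrated against an 8D scattering function:

```latex
% Illustrative scattering integral: x_i and x_o lie on the fiber's minimal
% enclosing cylinder A; lower-dimensional fiber shading models arise as
% far-field / smooth-illumination approximations of f_s.
L_o(x_o, \omega_o) = \int_{A} \int_{\Omega} f_s(x_i, \omega_i; x_o, \omega_o)\,
L_i(x_i, \omega_i)\, \cos\theta_i \; d\omega_i \, dA(x_i)
```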

    State of the Art on Neural Rendering

    Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This report focuses on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
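    To ground the phrase "integration of differentiable rendering into network training": if every step from scene parameters to pixels is differentiable, a photometric loss can drive the parameters by gradient descent. A toy, self-contained sketch (PyTorch, NeRF-style alpha compositing; illustrative only, not any specific method from the report):

```python
import torch

def composite(densities, colors, deltas):
    """Differentiably alpha-composite samples along a ray into one pixel."""
    alpha = 1.0 - torch.exp(-densities * deltas)             # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = trans * alpha                                  # compositing weights
    return (weights[:, None] * colors).sum(dim=0)            # pixel RGB

densities = torch.rand(64, requires_grad=True)   # learnable scene parameters
colors = torch.rand(64, 3, requires_grad=True)
deltas = torch.full((64,), 0.03)                 # sample spacing along the ray
pixel = composite(densities, colors, deltas)
loss = ((pixel - torch.tensor([0.2, 0.5, 0.7])) ** 2).mean()  # photometric loss
loss.backward()    # gradients flow through the renderer into the scene
```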

    Fast photorealistic techniques to simulate global illumination in videogames and virtual environments

    To compute global illumination solutions for rendering virtual scenes, physically accurate methods based on radiosity or ray-tracing are usually employed. These methods, though powerful and capable of generating images with high realism, are very costly. In this thesis, some techniques to simulate and/or accelerate the computation of global illumination are studied. The obscurances technique is based on the assumption that the more occluded a point in the scene is, the darker it should appear. It is computed by analyzing the geometric environment of the point and yields a value for the point's indirect illumination that, though not physically accurate, is visually realistic.

    This technique is enhanced and improved for real-time environments such as videogames, and it is also applied to ray-tracing frameworks to generate realistic images. In the latter context, sequences of frames for animations of lights and cameras are dramatically accelerated by reusing information between frames. Obscurances simulate the indirect illumination of a scene; the direct lighting is computed separately and independently. This decoupling of direct and indirect lighting is a major advantage, which we exploit: color bleeding effects can be added without extra computation time. Another advantage is that computing the obscurances only requires analyzing a limited environment around the point.

    For diffuse virtual scenes, the radiosity can be precomputed and the scene navigated with a realistic appearance, but when an object moves in a dynamic real-time environment such as a videogame, recomputing the global illumination of the whole scene is prohibitive. Thanks to the limited reach of the obscurance computation, we can recompute the obscurances only in the limited environment of the moving object at each frame and still achieve real-time frame rates.

    Obscurances can also be used to compute high-quality images, or sequences of images for an animation, in a ray-tracing framework. This lets us handle non-diffuse materials and investigate the use of a typically diffuse technique such as obscurances in general environments. For a static camera, light animation only affects the direct lighting; since obscurances provide the indirect lighting, the decoupling makes computing a series of frames for the animation very fast. The next step is to add camera animation, reusing the obscurance values between frames. We extend this strategy of reusing the illumination of hit points across frames to a full global illumination technique, path tracing, and study how to reuse this information in an unbiased way. In addition, we study different sampling techniques for the hemisphere, and obscurances are computed with a new technique based on depth peeling on the GPU.
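    The obscurance quantity itself is commonly written as W(p) = (1/pi) * ∫_Ω rho(d(p, ω)) cos θ dω, where d is the distance to the nearest occluder in direction ω and rho maps that distance to [0, 1], saturating at 1 beyond a maximum radius. A minimal Monte Carlo sketch, with dist_to_hit as a hypothetical ray-query interface and a square-root falloff as one common choice of rho:

```python
import numpy as np

def obscurance(p, n, dist_to_hit, d_max=1.0, n_samples=128, seed=0):
    """Monte Carlo obscurance at point p with unit surface normal n.

    dist_to_hit(p, w): hypothetical query for the distance to the first
    surface hit from p along w (np.inf if the ray escapes). Only geometry
    within d_max matters, which is what makes per-frame local updates
    around moving objects affordable.
    """
    rng = np.random.default_rng(seed)
    # Build an orthonormal frame (t, b, n) around the normal
    t = np.cross(n, [0.0, 1.0, 0.0] if abs(n[1]) < 0.9 else [1.0, 0.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    acc = 0.0
    for _ in range(n_samples):
        u1, u2 = rng.random(), rng.random()
        r, phi = np.sqrt(u1), 2.0 * np.pi * u2      # cosine-weighted sample
        w = r * np.cos(phi) * t + r * np.sin(phi) * b + np.sqrt(1.0 - u1) * n
        d = dist_to_hit(p, w)
        rho = min(np.sqrt(d / d_max), 1.0) if np.isfinite(d) else 1.0
        acc += rho             # cos(theta)/pi cancels against the sampling pdf
    return acc / n_samples
```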