Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field
In this paper, we address the problem of simultaneous relighting and novel
view synthesis of a complex scene from multi-view images with a limited number
of light sources. We propose an analysis-synthesis approach called Relit-NeuLF.
Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first
leverages a two-plane light field representation to parameterize each ray in a
4D coordinate system, enabling efficient learning and inference. Then, we
recover the spatially-varying bidirectional reflectance distribution function
(SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to
map each ray to its SVBRDF components: albedo, normal, and roughness. Based on
the decomposed BRDF components and conditioning light directions, a RenderNet
learns to synthesize the color of the ray. To self-supervise the SVBRDF
decomposition, we encourage the predicted ray color to be close to the
physically-based rendering result using the microfacet model. Comprehensive
experiments demonstrate that the proposed method is efficient and effective on
both synthetic data and real-world human face data, and outperforms
state-of-the-art methods. Our code is publicly available at
https://github.com/oppo-us-research/RelitNeuLF
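The two-plane light field parameterization mentioned above maps each ray to a 4D coordinate by intersecting it with two parallel planes. A minimal sketch follows; the plane placement (z = 0 and z = 1) and axis conventions are illustrative assumptions, not necessarily NeuLF's exact setup:

```python
import numpy as np

def two_plane_parameterize(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to 4D light-field coordinates (u, v, s, t) by intersecting
    it with two parallel planes z = z_uv and z = z_st (illustrative choice).
    """
    direction = direction / np.linalg.norm(direction)
    if abs(direction[2]) < 1e-8:
        raise ValueError("ray is parallel to the parameterization planes")
    # Ray parameter at which the ray hits each plane.
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    # Keep only the in-plane (x, y) coordinates of each intersection.
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return np.array([u, v, s, t])
```

For example, a ray from the origin along (1, 0, 1) hits the first plane at (0, 0) and the second at (1, 0), giving coordinates (0, 0, 1, 0). The network can then consume this fixed-size 4D input instead of a full ray description.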
Interactive high fidelity visualization of complex materials on the GPU
Submitted for peer review. To appear in Computers & Graphics, ISSN 0097-8493, 37:7 (Nov. 2013), pp. 809–819.
High fidelity interactive rendering is of major importance for footwear designers, since it allows experimenting with virtual prototypes of new products rather than producing expensive physical mock-ups. This requires capturing the appearance of complex materials by resorting to image-based approaches, such as the Bidirectional Texture Function (BTF), to allow subsequent interactive visualization while still maintaining the capability to edit the materials' appearance. However, interactive global illumination rendering of compressed editable BTFs with ordinary computing resources remains to be demonstrated.
In this paper we demonstrate interactive global illumination by using a GPU ray tracing engine and the Sparse Parametric Mixture Model representation of BTFs, which is particularly well suited for BTF editing. We propose a rendering pipeline and data layout which allow for interactive frame rates, and provide a scalability analysis with respect to the scene's complexity. We also include soft shadows from area light sources and approximate global illumination with ambient occlusion by resorting to progressive refinement, which quickly converges to a high quality image while maintaining interactive frame rates by limiting the number of rays shot per frame. Acceptable performance is also demonstrated under dynamic settings, including camera movements, changing lighting conditions and dynamic geometry.
Work partially funded by QREN project no. 13114 TOPICShoe and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2011.
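The progressive-refinement scheme described above can be sketched as follows: each frame shoots only a small, fixed budget of occlusion rays per pixel, and a running average across frames converges toward the true ambient-occlusion value. The class and method names are hypothetical, not from the paper's implementation:

```python
import numpy as np

class ProgressiveAO:
    """Accumulate per-frame ambient-occlusion estimates for one pixel.

    Each frame shoots only `rays_per_frame` occlusion rays, bounding the
    per-frame cost (and thus keeping the frame rate interactive); averaging
    across frames progressively refines the estimate. In a real renderer the
    accumulator would be reset whenever the camera or scene changes.
    """

    def __init__(self, rays_per_frame=4):
        self.rays_per_frame = rays_per_frame
        self.total = 0.0   # sum of per-frame visibility estimates
        self.frames = 0    # number of frames accumulated so far

    def add_frame(self, occluded):
        """`occluded`: boolean array with one entry per ray shot this frame."""
        assert len(occluded) == self.rays_per_frame
        self.total += 1.0 - np.mean(occluded)  # visibility = 1 - occlusion
        self.frames += 1

    @property
    def ao(self):
        """Current running estimate of hemisphere visibility in [0, 1]."""
        return self.total / max(self.frames, 1)
```

For example, one frame with ray results [True, False, True, False] followed by a fully unoccluded frame yields a running visibility estimate of 0.75, and further frames keep tightening the estimate at a fixed per-frame cost.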
Capturing and Reconstructing the Appearance of Complex 3D Scenes
In this thesis, we present our research on new acquisition methods for reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well known techniques -- synthetic aperture confocal imaging and algorithmic descattering -- can be combined to help see through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders the range estimation by offsetting the peak intensity beneath the surface away from the point of incidence. With our method, the subsurface scattering is reduced to a minimum and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers surface geometry, reflectance properties of opaque objects, and prevailing lighting conditions at the time of image capture from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images taken from a community photo collection website.
Scene relighting and editing for improved object insertion
Abstract. The goal of this thesis is to develop a scene relighting and object insertion pipeline using Neural Radiance Fields (NeRF) to incorporate one or more objects into an outdoor environment scene. The output is a 3D mesh that embodies decomposed bidirectional reflectance distribution function (BRDF) characteristics, which interact with varying light source positions and strengths. To achieve this objective, the thesis is divided into two sub-tasks.
The first sub-task involves extracting visual information about the outdoor environment from a sparse set of corresponding images. A neural representation is constructed, providing a comprehensive understanding of the constituent elements, such as materials, geometry, illumination, and shadows. The second sub-task involves generating a neural representation of the inserted object using either real-world images or synthetic data.
To accomplish these objectives, the thesis draws on existing literature in computer vision and computer graphics. Different approaches are assessed to identify their advantages and disadvantages, with detailed descriptions of the chosen techniques provided, highlighting their functioning to produce the ultimate outcome.
Overall, this thesis aims to provide a framework for compositing and relighting that is grounded in NeRF and allows for the seamless integration of objects into outdoor environments. The outcome of this work has potential applications in various domains, such as visual effects, gaming, and virtual reality.