
    A Dual-Beam Method-of-Images 3D Searchlight BSSRDF

    We present a novel BSSRDF for rendering translucent materials. Angular effects lacking in previous BSSRDF models are incorporated by using a dual-beam formulation. We employ a Placzek's Lemma interpretation of the method of images and discard diffusion theory. Instead, we derive a plane-parallel transformation of the BSSRDF to form the associated BRDF and optimize the image configurations such that the BRDF is close to the known analytic solutions for the associated albedo problem. This ensures reciprocity and accurate colors, and provides an automatic level-of-detail transition for translucent objects that appear at various distances in an image. Despite optimizing the subsurface fluence in a plane-parallel setting, we find that this also leads to fairly accurate fluence distributions throughout the volume in the original 3D searchlight problem. Our method-of-images modifications can also improve the accuracy of previous BSSRDFs.
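
    For context on the quantities involved (general background, not the paper's dual-beam construction), the BSSRDF S relates incident to exitant radiance, must be reciprocal, and reduces to an associated BRDF in the plane-parallel setting by integrating over exit points on a planar half-space. The notation below (S, f_r, L, n, A) follows common usage and is an assumption of this sketch:

        L_o(x_o, \omega_o) = \int_A \int_{2\pi} S(x_i, \omega_i; x_o, \omega_o)\, L_i(x_i, \omega_i)\, (n_i \cdot \omega_i)\, d\omega_i\, dA(x_i)
        S(x_i, \omega_i; x_o, \omega_o) = S(x_o, \omega_o; x_i, \omega_i) \quad \text{(reciprocity)}
        f_r(\omega_i, \omega_o) = \int_A S(x_i, \omega_i; x_o, \omega_o)\, dA(x_o) \quad \text{(associated BRDF, planar half-space)}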

    Surface Curvature Effects on Reflectance from Translucent Materials

    Most of the physically based techniques for rendering translucent objects use the diffusion theory of light scattering in turbid media. The widely used dipole diffusion model (Jensen et al. 2001) applies the diffusion-theory formula derived for a planar interface to objects of arbitrary shape. This paper presents first results of our investigation into how surface curvature affects the diffuse reflectance from translucent materials. (The first version of this paper was published in the Communication Papers Proceedings of the 18th International Conference on Computer Graphics, Visualization and Computer Vision, WSCG 2010.)
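
    For reference, the planar-interface formula mentioned above is the dipole diffuse reflectance of Jensen et al. (2001); the symbols below (reduced albedo \alpha', reduced extinction \sigma_t', effective transport coefficient \sigma_{tr}, real and virtual source depths z_r, z_v) follow that paper:

        R_d(r) = \frac{\alpha'}{4\pi} \left[ z_r\,(1 + \sigma_{tr} d_r)\, \frac{e^{-\sigma_{tr} d_r}}{d_r^3} + z_v\,(1 + \sigma_{tr} d_v)\, \frac{e^{-\sigma_{tr} d_v}}{d_v^3} \right]
        d_r = \sqrt{r^2 + z_r^2}, \quad d_v = \sqrt{r^2 + z_v^2}, \quad z_r = 1/\sigma_t', \quad z_v = z_r + 4AD, \quad D = 1/(3\sigma_t'), \quad \sigma_{tr} = \sqrt{3\sigma_a \sigma_t'}

    Here A accounts for internal Fresnel reflection at the boundary; the curvature study asks how well this planar-interface profile holds up on curved surfaces.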

    Perception based heterogeneous subsurface scattering for film

    Many real-world materials exhibit complex subsurface scattering of light. This internal light interaction creates the perception of translucency for the human visual system. Translucent materials and simulation of the subsurface scattering of light have become an expected necessity for generating warmth and realism in computer-generated imagery. The light transport within heterogeneous materials, such as marble, has proved challenging to model and render. The material models currently available to digital artists have been limited to homogeneous subsurface scattering, despite a few publications documenting success at simulating heterogeneous light transport. While these publications successfully simulate this complex phenomenon, their material descriptions have been highly specialized and far from intuitive. By combining the measurable properties of heterogeneous translucent materials with the defining properties of translucency, as perceived by the human visual system, a description of heterogeneous translucent materials that is suitable for artist use in a film production pipeline can be achieved. Development of the material description focuses on integration with the film pipeline, ease of use, and reasonable approximation of heterogeneous translucency based on perception. Methods of material manipulation are explored to determine which properties should be modifiable by artists while maintaining the perception of heterogeneous translucency.

    Separable Subsurface Scattering

    In this paper, we propose two real-time models for simulating subsurface scattering for a large variety of translucent materials, each requiring less than 0.5 ms per frame to execute. This makes them a practical option for real-time production scenarios. Current state-of-the-art real-time approaches simulate subsurface light transport by approximating the radially symmetric, non-separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to 12) 1D convolutions. In this work we relax the requirement of radial symmetry and approximate the 2D diffuse reflectance profile with a single separable kernel. We first show that low-rank approximations based on matrix factorization outperform previous approaches, but they still need several passes to produce good results. To solve this, we present two different separable models: the first yields a high-quality diffusion simulation, while the second offers an attractive trade-off between physical accuracy and artistic control. Both allow rendering of subsurface scattering using only two 1D convolutions, reducing both execution time and memory consumption, while delivering results comparable to techniques with higher cost. Using our importance-sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post-processing steps without intrusive changes to existing rendering pipelines.
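
    A minimal NumPy sketch of the general idea, i.e. a rank-1 separable approximation to a radially symmetric 2D kernel applied as two 1D convolutions, is shown below; the Gaussian stand-in kernel and the SVD-based factorization are assumptions of this sketch, not the paper's optimized kernels:

        import numpy as np

        # Radially symmetric 2D kernel; a Gaussian stands in for a diffuse
        # reflectance profile in this sketch.
        n = 31
        xs = np.arange(n) - n // 2
        X, Y = np.meshgrid(xs, xs)
        K = np.exp(-0.5 * (X**2 + Y**2) / 4.0**2)
        K /= K.sum()

        # Rank-1 (separable) approximation via SVD: K ~= outer(kx, ky).
        U, S, Vt = np.linalg.svd(K)
        kx = np.sqrt(S[0]) * U[:, 0]   # vertical 1D kernel
        ky = np.sqrt(S[0]) * Vt[0, :]  # horizontal 1D kernel

        def filter_separable(img, kx, ky):
            # Two 1D convolutions (rows, then columns) replace one 2D convolution.
            tmp = np.apply_along_axis(lambda r: np.convolve(r, ky, mode="same"), 1, img)
            return np.apply_along_axis(lambda c: np.convolve(c, kx, mode="same"), 0, tmp)

        img = np.random.rand(128, 128).astype(np.float32)
        out = filter_separable(img, kx, ky)

    The quality of the rank-1 factorization can be checked directly, e.g. via np.linalg.norm(K - np.outer(kx, ky)) / np.linalg.norm(K).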

    Directional Dipole Model for Subsurface Scattering


    Flux-Limited Diffusion for Multiple Scattering in Participating Media

    For the rendering of multiple scattering effects in participating media, methods based on the diffusion approximation are an extremely efficient alternative to Monte Carlo path tracing. However, in sufficiently transparent regions, the classical diffusion approximation suffers from non-physical radiative fluxes, which leads to a poor match to correct light transport. In particular, this prevents the application of the classical diffusion approximation to heterogeneous media, where opaque material is embedded within transparent regions. To address this limitation, we introduce flux-limited diffusion, a technique from the astrophysics domain. This method provides a better approximation to light transport than classical diffusion, particularly when applied to heterogeneous media, and hence broadens the applicability of diffusion-based techniques. We provide an algorithm for flux-limited diffusion, which is validated against transport theory for a point light source in an infinite homogeneous medium. We further demonstrate that our implementation of flux-limited diffusion produces more accurate renderings of multiple scattering in various heterogeneous datasets than the classical diffusion approximation, by comparing both methods to ground-truth renderings obtained via volumetric path tracing. (Accepted in Computer Graphics Forum.)
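
    For reference, flux-limited diffusion replaces the constant diffusion coefficient with one that depends on the local fluence gradient so that the magnitude of the flux can never exceed its physical bound. One common limiter from the astrophysics literature is the Levermore-Pomraning form shown below; the specific limiter used in the paper is not reproduced here, and \sigma_t may be the reduced extinction coefficient depending on the formulation:

        \vec{F} = -\frac{\lambda(R)}{\sigma_t} \nabla \phi, \qquad R = \frac{|\nabla \phi|}{\sigma_t\, \phi}, \qquad \lambda(R) = \frac{1}{R}\left(\coth R - \frac{1}{R}\right)

    In optically thick regions R \to 0 and \lambda \to 1/3, recovering the classical diffusion flux; in transparent regions R \to \infty and \lambda \to 1/R, which bounds |\vec{F}| \le \phi (the free-streaming limit) and avoids the non-physical fluxes mentioned above.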

    Real-Time Realistic Skin Translucency


    The Impact of Surface Normals on Appearance

    The appearance of an object is the result of complex light interaction with the object. Beyond the basic interplay between incident light and the object's material, a multitude of physical events occur between this illumination and the microgeometry at the point of incidence, and also beneath the surface. A given object, made as smooth and opaque as possible, will have a completely different appearance if either one of these attributes - amount of surface mesostructure (small-scale surface orientation) or translucency - is altered. Indeed, while they are not always readily perceptible, the small-scale features of an object are as important to its appearance as its material properties. Moreover, surface mesostructure and translucency are inextricably linked in an overall effect on appearance. In this dissertation, we present several studies examining the importance of surface mesostructure (small-scale surface orientation) and translucency on an object's appearance. First, we present an empirical study that establishes how poorly a mesostructure estimation technique can perform when translucent objects are used as input. We investigate the two major factors in determining an object's translucency: mean free path and scattering albedo. We exhaustively vary the settings of these parameters within realistic bounds, examining the subsequent blurring effect on the output of a common shape estimation technique, photometric stereo. Based on our findings, we identify a dramatic effect that the input of a translucent material has on the quality of the resultant estimated mesostructure. In the next project, we discuss an optimization technique for both refining estimated surface orientation of translucent objects and determining the reflectance characteristics of the underlying material. For a globally planar object, we use simulation and real measurements to show that the blurring effect on normals that was observed in the previous study can be recovered. The key to this is the observation that the normalization factor for recovered normals is proportional to the error on the accuracy of the blur kernel created from estimated translucency parameters. Finally, we frame the study of the impact of surface normals in a practical, image-based context. We discuss our low-overhead editing tool for natural images that enables the user to edit surface mesostructure while the system automatically updates the appearance in the natural image. Because a single photograph captures an instant of the incredibly complex interaction of light and an object, there is a wealth of information to extract from a photograph. Given a photograph of an object in natural lighting, we allow mesostructure edits and infer any missing reflectance information in a realistically plausible way.
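
    Since the empirical study uses photometric stereo as its shape estimation technique, a minimal sketch of classical Lambertian photometric stereo may help fix ideas; this is not the dissertation's pipeline, and the function and test values below are illustrative:

        import numpy as np

        def photometric_stereo(I, L):
            # I: (k, n) observed intensities for k light directions and n pixels.
            # L: (k, 3) unit light directions.
            # Lambertian model: I = L @ (albedo * normal), solved per pixel by
            # linear least squares.
            G, _, _, _ = np.linalg.lstsq(L, I, rcond=None)   # (3, n)
            albedo = np.linalg.norm(G, axis=0)
            normals = (G / np.maximum(albedo, 1e-8)).T       # (n, 3) unit normals
            return normals, albedo

        # Tiny synthetic check: one pixel with a known normal and albedo 0.8.
        L = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
        L = L / np.linalg.norm(L, axis=1, keepdims=True)
        n_true = np.array([0.0, 0.0, 1.0])
        I = 0.8 * (L @ n_true)[:, None]
        normals, albedo = photometric_stereo(I, L)

    Subsurface scattering violates the Lambertian assumption by blurring the observed intensities across the surface, which is exactly the degradation of the recovered normals that the study quantifies.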

    Capturing and Reconstructing the Appearance of Complex {3D} Scenes

    In this thesis, we present our research on new acquisition methods for reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigate how two well-known techniques -- synthetic aperture confocal imaging and algorithmic descattering -- can be combined to help see through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders range estimation by offsetting the peak intensity away from the point of incidence and beneath the surface. With our method, subsurface scattering is reduced to a minimum and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers surface geometry, reflectance properties of opaque objects, and the prevailing lighting conditions at the time of image capture from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images taken from a community photo collection website.
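
    As background for the descattering component (the thesis combines descattering with polarization and phase-shifting, which this sketch does not reproduce), direct/global separation under shifted high-frequency illumination in the style of Nayar et al. (2006) can be summarized as follows; the image-stack layout is an assumption of the example:

        import numpy as np

        def separate_direct_global(stack):
            # stack: (k, h, w) images of the same scene, each lit by a shifted
            # high-frequency pattern with roughly half the pixels on. Assumes
            # enough shifts that every scene point is observed both fully lit
            # and fully unlit somewhere in the stack.
            L_max = stack.max(axis=0)
            L_min = stack.min(axis=0)
            direct = L_max - L_min   # dominated by direct surface reflection
            global_ = 2.0 * L_min    # dominated by subsurface and other global transport
            return direct, global_

        stack = np.random.rand(16, 480, 640).astype(np.float32)
        direct, global_ = separate_direct_global(stack)

    Suppressing the global (scattered) component in this way is what lets the peak of the observed intensity line up with the true point of incidence, which is the property the 3D scanning work relies on.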