Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials
Accurate color reproduction is important in many applications of 3D printing,
from design prototypes to 3D color copies or portraits. Although full color is
available via other technologies, multi-jet printers have greater potential for
graphical 3D printing, in terms of reproducing complex appearance properties.
However, to date these printers cannot produce full color, and doing so poses
substantial technical challenges, from the sheer amount of data to the
translucency of the available color materials. In this paper, we propose an
error diffusion halftoning approach to achieve full color with multi-jet
printers, which operates on multiple isosurfaces or layers within the object.
We propose a novel traversal algorithm for voxel surfaces, which allows the
transfer of existing error diffusion algorithms from 2D printing. The resulting
prints faithfully reproduce colors, color gradients and fine-scale details.
Comment: 15 pages, 14 figures; includes supplemental figure
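The 2D error diffusion the authors transfer to voxel surfaces is, in its classic planar form (Floyd-Steinberg), straightforward to sketch. The function below is only an illustration of that 2D baseline, not the paper's isosurface-traversal algorithm; the name and interface are ours:

```python
import numpy as np

def error_diffusion(gray, levels=2):
    """Floyd-Steinberg error diffusion on a 2D grayscale image in [0, 1].

    Quantizes each pixel to the nearest of `levels` output values and
    pushes the quantization error onto unvisited neighbors.
    """
    img = gray.astype(np.float64).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = round(old * (levels - 1)) / (levels - 1)  # nearest level
            out[y, x] = new
            err = old - new
            # Distribute the error with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Run on a constant 50% gray image, the output is binary but preserves the mean intensity, which is the property that makes halftoned prints reproduce color gradients.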
Surface Curvature Effects on Reflectance from Translucent Materials
Most of the physically based techniques for rendering translucent objects use
the diffusion theory of light scattering in turbid media. The widely used
dipole diffusion model (Jensen et al. 2001) applies the diffusion-theory
formula derived for a planar interface to objects of arbitrary shapes. This
paper presents first results of our investigation of how surface curvature
affects the diffuse reflectance from translucent materials.
Comment: 10 pages, 2 figures. The first version of this paper was published in
the Communication Papers Proceedings of the 18th International Conference on
Computer Graphics, Visualization and Computer Vision 2010 - WSCG2010
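The dipole model referenced above (Jensen et al. 2001) gives a closed-form diffuse reflectance for a semi-infinite planar medium, which is exactly the planar assumption this paper examines under curvature. A minimal sketch of the standard formula (function name and parameter choices are ours):

```python
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d(r) of the classical dipole model for a
    semi-infinite planar slab. r is the surface distance from the point
    of illumination; coefficients are absorption and reduced scattering.
    """
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime           # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)   # effective transport coeff.
    # Diffuse Fresnel reflectance approximation and internal-reflection factor A.
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime                # depth of the real source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)        # height of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    term = lambda z, d: z * (sigma_tr + 1.0 / d) * math.exp(-sigma_tr * d) / (d * d)
    return alpha_prime / (4.0 * math.pi) * (term(z_r, d_r) + term(z_v, d_v))
```

The profile is positive and falls off monotonically with distance; the paper's question is how far this planar-interface result can be trusted on curved objects.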
Single-shot layered reflectance separation using a polarized light field camera
We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance as well as novel angular-domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach to efficient acquisition of facial reflectance, including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate our proposed single-shot layered reflectance separation to be comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
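The standard diffuse/specular split from two orthogonal polarization states, which the TPLF camera captures in a single shot, rests on depolarized (diffuse) light dividing evenly between the two states while polarization-preserving (specular) light appears only in the parallel state. A minimal sketch of that arithmetic (names hypothetical, not the paper's code):

```python
import numpy as np

def separate_polarized(i_parallel, i_cross):
    """Split two orthogonally polarized captures under polarized light.

    Diffuse light depolarizes, so each state holds half of it; specular
    light preserves polarization and only reaches the parallel state:
        i_cross    = diffuse / 2
        i_parallel = diffuse / 2 + specular
    """
    diffuse = 2.0 * i_cross
    specular = np.clip(i_parallel - i_cross, 0.0, None)  # clamp sensor noise
    return diffuse, specular
```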
A Dual-Beam Method-of-Images 3D Searchlight BSSRDF
We present a novel BSSRDF for rendering translucent materials. Angular
effects lacking in previous BSSRDF models are incorporated by using a dual-beam
formulation. We employ a Placzek's Lemma interpretation of the method of images
and discard diffusion theory. Instead, we derive a plane-parallel
transformation of the BSSRDF to form the associated BRDF and optimize the image
configurations such that the BRDF is close to the known analytic solutions for
the associated albedo problem. This ensures reciprocity, accurate colors, and
provides an automatic level-of-detail transition for translucent objects that
appear at various distances in an image. Despite optimizing the subsurface
fluence in a plane-parallel setting, we find that this also leads to fairly
accurate fluence distributions throughout the volume in the original 3D
searchlight problem. Our method-of-images modifications can also improve the
accuracy of previous BSSRDFs.
Comment: added clarifying text and 1 figure to illustrate the method
BSSRDF estimation from single images
We present a novel method to estimate an approximation of the reflectance characteristics of optically thick, homogeneous translucent materials using only a single photograph as input. First, we approximate the diffusion profile as a linear combination of piecewise constant functions, an approach that enables a linear-system minimization and maximizes robustness in the presence of suboptimal input data inferred from the image. We then fit a smoother, monotonically decreasing model, ensuring continuity of its first derivative. We show the feasibility of our approach and validate it in controlled environments, comparing well against physical measurements from previous works. Next, we explore the performance of our method in uncontrolled scenarios, where neither lighting nor geometry is known. We show that both can be roughly approximated from the corresponding image by making two simple assumptions: that the object is lit by a distant light source and that it is globally convex, allowing us to capture the visual appearance of the photographed material. Compared with previous works, our technique offers an attractive balance between visual accuracy and ease of use, allowing its use in a wide range of scenarios, including off-the-shelf single images, thus extending the current repertoire of real-world data acquisition techniques.
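The first step, writing the diffusion profile in a piecewise constant basis so that fitting becomes linear least squares, can be sketched as follows. The bin edges, names, and synthetic data are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def fit_piecewise_profile(r, R, edges):
    """Fit samples of a radial profile R(r) as a linear combination of
    piecewise constant basis functions (one indicator per radial bin)
    by ordinary linear least squares.
    """
    n_bins = len(edges) - 1
    # Basis matrix: B[i, j] = 1 if sample r[i] falls into radial bin j.
    B = np.zeros((len(r), n_bins))
    for j in range(n_bins):
        B[:, j] = (r >= edges[j]) & (r < edges[j + 1])
    coeffs, *_ = np.linalg.lstsq(B, R, rcond=None)
    return coeffs
```

Because each sample lands in exactly one bin, the least-squares solution is simply each bin's mean, which is what makes the fit robust to noisy per-pixel estimates.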
The Impact of Surface Normals on Appearance
The appearance of an object is the result of complex light interaction with the object. Beyond the basic interplay between incident light and the object's material, a multitude of physical events occur between this illumination and the microgeometry at the point of incidence, and also beneath the surface. A given object, made as smooth and opaque as possible, will have a completely different appearance if either one of these attributes - amount of surface mesostructure (small-scale surface orientation) or translucency - is altered. Indeed, while they are not always readily perceptible, the small-scale features of an object are as important to its appearance as its material properties. Moreover, surface mesostructure and translucency are inextricably linked in an overall effect on appearance. In this dissertation, we present several studies examining the importance of surface mesostructure (small-scale surface orientation) and translucency to an object's appearance. First, we present an empirical study that establishes how poorly a mesostructure estimation technique can perform when translucent objects are used as input. We investigate the two major factors determining an object's translucency: mean free path and scattering albedo. We exhaustively vary the settings of these parameters within realistic bounds, examining the subsequent blurring effect on the output of a common shape estimation technique, photometric stereo. Based on our findings, we identify a dramatic effect that the input of a translucent material has on the quality of the resultant estimated mesostructure. In the next project, we discuss an optimization technique for both refining the estimated surface orientation of translucent objects and determining the reflectance characteristics of the underlying material. For a globally planar object, we use simulation and real measurements to show that the blurring effect on normals observed in the previous study can be recovered.
The key to this is the observation that the normalization factor for recovered normals is proportional to the error on the accuracy of the blur kernel created from estimated translucency parameters. Finally, we frame the study of the impact of surface normals in a practical, image-based context. We discuss our low-overhead editing tool for natural images that enables the user to edit surface mesostructure while the system automatically updates the appearance in the natural image. Because a single photograph captures an instant of the incredibly complex interaction of light and an object, there is a wealth of information to extract from it. Given a photograph of an object in natural lighting, we allow mesostructure edits and infer any missing reflectance information in a realistically plausible way.
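The shape-estimation baseline in the first study, Lambertian photometric stereo, recovers per-pixel normals from several images under known distant lights by least squares. This is a sketch of the textbook version only, not the dissertation's refinement for translucent inputs:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical Lambertian photometric stereo.

    images: (k, h, w) intensities under k known distant lights.
    lights: (k, 3) unit light directions.
    Solves lights @ G = I per pixel, where G = albedo * normal, then
    splits G into albedo (its norm) and a unit normal.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, h*w)
    rho = np.linalg.norm(G, axis=0)                  # per-pixel albedo
    n = G / np.maximum(rho, 1e-12)                   # unit normals
    return n.T.reshape(h, w, 3), rho.reshape(h, w)
```

On a translucent object, subsurface blurring violates the Lambertian assumption, so the normals this solver returns are the blurred mesostructure the dissertation quantifies and then recovers.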
GenPluSSS: A Genetic Algorithm Based Plugin for Measured Subsurface Scattering Representation
This paper presents a plugin that adds a representation of homogeneous and
heterogeneous, optically thick, translucent materials to the Blender 3D
modeling tool. The working principle of the plugin is a combination of a
Genetic Algorithm (GA) and a Singular Value Decomposition (SVD)-based
subsurface scattering method (GenSSS). The plugin has been implemented using
the Mitsuba renderer, an open-source rendering package, and has been
validated on measured subsurface scattering data. It is shown that the
plugin visualizes homogeneous and heterogeneous subsurface scattering
effects accurately, compactly, and computationally efficiently.
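The abstract does not detail the SVD component of GenSSS; as a rough illustration of how a truncated SVD yields a compact representation of measured transport data, assuming the measurements are arranged as a matrix (that arrangement is our assumption):

```python
import numpy as np

def compress_svd(T, rank):
    """Truncated SVD of a measured-transport matrix T: keep only the
    `rank` largest singular triplets as a compact representation."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]

def reconstruct(Uk, sk, Vtk):
    """Rebuild the (approximate) matrix from its truncated factors."""
    return (Uk * sk) @ Vtk
```

When the data is close to low-rank, as smooth scattering responses tend to be, a small rank reproduces it almost exactly while storing far fewer numbers.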
A Biophysically-Based Model of the Optical Properties of Skin Aging
This paper presents a time-varying, multi-layered, biophysically-based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering (such as spectral absorption and scattering coefficients), or, more intuitively, higher-level parameters such as age, gender, skin care, or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it automatically simulates different types of skin at different stages of aging, avoiding the need for artistic input or costly capture processes.
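As a hedged illustration of the kind of biophysical parameterization such a model exposes, a layer's spectral absorption can be built from chromophore volume fractions. The melanosome fit below is Jacques' widely used power-law approximation from the skin-optics literature; the baseline tissue value is a placeholder, and neither number is taken from this paper:

```python
def melanosome_absorption(lam_nm):
    """Jacques' fit for melanosome interior absorption:
    mu_a ~ 6.6e11 * lambda^-3.33 (lambda in nm, result in cm^-1)."""
    return 6.6e11 * lam_nm ** -3.33

def epidermis_mu_a(lam_nm, f_melanosome, mu_a_baseline=0.2):
    """Epidermal absorption as a volume-fraction mix of melanosomes and
    baseline tissue; the baseline value here is an illustrative placeholder."""
    return (f_melanosome * melanosome_absorption(lam_nm)
            + (1.0 - f_melanosome) * mu_a_baseline)
```

Aging- or type-dependent changes then reduce to changing fractions like `f_melanosome` over time, which is the kind of mapping from high-level to optical parameters the paper describes.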
AirCode: Unobtrusive Physical Tags for Digital Fabrication
We present AirCode, a technique that allows the user to tag physically
fabricated objects with given information. An AirCode tag consists of a group
of carefully designed air pockets placed beneath the object surface. These air
pockets are easily produced during the fabrication process of the object,
without any additional material or postprocessing. Meanwhile, the air pockets
affect only the scattering light transport under the surface, and thus are hard
to notice with the naked eye. However, using a computational imaging method, the
tags become detectable. We present a tool that automates the design of air
pockets for the user to encode information. The AirCode system also allows the user
to retrieve the information from captured images via a robust decoding
algorithm. We demonstrate our tagging technique with applications for metadata
embedding, robotic grasping, and conveying object affordances.
Comment: ACM UIST 2017 Technical Paper
Analysis of light transport in scattering media
We propose a new method to analyze light transport in homogeneous scattering media. The incident light undergoes multiple bounces in translucent objects and produces a complex light field. Our method analyzes the light transport in two steps. First, single and multiple scattering are separated by projecting high-frequency stripe patterns. Then, multiple scattering is decomposed into per-bounce components based on the light transport equation, and the light field for each bounce is recursively estimated. Experimental results show that light transport in scattering media can be decomposed and visualized for each bounce.
Microsoft Research
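The first step, separating single from multiple scattering with high-frequency stripe patterns, follows the fast direct/global separation idea: take the per-pixel maximum and minimum over shifted patterns. A minimal sketch, under the assumption that half the pattern pixels are lit (names are ours):

```python
import numpy as np

def separate_direct_global(i_max, i_min):
    """Direct/global separation from high-frequency pattern projection.

    With patterns in which half the pixels are lit, the direct (single-
    bounce) component reaches a pixel only when it is lit, while the
    global (multiply scattered) component contributes half regardless:
        i_max = direct + global / 2
        i_min = global / 2
    """
    direct = i_max - i_min
    glob = 2.0 * i_min
    return direct, glob
```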