Isotropic and Anisotropic Interfaced Lambertian Microfacets
Specular microfacet distributions have been successfully employed by many authors for representing material glossiness, and they are generally combined with a Lambertian term that accounts for the colored aspect. Such a representation makes use of the Fresnel reflectance factor at the interface, but the transmission factor is often ignored. In addition, the generalization to microfacet distributions with a more general reflectance is known to be complex, since it requires solving an angular integral that has no analytical solution. This paper proposes a complete framework for physically handling both reflection and transmission with microfacet distributions. First, we show how transmission affects the reflectance of an interfaced Lambertian model, and provide an analytical description of an individual microfacet's reflectance. Second, we describe a method for handling distributions of such microfacets in any physically based Monte Carlo rendering system. Our approach generalizes several previous models, including flat Lambertian materials as well as specular and Lambertian microfacets. The results section illustrates the wide range of materials that can be represented with this approach.
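As a rough sketch of the flat (single-interface) case described in the abstract, the diffuse term can be weighted by the transmission factor (1 - F) both on the way in and on the way out. This is an illustrative simplification, not the paper's actual derivation; the Schlick approximation, the f0 value, and the function names are assumptions introduced here.

```python
import math

def fresnel_schlick(cos_theta, f0):
    """Schlick approximation to the Fresnel reflectance at an interface."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def interfaced_lambertian_brdf(cos_i, cos_o, albedo, f0=0.04):
    """Toy flat 'interfaced Lambertian' reflectance: light is transmitted
    through the interface on the way in and on the way out, so the diffuse
    term is weighted by both transmission factors (1 - F)."""
    t_in = 1.0 - fresnel_schlick(cos_i, f0)
    t_out = 1.0 - fresnel_schlick(cos_o, f0)
    return t_in * t_out * albedo / math.pi
```

Note that at grazing incidence the Fresnel reflectance grows toward 1, so less light is transmitted into the diffuse substrate; this is the interplay the paper's full microfacet treatment integrates over facet orientations.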
Modeling and Measurements of the Bidirectional Reflectance of Microrough Silicon Surfaces
Bidirectional reflectance is a fundamental radiative property of rough surfaces. Knowledge of the bidirectional reflectance is crucial to emissivity modeling and heat transfer analysis. This thesis concentrates on the modeling and measurements of the bidirectional reflectance of microrough silicon surfaces, and on the validity of a hybrid method in the modeling of the bidirectional reflectance of thin-film coated rough surfaces.
The surface topography and the bidirectional reflectance distribution function (BRDF) of the rough side of several silicon wafers have been extensively characterized using an atomic force microscope and a laser scatterometer, respectively. The slope distribution calculated from the surface topographic data deviates from the Gaussian distribution. Both nearly isotropic and strongly anisotropic features are observed in the two-dimensional (2-D) slope distributions and in the measured BRDF for more than one sample. The 2-D slope distribution is used in a geometric-optics based model to predict the BRDF, which agrees reasonably well with the measured values. The side peaks in the slope distribution and the subsidiary peaks in the BRDF for two anisotropic samples are attributed to the formation of {311} planes during chemical etching. The correlation between the 2-D slope distribution and the BRDF has been developed.
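The first step of such an analysis, estimating a 2-D slope distribution from measured topography, can be sketched as follows. The finite-difference gradient, bin count, and slope range are assumptions for illustration; the thesis works from actual AFM data and feeds the distribution into a geometric-optics BRDF model.

```python
import numpy as np

def slope_distribution(height, dx, bins=64, max_slope=1.0):
    """Estimate the 2-D slope distribution P(zx, zy) of a measured height
    map (e.g. AFM topography) via finite differences and a 2-D histogram."""
    zx, zy = np.gradient(height, dx)              # local slopes along each axis
    hist, xedges, yedges = np.histogram2d(
        zx.ravel(), zy.ravel(), bins=bins,
        range=[[-max_slope, max_slope]] * 2, density=True)
    return hist, xedges, yedges
```

Side peaks in such a histogram (as reported for the anisotropic etched samples) would show up as off-center maxima in `hist` rather than a single Gaussian lobe.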
A boundary integral method is applied to simulate the bidirectional reflectance of thin-film coatings on rough substrates. The roughness of the substrate is one dimensional for simplification. The result is compared to that from a hybrid method which uses the geometric optics approximation to model the roughness effect and the thin-film optics to consider the interference due to the coating. The effects of the film thickness and the substrate roughness on the validity of the hybrid method have been investigated. The validity regime of the hybrid method is established for silicon dioxide films on silicon substrates in the visible wavelength range.
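The thin-film-optics half of such a hybrid method reduces, for a single smooth film, to the Airy summation of multiply reflected waves. A minimal sketch at one polarization follows; the real-valued indices (roughly SiO2 on Si in the visible, with silicon's absorption ignored) are illustrative assumptions, not the thesis's simulation setup.

```python
import cmath, math

def coated_reflectance(n1, n2, n3, thickness, wavelength, cos1=1.0):
    """Reflectance (s-polarization) of a single smooth film of index n2 on a
    substrate n3 under ambient n1, via the Airy summation of thin-film optics.
    Illustrative real-valued indices; absorption is ignored."""
    sin1 = cmath.sqrt(1.0 - cos1 ** 2)
    cos2 = cmath.sqrt(1.0 - (n1 * sin1 / n2) ** 2)   # Snell's law in the film
    cos3 = cmath.sqrt(1.0 - (n1 * sin1 / n3) ** 2)   # ... and in the substrate
    r12 = (n1 * cos1 - n2 * cos2) / (n1 * cos1 + n2 * cos2)
    r23 = (n2 * cos2 - n3 * cos3) / (n2 * cos2 + n3 * cos3)
    beta = 2.0 * math.pi * n2 * cos2 * thickness / wavelength  # phase thickness
    phase = cmath.exp(-2j * beta)
    r = (r12 + r23 * phase) / (1.0 + r12 * r23 * phase)
    return abs(r) ** 2
```

With zero thickness this collapses to the bare-substrate Fresnel reflectance, and a quarter-wave film lowers it, which is the interference effect the hybrid method combines with the geometric-optics roughness model.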
The proposed method to characterize the microfacet orientation and to predict the BRDF may be applied to other anisotropic or non-Gaussian rough surfaces. The measured BRDF may be used to model the apparent emissivity of silicon wafers to improve temperature measurement accuracy in semiconductor manufacturing processes. The developed validity regime for the hybrid method can benefit future research related to the modeling of thin-film coated rough surfaces.

Ph.D. Committee Chair: Dr. Zhuomin Zhang; Committee Member: Dr. Andrei G. Fedorov; Committee Member: Dr. Andrew F. Peterson; Committee Member: Dr. Dennis W. Hess; Committee Member: Dr. J. Robert Maha
Computational Light Transport for Forward and Inverse Problems.
Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous in many applications, from entertainment and advertising to product design, engineering, and architecture, including the generation of validated data for computational imaging techniques. However, simulating light transport accurately is a costly process; as a consequence, a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light propagation, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light transport simulation, aimed both at improving the efficiency of its computation and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the light that arrives directly from luminaires in a Monte Carlo image-generation system, significantly reducing the variance of the resulting images for the same execution time. We also introduce a density-estimation technique in the transient state that allows better reuse of temporal samples in a participating medium. In the application domain, we introduce two new uses of light transport: a model for simulating a special type of goniochromatic pigments that exhibit pearlescent appearance, with the goal of providing an intuitive editing workflow for manufacturing, and a non-line-of-sight imaging technique using time-of-flight information about the light, built upon a wave-based model of light propagation.
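The direct-luminaire sampling idea (often called next-event estimation) can be sketched by comparing two unbiased estimators of the irradiance at a point below a small square light: one samples directions uniformly over the hemisphere and mostly misses the light, the other samples points on the luminaire and converts the area pdf to solid angle. The scene, both routines, and all parameters are illustrative assumptions, not the thesis's actual estimator.

```python
import math, random

def irradiance_hemisphere(L, half, h, n, rng):
    """Naive estimator: uniform directions over the hemisphere. Most samples
    miss a small luminaire, so the estimate is noisy."""
    total = 0.0
    for _ in range(n):
        cos_t = rng.random()                # uniform hemisphere: cos(theta) ~ U[0,1]
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
        x, y = sin_t * math.cos(phi), sin_t * math.sin(phi)
        if cos_t > 1e-6:
            t = h / cos_t                   # ray parameter where it reaches the light plane
            if abs(x * t) <= half and abs(y * t) <= half:
                total += L * cos_t * 2.0 * math.pi  # contribution / pdf, pdf = 1/(2*pi)
    return total / n

def irradiance_nee(L, half, h, n, rng):
    """Next-event estimation: sample a point on the luminaire and convert
    the area pdf to the solid-angle measure. Every sample contributes."""
    area = (2.0 * half) ** 2
    total = 0.0
    for _ in range(n):
        lx = (2.0 * rng.random() - 1.0) * half
        ly = (2.0 * rng.random() - 1.0) * half
        d2 = lx * lx + ly * ly + h * h
        cos_r = h / math.sqrt(d2)           # cosine at the receiver
        cos_l = cos_r                       # downward-facing light: same cosine
        total += L * cos_r * cos_l * area / d2
    return total / n
```

Both estimators target the same integral, but the light-sampling version has far lower variance per sample for a small luminaire, which is the "same execution time, less noise" trade-off described above.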
Classification of peacock feather reflectance using principal component analysis similarity factors from multispectral imaging data
This is the author-accepted manuscript. The final version is available from the publisher via the DOI in this record.

Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.

This research was developed during a visiting research stay of Dr. José M. Medina in the Departamento de Óptica, Universidad de Granada, Spain. We thank José Medina and Rosalía Ruiz, who provided the peacock samples; David Porcel and Juan de Dios Bueno from the Servicio de Microscopía (Centro de Instrumentación Científica, Universidad de Granada) for technical assessment; and the Color Imaging Group (Universidad de Granada) for their partial hardware support. JMM and JAD acknowledge the Departamento de Óptica, Universidad de Granada, Spain. PV acknowledges USAF funding (FA9550-10-1-0020).
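A PCA similarity factor between two data sets (in the Krzanowski sense) can be sketched as below; it measures how well the subspaces spanned by the first k principal components of two spectral data matrices agree, which is the kind of comparison the study builds on. The matrix layout (rows as observations, columns as spectral bands) and the choice of k are assumptions for illustration.

```python
import numpy as np

def pca_similarity_factor(X, Y, k=3):
    """PCA similarity factor: compares the subspaces spanned by the first
    k principal components of two data matrices (rows = observations,
    columns = spectral bands). Returns 1 for identical subspaces, 0 for
    orthogonal ones."""
    def loadings(A, k):
        A = A - A.mean(axis=0)
        _, _, vt = np.linalg.svd(A, full_matrices=False)
        return vt[:k].T                       # columns are principal axes

    L, M = loadings(X, k), loadings(Y, k)
    s = np.linalg.svd(L.T @ M, compute_uv=False)  # cosines of principal angles
    return float(np.sum(s ** 2) / k)
```

In a classification setting, one would compute this factor between the multispectral reflectance data of a reference region and each test region, and cluster regions with high mutual similarity.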
Collective Behavior of Interacting Magnetic Nanoparticles
In the past, the Low Dimensional Materials by Design group at ORNL, in collaboration with students from the University of Tennessee, has successfully tailored and studied magnetic nanostructures in 2D, 1D, and 0D spatial confinement on Cu(111) substrates. They observed a striking collective ferromagnetic long-range order in Fe nanodots on the Cu(111) surface, which can be stabilized through the indirect exchange interaction mediated by the substrate. This type of magnetic interaction was expected to have little effect on promoting a global ferromagnetic order in a randomly distributed dot assembly. We therefore need a better understanding of the relative roles of magnetic anisotropy and magnetic interaction in the magnetism of reduced dimensionality in general, and of nanodot assemblies in particular. With this accomplishment in mind, I aim to study the collective behavior of interacting magnetic nanoparticles.
We proposed the following experiments to quantify the relative roles of magnetic anisotropy, dipolar interaction and indirect exchange interaction on the collective magnetic behavior of nanodot assemblies. They consist of two main projects:
a) study how the indirect exchange interaction is affected by modifying the surface states. The interaction is observed through changes in the critical temperature (Tc) of Fe dots as a function of the miscut angle of a curved Cu substrate. Depending on the buffer-layer (Xe) coverage, we observed a critical terrace width above which Tc increases slightly and below which Tc decreases rapidly. In other words, Tc depends largely on the Cu(111) miscut angle.
b) study the role of magnetic anisotropy and dipolar interaction. We used Co dots on a TiO2(110) substrate as a prototype. We observed significant perpendicular magnetic anisotropy, with a perpendicular easy axis, for both large and small dot densities, with no sign of hysteresis down to 2 K.
Theoretical studies of the nucleation and growth of thin metal films: a focus on Ag deposited on Ag(100)
Theoretical studies of the nucleation and growth of metal films are performed, with a focus on the Molecular Beam Epitaxial (MBE) growth of Ag on the Ag(100) surface. Ag films grown under MBE, for the temperatures and atomic fluxes considered here (0 to 300 K), are very far from equilibrium structures, due to the breaking of detailed balance during deposition. Included are studies of: metal film growth at very low temperatures; the temperature dependence of mound formation; the temperature dependence of kinetic roughening; the effect of the step-edge barrier on very thin films; and the post-deposition time dependence of nucleation. For these studies a range of lattice-gas models are developed that are thought to contain the essential physics. These models contain such features as terrace diffusion, realistic edge diffusion processes, a non-uniform Ehrlich-Schwoebel barrier, restricted and normal downward funneling, and low-barrier diffusion processes along microfacets. The models were then tested by first performing Kinetic Monte Carlo simulations and comparing the results to experimental data from previous Scanning Tunneling Microscopy studies. The models not only proved to be in good agreement with averaged quantities of the experimental films, but also reproduced the details of the experimental morphologies quite well.
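A heavily simplified kinetic Monte Carlo loop of the kind described above might look as follows. The 1-D solid-on-solid lattice, the single Arrhenius hop rate, and the downhill-only hop rule are toy assumptions made here for illustration; the actual models additionally include edge diffusion, a non-uniform Ehrlich-Schwoebel barrier, and downward funneling.

```python
import math, random

def kmc_growth(sites=64, coverage=1.0, flux=1.0, temp=150.0,
               e_diff=0.40, nu0=1e12, rng=None):
    """Minimal 1-D solid-on-solid kinetic Monte Carlo sketch: atoms deposit
    at total rate flux*sites (monolayers per unit time) and may hop to a
    neighbouring column at an Arrhenius rate nu0*exp(-e_diff/kT).
    Returns the final column heights."""
    kb = 8.617e-5                        # Boltzmann constant, eV/K
    rng = rng or random.Random(0)
    heights = [0] * sites
    hop_rate = nu0 * math.exp(-e_diff / (kb * temp))
    dep_rate = flux * sites
    n_dep = int(coverage * sites)
    deposited = 0
    while deposited < n_dep:
        # pick the next event in proportion to its rate
        if rng.random() < dep_rate / (dep_rate + hop_rate):
            heights[rng.randrange(sites)] += 1   # deposition on a random column
            deposited += 1
        else:
            i = rng.randrange(sites)
            j = (i + rng.choice((-1, 1))) % sites
            if heights[i] > heights[j]:          # toy rule: downhill hops only
                heights[i] -= 1
                heights[j] += 1
    return heights
```

Lowering `temp` freezes out diffusion (the hop rate collapses), which is the regime where far-from-equilibrium morphologies such as mounds develop.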
Automated inverse-rendering techniques for realistic 3D artefact compositing in 2D photographs
PhD Thesis

The process of acquiring images of a scene and modifying the defining structural features of the scene through the insertion of artefacts is known in the literature as compositing. The process can take effect in the 2D domain (where the artefact originates from a 2D image and is inserted into a 2D image), or in the 3D domain (the artefact is defined as a dense 3D triangulated mesh, with textures describing its material properties).

Compositing originated as a solution for enhancing, repairing, and more broadly editing photographs and video data alike in the film industry as part of the post-production stage. This is generally thought of as carrying out operations in a 2D domain (a single image with known width, height, and colour data). The operations involved are sequential and entail separating the foreground from the background (matting), or identifying features from contours (feature matching and segmentation), with the purpose of introducing new data into the original. Since then, compositing techniques have gained more traction in the emerging fields of Mixed Reality (MR), Augmented Reality (AR), robotics, and machine vision (scene understanding, scene reconstruction, autonomous navigation). When focusing on the 3D domain, compositing can be translated into a pipeline¹: the incipient stage acquires the scene data, which then undergoes a number of processing steps aimed at inferring structural properties that ultimately allow for the placement of 3D artefacts anywhere within the scene, rendering a plausible and consistent result with regard to the physical properties of the initial input.

This generic approach becomes challenging in the absence of user annotation and labelling of scene geometry, light sources and their respective magnitude and orientation, as well as clear object segmentation and knowledge of surface properties. A single image, a stereo pair, or even a short image stream may not hold enough information regarding the shape or illumination of the scene; however, increasing the input data will only incur an extensive time penalty, which is an established challenge in the field.
Recent state-of-the-art methods address the difficulty of inference in the absence of data; nonetheless, they do not attempt to solve the challenge of compositing artefacts between existing scene geometry, or cater for the inclusion of new geometry behind complex surface materials such as translucent glass or in front of reflective surfaces.

¹ In the present document, the term pipeline refers to a software solution formed of stand-alone modules or stages. It implies that the flow of execution runs in a single direction, and that each module has the potential to be used on its own as part of other solutions. Moreover, each module is assumed to take an input set and output data for the following stage, where each module addresses a single type of problem only.
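The one-directional, modular pipeline defined in the footnote can be sketched in a few lines; the stage names used here are hypothetical and for illustration only.

```python
from typing import Any, Callable, List

def run_pipeline(stages: List[Callable[[Any], Any]], scene_input: Any) -> Any:
    """One-directional pipeline: each stand-alone stage consumes the output
    of the previous stage and addresses a single type of problem."""
    data = scene_input
    for stage in stages:
        data = stage(data)
    return data

# Hypothetical stages, for illustration only.
stages = [
    lambda scene: {**scene, "geometry": "inferred"},    # structure inference
    lambda scene: {**scene, "lighting": "estimated"},   # light estimation
    lambda scene: {**scene, "composite": "rendered"},   # artefact placement
]
```

Because each stage only depends on the output of its predecessor, any stage can be reused on its own in another solution, which is exactly the property the footnote stipulates.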
The present work focuses on compositing in the 3D domain and brings forth a software framework² that contributes solutions to a number of challenges encountered in the field, including the ability to render physically accurate soft shadows in the absence of user-annotated scene properties or RGB-D data. Another contribution is the timely manner in which the framework achieves a believable result, compared to other compositing methods, which rely on offline rendering. Neither proprietary hardware nor user expertise is required to achieve fast and reliable results within the current framework.