295 research outputs found

    The Impact of Surface Normals on Appearance

    The appearance of an object is the result of complex light interaction with the object. Beyond the basic interplay between incident light and the object's material, a multitude of physical events occur between this illumination and the microgeometry at the point of incidence, and also beneath the surface. A given object, made as smooth and opaque as possible, will have a completely different appearance if either one of these attributes - amount of surface mesostructure (small-scale surface orientation) or translucency - is altered. Indeed, while they are not always readily perceptible, the small-scale features of an object are as important to its appearance as its material properties. Moreover, surface mesostructure and translucency are inextricably linked in an overall effect on appearance. In this dissertation, we present several studies examining the importance of surface mesostructure (small-scale surface orientation) and translucency on an object's appearance. First, we present an empirical study that establishes how poorly a mesostructure estimation technique can perform when translucent objects are used as input. We investigate the two major factors in determining an object's translucency: mean free path and scattering albedo. We exhaustively vary the settings of these parameters within realistic bounds, examining the subsequent blurring effect on the output of a common shape estimation technique, photometric stereo. Based on our findings, we identify a dramatic effect that the input of a translucent material has on the quality of the resultant estimated mesostructure. In the next project, we discuss an optimization technique for both refining estimated surface orientation of translucent objects and determining the reflectance characteristics of the underlying material. For a globally planar object, we use simulation and real measurements to show that the blurring effect on normals that was observed in the previous study can be recovered.
The key to this is the observation that the normalization factor for recovered normals is proportional to the error in the blur kernel constructed from the estimated translucency parameters. Finally, we frame the study of the impact of surface normals in a practical, image-based context. We discuss our low-overhead editing tool for natural images that enables the user to edit surface mesostructure while the system automatically updates the appearance in the natural image. Because a single photograph captures an instant of the incredibly complex interaction of light and an object, there is a wealth of information to extract from a photograph. Given a photograph of an object in natural lighting, we allow mesostructure edits and infer any missing reflectance information in a realistically plausible way.
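The abstract above names photometric stereo as the shape estimation technique whose output is blurred by translucency. As background, a minimal sketch of classical Lambertian photometric stereo (not the dissertation's exact pipeline; the light directions and intensities below are illustrative):

```python
import numpy as np

def photometric_stereo(L, I):
    """Estimate a surface normal and albedo at one pixel.

    L: (k, 3) array of unit light directions (k >= 3, non-coplanar).
    I: (k,) array of observed intensities under each light.
    Assumes the Lambertian model I = rho * (L @ n).
    """
    # Least-squares solve for the albedo-scaled normal g = rho * n.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)   # albedo is the magnitude of g
    n = g / rho               # unit surface normal is its direction
    return n, rho
```

With at least three non-coplanar lights the system is well posed; translucency violates the Lambertian assumption, which is exactly the blurring effect on the recovered normal field that the study quantifies.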

    Illumination Processing in Face Recognition


    Measuring and simulating haemodynamics due to geometric changes in facial expression

    The human brain has evolved to be very adept at recognising imperfections in human skin. In particular, observing someone’s facial skin appearance is important in recognising when someone is ill, or when finding a suitable mate. It is therefore a key goal of computer graphics research to produce highly realistic renderings of skin. However, the optical processes that give rise to skin appearance are complex and subtle. To address this, computer graphics research has incorporated more and more sophisticated models of skin reflectance. These models are generally based on static concentrations of the skin chromophores: melanin and haemoglobin. However, haemoglobin concentrations are far from static, as blood flow is directly affected by both changes in facial expression and emotional state. In this thesis, we explore how blood flow changes as a consequence of changing facial expression, with the aim of producing more accurate models of skin appearance. To build an accurate model of blood flow, we base it on real-world measurements of blood concentrations over time. We describe, in detail, the steps required to obtain blood concentrations from photographs of a subject. These steps are then used to measure blood concentration maps for a series of expressions that define a wide gamut of human expression. From this, we define a blending algorithm that allows us to interpolate these maps to generate concentrations for other expressions. This technique, however, requires specialist equipment to capture the maps in the first place. We try to rectify this problem by investigating a direct link between changes in facial geometry and haemoglobin concentrations. This requires building a unique capture device that captures both simultaneously. Our analysis hints at a direct linear connection between the two, paving the way for further investigation.
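The interpolation of measured concentration maps described above could, in its simplest form, be a convex combination keyed to expression weights. This is an illustrative sketch under that assumption, not the thesis' actual blending algorithm; the function and expression names are hypothetical:

```python
import numpy as np

def blend_concentration_maps(maps, weights):
    """Blend per-expression haemoglobin maps with convex weights.

    maps:    dict of expression_name -> (H, W) concentration array.
    weights: dict of expression_name -> non-negative weight.
    Returns the normalised weighted sum of the maps.
    """
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    out = None
    for name, w in weights.items():
        term = (w / total) * maps[name]  # convex contribution of this expression
        out = term if out is None else out + term
    return out
```

A novel expression would then be represented by its blend weights over the captured expression gamut, yielding a plausible concentration map without a new measurement session.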

    Inferring surface shape from specular reflections


    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements, such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called On-set previs, which provides a real-time view with little to no processing. Other types of previs, such as Technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, which are interactively displayed on a mobile capture and rendering platform.
This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework, which is the main contribution of this dissertation, consists of: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface, 2) new algorithms that analyze and decompose the scene photometrically, and 3) a previs interface that leverages the previous two components to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene with multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. We found that our method, which is based on a Lambertian reflectance assumption, works well under that assumption, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in terms of relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged as a solution for performing multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights.
We compared our method to other white balance methods and showed that our method is superior when at least one of the light colors is known a priori.

    On Practical Sampling of Bidirectional Reflectance


    The delta radiance field

    The wide availability of mobile devices capable of computing high fidelity graphics in real-time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon to be produced products. Surprisingly though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore convey an impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to another, it is a legitimate question to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real world properties performed in an ad-hoc manner must likewise be incorporated into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces in an equally ad-hoc fashion. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness. Any computation affecting the final image must be computed in real-time. This condition rules out many of the methods used for movie production.
The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination due to the introduction of a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation replacing Differential Rendering. The result not only presents a more efficient competitor to the current state-of-the-art in global illumination relighting, but also advances the field with the ability to simulate effects which have not been demonstrated by contemporary publications until now.
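The Differential Rendering technique that the dissertation builds on (and ultimately replaces) composites a real photograph with two renders of a local scene model: one with and one without the virtual objects. A minimal sketch of the classic formulation, with illustrative array shapes:

```python
import numpy as np

def differential_render(photo, with_virtual, without_virtual, object_mask):
    """Classic differential rendering composite.

    photo:           (H, W, 3) real camera image, values in [0, 1].
    with_virtual:    (H, W, 3) render of local scene model + virtual objects.
    without_virtual: (H, W, 3) render of the local scene model alone.
    object_mask:     (H, W) 1.0 where a virtual object covers the pixel.
    """
    # Light the virtual object adds to (or removes from) real surfaces.
    delta = with_virtual - without_virtual
    composite = np.clip(photo + delta, 0.0, 1.0)  # relit real surfaces
    # Virtual-object pixels come straight from the render.
    m = object_mask[..., None]
    return m * with_virtual + (1.0 - m) * composite
```

The computational cost lies in producing both renders with a global illumination algorithm every frame, which is the downside the dissertation's replacement formulation targets.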

    3D inspection methods for specular or partially specular surfaces

    Deflectometric techniques are a powerful tool for the automated quality control of specular or shiny surfaces. These techniques are based on using a camera to observe a reference pattern reflected on the surface under inspection, exploiting the dependence of specular reflections on surface normals to recover shape information from the acquired images. Although deflectometry is already used in industrial environments such as the quality control of lenses or car bodies, there are still some open problems. On the one hand, using quantitative deflectometry, the normal vector field and the 3D shape of a surface can be obtained, but these techniques do not yet take full advantage of their local sensitivity because the achieved global accuracies are affected by calibration errors. On the other hand, qualitative deflectometry is used to detect surface imperfections without absolute measurements, exploiting the local sensitivity of deflectometric recordings with reduced calibration requirements. However, this qualitative approach requires further processing that can involve a considerable engineering effort, particularly for aesthetic defects which are inherently subjective. The first part of this thesis aims to contribute to a better understanding of how deflectometric setups and their calibration errors affect quantitative measurements. Different error sources are considered including the camera calibration uncertainty and several non-ideal characteristics of LCD screens used to generate the light patterns. Experiments performed using real measurements and simulations show that the non-planarity of the LCD screen and the camera calibration are the dominant sources of error. The second part of the thesis investigates the use of deep learning to identify geometrical imperfections and texture defects based on deflectometric data. 
Two different approaches are explored to extract and combine photometric and geometric information using convolutional neural network architectures: one for automated classification of defective samples, and another one for automated segmentation of defective regions in a sample. The experimental results in a real industrial case study indicate that both architectures are able to learn relevant features from deflectometric data, enabling the classification and segmentation of defects based on a dataset of user-provided examples.
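The geometric constraint at the heart of deflectometry, as described above, is that for a specular surface the normal bisects the ray to the camera and the ray to the observed screen point. A minimal sketch of that half-vector relation (illustrative, not the thesis' calibration pipeline):

```python
import numpy as np

def deflectometric_normal(view_dir, screen_dir):
    """Surface normal from the mirror-reflection constraint.

    view_dir:   vector from the surface point towards the camera.
    screen_dir: vector from the surface point towards the screen pixel
                seen via the reflection.
    For a perfect mirror, the normal is the unit half-vector between
    the two directions.
    """
    v = np.asarray(view_dir, float)
    s = np.asarray(screen_dir, float)
    h = v / np.linalg.norm(v) + s / np.linalg.norm(s)
    return h / np.linalg.norm(h)
```

This is why calibration errors matter so much for quantitative deflectometry: both directions depend on the calibrated camera pose and screen geometry, so errors there propagate directly into the recovered normal field.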

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    We propose a solution for generating dynamic heightmap data to simulate deformations for soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation.
Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to underlying mesh deformations in a physically plausible way. Various methods have been explored to synthesize wrinkles directly in the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists using wrinkle maps and tension maps provides control but may be limited for complex deformations or where greater realism is required. Our research presents the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended to other soft materials beyond skin.
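A common heuristic behind tension-driven wrinkle maps of the kind discussed above is to deepen a static wrinkle pattern where the mesh is locally compressed and flatten it where the mesh is stretched. This sketch illustrates that heuristic only; it is not the proposed solution's actual model, and the names are hypothetical:

```python
import numpy as np

def dynamic_wrinkle_height(wrinkle_map, stretch_map, strength=1.0):
    """Modulate a static wrinkle heightmap by local compression.

    wrinkle_map: (H, W) static wrinkle pattern (authored or procedural).
    stretch_map: (H, W) local stretch ratio of the deformed mesh
                 (1.0 = rest length, < 1.0 = compressed).
    Wrinkles appear where the surface is compressed and vanish where
    it is stretched.
    """
    compression = np.clip(1.0 - stretch_map, 0.0, None)  # 0 when stretched
    return strength * compression * wrinkle_map
```

The stretch map would typically be derived per-frame from the deformed mesh's edge lengths relative to the rest pose, which is what lets the heightmap respond to animation without captured data.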