27 research outputs found

    Surface Appearance Estimation from Video Sequences

    The realistic virtual reproduction of real-world objects using Computer Graphics techniques requires the accurate acquisition and reconstruction of both 3D geometry and surface appearance. Unfortunately, in several application contexts, such as Cultural Heritage (CH), reflectance acquisition can be very challenging due to the type of object to acquire and the digitization conditions. Although several methods have been proposed for the acquisition of object reflectance, some intrinsic limitations still make it a complex task for CH artworks: the use of specialized instruments (dome, special setup for camera and light source, etc.); the need for highly controlled acquisition environments, such as a dark room; the difficulty of extending them to objects of arbitrary shape and size; and the high level of expertise required to assess the quality of the acquisition. The Ph.D. thesis proposes novel solutions for the acquisition and estimation of surface appearance under fixed and uncontrolled lighting conditions, at several degrees of approximation (from a perceived near-diffuse color to a SVBRDF), taking advantage of the main features that differentiate a video sequence from an unordered photo collection: temporal coherence; data redundancy; and ease of acquisition, which allows many views of the object to be captured in a short time. Finally, Reflectance Transformation Imaging (RTI) is a widely used technology for the acquisition of surface appearance in the CH field, even if limited to single-view reflectance fields of nearly flat objects. In this context, the thesis also addresses two important issues in RTI usage: how to provide better and more flexible virtual inspection capabilities with a set of operators that improve the perception of details, features, and overall shape of the artwork; and how to improve the dissemination of this data and to support remote visual inspection by both scholars and the general public
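RTI captures a fixed-view image stack under varying known light directions and fits a per-pixel reflectance model. A common choice in RTI tools is the Polynomial Texture Map, a biquadratic in the projected light direction (lu, lv). The following is a minimal illustrative sketch of that fit with a least-squares solve; the function names and data layout are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

def fit_ptm(lights, intensities):
    """Fit per-pixel biquadratic PTM coefficients.

    lights: (N, 2) array of projected light directions (lu, lv), one per photo.
    intensities: (N, P) array of observed intensities for P pixels.
    Returns a (6, P) coefficient array a0..a5 with
    I(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    """
    lu, lv = lights[:, 0], lights[:, 1]
    # Design matrix of the biquadratic basis, one row per light direction.
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM for a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return basis @ coeffs
```

Once fitted, the six coefficients per pixel replace the whole image stack, which is what makes interactive relighting of the artwork cheap.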

    Acquisition, Modeling, and Augmentation of Reflectance for Synthetic Optical Flow Reference Data

    This thesis is concerned with the acquisition, modeling, and augmentation of material reflectance to simulate high-fidelity synthetic data for computer vision tasks. The topic is covered in three chapters: I commence with exploring the upper limits of reflectance acquisition. I analyze state-of-the-art BTF reflectance field renderings and show that they can be applied to optical flow performance analysis with closely matching performance to real-world images. Next, I present two methods for fitting efficient BRDF reflectance models to measured BTF data. Both methods combined retain all relevant reflectance information as well as the surface normal details on a pixel level. I further show that the resulting synthesized images are suited for optical flow performance analysis, with a virtually identical performance for all material types. Finally, I present a novel method for augmenting real-world datasets with physically plausible precipitation effects, including ground surface wetting, water droplets on the windshield, and water spray and mist. This is achieved by projecting the real-world image data onto a reconstructed virtual scene, manipulating the scene and the surface reflectance, and performing unbiased light transport simulation of the precipitation effects

    On-site example-based material appearance acquisition

    We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible “rapid appearance modeling”

    BxDF material acquisition, representation, and rendering for VR and design

    Photorealistic and physically-based rendering of real-world environments with high fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how the light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first study demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse engineered cloth models in immersive VR experiences. The second study shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering

    Data-Driven Reflectance Estimation Under Natural Lighting

    Bidirectional Reflectance Distribution Functions (BRDFs) describe how light is reflected off a material. BRDFs are captured so that materials can be re-lit under new illumination while maintaining accuracy. BRDF models can approximate the reflectance of a material, but are unable to accurately represent its full BRDF. Acquisition setups for BRDFs trade accuracy for speed, with the most accurate methods, gonioreflectometers, being the slowest. Image-based BRDF acquisition approaches range from using complicated controlled lighting setups, to uncontrolled but known lighting, to assuming the lighting is unknown. We propose a data-driven method for recovering BRDFs under known but uncontrolled lighting. This approach utilizes a dataset of 100 measured BRDFs to accurately reconstruct the BRDF from a single photograph. We model the BRDFs as Gaussian Mixture Models (GMMs) and use an Expectation Maximization (EM) approach to determine cluster membership. We apply this approach to captured as well as synthetic data. We continue this work by relaxing assumptions about lighting, material, or geometry. This work was supported in part by NSF grant IIS-1350323 and gifts from Google, Activision, and Nvidia
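The GMM/EM step described above can be sketched in a few lines: each measured BRDF is reduced to a feature vector, an EM-fitted mixture model clusters the vectors, and `predict_proba` yields the soft cluster memberships. The synthetic feature vectors and dimensions below are assumptions for illustration, not the paper's actual BRDF parameterization:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for the 100 measured BRDFs: each BRDF is reduced to
# a low-dimensional feature vector (e.g. log-mapped reflectance samples).
rng = np.random.default_rng(1)
diffuse_like = rng.normal(loc=0.2, scale=0.05, size=(50, 8))
glossy_like = rng.normal(loc=0.8, scale=0.05, size=(50, 8))
features = np.vstack([diffuse_like, glossy_like])

# Fit the mixture with EM; predict_proba returns the per-cluster
# responsibilities, i.e. the soft cluster memberships.
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
membership = gmm.predict_proba(features)  # shape (100, 2), rows sum to 1
```

The responsibilities, rather than hard labels, are what make the reconstruction data-driven: a photographed material can be explained as a weighted combination of the measured exemplars in its cluster.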

    Neural Reflectance Decomposition

    Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is its high ambiguity. The creation of images from 3D objects is well defined as rendering. However, multiple properties such as shape, illumination, and surface reflectiveness influence each other, and an integration of these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. However, solving the task is essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, and movies. In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed which generalizes the decomposition of a two-shot capture of an object from large training datasets. The degree of novel view synthesis is limited, as only a single perspective is used in the decomposition. Therefore, a second set of approaches is proposed, which decomposes a set of 360-degree images into shape, reflectance, and illumination. These multi-view images are optimized per object, and the result can be directly used in standard rendering software or games. We achieve this by extending recent research on Neural Fields, which can store information in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground truth (GT) supervision. Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups where objects can be under varying illumination or in different locations, which is typical for online image collections
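The volume rendering step that this line of work builds on uses the standard discrete transmittance/alpha compositing weights: each sample along a ray contributes in proportion to its opacity and to how much light survives the samples in front of it. A minimal sketch of that weight computation (the densities are hypothetical inputs; a neural field would predict them):

```python
import numpy as np

def volume_render_weights(sigma, delta):
    """Discrete volume rendering weights along one ray.

    sigma: (N,) non-negative densities at N samples along the ray.
    delta: (N,) distances between consecutive samples.
    Returns (N,) weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance,
    i.e. the fraction of light that reaches sample i unoccluded.
    """
    tau = sigma * delta
    T = np.exp(-np.concatenate([[0.0], np.cumsum(tau)[:-1]]))
    alpha = 1.0 - np.exp(-tau)
    return T * alpha

# A rendered quantity (color, or reflectance in a decomposition setting) is
# the weighted sum of per-sample predictions:
#   value = (weights[:, None] * per_sample_values).sum(axis=0)
```

Because the weights are differentiable in the densities, the whole field can be optimized from photographs alone, which is what enables the GT-free supervision mentioned above.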

    Advanced methods for relightable scene representations in image space

    The realistic reproduction of the visual appearance of real-world objects requires accurate computer graphics models that describe the optical interaction of a scene with its surroundings. Data-driven approaches that model the scene globally as a reflectance field function in eight parameters deliver high quality and work for most material combinations, but are costly to acquire and store. Image-space relighting, which constrains the application to creating photos with a virtual, fixed camera under freely chosen illumination, requires only a 4D data structure to provide full fidelity. This thesis contributes to image-space relighting on four accounts: (1) we investigate the acquisition of 4D reflectance fields in the context of sampling theory and propose a practical setup for pre-filtering of reflectance data during recording, and apply it in an adaptive sampling scheme; (2) we introduce a feature-driven image synthesis algorithm for the interpolation of coarsely sampled reflectance data in software to achieve highly realistic images; (3) we propose an implicit reflectance data representation, which uses a Bayesian approach to relight complex scenes from the example of a much simpler reference object; (4) finally, we construct novel, passive devices out of optical components that render reflectance field data in real time, shaping the incident illumination into the desired image
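Image-space relighting rests on the linearity of light transport: with a fixed camera, a photo under any illumination is a weighted sum of basis images, each captured under one basis light, and that stack of basis images is the 4D reflectance field. A minimal sketch with hypothetical data (the array shapes are illustrative assumptions):

```python
import numpy as np

def relight(basis_images, light_coeffs):
    """Relight a scene from its 4D reflectance field (fixed camera).

    basis_images: (L, H, W) stack, one image per basis light.
    light_coeffs: (L,) intensity of each basis light in the target
    illumination. Light transport is linear in the illumination, so the
    relit image is simply the weighted sum of the basis images.
    """
    return np.tensordot(light_coeffs, basis_images, axes=1)
```

Everything else in such a pipeline, from adaptive sampling to interpolation, serves to acquire or densify this basis; the relighting itself stays a linear combination.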

    Modelling and Visualisation of the Optical Properties of Cloth

    Cloth and garment visualisations are widely used in fashion and interior design, in the entertainment, automotive, and nautical industries, and are indispensable elements of visual communication. Modern appearance models attempt to offer a complete solution for the visualisation of complex cloth properties. In the review part of the chapter, advanced methods that enable visualisation at micron resolution, methods used in the three-dimensional (3D) visualisation workflow, and methods used for research purposes are presented. Within the review, those methods offering a comprehensive approach and experiments on explicit cloth attributes that present a specific optical phenomenon are analysed. The review of appearance models includes surface and image-based models, volumetric models, and explicit models. Each group is presented with a representative authors’ research group and the applications and limitations of the methods. In the final part of the chapter, the visualisation of cloth specularity and porosity with an uneven surface is studied. The study and visualisation were performed using image data obtained with photography. The acquisition of structure information on a large scale enables the recording of structure irregularities that are very common in historical textiles and laces, and also in artistic and experimental pieces of cloth. The contribution ends with the presentation of cloth visualised with the use of specular and alpha maps, which are the result of the image processing workflow
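An alpha map of the kind mentioned above encodes per-pixel transparency, so cloth porosity can be visualised by marking as transparent the pixels where light passes through the weave. The following is an illustrative sketch only, assuming a backlit grayscale photograph as input; it is not the chapter's actual image processing workflow:

```python
import numpy as np

def alpha_map_from_backlit(backlit, threshold=0.6):
    """Derive a binary alpha (porosity) map from a backlit cloth photo.

    backlit: (H, W) grayscale image with values in [0, 1]. Bright pixels
    are holes where the backlight shines through the weave, so they are
    made transparent (alpha = 0); dark pixels are yarn (alpha = 1).
    """
    backlit = np.clip(backlit, 0.0, 1.0)
    return np.where(backlit > threshold, 0.0, 1.0)
```

In a renderer, this map would be applied as the cloth material's opacity texture alongside the specular map, so irregular holes in historical textiles and laces show through correctly.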